LandSlide Seekers

High-Level Project Summary

LandSlide Seekers is an application that helps predict and assess the risks of future landslides by combining local data from the app user with global observations from satellites. The key idea of LandSlide Seekers is to gather as much local information as possible from pictures taken by users (information that is often missing) and to couple it with current research on landslides (models, AI, observations, ...).

Link to Project "Demo"

Detailed Project Description

LandSlide Seekers is divided into 3 major components (shown in Figure 1):

  • The application interfacing with the user (written in Flutter)
  • The data analysis and AI model training (written in Jupyter notebooks)
  • The data transfer between EOSDIS/GES DISC, the NASA Earth API, the NASA EONET API, monitoring stations (where APIs are available) and the application (written in Python)

Figure 1: Illustration of the three components of the LandSlide Seekers project


First, the user of the application takes a picture with a smartphone. The picture and its metadata are then transferred to the server, and the user sees an aerial view of the location. Data related to landslides (precipitation, elevation, slope angle, soil moisture, vegetation index, human development, temperature) can be overlaid on the same view. The user can also access a historical view of local landslide activity (event tracker) and an assessment of the risk of a landslide at the user's location (landslide index). The goal of the application is to be informative.


The purpose of the application is to show the potential landslide hazard in the user's region and to allow individuals and local institutions to upload information for analysis. First, the user provides their current location (via GPS if the user allows it), and the app displays an aerial view of the location together with past landslide events in the region. The user can then tap the different landslide events to get more information (historical data) and contribute where data are missing. The user can also take a photo at their current location (using the camera icon on the bottom right) and obtain from the server a local landslide index assessing the risk of a potential landslide nearby. The data are analyzed together with previous observations, and the app suggests whether the user can safely stay or should leave. The histogram icon on the bottom right offers the option to upload other types of local, non-image data (soil composition, moisture, type of vegetation, ...) that are potentially useful for researchers developing landslide prediction models. A couple of snapshots of the application are shown in Figure 2.


Figure 2: Different views of the application


The application can furthermore be turned into a game by giving users an incentive to take pictures. The goal of the game would be to photograph avatars, placed using Augmented Reality techniques in areas where local information has been identified as missing (this could be done with an analytical model estimating the error in the landslide evaluation). Each picture taken could be rewarded by growing a collection of avatars and by an XP system to level up a fictional landslide-hunter character (also open to mods for creating skins and, of course, with no microtransactions, to stay kid-friendly :)). The game would contribute to gathering the large amount of local data needed to feed the AI algorithms. In addition, the data extracted for the landslide risk evaluation (such as precipitation, vegetation index, satellite view, ...) can be displayed to the user as extra information, since the data are already available on the server, as shown in Figure 3.


Figure 3: Data available on the server that can be displayed to the user in the app (successively from left to right and top to bottom: corrected reflectance, precipitation, soil moisture, human development, elevation and vegetation index). The data here are snapshots of the NASA Worldview app over Hamilton and would ultimately come from the server after extraction and transformation of the satellite data.


The GPS location and the time of the recorded image are used as inputs for the data extraction from NASA databases. Because the ML algorithms require the original data, a dedicated interface is necessary to collect the proper data and transform it into a form suitable for the ML training/evaluation process. To avoid overloading the server with a huge amount of data, subsets of the NASA databases are extracted based on the space-time locality of the user.
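
As an illustration, the sketch below shows one possible way to pull a space-time subset of the half-hourly IMERG precipitation product (GPM_3IMERGHH, version 06, the dataset used in this project) around a user's GPS fix, using the earthaccess Python library and xarray. The library choice, padding and variable handling are assumptions for illustration only; the actual extraction scripts are in the project repository [1].

# Sketch (assumption): space-time subset of GPM IMERG half-hourly precipitation
# (GPM_3IMERGHH, version 06) around the user's GPS fix, using earthaccess and xarray.
import earthaccess
import xarray as xr

def precipitation_near(lat, lon, start, end, pad=0.5):
    """Return IMERG precipitation in a small box around (lat, lon)."""
    earthaccess.login()  # Earthdata credentials, e.g. from ~/.netrc
    granules = earthaccess.search_data(
        short_name="GPM_3IMERGHH",
        version="06",
        temporal=(start, end),
        bounding_box=(lon - pad, lat - pad, lon + pad, lat + pad),
    )
    files = earthaccess.download(granules, "./imerg")
    # IMERG HDF5 files keep their variables under the "Grid" group.
    ds = xr.open_mfdataset(files, group="Grid", combine="by_coords")
    # Keep only the small box around the user to avoid overloading the server.
    return ds["precipitationCal"].sel(
        lat=slice(lat - pad, lat + pad),
        lon=slice(lon - pad, lon + pad),
    )

# Example: half a degree around Hamilton, ON, for one day
# subset = precipitation_near(43.26, -79.87, "2021-10-01", "2021-10-02")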


On the server side, two AI algorithms are designed for the following tasks:

  • A deep convolutional neural network (CNN) classification architecture to analyze the picture from the user, identify geological features (mountains, vegetation, habitation, etc.) or publish warnings when a high-risk landslide-related event is observed. At this moment the CNN is only trained to flag high-risk phenomena in a captured image; further feature extraction is planned for future releases of the script. The generated flags are binary values mapped to the captured image and its GPS coordinates, and they are used as a parameter for training the other AI-driven architecture in this project, which predicts landslide events at given GPS coordinates; this process is shown schematically in Figure 4. These images can also be used to populate the NASA Landslide database for review. The currently trained CNN has the layered architecture shown in Figure 5 (a minimal illustrative sketch of such a flagger is given after Figure 8). The Python script can be found in the GitHub repository assigned to this project [1]. The prediction accuracy of this CNN is around 80 percent. Figure 6 shows the training evolution by epoch and its validation. For training the network, landslide images were collected from the Google search engine and the available Kaggle landscape database; the data used can also be found in the dedicated GitHub repository [1].

Figure 4: ML architecture of the classification algorithm


Figure 5: Layered architecture and training parameters of the CNN classifier used as a high-risk phenomena flagger.

Figure 6: Accuracy of the designed CNN architecture in hazard flagging and its validation during different epochs.


  • A predictive long short-term memory recurrent neural network (LSTM RNN) fed by selected satellite observations and local parameters. The satellite-extracted parameters in this study are precipitation rate, vegetation index, earthquake event rate, soil moisture, volcano activity rate, elevation and distance from rivers. The impact of these parameters on predicting landslide events (landslide susceptibility) is suggested by Xiao et al. [2]. The performance of LSTM RNN networks in predicting landslide-related phenomena has been studied in refs. [2-4]. To improve the accuracy and locality of the assessments, the hazard flags generated by the designed CNN are also used as a parameter to train the LSTM RNN architecture. The ultimate goal is to also improve the satellite-measured parameters using the additional information from local images that are beyond the time scope of the current event. The schematic data treatment and the workflow of the LSTM RNN prediction are shown in Figure 7, and the layered architecture of the LSTM RNN and its training parameters are shown in Figure 8 (a minimal illustrative sketch of this predictor also follows Figure 8). Due to time restrictions, the training and data-feeding process is not finished. As geological data were not available at the time the code was implemented, these maps (some can be seen in Figure 7) are simulated. The Python scripts for these map simulations and the neural network setup can be found in the GitHub repository assigned to this project [1].

Figure 7: ML architecture of the predictive algorithm.


Figure 8: The layered structure of LSTM RNN architecture for landslide susceptibility assessment.
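
For illustration, the following is a minimal Keras sketch of a binary CNN image flagger of the kind described above (image in, single high-risk flag out). The layer sizes, input resolution and training settings here are placeholders; the actual layered architecture and parameters are those of Figure 5 and the repository [1].

# Sketch (assumption): a binary "high-risk landslide" image flagger in Keras.
# Layer sizes and input resolution are placeholders, not the ones in Figure 5.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_flagger(input_shape=(128, 128, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # 1 = high-risk flag, 0 = no flag
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Training images (Google / Kaggle landscape data, see [1]) could be loaded with
# tf.keras.utils.image_dataset_from_directory and passed to model.fit(...).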
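
Similarly, a minimal Keras sketch of the LSTM RNN susceptibility predictor is given below: a short time series of the seven satellite-derived parameters plus the CNN hazard flag goes in, and a single susceptibility score comes out. Sequence length and layer widths are assumptions; the actual layered structure is the one shown in Figure 8 and in the repository [1].

# Sketch (assumption): LSTM RNN taking a short history of the eight per-location
# parameters (seven satellite-derived values plus the CNN hazard flag) and
# returning one landslide susceptibility score. Sequence length and layer
# widths are placeholders, not the ones in Figure 8.
from tensorflow.keras import layers, models

N_FEATURES = 8   # precipitation, vegetation index, earthquake rate, soil moisture,
                 # volcano activity, elevation, river distance, CNN hazard flag
SEQ_LEN = 30     # e.g. 30 time steps of history per location (assumption)

def build_susceptibility_model():
    model = models.Sequential([
        layers.Input(shape=(SEQ_LEN, N_FEATURES)),
        layers.LSTM(64, return_sequences=True),
        layers.LSTM(32),
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # local landslide susceptibility index
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model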


After processing the data, the server's main task is to send back to the application the images to display and a local landslide index assessing the risk of a potential landslide nearby; see Figure 7. The server would be maintained by an academic/research institution and be open to collaborations, so that it can use the most advanced models proposed by researchers.
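
Purely as a hypothetical illustration of this app/server exchange, a minimal Flask endpoint could receive the GPS fix (and optionally the user photo) and return the overlay images and the local landslide index. The route, field names and the use of Flask are assumptions, not the project's actual interface.

# Hypothetical sketch of the endpoint the Flutter app would call: the client
# sends its GPS fix (and optionally a photo), the server responds with the
# overlay images to display and the local landslide index. Route, field names
# and values are assumptions, not the project's actual interface.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/assessment", methods=["POST"])
def assessment():
    lat = float(request.form["lat"])
    lon = float(request.form["lon"])
    photo = request.files.get("photo")  # optional user picture (unused in this sketch)
    # ... run the CNN flagger on the photo and the LSTM susceptibility model here ...
    return jsonify({
        "overlays": [  # maps already prepared on the server around (lat, lon)
            "precipitation.png", "soil_moisture.png", "elevation.png",
        ],
        "landslide_index": 0.42,  # placeholder value
    })

if __name__ == "__main__":
    app.run()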


We realized how sparse local data are, and we think there are opportunities for local authorities to invest in deploying sensors in their jurisdictions. The application could be used to identify the missing information and its location to help predict a landslide event. Ideally, recommendations for local authorities could be proposed visually by the application itself.


Given the time available, it was not possible to realize all the features of LandSlide Seekers and the interfaces between the different components. Thus, at this point, most of the parameter maps are simulated, and the data extraction procedure only pulls precipitation data from GPM_3IMERGHH.06. In addition, the classification CNN currently only detects hazardous landslide-related events in the images; we hope to expand it to extract more features and improve the local parameters in the future.

Hackathon Journey

We would first like to thank Shayan for initiating the idea of doing this. It has been a ride. When we first decided on a challenge, we had zero expectations. Oh how naive we were... The night before the challenge began, we were going through the SpaceApps-provided resources and realized how complicated this challenge was going to be. And yet, looking back, we were STILL naive at that point, because the next morning we came face-to-face with the true challenge, the summit, the dragon: extracting data from NASA!!! It took us from 10 am to midnight to finally get some data points directly from a satellite. There was so much information, combined with unclear documentation and the pressure of a 2-day challenge, that we really had to dig in and learn to go slowly to figure it out. That dragon was finally conquered, but there was also an AI dragon that was difficult to train when confronted with landslides, and another dragon that raised a great question: how to make an application that is accessible and interactive for a wide audience while at the same time a powerful tool for a complex problem? Despite half the team being separated due to the nature of the event, we were still always available to talk, talk, and talk through the struggles. Going on walks also helped develop ideas and further our solutions. The last mile, filled with adrenaline, was a lot of fun because of the time constraint, but also because of it we did not get to do our best work!! Even still, we gave it our all, taming three dragons in the process, and giving everything we had was probably the most rewarding part of the challenge. Thank you SpaceApps and thank you NASA!!!!

References

[1] https://github.com/slimpotatoes/LandSlide_Seekers_NasaAppChall2021

[2] Xiao, L., Zhang, Y. and Peng, G., 2018. Landslide susceptibility assessment using integrated deep learning algorithm along the China-Nepal highway. Sensors, 18(12), p.4436.

[3] Van Dao, D., Jaafari, A., Bayat, M., Mafi-Gholami, D., Qi, C., Moayedi, H., Van Phong, T., Ly, H.B., Le, T.T., Trinh, P.T. and Luu, C., 2020. A spatially explicit deep learning neural network model for the prediction of landslide susceptibility. Catena, 188, p.104451. https://doi.org/10.1016/j.catena.2019.104451

[4] Ullo, S.L., Langenkamp, M.S., Oikarinen, T.P., DelRosso, M.P., Sebastianelli, A. and Sica, S., 2019, July. Landslide geohazard assessment with convolutional neural networks using sentinel-2 imagery data. In IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium (pp. 9646-9649). IEEE.


Additional references used

Tags

#app #landslide #ML #AI #hazard #data

Global Judging

This project has been submitted for consideration during the Judging process.