
AI for Good: Anticipating disasters with better data.

Developing a tool for predicting natural disasters using IBM Cloud AutoAI.

Predicting natural disasters is essential, as it helps mitigate the loss and damage they inflict on people and nature. In this project, we are using IBM Cloud’s AutoAI to create a natural disaster prediction system. Using satellite data, we are going to predict the intensity and probability of disasters like wildfires, earthquakes, and cyclones in various locations. AutoAI automates your AI lifecycle management: it generates top-performing models for our dataset and handles all the mundane tasks. Let’s explore this.

Over 160 million lives are affected by natural disasters every year, and about 60,000 of those people do not survive. Because of climate change, natural disasters are becoming more frequent. Disasters such as wildfires, earthquakes, and cyclones damage the environment and the people living in it, affecting millions of people every year and causing property damage worth hundreds of billions of dollars. Most of these disasters are inexplicably devastating to those who become their victims, and the damage they cause to nature and mankind is undeniable. They can happen at any time and anywhere, and this uncertainty makes identifying them early difficult. So, we need to harness the power of artificial intelligence to predict them. Detection and mitigation of these natural disasters must be quick and accurate. By predicting the occurrence of natural disasters, we can save thousands of lives and take appropriate measures to reduce property damage.

The impact of these phenomena could be reduced if we were able to predict their occurrence.

Using Artificial Intelligence for predicting natural disasters

Wildfires are becoming more frequent, and their intensity is also increasing. In 2020 alone, there were about 57,000 wildfires, which burned more than 10.3 million acres; fighting them costs over two billion dollars annually, without even considering the lives lost and property damaged. Moreover, there are enormous health concerns related to them. We can use artificial intelligence to detect wildfires before they spiral out of control. To an extent, we can predict the intensity of a wildfire and act to prevent it from causing further harm. The geographical conditions of a location play an important role in how quickly a fire spreads, so by predicting the intensity of the spread we can take the necessary precautions to counter the fire, save lives, and possibly reduce the damage it would otherwise cause. AI’s predictions give us insight into the risk level of a wildfire and the possibilities for tackling it.

From AI’s prediction, we gain insight into how severely a wildfire will spread, so we can make evacuation plans and stay alert to the situation. We can also closely watch high-risk areas, allowing evacuation and rescue plans to be implemented properly. Early detection of disasters like this helps greatly in saving lives and reducing destruction. Wildfires are also known to cause a lot of CO2 emissions, so by reducing them we are addressing the bigger problem of global warming.

Californian wildfires have killed dozens of people and scorched hundreds of thousands of acres of forests and infrastructure, including some of the state’s deadliest blazes ever. Historic wildfires have raged across California as a result of extreme weather conditions, forcing tens of thousands of citizens to flee their homes and businesses. Strong winds and low humidity fan existing fires and can cause new fires to spread quickly.

Unlike most natural disasters, earthquakes strike without warning. Between large quakes and the resulting tsunamis, millions of lives have been lost because science has been unable to provide accurate, useful earthquake forecasts. Many scientists have tried different methodologies, for instance observing phenomena such as foreshocks, electromagnetic disturbances, changes in groundwater chemistry, or even unusual animal behavior, but so far none of these has proved to be a reliable way to forecast earthquakes, and the task has sometimes been considered a hopeless endeavor. But with advanced machine learning algorithms, computing power, and the availability of enormous amounts of data, it is now possible to predict earthquakes with an impressive degree of accuracy.

Earthquake prediction requires determining the location and size of an event before it begins. The application of AI in geoscience has expanded rapidly. Predicting earthquakes is critical for seismic risk assessment, prevention, and the safe design of major structures. AI assists in identifying unknown features so that earthquake activity can be predicted more accurately.

Check out How!: AI developers from Canada have trained and deployed a model using IBM Watson Studio for predicting earthquakes. [link]

Storms are weather phenomena that form over the ocean through the release of energy generated by the evaporation and saturation of water at the ocean’s surface. This process results in heavy rain and strong winds, and when these storms approach land they can cause damage and flooding in inhabited areas. Storms that form in the Atlantic and northeast Pacific Oceans are called hurricanes, storms that form in the Indian and South Pacific Oceans are called cyclones, and those that form in the northwest Pacific Ocean are called typhoons. Cyclones are among the most destructive natural phenomena, and their impact extends over a wide area. Their main effects include heavy rain, strong winds, storm surges, and tornadoes. Property damage is the most common after-effect, with windows, roofs, and doors succumbing to the powerful winds battering them; the most powerful storms can tear down small buildings. The destruction from a tropical cyclone, such as a hurricane or tropical storm, depends mainly on its intensity, size, and location. Cyclone disaster management encompasses mitigation and preparedness measures for cyclones. Observations from space have long been used to monitor tropical cyclones, and scientists have used this space-borne data to track the formation and progress of storms for decades. Still, it’s hard to determine when a storm will intensify, because what happens inside a tropical cyclone is still largely unknown.

Machine learning, as a branch of artificial intelligence, has been shown by many researchers to provide a new way to overcome the bottlenecks of tropical cyclone forecasting, whether through a pure data-driven model or by improving numerical forecasting models with machine learning. Thus we can harness the power of AI to predict these disasters.

Check out How!: Researchers at NASA have developed a new AI for forecasting cyclones using IBM Watson Studio, with impressively accurate prediction results. [link]

How can Artificial Intelligence help us?

Artificial intelligence is playing a significant role in improving our lives. Drawing meaningful inferences from vast amounts of data is revolutionary and not humanly possible, and the capabilities of artificial intelligence keep evolving day by day. AI can predict the occurrence of numerous natural disasters, which can be the difference between life and death for thousands of people. Artificial intelligence uses algorithms to find patterns in data, so we employ it to deduce meaningful information from large datasets. There are many algorithms, each with a specific purpose. Here AutoAI comes to save the day. AutoAI is a feature of IBM Cloud’s Watson Studio and a revolutionary tool for data scientists and engineers: it automates artificial intelligence, from data preparation to hyperparameter optimization.

AutoAI can see patterns in data and provide crucial insights.

In this project, we are going to build an application to predict the probability of wildfires, earthquakes, and cyclones at a location. AutoAI selects the top-performing model within minutes, saving data scientists a lot of time, and it is a great tool for experimenting and creating models on your data. It can improve our workflow: an all-in-one place for developing a model and deploying it so that it can be used by an external app. In short, it automates tasks that usually take data scientists days or weeks. Watson Studio is a perfect tool for improving the productivity of data scientists, so more organizations are starting to use it. While analyzing your dataset, AutoAI discovers the best algorithm for your purpose, automatically performs hyperparameter tuning and feature engineering, and then presents you with a leaderboard of model pipelines. The best thing is that you don’t need to write a single line of code; AutoAI does most of the work for you.

How can we benefit from early predictions?

Better predictions and warnings save lives. With only a few minutes’ notice of a tornado or flash flood, people can act to protect themselves from injury and death. Predictions and warnings can also reduce damage and economic losses. When notice of an impending disaster can be issued well in advance, as it can for some floods, wildfires, and hurricanes, property and natural resources can be protected.

  • Better preparedness — Help utilities plan ahead, ensure enough crews are mobilized, and position them properly, with equipment already set up and ready to respond to the predicted damage.
  • Risk mitigation — Take necessary actions to reduce economic losses and mitigate injuries and deaths from a disaster.
  • Monitoring — Monitor locations that are susceptible to natural disasters; this helps find early signs of a disaster.

Disaster prediction is really important, as it plays a major role in resource allocation, mitigation, and recovery efforts.

About the app

Check out our app:

The dataset used here is Active Fire Data, provided by NASA. All the parameters come from MODIS (Moderate Resolution Imaging Spectroradiometer), a key instrument aboard the Terra and Aqua satellites. We use latitude and longitude to determine the severity of wildfires: the spread of a wildfire depends largely on the location, so we can find patterns in its spread based on location. The platform has an interactive map, so users can choose a location by clicking on it, and the app sends those coordinates to the AutoAI model to predict the intensity of the wildfire. Based on that value, we calculate the risk level and give appropriate suggestions to users. This app can be used to create a long-term action plan to reduce wildfires, and authorities can be alerted to how a wildfire would spread in an area and be ready for it.
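As a sketch of that last step, a simple risk-level mapping could look like the following. The thresholds here are illustrative assumptions, not the app’s actual cut-offs, and it is assumed the model predicts a MODIS brightness temperature in Kelvin:

```python
def risk_level(brightness_k: float) -> str:
    """Map a predicted MODIS brightness temperature (Kelvin) to a coarse
    risk label. The thresholds are illustrative, not official values."""
    if brightness_k >= 400:
        return "high"
    if brightness_k >= 330:
        return "moderate"
    return "low"

# A very hot detection maps to the highest risk band.
print(risk_level(412.3))  # -> high
```

The app could then pick evacuation or monitoring suggestions keyed off the returned label.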

How does AutoAI improve your productivity?

AutoAI automates the following processes and creates the best model for your dataset.

Cleaning the Dataset

Some datasets have missing values and inconsistent data formats, which affect the accuracy of the model. A data scientist cleaning a dataset by hand can spend hours on this. AutoAI applies special techniques and algorithms to clean and analyze your data and make it ready for machine learning. It also categorizes the fields by data type (e.g., integer or string).
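To illustrate the kind of cleanup AutoAI automates, here is a minimal pandas sketch. The column names mirror the wildfire dataset, but the values are made up: it unifies data types, drops rows missing coordinates, and imputes a missing reading.

```python
import numpy as np
import pandas as pd

# A toy frame mimicking a messy satellite CSV: mixed types, missing values.
df = pd.DataFrame({
    "latitude": ["37.7", "34.1", None],      # strings instead of numbers
    "longitude": [-122.4, -118.2, -120.0],
    "brightness": [330.5, np.nan, 345.2],    # a missing sensor reading
})

# Coerce latitude to a numeric dtype; unparseable entries become NaN.
df["latitude"] = pd.to_numeric(df["latitude"], errors="coerce")

# Rows without coordinates are useless for location-based prediction.
df = df.dropna(subset=["latitude"])

# Impute the remaining missing brightness with the column median.
df["brightness"] = df["brightness"].fillna(df["brightness"].median())

print(df)
```

AutoAI performs these steps (and many more) automatically after you upload the CSV.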

Selects the best model for your objective

AutoAI automatically chooses the best algorithm for your dataset. It tests and ranks various algorithms on small subsets of the data, finds the best one, runs it against the whole dataset, and creates the best model from it. Doing this manually, creating and running this many models, could take a data scientist months.
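Conceptually, this is similar to cross-validating several candidate algorithms and keeping the winner. Here is a minimal scikit-learn sketch of that idea, on synthetic data with two illustrative candidates; AutoAI’s actual search is far more sophisticated:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic regression data standing in for the real dataset.
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

# Two candidate algorithms; a real search would try many more.
candidates = {
    "ridge": Ridge(),
    "random_forest": RandomForestRegressor(n_estimators=30, random_state=0),
}

# Score each candidate with 3-fold cross-validation and keep the best.
scores = {name: cross_val_score(est, X, y, cv=3, scoring="r2").mean()
          for name, est in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

The winning candidate would then be refit on the full dataset, which mirrors what AutoAI does behind the scenes.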

Feature Engineering

AutoAI attempts to convert the data into a combination of features so that the model produces more accurate predictions. It uses reinforcement learning to progressively maximize the model’s accuracy, transforming the data into new forms that the selected algorithm can use efficiently.

Hyperparameter optimization

Finding the right hyperparameters is an iterative process. AutoAI uses novel methods to converge on a good solution without long evaluation times or excessive iterations, and it uses cost-effective approaches for model training and scoring. AutoAI further refines the best-performing pipelines.
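As a rough analogue, scikit-learn’s randomized search samples hyperparameter combinations instead of exhaustively trying them all. A minimal sketch, with a made-up search space:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

# Synthetic data standing in for the real training set.
X, y = make_regression(n_samples=150, n_features=4, noise=5.0, random_state=0)

# Sample 5 of the 9 possible combinations rather than trying every one.
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions={
        "n_estimators": [10, 25, 50],
        "max_depth": [3, 5, None],
    },
    n_iter=5, cv=3, random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```

Sampling keeps evaluation costs bounded while still finding a strong configuration, which is the same trade-off AutoAI’s optimizer manages automatically.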

After successfully creating our model, we can connect it to a web app; AutoAI also provides the necessary services for that.

How to go from Model to an App

Integrating the deployed model into our application

For our model to be practically useful, we need to integrate it with an app so users can provide inputs and get predictions. We can interact with the model and receive prediction results through the endpoints of the deployed model. For that we need an API key, which you can generate in Identity and Access Management (IAM). Using this API key, you send an authentication request, and the response contains an access token for your application. You can also find the endpoint of the deployed model in Deployments, under API references. To generate a prediction, we send a scoring request containing the input values along with the access token, the deployment ID, and the version. All the fields and values must be in the same order as in the training data. The JSON showing how the input data is to be structured is available in the deployment, under Test. A tutorial for this is available at the end.
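For illustration, a Watson Machine Learning scoring payload lists the field names once and one row of values per record to predict. The field names below (latitude, longitude) are placeholders for whatever columns your training CSV actually used:

```python
import json

# Scoring payload structure: "fields" names the columns once,
# "values" holds one inner list per row to be scored.
payload = {
    "input_data": [{
        "fields": ["latitude", "longitude"],   # placeholder column names
        "values": [[37.77, -122.42]],          # one row: San Francisco area
    }]
}

print(json.dumps(payload, indent=2))
```

The exact field list for your deployment is shown in the Test tab, so copy it from there rather than typing it by hand.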

The Best Things About AutoAI

1. Select the best model quickly

Shortlist top-performing models in minutes instead of days or weeks. Drastically reduce neural network search time.

2. Reduces Skill Gap

Go live with better models using the skill sets you have. Increase repeatability and minimize human intervention.

3. Improves Productivity

Get started with AI experiments without knowing how to code. Do more innovative work instead of mundane tasks (e.g., the lengthy feature selection process).

4. Control your AI

Monitor AI outcomes with explainability and debiasing. Increase trust and transparency with IBM Watson OpenScale.

5. Ready, set, deploy

Automate your AI lifecycle management. Enable one-click deployment with Watson Machine Learning.

Notebook Links:

Let’s learn how to use AutoAI

To start working with Watson Studio, create a free account on IBM Cloud.

Spinning up Watson Studio Service

Let's spin up a new instance of Watson Studio for performing an AutoAI experiment.

  • Create Watson Studio
    Go to the Resource list and click Create Resource, then search for Watson Studio and create an instance.
  • Go to Watson Studio and create a new project
    Click Get Started to open Watson Studio, then click Add Project to create a new project.
  • Creating a New Project
    Click Add Project, choose an empty project, and give it a name. You'll also need to link storage: click Add below Select Storage, and you'll be redirected to create a Cloud Object Storage instance.

Creating AutoAI Experiment

  • Add a New AutoAI Experiment
    Click Add to Project and choose AutoAI Experiment. We need to associate a Machine Learning instance, so click Associate a Machine Learning Service.
  • Creating a Machine Learning Instance
    We need to create a new Machine Learning instance from Watson Studio. Click New Service and choose Machine Learning.
  • Associate the Created Instance
    Select the created instance and click Associate Service. Now go back to the experiment creation page, click Reload, then click Create. Your AutoAI experiment is ready; let's start adding data.
  • Upload Dataset
    Drag and drop the CSV dataset and choose what you need to predict. You can also change the experiment settings: choose the algorithms to run, pick the metric to optimize, and adjust the test/training data split. Then click Run Experiment.

Generating Model

  • Your model is being generated
    AutoAI automates all the tasks. It reads the dataset, splits out holdout data, and after preprocessing selects the best model and builds several pipelines. All the algorithms generate pipelines, and AutoAI ranks them by performance in a leaderboard so you can compare them. You can also choose the metric used to rank them; root mean squared error was recommended in my case. It only takes a couple of minutes for AutoAI to generate the best model for your dataset.
  • Pipeline comparison
    You can compare the different pipelines and rank them according to various metrics. By clicking on each pipeline, you can see a detailed evaluation of that model.
  • Save Model
    Now that you have successfully created your model, let's save it. Click Save as, choose Model, give your model a name, and click Create.


Deploying the Model

Deployment is really fast and easy. After creating your model, you'll need to deploy it in order to make predictions and connect it with your app.

  • Promote to Deployment Space
    Go to Projects, where you can view the saved model under Assets. Choose the model and click Promote to deployment space.
  • Create Deployment Space
    Click on New Space to create a deployment space.
  • Now Promote
    Your deployment space is created. Now promote your model to the deployment space. Click on promote.
  • Go to your Deployment Space
    Choose deployment from the menu and choose your space.
  • Choose your Model
    You’ll be able to see the models in the assets, click on that.
  • Create Deployment
    To deploy your model, click on the Create deployment.
  • Creating Deployment
    Choose online and give a name to the deployment.
  • Testing the Model
    Your model is now successfully deployed. Now you can input values and get predictions from AutoAI.

Connecting App with Deployed Model

We have successfully deployed our model. Now we need to connect the endpoints of the deployment with our app so that users can enter data and get predictions.

  • Creating API Key
    For creating an API key, go to Identity and Access Management (IAM): click Manage in the top menu, then select Access (IAM).
  • At IAM, go to API keys, then click Create an IBM Cloud API key. Name it and add a description, then click Create. Your API key is ready; copy it and use it in your app.
  • Accessing Endpoint
    You can view the endpoint for the deployment under the API reference, along with the required code snippets. The scoring URL consists of the endpoint, the deployment ID, and the version. We can use it in our app to access the model and make predictions.
  • Sending Authentication Request
    Using the API key, send an authentication request to IBM Cloud's IAM token endpoint.
  • Receiving Access Token
    As a response to the earlier auth request, you’ll get an access token.
  • Scoring Request
    Using the access token and the prediction input, along with the deployment ID and the version, we send another request to get the prediction.
  • Prediction Result
    As a response to the scoring request, we’ll receive a value and that’ll be the prediction result.
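The steps above can be sketched in Python using only the standard library. The token endpoint and the v4 scoring URL pattern follow IBM Cloud's public API conventions; the API key, region endpoint, deployment ID, version date, and field names are placeholders you must replace with your own values:

```python
import json
import urllib.parse
import urllib.request

# IBM Cloud's public IAM token endpoint.
IAM_TOKEN_URL = "https://iam.cloud.ibm.com/identity/token"


def build_scoring_url(endpoint: str, deployment_id: str, version: str) -> str:
    """Compose the scoring URL from the endpoint, deployment ID, and version."""
    return f"{endpoint}/ml/v4/deployments/{deployment_id}/predictions?version={version}"


def get_access_token(api_key: str) -> str:
    """Exchange an IBM Cloud API key for a short-lived IAM access token."""
    data = urllib.parse.urlencode({
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": api_key,
    }).encode()
    req = urllib.request.Request(
        IAM_TOKEN_URL, data=data,
        headers={"Content-Type": "application/x-www-form-urlencoded"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]


def score(scoring_url: str, token: str, fields: list, values: list) -> dict:
    """Send one scoring request and return the parsed prediction JSON."""
    body = json.dumps(
        {"input_data": [{"fields": fields, "values": [values]}]}).encode()
    req = urllib.request.Request(scoring_url, data=body, headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example usage (requires a real API key and a live deployment):
#   token = get_access_token("YOUR_API_KEY")
#   url = build_scoring_url("https://us-south.ml.cloud.ibm.com",
#                           "YOUR_DEPLOYMENT_ID", "2021-05-01")
#   print(score(url, token, ["latitude", "longitude"], [37.77, -122.42]))
```

The response JSON contains a `predictions` array whose first entry holds the predicted value for the submitted row.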


AutoAI definitely improves a data scientist's workflow. I have developed around three models using AutoAI, and from my experience it's really easy to use and takes care of all the hard tasks, so we can be creative and focus on the things that matter most. I have learned a lot from working on this and gained insight into various algorithms. AutoAI is a perfect tool for data scientists who don't have much coding experience, as it doesn't require a single line of code to create the best model for your dataset.

Simple and powerful tool: AutoAI is really easy to use, and I'm amazed by its capability. With a couple of clicks, we can generate a model. That's really revolutionary, and it empowers citizen data scientists to explore more and play with data.

Free to get started: the tool is offered for free with a decent amount of CUH (capacity unit hours, a measure of computing power). So we can try it for free, and as our needs grow we can pay for the additional computing power required.

Great UI/UX: IBM's design team has done a great job on the UI/UX for AutoAI. The visualization of model creation is really cool, and the user interface is easy to navigate and user-friendly.

All-in-one tool: AutoAI covers everything from model creation to deployment, and all of it is really easy to do. We can also save our models as notebooks, so we can visualize all the results.

Overall, my experience with AutoAI was great. The hard, mundane tasks no longer feel like a burden. Thanks to IBM for revolutionizing this field with ground-breaking tools like this.

Thank you!
