
How AI Can Spot Wildfires Faster Than Humans

by Louis Bouchard, July 15th, 2021

Too Long; Didn't Read

Wildfires are increasingly present in modern society, mainly caused by heat waves, lightning, droughts, climate change, or even human actions like car fires and cigarette butts. The most common problem is that fires are spotted too late, when they have already spread widely. Thanks to AI, we may be able to spot them much sooner and act earlier: an AI-based system has reduced fire detection time from an average of 40 minutes to less than five minutes. It's a cool and practical application of AI, and it has already been deployed in Brazil.


Wildfires are more and more present in modern society, mainly caused by heat waves, lightning, droughts, climate change, or even human actions like car fires and cigarette butts. We've seen it everywhere recently: Brazil, Australia, the United States, Canada, etc., destroying plant, human, and animal life, damaging property, and contributing to global warming through the high amount of CO2 produced.

But thanks to AI, we may be able to spot these fires much sooner and take action earlier.

Here's how artificial intelligence can be used to reduce fire detection time from an average of 40 minutes to less than five minutes!

Watch the video

References

►Read the full article: https://www.louisbouchard.ai
►My Newsletter (A new AI application explained weekly to your emails!): https://www.louisbouchard.ai/newsletter/
►Sintecsys, 2021: https://umgrauemeio.com/
►Omdena's article "Artificial Intelligence For Wildfires: Scaling a Solution To Save Lives, Infrastructure, and The Environment", December 2020: https://omdena.com/blog/artificial-intelligence-wildfires/
►Omdena's article "Leveraging AI to fight wildfires", 2021: https://omdena.com/projects/ai-wildfires

Video Transcript

00:00

Wildfires are more and more present in modern society, mainly caused by heat waves, lightning, droughts, climate change, or even human actions like car fires and cigarette butts. We've seen it everywhere recently: Brazil, Australia, the United States, Canada, etc., destroying plant, human, and animal life, damaging property, and contributing to global warming through the high amount of CO2 produced.

00:24

These countries all have walls of video feeds like this in their county fire emergency centers, where they can see if something is going on. The most common problem is that fires are spotted too late, when they have already spread widely. This is because you cannot have somebody staring at that wall all day, waiting to spot smoke or fire.

00:43

And now you see where this is going; that's where artificial intelligence comes into play. With a good enough AI, you can have something even better: it will be watching all of these cameras simultaneously, all day, and will automatically ping the authorities within a split second as soon as it detects something suspicious. Best of all, it can save the video frames containing the suspicious smoke and send them along with the ping, together with a recommended firefighting action, making the process much more efficient; the worst case is a false alert that the authorities decide to ignore.
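For illustration only, here is a minimal Python sketch of what such a monitoring loop could look like. The camera objects, the model, and the notify callback are hypothetical placeholders, not Sintecsys' actual system.

```python
import time

ALERT_THRESHOLD = 0.8  # hypothetical confidence cut-off for pinging authorities

def monitor(cameras, model, notify, interval_seconds=30):
    """Poll every camera feed, run the smoke detector, and alert on high confidence."""
    while True:
        for camera in cameras:
            frame = camera.read()              # latest frame from this tower
            confidence = model.predict(frame)  # probability that smoke is present
            if confidence >= ALERT_THRESHOLD:
                # Send the suspicious frame with the ping so responders can verify it;
                # a false alert can simply be ignored.
                notify(camera_id=camera.id, frame=frame, confidence=confidence)
        time.sleep(interval_seconds)           # re-check all feeds periodically
```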

01:16

It's such a cool and practical application of AI, and it has already been deployed in the real world! Indeed, such an AI-based system has been running in Brazil for the past three years, and it reduced fire detection time from an average of 40 minutes to less than five. This system, built by a Brazilian company called Sintecsys, started with cameras installed on top of 50 towers distributed across Brazil. With the help of Omdena's AI community, where many teams were assembled to attack this task, they managed to build the best AI model for this use case.

01:50

The first main problem they had to face was that the images sent to the model came from different times of the day. This means not only that the luminosity differs between day and night, which is a massive factor for a model since it changes the whole image and makes it hard to understand what's going on, but also that daytime fires are most easily detected through smoke, while nighttime fires are most easily detected through live flames, for obvious reasons.

02:15

To attack this issue, teams could either build two separate models, one for the night and one for the day, or build one larger model and assume that the smoke is also detectable at night. The latter could work with sufficient training data and enough parameters to learn from this data. Of course, the first approach is problematic since there is still the sunset and dawn problem, where both live flames and smoke could be detected. They do not mention how they decided to build the final model, but both approaches were tested by different teams from Omdena's AI community. In your opinion, what would be the best solution in this situation? I would assume a large enough model would be their best shot at fixing the dawn and sunset boundary problem without training a separate model for each sub-case.
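To make the trade-off concrete, here is a minimal Python sketch of both options; `day_model`, `night_model`, and `unified_model` are hypothetical, already-trained classifiers, and the 6:00-18:00 boundary is deliberately crude, which is exactly the dawn and dusk weakness mentioned above.

```python
from datetime import time as dtime

def detect_with_two_models(image, captured_at, day_model, night_model):
    """Option 1: route each image to a day or night specialist by capture time."""
    is_day = dtime(6, 0) <= captured_at.time() <= dtime(18, 0)  # crude day/night boundary
    model = day_model if is_day else night_model
    return model.predict(image)

def detect_with_one_model(image, unified_model):
    """Option 2: one larger model trained on both day and night images,
    so dawn and dusk frames don't fall awkwardly between two specialists."""
    return unified_model.predict(image)
```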

03:00

They had to face a second problem: differentiating real smoke from 'smoke-like' anomalies such as camera glare, fog, clouds, and smoke released from boilers that appear in the images. The final problem was the low definition of the images they received from the cameras. The model initially received heavily compressed images sent from the cameras, so they had to upscale them before sending them to the model.
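The article does not say which upscaling method was used; as a sketch, a simple bicubic resize with Pillow would be the cheapest baseline, with learned super-resolution models as a heavier alternative.

```python
from PIL import Image

def upscale(path, factor=2):
    """Upscale a heavily compressed camera frame before running the detector."""
    img = Image.open(path).convert("RGB")
    new_size = (img.width * factor, img.height * factor)
    # Bicubic interpolation is a cheap baseline; the exact method Sintecsys
    # and Omdena used is not specified in the article.
    return img.resize(new_size, Image.BICUBIC)
```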

03:23

As you know by now, AI is hugely data dependent, so they had to have the best training data possible, in terms of both quality and quantity, to solve these problems successfully. Such a model can only be as good as the data it is given during training, so the dataset had to be very broad and contain all the artifacts that may appear in the real world, like the clouds, fog, and camera glare we just discussed. To start, they had 20 people manually labeling 9,000 images as precisely as possible. This means they manually went through all the images, painting over the smoke to help the model understand what smoke looks like. This is undoubtedly the most expensive and tedious task, but it's crucial for building most deep learning-based models used in real-world applications. If you are not familiar with data annotation, I invite you to watch the short video I made last year explaining it.
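Those painted annotations are essentially per-pixel smoke masks. As a small sketch, assuming the masks are stored as grayscale images where painted pixels are non-zero (a format the article doesn't specify), here is one way to turn them into image-level "smoke / no smoke" labels for a classifier.

```python
import numpy as np
from PIL import Image

def mask_to_label(mask_path, min_smoke_pixels=50):
    """Convert a hand-painted smoke mask into a binary image-level label."""
    mask = np.array(Image.open(mask_path).convert("L"))   # grayscale mask
    smoke_pixels = int((mask > 0).sum())                   # count of painted pixels
    return 1 if smoke_pixels >= min_smoke_pixels else 0    # 1 = smoke present
```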

04:16

After doing so, they could start diving into how to attack smoke detection, which means finding the best way to detect whether or not there is smoke in a picture. We do not have details on the exact model they chose. Still, they shared that they ended up using a convolutional neural network (CNN) approach, with some modifications to the images before sending them to the network.

04:38

As you may be aware, CNNs are a powerful deep learning architecture for vision-based applications in which, simply put, the image is iteratively compressed, focusing on the information we need while removing redundant and uninformative spatial features, and ending up with a confidence score telling us whether or not the image contains what we were looking for. This focus can be on anything, from detecting cats, humans, or objects, to detecting smoke in this case. It all depends on the data the network is trained on, but the overall architecture and way of working stay the same.

05:13

You can see CNNs as compressing the image, focusing on specific features at every step, and getting more compressed and more relevant to what we want the deeper we go into the network. This is done using filters that slide over the whole image, each focusing on specific features like edges with particular orientations. This process is repeated with multiple filters, which together make up one convolution, and those filters are what is learned during training. After the first convolution, we get a new, smaller image for each filter, which we call a feature map, each one focusing on specific edges or features. They all look like weird, blurry, zoomed versions of the image with an accent on specific features, and we can use as many filters as needed to optimize for our task.
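To make the idea of filters and feature maps concrete, here is a tiny PyTorch sketch; the layer sizes are illustrative, not the production network.

```python
import torch
import torch.nn as nn

image = torch.randn(1, 3, 224, 224)   # one RGB image, 224x224 pixels
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=2)

feature_maps = conv(image)             # 16 learned filters slide over the image
print(feature_maps.shape)              # torch.Size([1, 16, 111, 111]): 16 smaller feature maps
```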

06:01

Then, each of these new images goes through the same process, repeated over and over until the image is so compressed that we have many tiny feature maps, optimized for the information we need and adapted to the many different images our dataset contains. Lastly, these tiny feature maps are sent into what we call "fully connected layers" to extract the relevant information using weights. These last few layers contain fully connected weights that learn which features the model should focus on based on the images it is fed, and they pass the information forward for our final classification. This process further compresses the information and finally tells us whether there is smoke or not, with a confidence level.
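Since the exact architecture was not published, here is a minimal PyTorch classifier along the lines just described, stacked convolutions followed by fully connected layers ending in a single smoke-confidence output; every layer size here is an illustrative guess.

```python
import torch
import torch.nn as nn

class SmokeClassifier(nn.Module):
    """Toy CNN: stacked convolutions compress the image into feature maps,
    then fully connected layers turn them into a single smoke confidence."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),             # shrink to 4x4 feature maps
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 1),                   # one logit: smoke vs. no smoke
        )

    def forward(self, x):
        x = self.features(x)
        logit = self.classifier(x)
        return torch.sigmoid(logit)              # confidence between 0 and 1

model = SmokeClassifier()
confidence = model(torch.randn(1, 3, 224, 224))  # dummy image for a quick sanity check
```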

06:44

So, assuming the model is well trained, the final result is a model that focuses its compression on smoke features in the image, which is why it is so appropriate for this task, or any task involving images. If there is smoke, the filters will produce high responses, and we will end up with a network telling us with high confidence that there is smoke in the image. If there is no smoke, the compression will produce low responses, letting us know that nothing is going on in the picture regarding what we are trying to detect, which is a fire in this case. It will also produce results with a confidence score anywhere in between "no smoke" and "evident smoke."
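In an operational system, that continuous confidence is typically mapped to a decision with thresholds; here is a hypothetical sketch, where the cut-off values are made up and would be tuned on validation data.

```python
def triage(confidence, alert_threshold=0.9, review_threshold=0.5):
    """Map the model's smoke confidence to an operational decision."""
    if confidence >= alert_threshold:
        return "alert"   # ping the authorities immediately with the frame
    if confidence >= review_threshold:
        return "review"  # ambiguous: flag for a human operator to check
    return "ignore"      # treat as no smoke
```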

07:21

As they shared, the final model detected smoke in images with an impressive 95% to 97% accuracy! Of course, the model isn't perfect yet, and as I said, it is very dependent on its training data, like most deep learning-based approaches. This means that it may not be as good when applied to different types of environments, and we may need more data to adapt the model to a new environment.

07:46

Fortunately, there are many ways to adapt a model with very little available data, which we call fine-tuning, and it's done with new data different from the data used in the original training. So, starting from this strong baseline, you won't need to label 9,000 images for every new country you want to run the model in. For example, this model, trained on images of Brazilian forests, may need to be trained again on a few hundred to a few thousand more images from Canadian forests if we would like to use it in Canada.
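A common way to do this kind of fine-tuning, sketched here in PyTorch and not Omdena's published procedure, is to freeze the convolutional filters learned on the Brazilian data and retrain only the final layers on the smaller new dataset; `model` is assumed to be the SmokeClassifier-style network sketched earlier, and `new_forest_loader` a small hypothetical DataLoader of newly labeled images.

```python
import torch.nn as nn
import torch.optim as optim

# Freeze the convolutional feature extractor learned on Brazilian forests.
for param in model.features.parameters():
    param.requires_grad = False

# Retrain only the fully connected head on the small new dataset.
optimizer = optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.BCELoss()  # the model already outputs a probability

for images, labels in new_forest_loader:
    optimizer.zero_grad()
    predictions = model(images).squeeze(1)        # smoke confidence per image
    loss = criterion(predictions, labels.float())
    loss.backward()
    optimizer.step()
```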

08:16

This is an amazing real-world application of machine learning, with a great use case that will benefit everyone, especially these days with so many wildfires around the world. Before ending this video: are there any environment-related applications where you could see AI helping? Let me know in the comments, and I will look into covering them! As always, you can find my blog article and newsletter in the description below, where I send out a new AI application explained every week, and more. Thank you for watching.