Q: When a tree falls in the forest and nobody is around to hear it, does it make a sound?
A: Yes.
That’s an easy question to answer. It’s not a 3000-year-old philosophical conundrum with no answer. Sound is simply a pressure wave moving through some medium (e.g. air, or the ground). A tree falling in the forest will create a pressure wave whether or not there is someone there to listen to it. It pushes against the air, for one. And it smacks into the ground (or other trees), for two. These will happen no matter who is around. As long as that tree doesn’t fall over in the vacuum of space (where there is nothing to transmit the sound waves and nothing to crash into), that tree will make “a sound”. (There are also sounds that humans cannot hear. Think of a dog whistle. Does that sound not exist because a human can’t hear it?)
What if it’s not a tree? What if it’s 120 million metric tons of rock falling onto a glacier? Does that make a sound? To quote a former governor, “You betcha!” It even causes a 2.9 magnitude earthquake!
That’s right! On 28 June 2016, a massive landslide occurred in southeast Alaska. It was picked up on seismometers all over Alaska. And, a pilot who regularly flies over Glacier Bay National Park saw the aftermath:
If you didn’t read the articles from the previous links, here’s one with more (and updated) information. And, according to this last article, rocks were still falling and still making sounds (“like fast flowing streams but ‘crunchier'”) four days later. That pile of fallen rocks is roughly 6.5 miles long and 1 mile wide. And, some of the rock was pushed at least 300 ft (~100 m) uphill on some of the neighboring mountain slopes.
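Curious how the landslide's energy stacks up against that 2.9 magnitude earthquake? Here's a hedged back-of-the-envelope sketch; the ~1 km average drop height is my assumption (not a figure from the reports), and I'm using the standard Gutenberg-Richter energy relation:

```python
import math

# Back-of-the-envelope: compare the landslide's gravitational potential
# energy with the radiated energy of a magnitude-2.9 earthquake.
# Assumption: an average drop height of ~1 km (illustrative, not measured).

mass_kg = 120e6 * 1000   # 120 million metric tons, in kg
g = 9.81                 # gravitational acceleration, m/s^2
drop_m = 1000.0          # assumed average fall height, m

potential_energy = mass_kg * g * drop_m   # joules

def quake_energy(magnitude):
    """Radiated seismic energy (J) from the Gutenberg-Richter relation."""
    return 10 ** (1.5 * magnitude + 4.8)

e_quake = quake_energy(2.9)
print(f"Potential energy released: {potential_energy:.2e} J")
print(f"M2.9 seismic energy:       {e_quake:.2e} J")
print(f"Seismic fraction:          {e_quake / potential_energy:.1e}")
```

Under those assumptions, only a tiny fraction of the fall's energy ends up as seismic waves, which is typical: most of it goes into crushing rock, heat and, yes, sound.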
Of course, who needs pilots with video cameras? All we need is a satellite instrument known as VIIRS to see it. (That, and a couple of cloud-free days.) First, let's take a look at an ultra-high-resolution Landsat image (that I stole from the National Park Service website and annotated):
Of course, you’ll want to click on that image to see it at full resolution. The names I’ve added to the image are the names of the major (and a few minor) glaciers in the park. The one to take note of is Lamplugh. Study its location, then see if you can find it in this VIIRS True Color image from 9 June 2016:
Anything? No? Well, how about in this image from 7 July 2016:
I see it! If you don’t, take a look at this animated GIF made from those two images:
The arrow is pointing out the location of the landslide. Of course, with True Color images, it can be hard to tell what is cloud and what is snow (or glacier) and with VIIRS you’re limited to 750 m resolution. We can take care of those issues with the high-resolution (375 m) Natural Color images:
Make sure you click on it to see the full resolution. If you want to really zoom in, here is the high-resolution visible channel (I-1) imagery of the event:
You don’t even need an arrow to point it out. Plus, if you look closely, I think you can even see some of the dust coming from the slide.
That’s what 120 million metric tons of rock falling off the side of a mountain looks like, according to VIIRS!
It’s not everyday that one comes across something that is truly surprising. But, here’s something I recently came across that surprised me: a website on ghosts, angels and demons with useful scientific information. Of relevance here is the section on lens flare and ghosting. Although, maybe it shouldn’t be surprising. If you’re looking for “real” ghosts, you have to be able to spot the “fake” ones.
Simply put, lens ghosting (or optical ghosting) is a consequence of the fact that no camera lens in existence perfectly transmits 100% of the light incident upon it. Some of the light is reflected from the back of the lens to the front, and then back again, as in the first diagram on this website. When the source of this light is bright enough, the component that bounces around due to internal reflections within the lens may be as bright as (or brighter than) the rest of the incoming light, and it will show up on the film (for you old fogies) or be recorded by the array of detector elements that converts light into an electric signal (pretty much any camera purchased after 2004). That leads to the phenomena known as “flaring” and “ghosting”.
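To put rough numbers on that internal-reflection story, here's a quick Fresnel-based sketch. The per-surface reflectance comes from the standard Fresnel formula for uncoated glass at normal incidence; the sun-to-scene brightness ratio is an illustrative assumption:

```python
# A rough Fresnel estimate of why ghosts can outshine the scene.
# At normal incidence, an uncoated glass surface (n ~ 1.5) reflects
# R = ((n-1)/(n+1))^2 of the incident light. A ghost requires two
# internal reflections, so it carries roughly R^2 of the source's light.

n_glass = 1.5
R = ((n_glass - 1) / (n_glass + 1)) ** 2   # ~4% per surface
ghost_fraction = R ** 2                    # ~0.16% after two bounces

# Assumed (illustrative) ratio: the sun is ~100,000x brighter than
# a typical daytime scene.
sun_vs_scene = 1e5

print(f"Per-surface reflectance: {R:.1%}")
print(f"Ghost fraction of source: {ghost_fraction:.2%}")
print(f"Ghost vs. scene brightness: {ghost_fraction * sun_vs_scene:.0f}x")
```

Even at a fraction of a percent, a ghost of something as bright as the sun can easily overwhelm the rest of the picture, which is why anti-reflective coatings exist.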
We’ve all seen pictures or movies that contain these artifacts. Here’s an example of flaring. Here’s an example of ghosting. And here’s both in the same image:
Professional photographers use flaring and ghosting to their advantage. Amateurs wonder why it ruined their picture.
In the particular case of “ghosts”, the light you see often takes on the shape of the aperture, which gives you polygonal or circular shapes like these:
I hate to be a stickler but those are pentagons, not hexagons. (Keep on your toes!) Flaring and ghosting are so prevalent in cameras of all kinds that animated movies replicate them in order to look “more real.” And, they are two examples of the many artifacts produced by cameras. (Take a look at the differences between CCD and CMOS detectors, as an example of others.)
Why bring this up on a blog about a weather satellite? Because the VIIRS Day/Night Band is, in a manner of speaking, just a really high-powered CCD camera. It, too, is subject to ghosts. (More so than other VIIRS bands because of its high sensitivity to low levels of light.)
Before we get to that, see if you notice anything unusual about this Day/Night Band image:
Those with photographic memories will recognize this image from an earlier post about the N-ICE field campaign in 2015 (which I hid in one of the animations). See that row of 6 bright lights north of Svalbard? Those aren’t boats and they’re not optical ghosts – they are 6 images of the same satellite (using the more liberal definition of satellite: 2a).
Don’t believe me? Here’s the explanation: VIIRS is on a satellite that orbits the Earth at about 835 km. That means two things: 1) there are plenty of satellites (or bits of space junk) that orbit at lower altitudes; and 2) every time a satellite crosses over to the nighttime side of the terminator, there is a period of time that the object is still illuminated by the sun before it passes behind the Earth’s shadow. And, there’s a third thing to consider: lower orbiting objects travel faster than higher orbiting objects. If one of these lower orbiting satellites should pass through the field-of-view of VIIRS while it is still illuminated by the sun, it can reflect light back to VIIRS, where the Day/Night Band can detect it. It’s a form of glint, like sunglint or moonglint. If it moves only slightly faster than VIIRS, it will be in the field-of-view for multiple scans, like in the image above.
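The "lower orbits are faster" claim is easy to check with the circular-orbit speed formula. This is a simplified sketch (circular orbits, standard gravitational parameter for Earth):

```python
import math

# Circular-orbit speed: v = sqrt(mu / r), where mu = GM for Earth
# and r is measured from the Earth's center.

MU = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # mean Earth radius, m

def orbital_speed(altitude_km):
    """Circular orbital speed (m/s) at a given altitude above the surface."""
    r = R_EARTH + altitude_km * 1e3
    return math.sqrt(MU / r)

v_viirs = orbital_speed(835)   # Suomi-NPP / VIIRS altitude
v_low = orbital_speed(400)     # a lower object (roughly ISS altitude)
print(f"835 km: {v_viirs:.0f} m/s, 400 km: {v_low:.0f} m/s")
# A lower satellite moves a few hundred m/s faster, so it slowly
# overtakes VIIRS and can linger near the field-of-view for several scans.
```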
It happened again in the same area 4 days later, only with 5 bright spots this time:
With all the striping that is present in the above image, you can clearly see the outline of each VIIRS scan. Note the relative position of the bright light in each scan in which it is imaged. See how it moves in the along-track dimension from one edge of the scan to the other? (The along-track dimension is basically perpendicular to the scan lines.)
Here are the two previous images zoomed in at 400%:
If this “satellite” reflects a high amount of light back to VIIRS, it can cause optical ghosts like in this image:
The ghosting is obvious. The “satellite” is less obvious, but you should be able to see the six smaller dots indicating its location. Eagle-eyed observers may click on it to see the full resolution image and note the two partial dots at either end of the row, indicating where this “satellite” was only partially within the VIIRS field-of-view. Even when the “satellite” was not in the field-of-view of VIIRS, it still caused ghosts – just like how the sun doesn’t have to be in a camera’s field-of-view to cause flares and ghosts.
The yellow line demarcates where the solar zenith angle is 108° on the Earth’s surface and the green line demarcates the lunar zenith angle of 108°. The yellow line is the limit of astronomical twilight. (Astronomical twilight exists to the right of that line.) Even though the surface is dark where this ghosting occurs (astronomical night), satellites are still illuminated by the sun (and moon) in this region. In fact, my back-of-the-envelope calculation indicates that VIIRS (at ~835 km) doesn’t pass into the Earth’s shadow until the sub-satellite point reaches a solar zenith angle of ~118°. (As an aside, the International Space Station is much lower [~400 km], so it is illuminated only to a solar zenith angle of ~110°.)
Here is the above image zoomed in at 200%:
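If you want to reproduce that back-of-the-envelope shadow calculation, here's a minimal sketch that treats the Earth's shadow as a simple cylinder and ignores atmospheric refraction:

```python
import math

# A satellite at altitude h stays sunlit until the sub-satellite solar
# zenith angle reaches roughly 90 deg + acos(R / (R + h)), assuming a
# spherical Earth, a cylindrical shadow and no refraction.

R_EARTH_KM = 6371.0

def shadow_entry_sza(altitude_km):
    """Solar zenith angle (deg) at which a satellite enters Earth's shadow."""
    return 90.0 + math.degrees(
        math.acos(R_EARTH_KM / (R_EARTH_KM + altitude_km)))

print(f"VIIRS (~835 km): {shadow_entry_sza(835):.0f} deg")  # ~118 deg
print(f"ISS   (~400 km): {shadow_entry_sza(400):.0f} deg")  # ~110 deg
```

These reproduce the ~118° and ~110° figures quoted above.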
Now that you’ve passed the crash course, see if you can earn your PhD. How many ghosts can you find in this image from last month? Make sure you click on it to see it in full resolution:
Where is the “satellite” in this case? What is the “real” image? And what are the “ghosts”? Are they even ghosts? As shown on the Angels & Ghosts website, objects that are out of focus are not necessarily ghosts – either “real” ghosts or “fake” ones. VIIRS is focused on the Earth’s surface (835 km away), so if another satellite were orbiting the Earth just a few kilometers lower in altitude, it would definitely appear out of focus and it would have a very similar speed to VIIRS, so it could be causing ghosts in the Day/Night Band for a long time, as you see here.
Here are all the ghosts that I found:
But, is that what we’re seeing? Are we seeing one satellite? Or is it a clutter of space junk? Did VIIRS just come close to a collision with something (because we’re seeing nearby out-of-focus objects)? Or are they optical ghosts from an object well below VIIRS, so we don’t have to worry about it? Maybe it’s a UFO! What about that!?
For once, I don’t have all the answers. But, the truth is out there! (Cue music…)
UPDATE (6/24/2016): Thanks to Dan L. for pointing out an instance of the high-resolution Landsat-8 Operational Land Imager quite clearly spotting the lower-orbiting International Space Station. With a different instrument scan strategy, it produces a different kind of artifact: tracking the ISS motion from one band to the next!
Take a second to think about what would happen if Florida was hit by four hurricanes in one month.
Would the news media get talking heads from both sides to argue whether or not global warming is real by yelling at each other until they have to cut to a commercial? Would Jim Cantore lose his mind and say “I don’t need to keep standing out here in this stuff. I quit!”? Would we all lose our minds? Would our economy collapse? (1: yes. 2: every man has his breaking point. 3: maybe not “all”. 4: everybody panic! AHHH!)
It doesn’t have to just be Florida. It could be four tropical cyclones making landfall anywhere in the CONUS (and, maybe, Hawaii) in a 1-month period. The impact would be massive. But, what about Alaska?
Of course, Alaska doesn’t get “tropical cyclones” – it’s too far from the tropics. But, Alaska does get monster storms that are just as strong, which may be the remnants of tropical cyclones that undergo “extratropical transition”. Or, they may be mid-latitude cyclones or “Polar lows” that undergo rapid cyclogenesis. When they are as strong as a hurricane, forecasters call them “hurricane force” (HF) lows. And guess what? Alaska has been hit by four HF lows in a 1-month period (12 December 2015 – 6 January 2016).
With very-many HF lows, some of which were ultra-strong, we might call them VHF or UHF lows. (Although, we must be careful not to confuse them with the old VHF and UHF TV channels, or the Weird Al movie.) In that case, let’s just refer to them as HF, shall we?
Since Alaska is far enough north, polar orbiting satellites like Suomi-NPP provide more than 2 overpasses per day. Here’s an animation from the VIIRS Day/Night Band on Suomi-NPP:
It’s almost like a geostationary satellite! (Not quite, as I’ll show later.) This is the view you get with just 4 images per day. (The further north you go, the more passes you get. The Interior of Alaska gets 6-8 passes, while the North Pole itself gets all 15.) Seeing the system wrap up into a symmetric circulation would be a thing of beauty, if it weren’t so destructive. Keep in mind that places like Adak are remote enough as it is. When a storm like this comes along, they are completely isolated from the rest of Alaska!
Here’s the same animation for the high-resolution longwave infrared (IR) band (I-5, 11.5 µm):
You may have heard of Himawari and its primary instrument, the Advanced Himawari Imager (AHI). AHI can be thought of as a geostationary version of VIIRS, and it’s nearly identical to what GOES-R will provide. Well, Himawari’s field of view includes the Aleutian Islands, and it takes images of the full disk every 10 minutes. Would you like to see how this storm evolved with 10 minute temporal resolution? Of course you would.
Here is a loop of the full disk RGB Airmass product applied to Himawari. Look for the storm moving northeast from Japan and then rapidly wrapping up near the edge of the Earth. This is an example of something you can’t do with VIIRS, because VIIRS does not have any detectors sensitive to the 6-7 µm water vapor absorption band, which is one of the components of the RGB Airmass product. The RGB Airmass and Geocolor products are very popular with forecasters, but they’re too complicated to go into here. You can read up on the RGB Airmass product here, or visit my colleague D. Bikos’ blog to find out more about this storm and these products.
You might be asking how we know what the central pressure was in this storm. After all, there aren’t many weather observation sites in this part of the world. The truth is that it was estimated (in the same way the remnants of Typhoon Nuri were estimated) using the methodology outlined in this paper. I’d recommend reading that paper, since it’s how places like the Ocean Prediction Center at the National Weather Service estimate mid-latitude storm intensity when there are no surface observations. I’ll be using their terminology for the rest of this discussion.
Notice that this storm is much more elongated than the first one. Winds with this one were only in the 60-80 mph range, making it a weak Category 1 HF low.
Storm #3 hit southwest Alaska just before New Year’s, right at the same time the Midwest was flooding. This one brought 90 mph winds, making it a strong Category 1 HF low. This one is a bit difficult to identify in the Day/Night Band. I mean, how many different swirls can you see in this image?
(NOTE: This was the only storm of the 4 to happen when there was moonlight available to the DNB, which is why the clouds appear so bright. The rest of the storms were illuminated by the sun during the short days and by airglow during the long nights.) Of the three big swirls, the one to focus on is the one closest to the center of the image (just above and right of center). It shows up a little better in the IR:
The colder (brighter/colored) cloud tops are the clue that this is the strongest storm, since all three have similar brightness (reflectivity) in the Day/Night Band. If you look close, you’ll also notice that this storm was peaking in intensity (reaching mature stage) right as it was making landfall along the southwest coast of Alaska.
This storm elongated as it filled in and then retrograded to the west over Siberia. There aren’t many hurricanes that do that after heading northeast!
So, there you have it: 4 HF lows hitting Alaska in less than 1 month, with no reports of fatalities (that I could find) and only some structural damage. Think that would happen in Florida?
PS: I know this is a VIIRS blog, but if you want to look at CIRA’s Himawari data products, we have both full disk and North Pacific (including the Aleutians) sectors available in near real-time on this website.
Minnesota calls itself the “Land of 10,000 Lakes” – they even put it on their license plates. To an Alaskan, it seems funny to brag about that since Alaska has over 3,000,000 lakes. That’s like a Ford Escort bragging to a Bugatti Veyron that it can achieve highway speeds!
Alaska has Minnesota beat in one other area, and it’s one they’re definitely not putting on their license plates: the number of wildfires. OK, so it may not be 10,000 fires as my title implies, but there sure are a lot:
That map shows the number of known wildfires in Alaska on 23 June 2015 and was produced by the Alaska Interagency Coordination Center (AICC). To say that 2015 has been an active fire season in Alaska is an understatement. That would be like saying a Bugatti Veyron is a vehicle capable of achieving highway speeds! (By the way, if anyone in the audience works at Bugatti, and would like to compensate me for this bit of free advertising by, say, giving me a free Veyron, it would be much appreciated.)
Imagine being the person responsible for keeping track of all these fires! (That’s what the good folks at AICC do on a daily basis. There’s also a graduate student at the University of Alaska-Fairbanks working on this very same problem, who has come up with this solution.) This is being called the worst fire season in Alaska since, well, the beginning of recorded history. 2004 was the worst on record, but 2015 is on pace to shatter that. By 26 June 2015, Alaska’s fires had burned over 1.5 Rhode Islands worth of land area (or, alternatively, 0.8 Delawares). By 2 July, the total burned acreage had achieved 3 Rhode Islands (1.6 Delawares; 2/3 of a Connecticut). On 7 July, the total hit 3,000,000 acres (1 Connecticut; 2.5 Delawares; 5 Rhode Islands). (2004 ended at 2 Connecticuts worth of land burned, so there is a chance the pattern could switch and Alaska will get enough rain to fall short of the record, but at this rate, that seems unlikely.)
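If you'd like to do your own conversions into state-sized units, here's a toy converter. The state land areas below are rounded reference values (my numbers, not from any fire agency), so the results are approximate:

```python
# A playful burned-area converter. The acreage figure comes from the
# 7 July 2015 total; state land areas are approximate rounded values.

ACRES_PER_KM2 = 247.105

# Approximate land areas in km^2 (rounded reference values).
STATES_KM2 = {
    "Rhode Island": 2_678,
    "Delaware": 5_047,
    "Connecticut": 12_542,
}

def in_state_units(acres, state):
    """Convert an acreage into multiples of a state's land area."""
    km2 = acres / ACRES_PER_KM2
    return km2 / STATES_KM2[state]

burned_acres = 3_000_000  # total burned acreage on 7 July 2015
for state in STATES_KM2:
    print(f"~{in_state_units(burned_acres, state):.1f} {state}s")
```

Depending on whether you use land area or total area for each state, you'll land within rounding distance of the figures quoted above.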
It’s interesting to see how this came to be, given that there were only a couple of fires burning in the middle of June:
If you followed this blog last year, you should know about this RGB composite, called “Fire Temperature.” If not, read this and this. Click to the full size image and see if you can see the six obvious fires. (Two are in the Yukon Territory.) Now, count up the number of fires you see in this image from just one week later:
Why so many fires all of a sudden? Well, it has been an abnormally dry spring following a winter with much less snow than usual. Plus, there have been a number of dry thunderstorms that produced more lightning than rain. You can see them in the image above as the convective clouds, which appear dark green because they are topped with ice particles. (Ice clouds appear dark green in this composite. Liquid clouds appear more blue.) A number of thunderstorms filled with lightning formed on the 19th of June, and a lot of fires got started shortly after. Here is an animation of Fire Temperature RGB images from 15-25 June (showing only the afternoon VIIRS overpasses):
It’s difficult to see the storms that led to all these fires, because the storms don’t last long and they typically form and die in between images. Plus, some of the fires may have started from hot embers of other fires that were carried by the wind.
Of course, when there’s smoke there’s fire. I mean – when there’s fire there’s smoke. Lots of it, which you’d never be able to tell from the Fire Temperature RGB. The Fire Temperature RGB uses channels at long enough wavelengths that it sees through the smoke as if it weren’t even there. But, the True Color RGB is very sensitive to smoke. Here’s a similar animation of True Color images:
Look at how quickly the sky fills with smoke from these fires. And also note that the area covered by smoke by the end of the loop (25 June 2015) is too large to be measured in Delawares – units of Californias might be more useful.
The last frame in each animation comes from the VIIRS overpass at 21:30 UTC on 25 June 2015. It’s nice to know that you can still detect fires in the Fire Temperature RGB even with all that smoke around.
Another popular RGB composite to look at is the so-called “Natural Color”. This is the primary RGB composite that can be created from the high-resolution imagery bands I-1, I-2 and I-3. The Natural Color RGB is sort-of in-between wavelengths compared to the Fire Temperature and True Color. The True Color uses visible wavelengths (0.48 µm, 0.55 µm and 0.64 µm), the Fire Temperature uses near- and shortwave infrared wavelengths (1.61 µm, 2.25 µm and 3.7 µm), and the Natural Color spans the two (0.64 µm, 0.87 µm and 1.61 µm). This means the Natural Color is not as sensitive to smoke and not as sensitive to fires – except in the case of very intense fires and very thick smoke plumes.
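For the curious, assembling a Natural Color RGB from the three imagery bands looks roughly like this. The band-to-color mapping (R=1.61 µm, G=0.87 µm, B=0.64 µm) follows the description above, but the clipping and gamma stretch are illustrative placeholders, not the operational recipe:

```python
import numpy as np

# Sketch of building a Natural Color RGB from calibrated reflectance
# arrays for VIIRS bands I-1 (0.64 um), I-2 (0.87 um) and I-3 (1.61 um).

def natural_color(i1, i2, i3, gamma=2.2):
    """Stack I-3/I-2/I-1 reflectances into an R/G/B image array."""
    rgb = np.stack([i3, i2, i1], axis=-1)          # R=1.61, G=0.87, B=0.64
    rgb = np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)  # simple gamma stretch
    return rgb

# Toy 2x2 scene: snow reflects strongly at 0.64/0.87 um but is dark at
# 1.61 um, so the snowy pixel comes out cyan-ish rather than white.
i1 = np.array([[0.9, 0.1], [0.1, 0.1]])
i2 = np.array([[0.8, 0.1], [0.1, 0.1]])
i3 = np.array([[0.1, 0.1], [0.1, 0.1]])
img = natural_color(i1, i2, i3)
print(img.shape)  # (2, 2, 3)
```

That snow-vs-cloud color separation is exactly why the Natural Color composite is handy for scenes like Glacier Bay.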
Well, guess what? These fires in Alaska have been intense and have been putting out a lot of smoke, so they do show up. Here’s a comparison between the True Color and Natural Color images from 19 June 2015:
Thin smoke is invisible to the Natural Color, but thick smoke appears blueish (because the blue component at 0.64 µm is the most sensitive to it). As you go from visible wavelengths to near-infrared wavelengths, the smoke’s influence on the radiation transitions from Rayleigh scattering to Mie scattering, and the light is scattered more in the forward direction. This makes the smoke much more visible in the Natural Color composite when the sun is near the horizon, as in this image from 13:47 UTC on 24 June:
Notice the red spot in the clouds at 154 °W, 65 °N? (Click on the image to zoom in.) Here, the smoke plume is optically thick at 0.64 µm (blue component) and 0.87 µm (green component), but transparent at 1.61 µm (red component). It’s like the smoke is casting a shadow in two of the three wavelengths, but is invisible in the other. It’s the combination of large smoke particles and large solar zenith angle creating a variety of Rayleigh and Mie scattering effects leading to this interesting result.
A more dramatic example of this can be seen from the 12:57 UTC overpass on 21 June:
Notice the reddish brown band of clouds just offshore along the coast of southeast Alaska.
But, that’s not all! Really intense fires may be visible at 1.6 µm, so it’s possible the Natural Color composite can see them. Here’s the Natural Color composite from 22:09 UTC on 4 July zoomed in on an area of intense fires:
Notice the salmon- and red-colored pixels at the edges of some of the smoke plumes? Those are very intense hot spots showing up at 1.6 µm (I-3). In fact, the fires were so intense that they saturated the sensor at 3.7 µm (I-4) and this led to “fold-over”:
Fold-over occurs when the sensor detects so much radiation above its saturation point that the hardware is “tricked” into thinking the scene is much colder than it is. In the image above, colors indicate pixels with a brightness temperature above 340 K. The scale ranges from red at 340 K through orange to yellow at 390 K. Channel I-4 reaches its saturation point at 368 K. Notice the white and light gray pixels inside the hot spots: the reported brightness temperature in these pixels is ~210 K – much colder than everything else around, even the clouds! This is an example of “fold-over”.
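One simple way to picture fold-over is an analog-to-digital stage that wraps around instead of clipping at its maximum count. This toy model (12-bit counts and a linear count-to-temperature mapping, both my assumptions) is a cartoon of the effect, not the actual VIIRS I-4 electronics:

```python
# Toy model of "fold-over": digital counts wrap past the top of the
# range, so extremely hot scenes come back reading very cold.

MAX_COUNT = 4095  # 12-bit example range (assumption)

def wrapping_adc(signal):
    """Digitize a signal, wrapping around past the top of the count range."""
    return int(signal) % (MAX_COUNT + 1)

def counts_to_temp(counts, t_min=200.0, t_max=368.0):
    """Linear count-to-brightness-temperature mapping (illustrative)."""
    return t_min + (t_max - t_min) * counts / MAX_COUNT

# A scene just below saturation reads hot...
print(f"{counts_to_temp(wrapping_adc(4000)):.0f} K")  # ~364 K
# ...but a scene far beyond saturation wraps and reads very cold.
print(f"{counts_to_temp(wrapping_adc(4500)):.0f} K")  # ~217 K
```

In this cartoon, a fire hot enough to drive the counts past the top of the range comes back in the low 200s Kelvin, much like the ~210 K "cold" pixels inside the hot spots.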
The reddish pixels in the Natural Color image match up very closely with the saturated, “fold-over” pixels in I-4:
What to do if we have fires saturating our sensor? Use M-13 (4.0 µm), which has a sensor designed to not saturate in these conditions:
Here, we have reached color table saturation (yellow is as high as it goes), but M-13 did not saturate. In fact, the “fold-over” pixels in I-4 have a brightness temperature above 500 K in M-13. That’s 130-140 K above the saturation point of I-4 (110-120 K above the top of the color table)! The lack of saturation is also why the hot spots appear hotter in M-13, even though it has lower spatial resolution:
The fact that intense fires show up at 1.6 µm is part of the design of the Fire Temperature RGB. Most fires show up at 3.7 µm (red component). Moderately intense fires are also visible at 2.25 µm (green component) and will appear orange to yellow. Really intense fires, like these, appear at 1.6 µm (blue component) and will appear white (or nearly white):
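That band-by-band logic can be summarized in a few lines of hypothetical code; the color names follow the description above, and the boolean "detections" stand in for actual threshold tests on each band:

```python
# Sketch of the Fire Temperature RGB interpretation: which bands detect
# the fire determines its apparent color (R=3.7 um, G=2.25 um, B=1.6 um).

def fire_color(sees_37um, sees_225um, sees_16um):
    """Map per-band fire detections to the apparent RGB composite color."""
    if sees_37um and sees_225um and sees_16um:
        return "white (very intense fire)"
    if sees_37um and sees_225um:
        return "orange/yellow (moderately intense fire)"
    if sees_37um:
        return "red (typical fire)"
    return "no fire signal"

print(fire_color(True, False, False))  # red (typical fire)
print(fire_color(True, True, False))   # orange/yellow (moderately intense fire)
print(fire_color(True, True, True))    # white (very intense fire)
```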
And, if you’re curious as to how all four of these images compare, here you go:
Shortwave infrared wavelengths are good for detecting fires, visible wavelengths are good for detecting smoke and the Natural Color composite, which uses wavelengths in-between, might just detect both – especially when intense fires exist.