Some of the earliest problems in mathematics asked about the integer solutions to simple polynomial equations. For instance, what are the possible right triangles with whole number side lengths? The solution dates back at least to the ancient Greeks; the side lengths are related by Pythagoras' famous formula x² + y² = z². The 7th century Indian mathematician Brahmagupta studied integer solutions to the equation x² - 2y² = 1, as well as the same formula with 2 replaced by a general integer n (called Pell's equation). Many other similar equations have been studied for centuries or millennia.
In general, a Diophantine equation is a polynomial equation for which we are interested in integer solutions. Counterintuitively, some questions about solving these in the integers may be more difficult than considering all types of solutions. For example, the fundamental theorem of algebra states that any non-constant polynomial in a single variable has a root over the complex numbers (e.g. x³ - 4x² + 17x + 20 = 0 is true for some complex number). However, there is often no integer solution to such equations.
Historically, different types of Diophantine equations were typically solved by ad hoc methods, as they come in many different varieties. However, one general observation connecting many of these equations is that they state something about the factorization of a sum of two numbers. Pythagoras' equation says something special about the sum of two squares, namely that it is another square! Similarly, Pell's equation says that one plus some number multiplied by a square has the property that it too is square. Our motivating question may then be taken to be:
How does the factorization of a sum of two numbers relate to the factorizations of the individual numbers?
The abc Conjecture provides a partial answer to this question. Its name comes from the fact that we are considering equations of the form a + b = c and asking how the factorizations of the three numbers relate. Mathematicians David Masser and Joseph Oesterlé first made the conjecture in 1985 while studying integer points on what are called elliptic curves, in this case given by the equation y² = x³ + k (where k is a fixed integer). This is yet another example of a sum having special factorization properties. Throughout the rest of this post, we will see how thinking about the motivating question might lead you to formulate the abc conjecture.
Simply put, we want the answer to our motivating question to be "it doesn't." Somehow, the additive and multiplicative structures of the integers should be independent of one another. This is in some ways a deep statement, and not at all intuitively clear, but we'll begin with this assumption. In other words, for an equation a + b = c, if all three numbers satisfy some special factorization properties (e.g. being cubes, etc.) it should in some sense be a coincidence. Our next task is to make this progressively less vague. First, we need a definition.
Definition: Two numbers are relatively prime if they share no common prime divisors.
For example, 34 and 45 are relatively prime, but 24 and 63 are not, because they are both divisible by 3. Here is how we will express our independence hypothesis: for any equation of the form a + b = c, where a and b are relatively prime, if a and b are divisible by high powers of primes, c almost always is not. This is in keeping with our theme because "divisible by high powers of primes" is a special factorization property. That is, most prime factorizations should look more like 705 = 3*5*47 and not 768 = 2⁸*3. The assumption that a and b are relatively prime exists to rule out silly equations like
2ⁿ + 2ⁿ = 2ⁿ⁺¹,
in which all three numbers are divisible by arbitrarily high powers of 2. This doesn't represent some special connection between addition and multiplication; all we've done is multiply the equation 1 + 1 = 2 by 2ⁿ. If we assert that a and b are relatively prime, then the prime factors of each of the three numbers are distinct, and we eliminate the uninteresting examples. Next, we require a mathematical notion that measures "divisibility by high prime powers".
Definition: The radical of a number n, denoted rad(n), is the product of the distinct prime factors of n. Also define rad(1) = 1.
For example, rad(705) = 3*5*47 = 705 (since the prime factors 3, 5, and 47 each appear only once) but rad(768) = 2*3 = 6 (since 768 = 2⁸*3). The radical function forgets about any powers in the prime factorization, keeping only the primes themselves. Notice that the radical of a number can be as large as the number itself, but it can also be much smaller. The amount by which rad(n) is smaller than n can be taken as a measure of the extent to which n is divisible by large prime powers.
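To make the definition concrete, here is a minimal Python sketch of the radical function (not from the original post, just an illustration), using simple trial division. It reproduces the two examples above.

```python
def rad(n):
    """Product of the distinct prime factors of n, with rad(1) = 1."""
    result = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            result *= p          # record the prime once...
            while n % p == 0:    # ...and strip out all of its powers
                n //= p
        p += 1
    if n > 1:                    # whatever remains is itself prime
        result *= n
    return result

print(rad(705))   # 705 = 3*5*47, so rad(705) = 705
print(rad(768))   # 768 = 2^8 * 3, so rad(768) = 6
```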
Now we return to our equation a + b = c (where we will now consistently assume the relatively prime hypothesis). A reasonable way to test for high prime power divisibility for all three of these numbers is to calculate rad(abc) = rad(a)rad(b)rad(c) (this last equation holds because a, b, and c are pairwise relatively prime; the reader may wish to prove it). Since rad(abc) could be as large as abc itself, it seems likely that rad(abc) would usually be much larger than any of the individual numbers, the largest of which is c. For example, consider 13 + 22 = 35. In this case, rad(abc) = rad(13*22*35) = 13*2*11*7*5 = 10010, which is much larger than c = 35. However, this property does not always hold true. Consider another example, 1 + 8 = 9. Now we have rad(abc) = rad(1*8*9) = 2*3 = 6, and 6 < 9 = c. Notice that this anomaly reflects something weird going on; the equation can also be written 1 + 2³ = 3², so one plus a cube is a square. Testing different values of a, b, and c gives the impression that equations of the second sort are rare. Therefore, we make an "almost" mathematical conjecture:
"Almost" Conjecture: For equations of the form a + b = c where a and b are relatively prime, rad(abc) is almost always greater than c.
We're close! The inequality rad(abc) > c is a bona fide mathematical condition that we can check. However, we have yet to render "almost always" into mathematical language. Clearly there are infinitely many a + b = c equations to look at. What does it mean to say that "most of them" behave in some way? We know from our 1 + 8 = 9 example that there are at least some exceptions. Maybe we could assert that there are fewer than 10 total exceptions, or fewer than 100. However, these numbers seem arbitrary, so we'll just guess that there are only finitely many exceptions. That is, all but at most N of these equations, for some fixed finite number N, satisfy our hypothesis. In conclusion, we conjecture that:
Conjecture 1: For all but finitely many equations of the form a + b = c where a and b are relatively prime, rad(abc) > c.
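To get a feel for the conjecture, here is a brute-force Python sketch (not part of the original post) that searches small coprime equations a + b = c for the exceptional cases with rad(abc) ≤ c. It finds 1 + 8 = 9 almost immediately, along with a handful of others in this range; of course, turning up finitely many exceptions does not by itself contradict Conjecture 1.

```python
from math import gcd

def rad(n):
    """Product of the distinct prime factors of n (rad(1) = 1)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    return result * n if n > 1 else result

# Look for "exceptional" coprime triples a + b = c with rad(abc) <= c.
LIMIT = 1000
for c in range(2, LIMIT + 1):
    for a in range(1, c // 2 + 1):
        b = c - a
        if gcd(a, b) == 1 and rad(a * b * c) <= c:
            print(f"{a} + {b} = {c},  rad(abc) = {rad(a * b * c)}")
```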
Finally, a real conjecture! Unfortunately, it's false. In other words, there are infinitely many such equations for which rad(abc) ≤ c. Don't worry! It's rare in mathematics to come up with the correct statement on the first try! In the next post, we'll prove our conjecture 1 false and see how to correct it.
Sources: https://rlbenedetto.people.amherst.edu/talks/abc_intro14.pdf; Brian Conrad, "The abc Conjecture", 12 Sep 2013.
Tuesday, March 26, 2019
Tuesday, March 5, 2019
The Casimir Effect
The idea of the electromagnetic field is essential to physics. Dating back to the work of James Clerk Maxwell in the mid-1800s, the classical theory of electromagnetism posits the existence of certain electric and magnetic fields that permeate space. Mathematically, these fields assign vectors (arrows) to every point in space, and their values at various points determine how a charged particle moving in space would behave. For example, the magnetic field generated by a magnet exerts forces on other nearby magnetic objects. Crucially, the theory also explains light as an electromagnetic phenomenon: what we observe as visible light, radio waves, X-rays, etc. are "waves" in the electromagnetic field that propagate in space.
Maxwell's theory is still an essential backbone of physics today. Nevertheless, the introduction of quantum mechanics in the early 20th century introduced new aspects of electromagnetism. Perhaps most importantly, it was discovered that light comes in discrete units called photons and behaves in some ways both as a wave and a particle. Though electromagnetism on the human scale still behaves largely as the classical theory predicts, at small scales there are quantum effects to account for. Around the middle of the century, physicists Richard Feynman, Shinichiro Tomonaga, Julian Schwinger, and many others devised a new theory of quantum electrodynamics (or QED) that described how light and matter interact, even on quantum scales.
Naturally, QED predicted new phenomena that classical electromagnetism had not. One especially profound change was the idea of vacuum energy. For most purposes, "vacuum" is synonymous with "empty space". As is typical of quantum mechanics, however, a system is rarely considered to be in a single state, but rather in a superposition of many different states simultaneously. These different states can have different "weights" so that the system is "more" in one given state than another. This paradigm applies even to the vacuum. Certain pairs of particles may appear and disappear spontaneously in many of these states and even exchange photons. Some of the possible interactions are illustrated below with Feynman Diagrams.
In these diagrams, the loops represent the evanescent virtual particle pairs described above. Wavy lines represent the exchange of photons. Each of the six diagrams represents a possible vacuum interaction, and there are many more besides (infinitely many, in fact!). The takeaway is that the QED vacuum is not empty, but rather a "soup" of virtual particle interactions due to quantum fluctuations. Further, these interactions have energy, known as vacuum energy. This, at least, is the mathematical description. There are some curious aspects to this description, because the vacuum energy calculation in any finite volume yields a divergent series. In other words, there is theoretically an infinite amount of vacuum energy in any finite volume! Because of this, physicists devised a process called renormalization that cancels out these infinities in calculations describing the interaction of real particles. This process in fact gives results that have been confirmed by experiment. Nevertheless, it does not follow that the infinite vacuum energy exists in any "real" sense or is accessible to measurement. One possible way in which it might be, however, is the Casimir effect.
The setup of the Casimir effect involves two conducting metal plates placed parallel to one another. The fact that the plates are conducting is important because the electric field vanishes inside conducting materials. Now, the vacuum energy between the plates can be calculated as a sum over the possible wavelengths of the fluctuations of the electromagnetic field. However, unlike the free space vacuum, the possible wavelengths are limited by the size of the available space: the longest wavelength contributions to the vacuum energy do not occur between the plates (this is schematically illustrated in the image above). A careful subtraction of the vacuum energy density between the plates from that outside shows that there is more energy outside. Remarkably, this causes an attractive force between the plates known as the Casimir force. The force increases as the distance between the plates is decreased. Precisely, the magnitude of the force F is proportional to 1/d⁴, where d is the distance between the plates. As a result, if the distance is halved, the force goes up by a factor of sixteen! The initial calculation of this effect was due to H. B. G. Casimir in 1948.
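As a rough numerical illustration (not from the original post), the Python sketch below assumes Casimir's ideal result for perfectly conducting, smooth parallel plates at zero temperature, for which the attractive pressure is P(d) = π²ħc/(240d⁴). The plate separations are arbitrary illustrative choices.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure (Pa) between ideal parallel plates a distance d (m) apart."""
    return math.pi**2 * hbar * c / (240 * d**4)

for d in (1e-6, 0.5e-6, 100e-9):   # 1 um, 0.5 um, 100 nm
    print(f"d = {d * 1e9:5.0f} nm:  P = {casimir_pressure(d):.2e} Pa")
```

Running this gives roughly 1.3*10⁻³ Pa at 1 micrometer, sixteen times that at half a micrometer, and about 13 Pa at 100 nm, which is a hint of why the effect took so long to measure.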
Around 50 years after first being postulated, the effect was finally measured experimentally with significant precision. The primary issue was that for the Casimir force to be large enough to measure, the metal plates would have to be put very close to one another, less than 1 micrometer (0.001 mm). Even then, very sensitive instruments are necessary to measure the force. One landmark experiment took place in 1998. Due to the practical difficulty of maintaining two parallel plates very close to one another, this experiment utilized one metal plate and one metal sphere with a radius large compared to the separation (so that it would "look" like a flat plate close up). The authors of the experiment also added corrections to Casimir's original equation accounting for the sphere instead of the plane and the roughness of the metal surfaces (at the small distances of the experiment, microscopic bumps matter). They obtained the following data for the force as it varies with distance:
In the figure above, the squares indicate data points from the experiment and the curve is the theoretical model (including the corrections mentioned). The distance on the x-axis is in nanometers and the smallest distance measured was around 100 nm, hundreds of times smaller than the width of a human hair. Even at these minuscule distances, the force only reached a magnitude of about 1*10⁻¹⁰ newtons, a billion times smaller than the weight of a piece of paper. Nevertheless, the results confirmed the presence of the Casimir force to high accuracy.
The existence of the Casimir effect would seem to vindicate the rather strange predictions of QED with respect to the quantum vacuum, suggesting that it is indeed full of energy that can be tapped, if indirectly. However, others have argued that it is possible to derive the effect without reference to the energy of the vacuum, and therefore the experiment does not necessarily mean that vacuum energy is "real" in any meaningful way. Continued study into the existence of vacuum energy may help to explain the accelerating expansion of the universe, since some mysterious "dark energy" is believed to be the source. In the meantime, the Casimir effect is an important experimental verification of QED and could someday see applications in nanotechnology, since the force would be relatively large on small scales.
Sources: https://www.scientificamerican.com/article/what-is-the-casimir-effec/, https://arxiv.org/pdf/hep-th/0503158.pdf, The Quantum Vacuum: An Introduction to Quantum Electrodynamics by Peter W. Milonni, http://web.mit.edu/kardar/www/research/seminars/PolymerForce/articles/PRL-Mohideen98.pdf
Labels: Astronomy and Physics, Forces
Tuesday, February 12, 2019
Ocean Currents and the Thermohaline Circulation
Ocean currents are ubiquitous and familiar. Beachgoers are wary of tidal currents, as well as those caused by weather systems. Currents caused by tides and weather are constantly changing and chaotic. However, under this noise exists a larger-scale and more orderly system of circulation. By averaging over long time periods (in effect screening out the noise of short-term fluctuations), larger currents such as the Gulf Stream, which brings warm water northward along the east coast of the United States, appear. Another example is the California current, which brings the cold waters of the north Pacific down along the west coast. But why do these currents exist? Some patterns may be seen if we expand our view to the world as a whole.
The above image shows major surface ocean currents around the world. Note that despite geographical differences, some currents in each ocean in each hemisphere follow the same general pattern, flowing east to west in the tropical latitudes, toward the poles on the western edge of ocean basins, west to east at mid-latitudes, and finally toward the equator on the eastern edges. These circular currents are known as subtropical gyres. For example, the Gulf Stream is the western poleward current in the north Atlantic subtropical gyre, and the California current is the eastern equatorward current in the north Pacific subtropical gyre. These gyres exist largely due to the Earth's prevailing winds.
The prevailing winds at the Earth's surface fit into a larger dynamic atmospheric pattern. The greater heating of the tropics as compared to the polar regions and the rotation of the Earth lead to the formation of three atmospheric cells in each hemisphere. The winds in these cells rotate due to the Coriolis effect (in essence the fact that straight paths appear to curve from the viewpoint of an observer on a rotating planet), producing east to west winds in the tropics and polar regions, and west to east winds in the mid-latitudes. Look back at the subtropical gyres on the map of currents. Notice that the currents labeled "equatorial" follow the trade winds, and the north Pacific, north Atlantic, etc. currents follow the prevailing westerlies. The west and east boundary currents then "complete the circle" and close the flow. This is no coincidence. It is friction between air and water that drives the subtropical gyres: the force of the wind tends to make the water flow in the same direction. Another important example is the Antarctic circumpolar current, the largest in the world. Since there are no landmasses between roughly 50°S and 60°S latitude, the westerlies drive an unimpeded current that stretches all the way around the continent of Antarctica. Many other currents in the global diagram are responses to these main gyres, or are connected to prevailing winds in more complicated ways.
This system of ocean currents has profound impacts on weather and climate. Due to the Gulf Stream, north Atlantic current, and its northern extension, the Norwegian current, temperatures in northwestern Europe are several degrees warmer than they would otherwise be.
The above image illustrates one of the influences of ocean currents on weather. It shows all tropical cyclone tracks (hurricanes, typhoons, etc.) worldwide from 1985-2005. Since tropical cyclones need warm ocean surface waters to develop, the cold California current helps to suppress eastern Pacific hurricanes north of 25° N or so. In contrast, the warm Kuroshio current in the western Pacific allows typhoons to regularly affect Japan, which is at a higher latitude. Note also the presence of cyclones in the southwest Pacific and the lack of any formation in the southeast Pacific (though very cold surface waters are only one of several factors in this).
Despite their vast impact, which goes well beyond the examples listed, all of the currents considered so far are surface currents. Typically, these currents exist only in the top kilometer of the ocean, and the picture below this can look quite different.
The above image gives a very schematic illustration of the global three-dimensional circulation of the oceans, known as the thermohaline circulation. The first basic fact about this circulation, especially the deep ocean circulation, is that it is slow. Narrow, swift surface currents such as the Gulf Stream have speeds up to 250 cm/s. Even the slower eastern boundary currents often manage 10 cm/s. In contrast, deep ocean currents seldom exceed 1 cm/s. Their tiny speeds and remoteness make them extremely difficult to measure; in fact, rather than directly charting their course, the flow is inferred from quantities called "tracers" in water samples. Measurements of the proportion of certain radioactive isotopes, for example, are used to calculate the last time a given water sample "made contact" with the atmosphere.
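To illustrate the tracer idea (this example is not from the original post), here is a small Python sketch using radiocarbon, one isotope commonly used to date deep water. A parcel cut off from the atmosphere stops exchanging carbon-14, so its apparent age follows from exponential decay; real ventilation ages require corrections for mixing and surface reservoir effects, so treat this as a cartoon.

```python
import math

HALF_LIFE = 5730.0                     # carbon-14 half-life, years
DECAY_CONST = math.log(2) / HALF_LIFE  # decay constant, per year

def apparent_age(fraction_remaining):
    """Years since a water parcel last equilibrated with the surface,
    given its 14C/12C ratio as a fraction of the surface value."""
    return -math.log(fraction_remaining) / DECAY_CONST

for f in (0.95, 0.90, 0.85):
    print(f"14C at {f:.0%} of the surface value -> ~{apparent_age(f):.0f} years old")
```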
The above graphic illustrates the age of deep ocean water around the world. The age (in years) is how long it has been since a given water parcel came to equilibrium with the surface. Note that the thermohaline circulation occurs on timescales of over 1000 years. This information indicates that deep water formation (when water from the surface sinks) takes place in the North Atlantic but not the North Pacific, as indicated in the first graphic. This is because all the deep waters of the Pacific are quite "old". Deep water formation also occurs in the Southern Ocean, near Antarctica. In both cases, the mechanism is similar: exposure to frigid air near the poles makes the surface waters very cold, and therefore dense. Further, in winter, sea ice forms in these cold waters, leaving saltier water behind (since freshwater was "taken away" to form sea ice). This salty, cold water is denser than the ocean around it and it sinks. The newly formed deep water can flow near the bottom of the ocean for hundreds of years before coming back to the surface.
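The claim that cold, salty water is denser can be made quantitative with a linearized equation of state, a standard textbook approximation; the coefficients below are typical illustrative values rather than measurements from any particular dataset.

```python
# Linearized equation of state: rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
RHO0, T0, S0 = 1027.0, 10.0, 35.0   # reference density (kg/m^3), temperature (C), salinity (psu)
ALPHA = 2.0e-4                      # thermal expansion coefficient, 1/K (illustrative)
BETA = 7.6e-4                       # haline contraction coefficient, 1/psu (illustrative)

def density(T, S):
    """Approximate seawater density (kg/m^3) near the reference state."""
    return RHO0 * (1 - ALPHA * (T - T0) + BETA * (S - S0))

print(density(10.0, 35.0))   # reference water: 1027.0
print(density(-1.0, 35.5))   # near-freezing and slightly saltier: a few kg/m^3 denser, so it sinks
```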
One climatological influence of this phenomenon is the ocean's increased ability to take up carbon dioxide. A substantial fraction (roughly a quarter to a third) of the carbon dioxide emitted by human industry since the late 1800s has dissolved in the oceans. Since deep water takes so long to circulate, increased CO2 levels are only now beginning to penetrate the deep ocean. Most ocean water has not "seen" the anthropogenic CO2, so it will continue to take up more of the gas for hundreds of years. Without this, there would be much more carbon dioxide in the atmosphere, and likely faster global warming.
The network of mechanisms driving ocean currents and the thermohaline circulation is quite intricate, and we have only touched on some of them here: weather systems, prevailing winds, differences in density, etc. There are many more subtleties as to why the ocean circulates the way it does. The study of these nuances is essential for fully understanding the Earth's weather and climate.
Sources: Atmosphere, Ocean, and Climate Dynamics: An Introductory Text by John Marshall and R. Alan Plumb, https://www.britannica.com/science/ocean-current, http://www.seos-project.eu/modules/oceancurrents/oceancurrents-c01-p03.html
Labels: Meteorology
Tuesday, January 22, 2019
GW170817 and Multi-Messenger Astronomy Part 2
This is the second part of a two-part post. For the first part, see here.
The previous post described the gravitational wave event GW170817 (which took place on August 17, 2017) and how it was ultimately identified as a binary neutron star merger. In addition, it was associated with a gamma ray burst (designated GRB 170817A) and imaged across the electromagnetic spectrum, an unprecedented and landmark event in the field of multi-messenger astronomy. Though it is intrinsically of interest to be able to both "see" (EM waves) and "hear" (gravitational waves) an astrophysical event, what are some other conclusions to be drawn from the merger?
One simple conclusion requires nothing more than a quick calculation, but verifies a foundational principle of physics that, while almost universally assumed, had never been directly proven. This principle states that both electromagnetic waves and gravitational waves travel at the speed of light in a vacuum, about 3*10⁸ m/s. Recall that the merger is estimated to have taken place about 130 million light-years away. This means that the gravitational wave signal and the gamma ray burst both took about 130 million years to travel from the source to detectors on Earth. Despite their long journey, they arrived within a few seconds of each other. Now, we cannot be certain exactly when the gamma ray burst was emitted, due to our incomplete understanding of how a binary neutron star merger would work. However, it is likely that the neutron stars must first collide (marking the end of the gravitational wave signal) before emitting a burst of gamma radiation. Moreover, this initial high-energy burst was estimated by most models to occur no more than a few minutes after the merger. Therefore, dividing the amount by which the signals could have drifted apart by their travel time, we obtain bounds on the "speed of gravity" relative to the speed of light. Even with conservative assumptions, these observations prove that the two speeds very likely differ by no more than one part in a trillion (0.0000000001%), and probably several orders of magnitude less than this. Theoretically, they are equal, but never before has this been measured with such incredible precision.
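Here is a back-of-the-envelope Python sketch of that calculation (not from the original post). The allowed emission offsets are illustrative assumptions standing in for how far apart the two signals could plausibly have started out; the fractional bound is simply that offset divided by the travel time.

```python
SECONDS_PER_YEAR = 3.156e7
travel_time = 130e6 * SECONDS_PER_YEAR   # ~130 million years of travel, in seconds

# Illustrative assumptions for how much head start the gamma rays could have had.
for allowed_offset_s in (2.0, 10.0, 600.0):
    bound = allowed_offset_s / travel_time   # bound on |v_gw - c| / c
    print(f"offset {allowed_offset_s:6.0f} s  ->  |v_gw - c|/c < {bound:.1e}")
```

Even the most generous assumption here (a ten-minute head start) keeps the two speeds equal to within about one part in 10¹³, consistent with the figures quoted above.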
In a similar vein, the merger allowed other tests of various aspects of general relativity and field theory, such as the influence of gravitational waves on the propagation of electromagnetic fields. The data all confirmed the current understanding of general relativity and set very tight bounds on possible deviations, better than any achieved in the past.
The detection of the merger also taught us about the very structure of neutron stars. Unlike black holes, which (to our knowledge) are effectively points of mass, neutron stars are on the order of a dozen miles (roughly 20 kilometers) across. Considering their mass (usually 1-2 solar masses), they are exceedingly dense, but nevertheless their physical size affects how the gravitational wave event unfolds. When the two objects get very close to one another, their mutual gravitational attraction is expected to cause tidal deformations, i.e. warping of their shape and mass distribution. In theory, information concerning the deformation is encoded in the measured waveforms.
The figure above (click to enlarge), while rather technical, gives some idea as to how exactly the gravitational wave data constrain the structure of the neutron stars. The statement |χ| < 0.05 in the diagram indicates that the entire figure presupposes that the neutron stars were not spinning too fast (which our knowledge of neutron star systems suggests is a very reasonable assumption). The two axes give the magnitudes of two parameters, Λ₁ and Λ₂, that measure how much the larger and the smaller neutron star, respectively, responds to tidal deformation. In other words, smaller values of the parameters (toward the lower left) mean denser and more compact neutron stars, as indicated. More on what these parameters actually mean can be found in the original paper here.
Next, the darker shades of blue represent values considered more likely given the shape of the gravitational wave signal. This is a probability distribution, and lighter shaded areas were not ruled out with certainty, but simply deemed less likely. The uncertainty in the original masses contributes to the uncertainty in this diagram. Finally, the gray shaded "stripes" indicate the predictions of several different theoretical models of neutron stars. These are distinguished by their different equations of state, which specify how mass, pressure, density, and other properties of neutron stars relate to one another. The varying predictions of these models show just how little was definitively known about neutron stars. Analysis of the merger event suggested that the "SLy" and "APR4" models were more accurate than the rest (at 50% confidence) and that the "MS1" and "MS1b" models are unlikely to be correct (with more than 90% confidence). No model was ruled out for sure, but the data above suggest that neutron stars are more compact than most models predicted.
The gamma ray burst that followed the merger also contained some information concerning how these mergers actually occur and the physics of when and why high-energy radiation is released. Notably, the gamma ray burst was a single short pulse (lasting under a second) with no discernible substructure. It was difficult to draw conclusions from this limited sample, but explaining the nature of the pulse and the delay may require a dense layer of ejecta from each of the neutron stars to momentarily impede electromagnetic radiation from the merger. It would take some time for gamma rays to penetrate this cloud of debris until they finally burst through.
Moreover, among the known population of gamma ray bursts, GRB 170817A was relatively dim. This may have been due to the main "jets" of energy not being along our line of sight; most of the burst is hypothesized to have been released along the original axis of rotation of the two bodies. The discrepancy may in part have been due to observational bias, since brighter events are more likely to be observed. In those previously observed cases, the Earth was likely directly facing the angle of peak gamma ray emission. Detecting an event "off-center" elucidates somewhat the structure and extent of these jets.
The image above (click to enlarge) was originally from this paper. It demonstrates schematically some different theories explaining the relative dimness of the gamma ray burst event and the structure of the jets along the axis of rotation. Earlier theories postulated a relatively uniform jet, as shown in the first scenario. If this is the case, our line of sight may have been outside the jet, but relativistic effects allowed us to see a smaller amount of the radiation. Other explanations postulate that the jet has some internal structure and "fades" with increasing angle from the axis (ii) or that the interaction of the jet with surrounding matter produces a secondary cocoon of radiation (iii). A final scenario is simply that this event was a few orders of magnitude dimmer than other known gamma ray bursts for some intrinsic reason, although the authors deem this unlikely.
Without the background information provided by the gravitational wave signal (the component masses of the merger, the timing of the merger, etc.), little of the above could be gleaned from the gamma ray signal. Nor would it be possible with only one of the two to conduct the precision tests of fundamental physics described earlier. These are examples of the power of multi-messenger astronomy. Having both an eye and an ear to the cosmos will continue to yield fundamental insights into the nature of our universe.
Sources: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.119.161101, https://arxiv.org/pdf/1710.05834.pdf, http://iopscience.iop.org/article/10.3847/2041-8213/aa91c9/pdf
Tuesday, January 1, 2019
GW170817 and Multi-messenger Astronomy
The first astronomers had only their own eyes as tools, and visible light was their only source of information. Recent instruments have broadened our sight to include all types of electromagnetic radiation, from radio waves to X-rays and gamma rays. Each part of the spectrum is suited to different types of observations and has given us incredible new insight into the cosmos. However, the second decade of the 21st century saw the advent of a fundamentally new kind of astronomy: the detection of gravitational waves.
Gravitational waves, as discussed in a previous post, are the "ripples" in spacetime that propagate in response to the acceleration of massive objects (stars, black holes, and the like). All objects with mass produce these waves, but the vast majority are far too small to detect. It was only with the advent of extremely sensitive instruments that the first detection of gravitational waves was made by LIGO (the Laser Interferometer Gravitational-Wave Observatory) in 2015. This detection, and its immediate successors, were of binary black hole merger events, in which two black holes orbiting one another spiraled inwards and finally combined into a single, larger black hole. The last moments before merging brought exceptionally colossal objects (weighing perhaps dozens of solar masses) to great accelerations, the perfect recipe for producing strong gravitational waves detectable across the cosmos. However, these cataclysmic events were quite dark: little electromagnetic radiation was emitted, and no "visual" evidence for these events accompanied the wave signal. Something quite different occurred in 2017.
On August 17, 2017 at 12:41 UTC, the LIGO detectors at Hanford, Washington and Livingston, Louisiana and the Virgo gravitational wave detector in Italy simultaneously measured an event as shown above (click to enlarge). The two LIGO frequency-time diagrams clearly show a curve that increases in frequency before disappearing at time 0. This corresponds to two inspiraling objects orbiting one another faster and faster before the merger finally occurs and the signal stops. In the Virgo diagram, the same line is not very visible, but further analysis of the data nevertheless extracted the same signal from the noise. The gravitational wave event, designated GW170817, was genuine.
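This rising "chirp" is also what lets analysts weigh the system. At leading (quadrupole) order in general relativity, the wave frequency and its rate of change fix a particular combination of the two masses called the chirp mass. The Python sketch below illustrates the relation; the frequency and its derivative used here are made-up illustrative numbers, not values taken from the LIGO data.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def chirp_mass(f, fdot):
    """Chirp mass (in solar masses) from the gravitational-wave frequency f (Hz)
    and its time derivative fdot (Hz/s), at leading post-Newtonian order:
        fdot = (96/5) * pi**(8/3) * (G*Mc/c**3)**(5/3) * f**(11/3)
    """
    x = (5.0 / 96.0) * fdot * math.pi**(-8.0 / 3.0) * f**(-11.0 / 3.0)
    return (c**3 / G) * x**(3.0 / 5.0) / M_SUN

print(chirp_mass(100.0, 17.0))   # illustrative inputs, giving roughly 1.2 solar masses
```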
Having three detectors at different points on the Earth measure the event allowed a better triangulation of the location of the source than had occurred previously (when LIGO and Virgo were not simultaneously active).
The above figure shows a visualization of the celestial sphere (representing all possible directions in the sky from which the signal could have come) and locations from which the signal data suggest the signal originated. The green zone is the highest probability region taking all three instruments into account. This area is still 31 square degrees, quite large by astronomical standards. Fortunately, corroboration of the event came immediately from an entirely separate source.
The above figure (click to enlarge) shows at the bottom the same gravitational wave signal from before. The rest of the data come from the Fermi Gamma-ray Space Telescope and the International Gamma Ray Astrophysics Laboratory, both satellites in Earth orbit. As their names suggest, they search the cosmos for astrophysical sources of high-energy gamma rays. In particular, they monitor the cosmos for gamma-ray bursts (GRBs), especially intense flashes of radiation that typically accompany only the most explosive events, such as supernovae. As the figure shows, less than two seconds after the gravitational wave signal stopped (indicating the merger of two orbiting objects), there was an elevated count of gamma rays in each detector across the different photon energy levels. The source of this burst is indicated by a reticle in the celestial sphere figure above, lying right within the estimated location of the merger! It appeared that this merger had an electromagnetic counterpart! Further, analysis of the gravitational waves indicated that the masses of the two objects were around 1.36-2.26 and 0.86-1.36 solar masses (these were the uncertainty ranges), respectively, not heavy enough for black holes. What was going on?
The conclusion drawn from these events was that the merger was not of black holes, but of neutron stars, compact remnants of large stars that were yet not massive enough to collapse into black holes. An artist's conception of a binary neutron star merger is shown above. Following the initial identification of the event, countless telescopes around the world trained on the source that very same day after a notice was released around 13:00 UTC, hoping to observe more following the merger.
And they were not disappointed. Less than a day after the initial gamma ray burst had faded, the source began to appear at other frequencies, and remained bright for several weeks before fading. The above figure shows the Hubble image of the merger's host galaxy, NGC 4993. This galaxy is at a distance of roughly 130 million light-years, and even at this distance, the collision of the neutron stars was clearly visible against the billions of other stars. Finally, the chart below demonstrates just how well documented the event was:
Many different instruments took images in X-rays as well as ultraviolet, visible, infrared, and radio waves. The horizontal axis indicates the rough timeline of events (on a logarithmic scale) in each part of the electromagnetic spectrum, stretching from less than a day to several weeks after the merger. Several representative images of NGC 4993 and the source within are shown at bottom.
Without extensive collaboration within the astronomical community, collecting this wealth of data on this binary neutron star merger would not have been possible. This marked the first time in history that a single event was measured in both gravitational waves and electromagnetic waves, not to mention how thoroughly the merger was photographed across the spectrum. This coordinated observation is known as multi-messenger astronomy, and may have profound implications for our future understanding of the universe. Some of what we learned from the binary neutron star merger is discussed in the next post.
Note: Most of the figures above are taken from the open access papers detailing the discovery and analysis of the binary neutron star merger. For further reading on the event, links to these papers may be found in the sources below.
Sources: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.119.161101, https://arxiv.org/pdf/1710.05834.pdf, http://iopscience.iop.org/article/10.3847/2041-8213/aa91c9/pdf
Gravitational waves, as discussed in a previous post, are the "ripples" in spacetime that propagate in response to the acceleration of massive objects (stars, black holes, and the like). All objects with mass produce these waves, but the vast majority are far too small to detect. It was only with the advent of extremely sensitive instruments that the first detection of gravitational waves was made by LIGO (the Laser Interferometer Gravitational-Wave Observatory) in 2015. This detection and its immediate successors were of binary black hole merger events, in which two black holes orbiting one another spiraled inward and finally combined into a single, larger black hole. The last moments before merging brought these colossal objects (weighing perhaps dozens of solar masses) to tremendous accelerations, the perfect recipe for producing strong gravitational waves detectable across the cosmos. However, these cataclysmic events were quite dark: little electromagnetic radiation was emitted, and no "visual" evidence for them accompanied the wave signal. Something quite different occurred in 2017.
On August 17, 2017 at 12:41 UTC, the LIGO detectors at Hanford, Washington and Livingston, Louisiana and the Virgo gravitational wave detector in Italy simultaneously measured an event as shown above (click to enlarge). The two LIGO frequency-time diagrams clearly show a curve that increases in frequency before disappearing at time 0. This corresponds to two inspiraling objects orbiting one another faster and faster before they finally merge and the signal stops. In the Virgo diagram, the same line is not readily visible, but further analysis of the data nevertheless extracted the same signal from the noise. The gravitational wave event, designated GW170817, was genuine.
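As a rough illustration of how a frequency-time diagram like the ones described above is produced, the following Python sketch computes a spectrogram of a synthetic chirp whose frequency rises with time. This is not the LIGO/Virgo analysis pipeline; the sample rate, frequency sweep, and noise level are placeholder values chosen only to mimic the qualitative appearance of an inspiral track.

# Sketch: spectrogram of a synthetic "chirp" whose frequency rises with time,
# loosely mimicking an inspiral signal. All parameters here are placeholders.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 4096                                  # assumed sample rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)            # two seconds of data
strain = signal.chirp(t, f0=30, t1=2.0, f1=500, method="quadratic")
strain += 0.5 * np.random.randn(t.size)    # add noise so the track must be picked out

f, tt, Sxx = signal.spectrogram(strain, fs, nperseg=256, noverlap=192)
plt.pcolormesh(tt, f, Sxx, shading="gouraud")
plt.ylim(0, 600)
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Synthetic chirp: frequency rising toward the merger time")
plt.show()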
Having three detectors at different points on the Earth measure the event allowed a much better triangulation of the source's location than had been possible previously (when LIGO and Virgo were not simultaneously active).
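The triangulation works because the wavefront, traveling at the speed of light, reaches the detectors at slightly different times; each measured delay between a pair of detectors confines the source to a ring on the sky, and combining pairs shrinks the allowed region. Here is a minimal sketch of the geometry in Python. The detector coordinates and sky direction below are rough placeholder values, not the surveyed positions or the published localization, and the calculation ignores details such as Earth's rotation.

# Sketch: expected arrival-time delay between two detectors for a gravitational
# wave arriving from a given sky direction. Positions below are placeholders.
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def sky_direction(ra_deg, dec_deg):
    """Unit vector toward a sky location (simplified; ignores Earth's rotation)."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def time_delay(det_a, det_b, n_hat):
    """Seconds by which the wavefront reaches det_b after det_a."""
    return np.dot(det_a - det_b, n_hat) / C

# Approximate Earth-fixed detector positions in meters (placeholders).
hanford = np.array([-2.16e6, -3.83e6, 4.60e6])
livingston = np.array([-7.4e4, -5.50e6, 3.22e6])

n_hat = sky_direction(197.0, -23.0)  # roughly toward the source region (assumed)
print(f"Predicted Hanford-Livingston delay: {1e3 * time_delay(hanford, livingston, n_hat):.2f} ms")

Because the detectors are a few thousand kilometers apart, these delays are at most about ten milliseconds, which is why precise timing is what drives the localization.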
The above figure shows a visualization of the celestial sphere (representing all possible directions in the sky from which the signal could have come) and the regions from which the data suggest the signal originated. The green zone is the highest-probability region taking all three instruments into account. This area is still 31 square degrees, quite large by astronomical standards. Fortunately, corroboration of the event came immediately from an entirely separate source.
The above figure (click to enlarge) shows at the bottom the same gravitational wave signal from before. The rest of the data come from the Fermi Gamma-ray Space Telescope and the International Gamma-Ray Astrophysics Laboratory, both satellites in Earth orbit. As their names suggest, they search the cosmos for astrophysical sources of high-energy gamma rays. In particular, they monitor the cosmos for gamma-ray bursts (GRBs), especially intense flashes of radiation that typically accompany only the most explosive events, such as supernovae. As the figure shows, less than two seconds after the gravitational wave signal stopped (indicating the merger of the two orbiting objects), there was an elevated count of gamma rays in each detector across the different photon energy levels. The source of this burst is indicated by a reticle in the celestial sphere figure above, lying right within the estimated location of the merger! It appeared that this merger had an electromagnetic counterpart! Further, analysis of the gravitational waves indicated that the masses of the two objects were around 1.36-2.26 and 0.86-1.36 solar masses (these ranges reflect the measurement uncertainty), respectively, not heavy enough to be black holes. What was going on?
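The mass estimates come from how quickly the signal sweeps upward in frequency, which to leading order depends on a particular combination of the two masses known as the chirp mass. As a small worked example (using only the endpoints of the mass ranges quoted above as illustrative inputs, not the full published analysis), the standard formula can be evaluated in a few lines of Python:

# Sketch: the "chirp mass" combination that controls the leading-order
# frequency evolution of an inspiral. Inputs are just the quoted range endpoints.
def chirp_mass(m1, m2):
    """M_c = (m1*m2)**(3/5) / (m1 + m2)**(1/5), in the same units as m1 and m2."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

for m1, m2 in [(1.36, 1.36), (2.26, 0.86)]:
    print(f"m1 = {m1}, m2 = {m2} solar masses -> chirp mass {chirp_mass(m1, m2):.3f} solar masses")

Notice that quite different-looking pairs within the quoted ranges give nearly the same chirp mass; this is why the individual masses carry broad uncertainty ranges even though the combination itself is measured well.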
The conclusion drawn from these observations was that the merger was not of black holes but of neutron stars, compact remnants of massive stars that were nevertheless not massive enough to collapse into black holes. An artist's conception of a binary neutron star merger is shown above. Following the initial identification of the event, countless telescopes around the world trained on the source the very same day, after a notice was released around 13:00 UTC, hoping to observe more following the merger.
And they were not disappointed. Less than a day after the initial gamma ray burst had faded, the source began to appear at other frequencies, and remained bright for several weeks before fading. The above figure shows the Hubble image of the merger's host galaxy, NGC 4993. This galaxy is at a distance of roughly 130 million light-years, and even at this distance, the collision of the neutron stars was clearly visible against the billions of other stars. Finally, the chart below demonstrates just how well documented the event was:
Many different instruments took images in X-rays as well as ultraviolet, visible, infrared, and radio waves. The horizontal axis indicates the rough timeline of events (on a logarithmic scale) in each part of the electromagnetic spectrum, stretching from less than a day to several weeks after the merger. Several representative images of NGC 4993 and the source within are shown at bottom.
Without extensive collaboration within the astronomical community, collecting this wealth of data on the binary neutron star merger would not have been possible. This marked the first time in history that a single event was measured in both gravitational waves and electromagnetic waves, not to mention how thoroughly the merger was photographed across the spectrum. This coordinated observation is known as multi-messenger astronomy, and it may have profound implications for our future understanding of the universe. Some of what we learned from the binary neutron star merger is discussed in the next post.
Note: Most of the figures above are taken from the open access papers detailing the discovery and analysis of the binary neutron star merger. For further reading on the event, links to these papers may be found in the sources below.
Sources: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.119.161101, https://arxiv.org/pdf/1710.05834.pdf, http://iopscience.iop.org/article/10.3847/2041-8213/aa91c9/pdf
Sunday, December 9, 2018
2018 Season Summary
The 2018 Atlantic Hurricane Season had above-average activity, with a total of
16 cyclones attaining tropical depression status,
15 cyclones attaining tropical storm status,
8 cyclones attaining hurricane status, and
2 cyclones attaining major hurricane status.
Before the beginning of the season, I predicted that there would be
18 cyclones attaining tropical depression status,
16 cyclones attaining tropical storm status,
8 cyclones attaining hurricane status, and
4 cyclones attaining major hurricane status.
The average numbers of named storms, hurricanes, and major hurricanes for an Atlantic hurricane season (over the 30-year period 1981-2010) are 12.1, 6.4, and 2.7, respectively. The 2018 season was somewhat above average in these categories, with the exception of the number of major hurricanes. The formation of many short-lived subtropical storms inflated the named storm total, but the Accumulated Cyclone Energy (ACE) value of 127 for the season was still above average. This value accounts for the duration and intensity of tropical cyclones as well as their number.
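For context, ACE is computed from the 6-hourly best-track records: whenever a system is at tropical storm strength or greater, its maximum sustained wind in knots is squared, the squares are summed over every record and every storm, and the total is divided by 10,000. A minimal Python sketch of that bookkeeping follows; the wind history shown is a made-up placeholder, not 2018 data.

# Sketch: Accumulated Cyclone Energy from 6-hourly maximum sustained winds (knots).
# Only records at tropical storm strength or greater (>= 34 kt) contribute.
def accumulated_cyclone_energy(six_hourly_winds_kt):
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 34) / 10_000

# Placeholder wind history for a single short-lived storm (not real data).
example_storm = [30, 35, 45, 60, 65, 55, 40, 30]
print(f"ACE contribution: {accumulated_cyclone_energy(example_storm):.2f}")
# A season's ACE is the sum of the contributions from each of its storms.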
As usual, the El Niño-Southern Oscillation (ENSO) was a major player in tropical cyclone activity this year. As hurricane season progressed into autumn, ocean temperatures in the equatorial Pacific trended higher than normal, signaling the advent of an El Niño event. Typically, such an event causes higher wind shear over the Atlantic and suppresses tropical cyclone activity, but it arose later in the year than anticipated, mitigating its effects.
The increase in wind shear during El Niño is most pronounced in the Caribbean Sea. Indeed, this region was a "graveyard" for tropical cyclones during 2018, as indicated by the above map of all the season's tracks. Every storm that entered the eastern Caribbean dissipated shortly thereafter due to unfavorable atmospheric winds. Ocean temperatures in the tropical Atlantic east of the Caribbean were also fairly cool for much of the season. This setup prevented long-track hurricanes from forming, with one notable exception: Hurricane Florence. Florence took a highly unusual route farther north but still pushed westward into the U.S. east coast. Overall, my predictions were slightly higher than the actual season activity, but they did correctly indicate the risk to the east coast.
The two most notable storms of the season were Hurricane Florence and Hurricane Michael. Florence made landfall in North Carolina, where it stalled and brought over 30 inches of rain to some areas. The record-breaking rainfall caused unprecedented flooding and extensive damage. Michael brought torrential rain to Central America as it was forming and then went on to strengthen right up until landfall in the Florida Panhandle. With a pressure of 919 mb at landfall, Michael was at the time the 3rd most intense cyclone on record to make landfall in the United States and only the 4th category 5 hurricane ever recorded to do so. Some other notable facts and records from the 2018 Atlantic season include:
- The 2018 season had seven storms that were at some point subtropical, a new record
- On September 12, Florence, Helene, Isaac, and Joyce all coexisted in the Atlantic. This was the first time four named storms existed simultaneously since 2008
- Hurricane Leslie took a highly unusual track over the far eastern Atlantic near the end of its lifetime. As a result, the first tropical storm warning on record was issued for Madeira Island southwest of Portugal; Leslie became post-tropical just before landfalling in Portugal itself
Sources: https://www.cpc.ncep.noaa.gov/products/analysis_monitoring/lanina/enso_evolution-status-fcsts-web.pdf,
Labels: Hurricane Stats
Saturday, October 27, 2018
Hurricane Oscar (2018)
Storm Active: October 26-31
On October 23, an area of low pressure a few hundred miles east-northeast of the Lesser Antilles began to produce some shower and thunderstorm activity. Over the next few days, the disturbance moved slowly northward and atmospheric conditions for development improved. As westerly shear diminished, convection persisted closer to the center of the developing low. Late on October 26, the low-level center had become well-defined. However, due to its interaction with a nearby upper-level low, the system was classified Subtropical Storm Oscar. This was the seventh named storm of 2018 to be subtropical during some part of its lifetime, making 2018 the first season known to have had so many.
Interaction with the upper-level low caused Oscar to turn sharply westward on the 27th. Meanwhile, significant deepening took place, indicating that Oscar's maximum winds had increased. Oscar's structure evolved throughout that day until the cyclone possessed a small core of deep convection and maximum winds close to the center. As a result, it was reclassified as a tropical storm. A ridge pushed the system south of west on the 28th and favorable conditions allowed an eye feature to begin forming. Around the same time, Oscar strengthened into a hurricane. The trend of gradual intensification persisted on October 29 and the system became a category 2 that evening, reaching its peak intensity of 105 mph winds and a pressure of 970 mb. Meanwhile, a mid-latitude frontal system approaching from the west began to steer the cyclone toward the north.
By the 30th, Oscar was picking up speed toward the north and north-northeast and began to encounter cooler waters as it passed east of Bermuda. As a result, deep convection near the center waned, the maximum winds dropped, and the system began to take on extratropical characteristics. Nevertheless, it remained a potent cyclone and brought rough surf to Bermuda that day. Around midday on October 31, Oscar transitioned to a hurricane-strength extratropical storm as it sped north-northeastward over the open Atlantic. The post-tropical low deepened over the north Atlantic during the following days, reaching a minimum pressure of 950 mb on November 2. It dissipated a few days later near Iceland.
This image shows the small hurricane Oscar at peak intensity as a category 2 hurricane.
Oscar did not directly affect any land areas during its time as a tropical or subtropical cyclone.
Labels: 2018 Storms
Tuesday, October 9, 2018
Tropical Storm Nadine (2018)
Storm Active: October 9-12
A late-season tropical wave entered the Atlantic Ocean on October 6 and began to show signs of development. Waters in the eastern Atlantic were still fairly warm and shear was low, so organization proceeded fairly quickly. By October 8, convection had wrapped nearly around the disturbance, but it still lacked a closed circulation. The next day, it developed into Tropical Depression Fifteen. Within a few hours, satellite intensity estimates supported its upgrade to Tropical Storm Nadine. Nadine formed unusually late in the season for a system so far east in the tropical Atlantic.
The cyclone was fairly small, and hence prone to rapid changes in intensity. Over the next day, it took advantage of a favorable environment and strengthened quickly to its peak intensity as a strong tropical storm with 65 mph winds and a pressure of 997 mb on October 10. However, wind shear sharply increased that night and displaced all of Nadine's convection to the east of the center by October 11. As a result, the storm decayed rapidly as it moved northwest. The next day, Nadine dissipated over the central Atlantic.
Nadine was a small cyclone that quickly succumbed to unfavorable atmospheric conditions only a day or two after formation.
The short-lived Nadine did not affect any land areas, but was an unusually late-season storm to form in the central tropical Atlantic.
Labels: 2018 Storms
Sunday, October 7, 2018
Hurricane Michael (2018)
Storm Active: October 7-12
During the first few days of October, a broad area of low pressure developed in the southwestern Caribbean, with associated showers and thunderstorms extending from Central America all the way to Jamaica and Haiti. Such systems are common in this region in the autumn, and are known as Central American Gyres (CAGs). CAGs tend to bring heavy rainfall to a wide area of Central America, which held true in this case. In addition, they can sometimes spawn tropical cyclones. Nevertheless, the large circulation of a CAG takes time to consolidate, and the system only slowly organized as it moved northwestward. By October 6, the center of the low was just north of Honduras and the region of strong upper-level winds that had been affecting the system retreated to the north. This allowed further organization, and a flare-up of organized thunderstorm activity led to the classification of Tropical Depression Fourteen early on October 7.
Even immediately after formation, the storm had an impressive satellite signature, with very cold cloud tops to the east of the center of circulation. Soon after, satellite and aircraft data indicated an immense radius of gale force winds, extending over 200 miles from the center in some quadrants, and the depression was upgraded to Tropical Storm Michael. Under the influence of some westerly shear, the center of Michael underwent some reformations toward the east that day, but the large cyclone strengthened steadily into the evening as it moved slowly northward. Already, the outer rainbands were hitting the southern tip of Florida, even though the center was still just east of the Yucatan Peninsula. By the morning of October 8, Michael had strengthened into a hurricane.
During that day, the inner core gradually became more organized and the cyclone steadily intensified as it passed near the western tip of Cuba. A large eyewall struggled to surround the center throughout the day, but coverage increased during the evening. The storm became a category 2 early on October 9. The system gained some forward speed toward the north that day and the warm waters of the Gulf of Mexico supported extremely intense convection. Shear also lessened, and the eyewall became complete early that evening, bringing Michael to category 3 strength. The outer bands of the storm were now crossing the Gulf coastline, but proximity to land did nothing to slow the system's intensification. A symmetric eye cleared out on satellite imagery overnight and Michael rocketed to category 4 status, deepening even when the center was within 50 miles of landfall. The powerful cyclone reached its peak intensity as a category 5 hurricane with 160 mph winds* and a pressure of 919 mb when it slammed into the Florida Panhandle around 1:00 pm local time on October 10. In terms of pressure, Michael became the third-strongest hurricane on record to make landfall in the United States, behind only Hurricane Camille of 1969 and the Labor Day Hurricane of 1935. At the time, it also entered the overall top 10 list of strongest landfalls recorded for an Atlantic hurricane. Furthermore, it was the only category 5 ever known to hit the Florida Panhandle.
The storm surge that Michael brought to the coastline was unprecedented and record-breaking in some areas, and the wind damage was catastrophic, though the worst of it was confined to quite a small area where the eyewall made landfall. However, the rainfall was not especially severe, as the system accelerated northeastward as it moved inland and did not linger. In fact, the system entered southwestern Georgia before losing major hurricane status, thus becoming the first major hurricane to impact the state since 1898. Nevertheless, the core did rapidly deteriorate once inland, and Michael weakened to a tropical storm early on October 11 over central Georgia. Later that day, it moved through the Carolinas, bringing heavy rain and wind to regions inundated by Florence's rains the previous month. Fortunately, the storm was moving so fast that the flooding impacts were not as severe as they otherwise would have been. The cyclone emerged over the Atlantic near the border of North Carolina and Virginia and became post-tropical early on October 12. The system crossed the Atlantic and eventually brought some rain and wind to western Europe on October 15.
The above image shows Hurricane Michael making landfall in the Florida panhandle at category 5 intensity. This was among the strongest landfalls ever recorded for an Atlantic hurricane.
Michael's speedy development amid only marginally favorable conditions and rapid strengthening prior to landfall were very unexpected and demonstrate how far there is to go in modeling intensity changes in tropical cyclones.
*Note: Hurricane Michael was operationally identified as a top-end category 4 hurricane with 155 mph winds at landfall, but this was changed to 160 mph (category 5 strength) in post-season analysis.
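The distinction in the note comes down to where the estimated landfall wind falls on the Saffir-Simpson scale, whose category boundaries in maximum sustained wind (mph) are fixed. A small Python sketch of that lookup, applied to the two estimates mentioned above:

# Sketch: Saffir-Simpson hurricane category from maximum sustained wind in mph.
def saffir_simpson_category(wind_mph):
    if wind_mph >= 157:
        return 5
    if wind_mph >= 130:
        return 4
    if wind_mph >= 111:
        return 3
    if wind_mph >= 96:
        return 2
    if wind_mph >= 74:
        return 1
    return 0  # below hurricane strength

for wind_mph in (155, 160):  # operational vs. post-season estimates for Michael's landfall
    print(f"{wind_mph} mph -> category {saffir_simpson_category(wind_mph)}")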
Labels: 2018 Storms
Sunday, September 23, 2018
Hurricane Leslie (2018)
Storm Active: September 23-25, September 28-October 13
On September 18, an extratropical system associated with the remnants of Hurricane Florence moved away from the U.S. east coast over the tropical Atlantic. A new low formed along the frontal boundary around September 22 in the central subtropical Atlantic. Over the next day, the low developed spiral banding and lost its frontal nature. By the morning of September 23, it had transitioned into Subtropical Storm Leslie. This was the sixth subtropical storm of the 2018 season, setting a new record.
At the time of formation, Leslie was drifting westward, but steering currents were quite weak and it turned southward and ultimately eastward over the next day. The system had never had much in the way of deep convection, and what was there diminished further by September 25. Meanwhile, a new front was approaching from the west and interacting with Leslie, elongating its circulation. By late that morning, the system had become extratropical. Upon transition, it underwent a rapid burst of strengthening and was producing hurricane-force winds by the 26th. Since it was non-tropical, however, it was not designated a hurricane.
At the same time it continued to turn toward the north and then back west. Conditions were still fairly favorable for tropical cyclone development so it began to transition back the next day. On September 28, enough deep convection had reappeared near the center for Leslie to again be classified as subtropical. However, its maximum winds had subsided back to around 50 mph, so this was the initial intensity. The system moved slowly southwest over the next few days and gradually developed more banding features south and east of the circulation center. Leslie transitioned to a fully tropical storm for the first time on September 29. Sea surface temperatures increased and wind shear decreased along the storm's path, leading to some slow strengthening over the next few days as thunderstorms finally wrapped entirely around the center.
By October 2, Leslie was approaching hurricane strength and had dipped in latitude to below 30° N due to its unusual southwestward motion. A ragged eye formed that evening and the system was upgraded to a hurricane for the first time. Overnight, the cyclone became stationary around 500 miles east-southeast of Bermuda. It also peaked in intensity, with maximum sustained winds of 80 mph and a central pressure of 975 mb. Due to the influence of an upper-level low pressure system to the north, Leslie began to move northward on October 3. This motion took it over cooler water, and convection waned again, with a shallow ring of convection around the center separated from the outer bands by a "moat" of drier air. Leslie began to weaken as a result and soon was a tropical storm again.
Although the system was still quite distant from any landmasses, the large size of the circulation generated significant ocean swells that led to rough surf in Bermuda and even the east coast of North America. Leslie stalled again about 450 miles northeast of Bermuda on October 5, and began to feel the influence of the mid-latitude westerlies. The cyclone turned sharply eastward that day. Meanwhile, the structure of the storm had changed quite a bit; a central area of strong thunderstorms had replaced the large eye, and a large area of convection persisted to the north of the center. Leslie began to separate from a trough to its north and turned south of east on October 7. The storm accelerated southeastward over the next day, bringing it over warmer waters, and it began to restrengthen.
The cyclone developed a central dense overcast on October 8 and approached hurricane strength on the 9th, achieving category 1 status that evening, a week after doing so the first time. Leslie turned due south for a little while on the 10th, reaching a southernmost latitude of 27.8° N. However, another trough moving to its north turned the system east-northeast and began to accelerate it toward the far eastern Atlantic. The inner core structure fluctuated a great deal in organization during the following day, but overall it became a bit better defined and Leslie strengthened somewhat. Late on October 11, Leslie reached its peak intensity as a top-end category 1 hurricane with 90 mph winds and a pressure of 969 mb.
The system picked up even more speed the next day, and colder waters weakened the storm's convection. A tropical storm warning was issued for the island of Madeira, located southwest of Portugal. This was the first such warning ever issued for the island, and Leslie was the first tropical cyclone known to affect it in modern history. The center passed north of Madeira later on the 12th. Finally, on October 13, Leslie transitioned to an extratropical low just before making landfall in northern Portugal. This transition did not prevent the cyclone from bringing hurricane-force wind gusts and heavy rain to the Iberian Peninsula. The low finally dissipated inland a few days later.
This image shows Leslie during its second and final stint as a hurricane, moving east-northeastward toward Europe.
Leslie's convoluted track included some highly unusual southward dips over the central Atlantic. Just after becoming extratropical, it moved over the Iberian Peninsula, though this is not shown above.
Labels: 2018 Storms