Monday, July 22, 2019

Tropical Depression Three (2019)

Storm Active: July 22-23

Around July 12, a tropical wave moved off the coast of Africa. It was among the first of the season to be seriously monitored for cyclone development, but it traversed the Atlantic basin for the following week without incident. A portion of the wave axis took a northern route, passing north of the Caribbean islands and approaching the Bahamas by July 21. Stable dry air in the region made progress difficult for the disturbance, but it managed to spin up a small area of convection driven by very warm ocean waters. This led to a tiny circulation and the system strengthened into Tropical Depression Three on July 22 over the western Bahamas.

Soon after, the depression began to feel the influence of an approaching cold front and turned northward on July 23. The center passed just offshore of east Florida, but its small size meant that only a few showers and occasional gusty winds impacted land. By the late morning, the system had already lost its identity and dissipated as it combined with the front.



Even though the tropical depression formed over very warm water, it succumbed quickly to dry mid-level air.



Tropical Depression Three was a small and short-lived system with minimal land impacts.

Thursday, July 11, 2019

Hurricane Barry (2019)

Storm Active: July 11-14

During the second week of July, a trough of low pressure over the southeast United States drifted slowly south-southeastward, producing some scattered afternoon thunderstorms as it went. A few days later, on July 9, this anomalous motion brought it into the extreme northeastern Gulf of Mexico, where more consistent convection began to flare up. Weak steering currents allowed the system to meander west-southwestward and it gradually organized, developing a broad circulation. By July 10, a clear low-level center had formed, but it was displaced well to the northeast of the mid-level circulation. Moreover, the strongest thunderstorms were actually located over southeast Louisiana, where significant flooding occurred before the tropical disturbance had even been classified.

Finally, on July 11, improvements in organization prompted the naming of Tropical Storm Barry, located now nearly due south of the Mississippi delta. Even after naming, however, dry continental air pushing in from the north restricted cloud cover to the southern portion of the tropical storm, and several small low-level vortices were evident on satellite imagery. This disorganized structure hampered Barry's intensification. Nevertheless, the pressure fell appreciably over the next day and aircraft reconnaissance indicated that the system's maximum winds steadily increased to strong tropical storm strength by July 12.

Meanwhile, Barry took a turn north of west around the edge of the mid-level steering ridge and began to move toward Louisiana. Even by the afternoon of July 12, however, the northern semicircle remained very dry, so few effects were felt over land even with the system less than 100 miles offshore. Despite its unconventional structure, Barry steadily strengthened through landfall. It peaked as a category 1 hurricane with 75 mph winds and a pressure of 993 mb on July 13 as it crossed the central Louisiana coastline around noon local time. The slow movement of the system resulted in only a gradual weakening trend and prolonged heavy rainfall, especially just east of the landfall point. Nevertheless, Barry weakened to a tropical storm shortly after landfall and a tropical depression on July 14 as the center of circulation pushed further inland. Upper level winds out of the north kept most of the precipitation over water even as the center moved away, sparing inland areas from more severe flooding. Soon after, the storm became extratropical over the midwest.



The above image shows Hurricane Barry near landfall, with most of the northern half of the circulation exposed.



Barry originated from a non-tropical disturbance over the southeast U.S.

Monday, May 20, 2019

Subtropical Storm Andrea (2019)

Storm Active: May 20-21

As the third week of May began, a frontal boundary moved off of the U.S. east coast. The southern end of the front stalled north of Hispaniola and formed a trough of low pressure. There the system found a relatively favorable atmosphere and marginally warm ocean temperatures, supporting some scattered storm development. Before long, a low-pressure center had developed. By May 20, the storm had a small convective shield displaced to the north and east and aircraft reconnaissance measured gale-force winds. Since the circulation was still interacting with an upper-level low to its southwest and the gale force winds were spread out from the center, the storm was classified Subtropical Storm Andrea, the first named storm of the 2019 Atlantic hurricane season.

The system moved northward that day, but began to slow down and veer eastward by the afternoon of May 21. Meanwhile, the convection associated with the system dissipated, leaving behind just a swirl of low-level clouds to mark the center of circulation. As a result, Andrea was downgraded to a subtropical depression. Early that evening, it degenerated further into a remnant low and these remnants dissipated the next day as a new front approached. Moisture that had been associated with Andrea brought some rain showers to Bermuda on the 22nd. The formation of Andrea marked the 5th consecutive year during which a named storm formed prior to the official start of the hurricane season on June 1, surpassing the record set in 1951-1954. However, short-lived weak systems such as Andrea may very well have been missed prior to the era of satellite observation.



This image shows Subtropical Storm Andrea on May 20. Also visible is the upper-level low to its southwest which helped to weaken the system.



Andrea formed in the far western Atlantic, one of the typical areas for early season cyclogenesis.

Sunday, May 19, 2019

Professor Quibb's Picks – 2019

My personal prediction for the 2019 North Atlantic hurricane season (written May 19, 2019) is as follows:

15 cyclones attaining tropical depression status,
14 cyclones attaining tropical storm status,
6 cyclones attaining hurricane status, and
3 cyclones attaining major hurricane status.

Following a fairly average hurricane season in 2018 (which nevertheless featured two devastating major hurricanes), I predict that the 2019 season will see a comparable number of cyclones, albeit with rather different areas to watch. Note that the average Atlantic hurricane season (1981-2010 average) has 12.1 tropical storms, 6.4 hurricanes, and 2.7 major hurricanes. As with any season, our prediction begins with a look at the El Niño Southern Oscillation (ENSO) index, a measure of equatorial sea temperature anomalies in the Pacific ocean that have a well-documented impact on Atlantic hurricane activity. These anomalies are currently positive, corresponding to an El Niño state, and have been since last fall. The image below (click to enlarge) shows model predictions for the ENSO index through the remainder of 2019.



In comparison to the last several years, the situation is more static: no significant change of state is expected during this year's hurricane season (though there is, of course, significant uncertainty). This state of affairs tends to suppress hurricane activity and increase the chance of cyclones in the subtropical Atlantic curving away from the North American coastline (unlike, for example, the unusual track of Hurricane Florence last year).

This is fortunate, because all indications are that the subtropical Atlantic will continue to churn out named storms as it did last season. Sea surface temperatures continue to run high in the region, and El Niño effects are not as pronounced there, partially explaining why my prediction still features an above average number of storms. Other factors also somewhat offset the El Niño: ocean temperatures in the tropical Atlantic (the birthplace of most long-track hurricanes) are slightly above normal this year, a trend expected to persist over the next several months. The atmosphere has also been less dry in the region, with less Saharan dry air to quash developing tropical waves than in 2018 or at the beginning of the 2017 season. Expect the tropics to be less hostile to long-track hurricane formation than last year, when all cyclones taking the southerly route dissipated upon entering the Caribbean.

My estimated risks on a scale from 1 (least risk) to 5 (most risk) for different specific parts of the Atlantic are as follows:

U.S. East Coast: 3
Though the subtropical Atlantic will be active, I predict less of a risk to the U.S. coastline, with a smaller chance of a Florence-like system this year. Though there may be a few hurricanes passing offshore, most should recurve out over open water. Bermuda, however, is at higher risk.

Yucatan Peninsula and Central America: 2
These regions may benefit the most from a persistent El Niño, with wind shear making the development of an intense hurricane in the western Caribbean difficult. Further, I expect tracks to curve northward more often than striking Central America directly. Later season cyclones originating in the monsoonal gyre near Panama may pose the primary threat, and these tend to be principally rainmakers.

Caribbean Islands: 4
With the main development region (MDR) of the tropical Atlantic more favorable this year, the Caribbean is unlikely to repeat the reprieve it enjoyed last year following arguably its worst season of all time (2017). Early season storms are still likely to fizzle out due to El Niño-related shear, but a wetter atmosphere suggests that tropical disturbances will have to be watched carefully. This includes a greater possibility of tropical cyclogenesis in the Caribbean itself.

Gulf of Mexico: 3
Sea temperatures are consistently higher in the Gulf this year than they have been recently, especially near the Florida gulf coastline, but conditions here overall are a mixed bag. A strong jet stream across the continental U.S. will support more severe thunderstorms over land this summer, but this actually may work against cyclones thriving in the region. Balancing these factors yields an average risk, though this overall rating is a combination of a higher-than-normal risk in the eastern Gulf and a lower-than-normal risk farther west.

Overall, I expect the 2019 hurricane season to feature close-to-average activity. Nevertheless, this is just an informal forecast. Individuals in hurricane-prone areas should always have emergency measures in place. For more on hurricane safety sources, see here. Remember, devastating storms can occur even in otherwise quiet seasons.

Sources: https://www.cpc.ncep.noaa.gov/products/analysis_monitoring/lanina/enso_evolution-status-fcsts-web.pdf, https://www.cpc.ncep.noaa.gov/products/CFSv2/CFSv2seasonal.shtml, https://www.ospo.noaa.gov/Products/ocean/sst/anomaly/

Wednesday, May 15, 2019

Hurricane Names List – 2019

The name list for tropical cyclones forming in the North Atlantic basin for the year 2019 is as follows:

Andrea
Barry
Chantal
Dorian
Erin
Fernand
Gabrielle
Humberto
Imelda
Jerry
Karen
Lorenzo
Melissa
Nestor
Olga
Pablo
Rebekah
Sebastien
Tanya
Van
Wendy

This list is the same as the list for the 2013 season, with the exception of Imelda, which replaced the retired name Ingrid.

Tuesday, May 7, 2019

The abc Conjecture: Applications and Significance

This is the third part of a three-part post concerning the abc conjecture. For the first, see here.

The first post in this series presented some explanation as to why the abc conjecture seems like a reasonable attempt to mathematically codify a big idea. This idea is that the prime factorization of a sum of two numbers should not really relate to those of the individual numbers. Equivalently, it says that if we see an equation like 3 + 5^3 = 2^7, we should think of it as a "rare event" or "coincidence" that big powers of small primes are related in this way. The second post provided some examples and numerical evidence for the rigorous version of the conjecture. To review, this states that

The abc Conjecture: For any ε > 0, no matter how small, for all but finitely many equations of the form a + b = c where a and b are relatively prime, rad(abc)^(1 + ε) > c.

Again, the radical rad(n) of an integer n is the product of its distinct prime factors. However, none of what has been discussed so far constitutes a mathematical proof that the abc conjecture is true or false.
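
Here is a minimal sketch in Python (my own helper, computing the radical by trial division) of the quantities involved, checked against the coincidence 3 + 5^3 = 2^7 discussed in the first post:

```python
def rad(n):
    """Product of the distinct prime factors of n (rad(1) = 1)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    return result * n if n > 1 else result

# The "coincidence" 3 + 5^3 = 2^7: a = 3, b = 125, c = 128.
a, b = 3, 125
c = a + b
print(rad(a * b * c), c)   # 30 128: rad(abc) is far smaller than c,
                           # so this is one of the rare exceptional triples.
```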

In 2012, the Japanese mathematician Shinichi Mochizuki shocked the mathematical community by publishing, out of the blue, what he claimed was a proof of the abc conjecture. However, the initial excitement at this announcement was quickly replaced by confusion; almost no one was able to decipher the tools used in the proof, which totaled over 500 pages in length! Mochizuki, working in isolation for years, had built up a brand new mathematical formalism which he called "Inter-Universal Teichmüller Theory" that was bizarre and unfamiliar to other researchers. The language and notation (a sample of which is provided in the screenshot below) seemed alien, even to mathematicians!



Moreover, he refused to publicly lecture on the new material, instead only working with a few close colleagues. The combination of the length and inscrutability of the proof with his unwillingness to elucidate it discouraged people from attempting to understand it. In the years since the proof was published, skepticism has mounted concerning the proof's validity. While a small group of mathematicians defend it, a majority of the mathematical community thinks it is unlikely that the proof is valid. For now, the abc conjecture remains effectively open.

Nevertheless, it is certain that attempts to prove the conjecture will continue. It has a number of useful applications that would solve a myriad of other mathematical problems, should it be true. To illustrate the power of the abc conjecture, we give one famous example of an application: Fermat's Last Theorem.

One of the first equations we considered in this series was x^2 + y^2 = z^2, which relates the side lengths of right triangles. This equation has infinitely many solutions, namely 3^2 + 4^2 = 5^2, 5^2 + 12^2 = 13^2, etc. Fermat's Last Theorem states that if we raise the exponents from 2 to any higher power, there are no solutions in the positive integers. That is, x^3 + y^3 = z^3, x^4 + y^4 = z^4, and so on are not satisfied by any x, y, and z > 0. Famously claimed by Pierre de Fermat in the 17th century, this problem remained unsolved for centuries. In 1985, when the abc conjecture was first stated, it remained open.
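
As a concrete (and of course non-conclusive) illustration, a brute-force search in Python over small values turns up plenty of solutions for n = 2 and none for higher exponents:

```python
# Brute-force search for x^n + y^n = z^n with 1 <= x <= y < z < LIMIT.
LIMIT = 500

for n in (2, 3, 4, 5):
    nth_powers = {z**n: z for z in range(1, LIMIT)}
    solutions = [(x, y, nth_powers[x**n + y**n])
                 for x in range(1, LIMIT)
                 for y in range(x, LIMIT)
                 if x**n + y**n in nth_powers]
    print(n, len(solutions), solutions[:3])

# n = 2 yields many Pythagorean triples: (3, 4, 5), (5, 12, 13), ...
# n = 3, 4, 5 yield none, consistent with Fermat's Last Theorem.
```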

So let us assume that we have (somehow) proven the abc conjecture, and are interested in Fermat's Last Theorem. The first thing to note about the equation x^n + y^n = z^n is that if we had a solution for this equation, we could always find one for which x^n and y^n were relatively prime. This is because if they have a common prime factor, so must z^n, and we can cancel this factor (raised to the nth power) from both sides. Therefore, we have arrived at a situation in which we can apply the abc conjecture. The radical of x^n, for any n, is at most x, since multiplying x by itself does not introduce any more prime factors that were not already there. Hence rad(x^n y^n z^n) = rad(x^n)rad(y^n)rad(z^n) ≤ xyz < z^3, where the last step holds because x and y are both less than z. Therefore, for ε > 0, we have that

rad(x^n y^n z^n)^(1 + ε) < (z^3)^(1 + ε) = z^(3 + 3ε).

On the other hand, applying the conjecture to this triple, we have that for ε > 0,

rad(x^n y^n z^n)^(1 + ε) > z^n

in all but finitely many cases. Since we can choose ε to be any positive number, we can make it small enough so that 3 + 3ε < 4 (e.g. if ε = 0.1). Then if n ≥ 4, the two inequalities above directly contradict each other. Since the top one always holds and the bottom holds in all but finitely many cases, we conclude that there can be at most finitely many exceptions to Fermat's Last Theorem when n ≥ 4.

So the abc conjecture does not quite imply Fermat's Last Theorem, but it comes very close. If, in addition, we knew just a bit more about how the exceptional abc triples behaved, we could manually verify that there are no counterexamples to Fermat's Last Theorem for n ≥ 4. Interestingly, this argument does not say anything about the n = 3 case, that is, about the non-existence of solutions to x^3 + y^3 = z^3. This special case, however, had already been proven by Euler in the mid-1700s.

Of course, the abc conjecture remains unproven, while Fermat's Last Theorem was finally proven by Andrew Wiles in 1995. This was done by entirely different means. Nevertheless, this serves as a relatively simple example of how the conjecture can prove results about Diophantine equations without invoking very difficult mathematics. Another example of a consequence is the following statement, sometimes called Pillai's conjecture:

Conjecture: Every natural number k occurs only finitely many times as the difference of two perfect powers.

For example, the special case k = 1 is the subject of Catalan's conjecture, and states that x^p - y^q = 1 has only one solution: 3^2 - 2^3 = 1. This was proven by Preda Mihăilescu in 2002 (again by very different means from those above and from Wiles' methods), but the general case remains unsolved. If we knew for a fact that the abc conjecture were true, we would be able to prove this result by a very similar argument to the one given above for Fermat's Last Theorem (the reader is encouraged to try this!). Note that Pillai's conjecture also implies that the original equation that motivated the abc conjecture, namely y^2 = x^3 + k, also has only finitely many solutions (for fixed k). This is the result David Masser and Joseph Oesterlé sought on their way to first formulating the statement.
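
To get a feel for Pillai's conjecture, here is a small numerical experiment in Python (a finite check only, proving nothing): list the perfect powers up to a bound and tally the small differences between them.

```python
from collections import Counter

BOUND = 10**6

# All perfect powers m^k <= BOUND with m, k >= 2.
powers = set()
for base in range(2, int(BOUND**0.5) + 1):
    value = base * base
    while value <= BOUND:
        powers.add(value)
        value *= base
powers = sorted(powers)

# Count how often each difference k <= 10 appears between two perfect powers.
gap_counts = Counter()
for i, small in enumerate(powers):
    for large in powers[i + 1:]:
        k = large - small
        if k > 10:
            break
        gap_counts[k] += 1

print(dict(sorted(gap_counts.items())))
# The difference 1 appears exactly once (9 - 8), as Catalan's conjecture asserts;
# Pillai's conjecture predicts every k appears only finitely many times overall.
```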

These examples start to indicate how important the abc conjecture is to the study of Diophantine equations; if it were proven, it would resolve many different problems that are currently treated separately in a single stroke. Even reproving known results in a new and simple way would be greatly beneficial to the theory, since a set of tools that could prove abc would help to unify disparate parts of number theory. As a result, mathematicians will doubtlessly continue work toward solving the conjecture and probing the most fundamental structure of numbers.

Sources: http://projectwordsworth.com/the-paradox-of-the-proof/, Shinichi Mochizuki: Inter-Universal Teichmüller Theory I: Construction of Hodge Theaters, http://mathworld.wolfram.com/PillaisConjecture.html, https://rlbenedetto.people.amherst.edu/talks/abc\_intro14.pdf, Brian Conrad: The abc Conjecture, 12 sep 2013.

Tuesday, April 16, 2019

The abc Conjecture: abc Triples

This is the second post in a series about the abc conjecture. For the first post, see here.

In the last post, we defined the radical of an integer n, namely the product of distinct prime factors of n. We suspected in the last post that for most equations a + b = c where a and b are relatively prime, rad(abc) > c. This is because this inequality expresses our hypothesis that there should not be too many high powers of primes in the factorizations of a, b, and c. As a result, we made the following conjecture:

Conjecture 1: For all but finitely many equations of the form a + b = c where a and b are relatively prime, rad(abc) > c.

However, as mentioned at the end of the previous post, this is in fact false. To prove this, we have to exhibit an infinite family of equations a + b = c with rad(abc) ≤ c. Any triple (a,b,c) of numbers satisfying this property is called an abc triple. The only example we've seen so far is (1,8,9), or in equation form, 1 + 8 = 9. In terms of this new definition, we are trying to show that there are infinitely many abc triples. The following claim gives the desired result.

Claim: For any prime number p greater than 2, the triple (a,b,c) = (1, 2^(p(p-1)) - 1, 2^(p(p-1))) is an abc triple.

Proof: This family is infinite because there are infinitely many prime numbers p. The proof depends on a fact in elementary number theory known as the Euler-Fermat Theorem. This theorem may be used to show that b = 2^(p(p-1)) - 1 is divisible by p^2: since φ(p^2) = p(p-1), Euler's theorem gives 2^(p(p-1)) ≡ 1 (mod p^2). This is significant because we now know that the radical of b cannot be greater than b divided by p; this is because taking the radical of b "forgets" about at least one of the factors of p. Of course, rad(1) = 1 and rad(2^(p(p-1))) = 2, so

rad(abc) = rad(1)*rad(b)*rad(c) ≤ 1*(b/p)*2 = 2b/p < 2c/p.

Since p > 2, this last value is less than c, so that we do in fact have an infinite family of abc triples.

In fact the situation is even worse than this. Since the radical is less than 2c/p (as shown in the proof), it is not enough to replace the hypothesis rad(abc) > c with 2rad(abc) > c, or any higher multiple. We can make 2/p arbitrarily small by increasing p so that the radical is smaller than c by an arbitrarily large factor. For example, taking p = 5 gives the abc triple (1,1048575,1048576). Note that 5^2 = 25 divides b = 1048575, as claimed. Our proof guarantees that rad(abc) ≤ 2c/5. In fact the radical of this product is 419430. This is indeed less than 2/5 of c.
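
A short Python sketch (using a trial-division radical, and keeping to p = 3, 5, and 7 so the numbers stay manageable) confirms both the divisibility by p^2 and the 2c/p bound:

```python
def rad(n):
    """Product of the distinct prime factors of n (rad(1) = 1)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    return result * n if n > 1 else result

for p in (3, 5, 7):
    c = 2 ** (p * (p - 1))
    b = c - 1
    assert b % (p * p) == 0           # the Euler-Fermat step: p^2 divides b
    r = rad(1 * b * c)
    print(p, b, r, p * r < 2 * c)     # rad(abc) < 2c/p, so certainly rad(abc) <= c

# For p = 5 this reproduces the numbers in the text: b = 1048575 and rad(abc) = 419430.
```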

All of this shows that we cannot correct our conjecture 1 by adding a multiplicative factor to our inequality. The next reasonable thing one might try is a power law. Perhaps rad(abc)^2 > c for all but finitely many equations, or something similar. This, in fact, is the correct idea. However, the choice of 2 as the exponent again seems arbitrary. We know already that the statement is false when the power is 1, so let's try increasing it just a little. This leads us to the actual abc conjecture.

The abc Conjecture: For any ε > 0, no matter how small, for all but finitely many equations of the form a + b = c where a and b are relatively prime, rad(abc)^(1 + ε) > c.

The variable ε could for example be 1, and then we recover the rad(abc)^2 > c inequality. However, ε could also be very close to 0, giving an exponent of 1 + ε very close to 1. Crucially, any function x^(1 + ε) with ε > 0 (for example x^1.1, x^1.00001, etc.) eventually increases faster than any constant multiple of x. Therefore, this conjecture gets around the counterexample to conjecture 1. Nevertheless, the abc conjecture in some sense says that conjecture 1 is really close to being true. All we needed to do was increase the exponent by any positive amount. These concepts may become a little clearer with a new concept, called the quality of a triple (a,b,c). The formula for the quality, denoted q(a,b,c), is

q(a,b,c) = log(c) / log(rad(abc)).

This is another measure of how large c is compared to rad(abc). In fact, rad(abc)^q(a,b,c) = c. For example, q(13,22,35) is about 0.386, and q(1,8,9) is close to 1.226. This allows a more succinct description of the conjecture: for most triples, q ≤ 1. It follows from our definition that abc triples are those for which q > 1. Finally, the abc conjecture is equivalent to the following.

The abc Conjecture (Second Formulation): For any ε > 0, no matter how small, for all but finitely many equations of the form a + b = c where a and b are relatively prime, q(a,b,c) < 1 + ε.

Let's see if our conjecture seems plausible from the numerical data. One possible way to do this is to come up with many triples and see how large the quality q is for each.



In the above diagram (click to enlarge), the x-axis is our variable c. For each c between 2 and 2000, the plot goes through all possible relatively prime values a and b adding to c, finds the triple among these with the highest quality, and plots a corresponding point there. Therefore, all points are already among the highest quality triples. Even among these, abc triples (those that lie above the horizontal q = 1 line) are rare. Furthermore, they seem to get even more rare as c increases. In terms of diagrams of this sort, the conjecture states that only finitely many dots lie above a given horizontal line q = 1 + ε for any ε > 0. The highest quality abc triple that appears on the plot is (3,125,128) = (3, 5^3, 2^7), with a quality of 1.426. Are there any higher quality triples out there?
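
A miniature version of this search is easy to carry out; the Python sketch below (restricted to c < 300, far short of the plot's range) reproduces the quality values quoted above and picks out the abc triples among the best pairs for each c:

```python
from math import gcd, log

def rad(n):
    """Product of the distinct prime factors of n (rad(1) = 1)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    return result * n if n > 1 else result

def quality(a, b, c):
    # a, b, c are pairwise relatively prime here, so rad(abc) = rad(a)rad(b)rad(c).
    return log(c) / log(rad(a) * rad(b) * rad(c))

print(round(quality(13, 22, 35), 3), round(quality(1, 8, 9), 3))   # 0.386 1.226

abc_triples = []
for c in range(3, 300):
    q_best, a_best = max((quality(a, c - a, c), a)
                         for a in range(1, c // 2 + 1) if gcd(a, c - a) == 1)
    if q_best > 1:
        abc_triples.append((round(q_best, 2), (a_best, c - a_best, c)))

print(abc_triples)
# The list includes (1, 8, 9) and (3, 125, 128), the two abc triples highlighted in the post.
```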

In fact, there are well over a hundred known with higher quality, a list of which may be found here. Currently, the highest known quality belongs to the triple (2,6436341,6436343) = (2, 3^10*109, 23^5), with q = 1.6299. Even assuming the abc conjecture is true does not answer the question of whether this triple really is the highest quality there is. All it says is that examples of this sort must eventually die out as we approach infinity. For instance, there may very well be no triples at all with q ≥ 2, meaning that c < rad(abc)^2 may hold in all cases with no exceptions.

Of course, no matter how many examples we check, we are no closer to proving that the abc conjecture holds. In the final post of this series, we will discuss attempts to prove it, as well as the applications of the statement, should it be true.

Sources: http://www.math.leidenuniv.nl/~desmit/abc/, Greg Martin and Winnie Miao: abc Triples; Arxiv:1409.2974v1 [math.NT] 10 sep 2014, Brian Conrad: The abc Conjecture, 12 sep 2013.

Tuesday, March 26, 2019

The abc Conjecture: Motivation

Some of the earliest problems in mathematics asked about the integer solutions to simple polynomial equations. For instance, what are the possible right triangles with whole number side lengths? The solution dates at least back to the Ancient Greeks; the side lengths are related by Pythagoras' famous formula x^2 + y^2 = z^2. The 7th century Indian mathematician Brahmagupta studied integer solutions to the equation x^2 - 2y^2 = 1 as well as the same formula with 2 replaced by a general integer n (called Pell's equation). Many other similar equations have been studied for centuries or millennia.

In general, a Diophantine equation is a polynomial equation for which we are interested in integer solutions. Counterintuitively, some questions about solving these in the integers may be more difficult than considering all types of solutions. For example, the fundamental theorem of algebra states that any non-constant polynomial in a single variable has a root over the complex numbers (e.g. x^3 - 4x^2 + 17x + 20 = 0 is true for some complex number). However, there is often no integer solution to such equations.

Historically, different types of Diophantine Equations were typically solved by ad hoc methods, as they come in many different varieties. However, one general observation that connects many of these equations is that they state something about the factorization of a sum of two numbers. Pythagoras' equation says something special about the sum of two squares, namely that it is another square! Similarly, Pell's equation says that one plus some number multiplied by a square has the property that it too is square. Our motivating question may then be taken to be:

How does the factorization of a sum of two numbers relate to the factorizations of the individual numbers?

The abc Conjecture provides a partial answer to this question. Its name comes from the fact that we are considering equations of the form a + b = c and asking how the factorizations of the three numbers relate. Mathematicians David Masser and Joseph Oesterlé first made the conjecture in 1985 while studying integer points on what are called elliptic curves, in this case given by the equation y^2 = x^3 + k (where k is a fixed integer). This is yet another example of a sum having special factorization properties. Throughout the rest of this post, we will see how thinking about the motivating question might lead you to formulating the abc conjecture.

Simply put, we want the answer to our motivating question to be "it doesn't." Somehow, the additive and multiplicative structures of the integers should be independent of one another. This is in some ways a deep statement, and not at all intuitively clear, but we'll begin with this assumption. In other words, for an equation a + b = c, if all three numbers satisfy some special factorization properties (e.g. being cubes, etc.) it should in some sense be a coincidence. Our next task is to make this progressively less vague. First, we need a definition.

Definition: Two numbers are relatively prime if they share no common prime divisors.

For example, 34 and 45 are relatively prime, but 24 and 63 are not, because they are both divisible by 3. Here is how we will express our independence hypothesis: for any equation of the form a + b = c, where a and b are relatively prime, if a and b are divisible by high powers of primes, c almost always is not. This is in keeping with our theme because "divisible by high powers of primes" is a special factorization property. That is, most prime factorizations should look more like 705 = 3*5*47 and not 768 = 2^8*3. The assumption that a and b are relatively prime exists to rule out silly equations like

2^n + 2^n = 2^(n + 1),

in which all three numbers are divisible by arbitrarily high powers of 2. This doesn't represent some special connection between addition and multiplication - all we've done is multiply the equation 1 + 1 = 2 by 2^n. If we assert that a and b are relatively prime, then the prime factors of each of the three numbers are distinct, and we eliminate the uninteresting examples. Next, we require a mathematical notion that measures "divisibility by high prime powers".

Definition: The radical of a number n, denoted rad(n), is the product of the distinct prime factors of n. Also define rad(1) = 1.

For example, rad(705) = 3*5*47 = 705 (since the factors 3, 5, and 47 are distinct) but rad(768) = 2*3 = 6. The radical function forgets about any powers in the prime factorization, keeping only the primes themselves. Notice that the radical of a number can be as large as the number itself, but it can also be much smaller. The amount by which rad(n) is smaller than n can be taken as a measure of the extent to which n is divisible by large prime powers.
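
In code, the radical is straightforward to compute by trial division; this short Python sketch reproduces the two examples above:

```python
def rad(n):
    """Product of the distinct prime factors of n (rad(1) = 1)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    return result * n if n > 1 else result

print(rad(705))   # 705  (705 = 3*5*47 has no repeated prime factors, so the radical is the number itself)
print(rad(768))   # 6    (768 = 2^8*3, and the radical forgets the powers)
```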

Now we return to our equation a + b = c (where we will now consistently assume the relatively prime hypothesis). A reasonable way to test for high prime power divisibility for all three of these numbers is to calculate rad(abc) = rad(a)rad(b)rad(c) (the reader may wish to prove this last equation). Since rad(abc) could be as large as abc itself, it seems likely that rad(abc) would usually be much larger than any of the individual numbers, the largest of which is c. For example, consider 13 + 22 = 35. In this case, rad(abc) = rad(13*22*35) = 13*2*11*7*5 = 10010, which is much larger than c = 35. However, this property does not always hold true. Consider another example, 1 + 8 = 9. Now we have rad(abc) = rad(1*8*9) = 2*3 = 6, and 6 < 9 = c. Notice that this anomaly reflects something weird going on; the equation can also be written 1 + 2^3 = 3^2, so one plus a cube is a square. Testing different values of a, b, and c gives the impression that equations of the second sort are rare. Therefore, we make an almost mathematical conjecture:

"Almost" Conjecture: For equations of the form a + b = c where a and b are relatively prime, rad(abc) is almost always greater than c.

We're close! The equation rad(abc) > c is a bona fide mathematical condition that we can check. However, we have yet to render "almost always" into mathematical language. Clearly there are infinitely many a + b = c equations to look at. What does it mean to say that "most of them" behave in some way? We know from our 1 + 8 = 9 example that there are at least some exceptions. Maybe we could assert that there are less than 10 total exceptions, or less than 100. However, these numbers seem arbitrary, so we'll just guess that there are only finitely many exceptions. That is, all but at most N of these equations, for some fixed finite number N, satisfy our hypothesis. In conclusion, we conjecture that:

Conjecture 1: For all but finitely many equations of the form a + b = c where a and b are relatively prime, rad(abc) > c.

Finally, a real conjecture! Unfortunately, it's false. In other words, there are infinitely many such equations for which rad(abc) ≤ c. Don't worry! It's rare in mathematics to come up with the correct statement on the first try! In the next post, we'll prove our conjecture 1 false and see how to correct it.

Sources: https://rlbenedetto.people.amherst.edu/talks/abc\_intro14.pdf, Brian Conrad: The abc Conjecture, 12 sep 2013.

Tuesday, March 5, 2019

The Casimir Effect

The idea of the electromagnetic field is essential to physics. Dating back to the work of James Clerk Maxwell in the mid-1800s, the classical theory of electromagnetism posits the existence of certain electric and magnetic fields that permeate space. Mathematically, these fields assign vectors (arrows) to every point in space, and their values at various points determine how a charged particle moving in space would behave. For example, the magnetic field generated by a magnet exerts forces on other nearby magnetic objects. Crucially, the theory also explains light as an electromagnetic phenomenon: what we observe as visible light, radio waves, X-rays, etc. are "waves" in the electromagnetic field that propagate in space.

Maxwell's theory is still an essential backbone of physics today. Nevertheless, the introduction of quantum mechanics in the early 20th century introduced new aspects of electromagnetism. Perhaps most importantly, it was discovered that light comes in discrete units called photons and behaves in some ways both as a wave and a particle. Though electromagnetism on the human scale still behaves largely as the classical theory predicts, at small scales there are quantum effects to account for. Around the middle of the century, physicists Richard Feynman, Shinichiro Tomonaga, Julian Schwinger, and many others devised a new theory of quantum electrodynamics (or QED) that described how light and matter interact, even on quantum scales.

Naturally, QED predicted new phenomena that classical electromagnetism had not. One especially profound change was the idea of vacuum energy. For most purposes, "vacuum" is synonymous with "empty space". As is typical of quantum mechanics, however, a system is rarely considered to be in a single state, but rather in a superposition of many different states simultaneously. These different states can have different "weights" so that the system is "more" in one given state than another. This paradigm applies even to the vacuum. Certain pairs of particles may appear and disappear spontaneously in many of these states and even exchange photons. Some of the possible interactions are illustrated below with Feynman Diagrams.



In these diagrams, the loops represent the evanescent virtual particle pairs described above. Wavy lines represent the exchange of photons. Each of the six diagrams represents a possible vacuum interaction, and there are many more besides (infinitely many, in fact!). The takeaway is that the QED vacuum is not empty, but rather a "soup" of virtual particle interactions due to quantum fluctuations. Further, these interactions have energy, known as vacuum energy. This, at least, is the mathematical description. There are some curious aspects to this description, because the vacuum energy calculation in any finite volume yields a divergent series. In other words, there is theoretically an infinite amount of vacuum energy in any finite volume! Because of this, physicists devised a process called renormalization that cancels out these infinities in calculations describing the interaction of real particles. This process in fact gives results that have been confirmed by experiment. Nevertheless, it does not follow that the infinite vacuum energy exists in any "real" sense or is accessible to measurement. One possible way in which it might be, however, is the Casimir effect.



The setup of the Casimir effect involves two conducting metal plates placed parallel to one another. The fact that the plates are conducting is important because the electric field vanishes inside conducting materials. Now, the vacuum energy between the plates can be calculated as a sum over the possible wavelengths of the fluctuations of the electromagnetic field. However, unlike the free space vacuum, the possible wavelengths are limited by the size of the available space: the longest wavelength contributions to the vacuum energy do not occur between the plates (this is schematically illustrated in the image above). A careful subtraction of the vacuum energy density inside the plates from outside yields that there is more energy outside. Remarkably, this causes an attractive force between these plates known as the Casimir force. The force increases as the distance between plates is decreased. Precisely, the magnitude of the force F is proportional to 1/d^4, where d is the distance between the plates. As a result, if the distance is halved, the force goes up by a factor of sixteen! The initial calculation of this effect was due to H. B. G. Casimir in 1948.
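
For a sense of scale, the following Python sketch evaluates the ideal parallel-plate result, in which the attractive pressure is π^2*ħ*c/(240*d^4); these numbers are only indicative, since the experiment described below used a sphere-plate geometry with additional corrections.

```python
import math

hbar = 1.054571817e-34    # reduced Planck constant, J*s
c = 2.99792458e8          # speed of light, m/s

def casimir_pressure(d):
    """Ideal attractive Casimir pressure (N/m^2) between parallel plates separated by d meters."""
    return math.pi**2 * hbar * c / (240 * d**4)

for d_nm in (1000, 500, 250, 100):
    d = d_nm * 1e-9
    print(f"{d_nm:5d} nm : {casimir_pressure(d):.2e} Pa")

# Each halving of the separation increases the pressure by a factor of 2^4 = 16.
```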

Around 50 years after first being postulated, the effect was finally measured experimentally with significant precision. The primary issue was that for the Casimir force to be large enough to measure, the metal plates would have to be put very close to one another, less than 1 micrometer (0.001 mm). Even then, very sensitive instruments are necessary to measure the force. One landmark experiment took place in 1998. Due to the practical difficulty of maintaining two parallel plates very close to one another, this experiment utilized one metal plate and one metal sphere with a radius large compared to the separation (so that it would "look" like a flat plate close up). The authors of the experiment also added corrections to Casimir's original equation accounting for the sphere instead of the plane and the roughness of the metal surfaces (at the small distances of the experiment, microscopic bumps matter). They obtained the following data for the force as it varies with distance:



In the figure above, the squares indicate data points from the experiment and the curve is the theoretical model (including the corrections mentioned). The distance on the x-axis is in nanometers and the smallest distance measured was around 100 nm, hundreds of times smaller than the width of a human hair. Even at these minuscule distances, the force only reached a magnitude of about 1*10^-10 newtons, a billion times smaller than the weight of a piece of paper. Nevertheless, the results confirmed the presence of the Casimir force to high accuracy.

The existence of the Casimir effect would seem to vindicate the rather strange predictions of QED with respect to the quantum vacuum, suggesting that it is indeed full of energy that can be tapped, if indirectly. However, others have argued that it is possible to derive the effect without reference to the energy of the vacuum, and therefore the experiment does not necessarily mean that vacuum energy is "real" in any meaningful way. Continued study into the existence of vacuum energy may help to explain the accelerating expansion of the universe since some mysterious "dark energy" is believed to be the source. In the mean time, the Casimir effect is an important experimental verification of QED and could someday see applications in nanotechnology, since the force would be relatively large on small scales.

Sources: https://www.scientificamerican.com/article/what-is-the-casimir-effec/, https://arxiv.org/pdf/hep-th/0503158.pdf, The Quantum Vacuum: An Introduction to Quantum Electrodynamics by Peter W. Milonni, http://web.mit.edu/kardar/www/research/seminars/PolymerForce/articles/PRL-Mohideen98.pdf

Tuesday, February 12, 2019

Ocean Currents and the Thermohaline Circulation

Ocean currents are ubiquitous and familiar. Beach goers are wary of tidal currents, as well as those caused by weather systems. Currents caused by tides and weather are constantly changing and chaotic. However, under this noise exists a larger-scale and more orderly system of circulation. By averaging over long time periods (in effect screening out the noise of short-term fluctuations), larger currents such as the Gulf Stream, which brings warm water northward along the east coast of the United States, appear. Another example is the California current, which brings the cold waters of the north Pacific down along the west coast. But why do these currents exist? Some patterns may be seen if we expand our view to the world as a whole.



The above image shows major surface ocean currents around the world. Note that despite geographical differences, some currents in each ocean in each hemisphere follow the same general pattern, flowing east to west in the tropical latitudes, toward the poles on the western edge of ocean basins, west to east at mid-latitudes, and finally toward the equator on the eastern edges. These circular currents are known as subtropical gyres. For example, the Gulf Stream is the western poleward current in the north Atlantic subtropical gyre, and the California current is the eastern equatorward current in the north Pacific subtropical gyre. These exist largely due to the Earth's prevailing winds.



The prevailing winds at the Earth's surface fit into a larger dynamic atmospheric pattern. The greater heating of the tropics as compared to the polar regions and the rotation of the Earth lead to the formation of three atmospheric cells in each hemisphere. The winds in these cells rotate due to the Coriolis effect (in essence the fact that straight paths appear to curve from the viewpoint of an observer on a rotating planet), producing east to west winds in the tropics and polar regions, and west to east winds in the mid-latitudes. Look back at the subtropical gyres on the map of currents. Notice that the currents labeled "equatorial" follow the trade winds, and the north Pacific, north Atlantic, etc. currents follow the prevailing westerlies. The west and east boundary currents then "complete the circle" and close the flow. This is no coincidence. It is friction between air and water that drives subtropical gyres: the force of wind tends to make water flow in the same direction. Another important example is the Antarctic circumpolar current, the largest in the world. Since there are no landmasses between roughly 50°S and 60°S latitude, the westerlies drive a current unimpeded that stretches all the way around the continent of Antarctica. Many other currents in the global diagram are responses to these main gyres, or are connected to prevailing winds in more complicated ways.

This system of ocean currents has profound impacts on weather and climate. Due to the Gulf Stream, north Atlantic current, and its northern extension, the Norwegian current, temperatures in northwestern Europe are several degrees warmer than they would otherwise be.



The above image illustrates one of the influences of ocean currents on weather. It shows all tropical cyclone tracks (hurricanes, typhoons, etc.) worldwide from 1985-2005. Since tropical cyclones need warm ocean surface waters to develop, the cold California current helps to suppress eastern Pacific hurricanes north of 25° N or so. In contrast, the warm Kuroshio current in the western Pacific allows typhoons to regularly affect Japan, which is at a higher latitude. Note also the presence of cyclones in the southwest Pacific and the lack of any formation in the southeast Pacific (though very cold surface waters are only one of several factors in this).

Despite their vast impact, which goes well beyond the examples listed, all of the currents considered so far are surface currents. Typically, these currents exist only in the top kilometer of the ocean, and the picture below this can look quite different.



The above image gives a very schematic illustration of the global three-dimensional circulation of the oceans, known as the thermohaline circulation. The first basic fact about this circulation, especially the deep ocean circulation, is that it is slow. Narrow, swift surface currents such as the Gulf Stream have speeds up to 250 cm/s. Even the slower eastern boundary currents often manage 10 cm/s. In contrast, deep ocean currents seldom exceed 1 cm/s. Their tiny speed and remoteness makes them extremely difficult to measure; in fact, rather than directly charting their course, the flow is inferred from quantities called "tracers" in water samples. Measurements of the proportion of certain radioactive isotopes, for example, are used to calculate the last time a given water sample "made contact" with the atmosphere.



The above graphic illustrates the age of deep ocean water around the world. The age (in years) is how long it has been since a given water parcel came to equilibrium with the surface. Note that the thermohaline circulation occurs on timescales of over 1000 years. This information indicates that deep water formation (when water from the surface sinks) takes place in the North Atlantic but not the North Pacific, as indicated in the first graphic. This is because all the deep waters of the Pacific are quite "old". Deep water formation also occurs in the Southern Ocean, near Antarctica. In both cases, the mechanism is similar: exposure to frigid air near the poles makes the surface waters very cold, and therefore dense. Further, in winter, sea ice forms in these cold waters, leaving saltier water behind (since freshwater was "taken away" to form sea ice). This salty, cold water is denser than the ocean around it and it sinks. The newly formed deep water can flow near the bottom of the ocean for hundreds of years before coming back to the surface.

One climatological influence of this phenomenon is the ocean's increased ability to take up carbon dioxide. A substantial fraction (roughly a quarter to a third) of the carbon dioxide emitted by human industry since the late 1800s has dissolved in the oceans. Since deep water takes so long to circulate, increased CO2 levels are only now beginning to penetrate the deep ocean. Most ocean water has not "seen" the anthropogenic CO2, so the ocean will continue to take up more of the gas for hundreds of years. Without this, there would be much more carbon dioxide in the atmosphere, and likely faster global warming.

The network of mechanisms driving ocean currents and the thermohaline circulation is quite intricate, and we have only touched on some of them here: weather systems, prevailing winds, differences in density, etc. There are many more subtleties as to why the ocean circulates the way it does. The study of these nuances is essential for fully understanding the Earth's weather and climate.

Sources: Atmosphere, Ocean, and Climate Dynamics: An Introductory Text by John Marshall and R. Alan Plumb, https://www.britannica.com/science/ocean-current, http://www.seos-project.eu/modules/oceancurrents/oceancurrents-c01-p03.html

Tuesday, January 22, 2019

GW170817 and Multi-Messenger Astronomy Part 2

This is the second part of a two-part post. For the first part, see here.

The previous post described the gravitational wave event GW170817 (which took place on August 17, 2017) and how it was ultimately identified as a binary neutron star merger. In addition, it was associated with a gamma ray burst (designated GRB 170817A) and imaged across the electromagnetic spectrum, an unprecedented and landmark event in the field of multi-messenger astronomy. Though it is intrinsically of interest to be able to both "see" (EM waves) and "hear" (gravitational waves) an astrophysical event, what are some other conclusions to be drawn from the merger?

One simple conclusion requires nothing more than a quick calculation, but verifies a foundational principle of physics that, while almost universally assumed, had never been directly proven. This principle states that both electromagnetic waves and gravitational waves travel at the speed of light in a vacuum, about 3*10^8 m/s. Recall that the merger is estimated to have taken place about 130 million light-years away. This means that the gravitational wave signal and the gamma ray burst both took about 130 million years to travel from the source to detectors on Earth. Despite their long journey, they arrived within a few seconds of each other. Now, we cannot be certain exactly when the gamma ray burst was emitted, due to our incomplete understanding of how a binary neutron star merger would work. However, it is likely that the neutron stars must first collide (marking the end of the gravitational wave signal) before emitting a burst of gamma radiation. Moreover, this initial high-energy burst was estimated by most models to occur no more than a few minutes after the merger. Therefore, dividing the amount by which the signals could have drifted apart by their travel time, we obtain bounds on the "speed of gravity" relative to the speed of light. Even with conservative assumptions, these observations prove that the two speeds very likely differ by no more than one part in a trillion (0.0000000001%) and probably several orders of magnitude less than this. Theoretically, they are equal, but never before has this been measured with such incredible precision.
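
The arithmetic behind this bound is simple enough to sketch in Python; the 10-second emission window below is an assumed stand-in for the conservative model-dependent allowance described above.

```python
SECONDS_PER_YEAR = 3.156e7

travel_time = 130e6 * SECONDS_PER_YEAR   # ~130 million years of travel, in seconds
observed_delay = 1.7                     # s, gamma rays arrived shortly after the GW signal ended
emission_window = 10.0                   # s, assumed maximum delay between merger and gamma-ray emission

max_fractional_difference = (observed_delay + emission_window) / travel_time
print(f"{max_fractional_difference:.1e}")   # ~3e-15, i.e. far tighter than one part in a trillion
```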

In a similar vein, the merger allowed other tests of various aspects of general relativity and field theory, such as the influence of gravitational waves on the propagation of electromagnetic fields. The data all confirmed the current understanding of general relativity and set very tight bounds on possible deviations, better than those ever achieved in the past.

The detection of the merger also taught us about the very structure of neutron stars. Unlike black holes, which (to our knowledge) are effectively points of mass, neutron stars are on the order of a dozen miles across. Considering their mass (usually 1-2 solar masses), they are exceedingly dense, but nevertheless their physical size affects how the gravitational wave event unfolds. When the two objects get very close to one another, their mutual gravitational attraction is expected to cause tidal deformations, i.e. warping of their shape and mass distribution. In theory, information concerning the deformation is encoded in the measured waveforms.



The figure above (click to enlarge), while rather technical, gives some idea as to how exactly the gravitational wave data constrain the structure of the neutron stars. The statement |χ| < 0.05 in the diagram indicates that the entire figure is made presupposing that the neutron stars were not spinning too fast (which our knowledge of neutron star systems suggests is a very reasonable assumption). The two axes give the magnitudes of two parameters Λ1 and Λ2 that measure how much the larger and the smaller neutron star, respectively, respond to tidal deformation. In other words, smaller values of the parameters (toward the lower left) mean denser and more compact neutron stars, as indicated. More on what these parameters actually mean can be found in the original paper here.

Next, the darker shades of blue represent values considered more likely given the shape of the gravitational wave signal. This is a probability distribution, and lighter shaded areas were not ruled out with certainty, but simply deemed less likely. The uncertainty in the original masses contributes to the uncertainty in this diagram. Finally, the gray shaded "stripes" indicate the predictions of several different theoretical models of neutron stars. These are distinguished by their different equations of state, which specify how mass, pressure, density, and other properties of neutron stars relate to one another. The varying predictions of these models show just how little was definitively known about neutron stars. Analysis of the merger event suggested that the "SLy" and "APR4" models were more accurate than the rest (at 50% confidence) and that the "MS1" and "MS1b" models are unlikely to be correct (with more than 90% confidence). No model was ruled out for sure, but the data above suggest that neutron stars are more compact than most models predicted.

The gamma ray burst that followed the merger also contained some information concerning how these mergers actually occur and the physics of when and why high-energy radiation is released. Notably, the gamma ray burst was a single short pulse (lasting under a second) with no discernible substructure. It was difficult to draw conclusions from this limited sample, but explaining the nature of the pulse and the delay may require a dense layer of ejecta from each of the neutron stars to momentarily impede electromagnetic radiation from the merger. It would take some time for gamma rays to penetrate this cloud of debris until they finally burst through.

Moreover, among the known population of gamma ray bursts, GRB 170817A was relatively dim. This may have been due to the main "jets" of energy not being along our line of sight; most of the burst is hypothesized to have been released along the original axis of rotation of the two bodies. The discrepancy may in part have been due to observational bias, since brighter events are more likely to be observed. In such cases, the Earth likely was directly facing the angle of peak gamma ray emission. Detecting an event "off-center" elucidates somewhat the structure and extent of these jets.



The image above (click to enlarge) was originally from this paper. It demonstrates schematically some different theories explaining the relative dimness of the gamma ray burst event and the structure of the jets along the axis of rotation. Earlier theories postulated a relatively uniform jet, as shown in the first scenario. If this is the case, our line of sight may have been outside the jet, but relativistic effects allowed us to see a smaller amount of the radiation. Other explanations postulate that the jet has some internal structure and "fades" with increasing angle from the axis (ii) or that the interaction of the jet with surrounding matter produces a secondary cocoon of radiation (iii). A final scenario is simply that this event was a few orders of magnitude dimmer than other known gamma ray bursts for some intrinsic reason, although the authors deem this unlikely.

Without the background information provided by the gravitational wave signal (the component masses of the merger, the timing of the merger, etc.), little of the above could be gleaned from the gamma ray signal. Nor would it be possible with only one of the two to conduct the precision tests of fundamental physics described earlier. These are examples of the power of multi-messenger astronomy. Having both an eye and an ear to the cosmos will continue to yield fundamental insights into the nature of our universe.

Sources: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.119.161101, https://arxiv.org/pdf/1710.05834.pdf, http://iopscience.iop.org/article/10.3847/2041-8213/aa91c9/pdf

Tuesday, January 1, 2019

GW170817 and Multi-messenger Astronomy

The first astronomers had only their own eyes as tools, and visible light was their only source of information. Recent instruments have broadened our sight to include all types of electromagnetic radiation, from radio waves to X-rays and gamma rays. Each part of the spectrum is suited to different types of observations and has given us incredible new insight into the cosmos. However, the second decade of the 21st century saw the advent of a fundamentally new kind of astronomy: the detection of gravitational waves.

Gravitational waves, as discussed in a previous post, are the "ripples" in spacetime that propagate in response to the acceleration of massive objects (stars, black holes, and the like). All objects with mass produce these waves, but the vast majority are far too small to detect. It was only with the advent of extremely sensitive instruments that the first detection of gravitational waves was made by LIGO (the Laser Interferometer Gravitational-Wave Observatory) in 2015. This detection, and its immediate successors, were of binary black hole merger events, in which two black holes orbiting one another spiraled inwards and finally combined into a single, larger black hole. The last moments before merging brought exceptionally colossal objects (weighing perhaps dozens of solar masses) to great accelerations, the perfect recipe for producing strong gravitational waves detectable across the cosmos. However, these cataclysmic events were quite dark: little electromagnetic radiation was emitted, and no "visual" evidence for these events accompanied the wave signal. Something quite different occurred in 2017.



On August 17, 2017 at 12:41 UTC, the LIGO detectors at Hanford, Washington and Livingston, Louisiana and the Virgo gravitational wave detector in Italy simultaneously measured an event as shown above (click to enlarge). The two LIGO frequency-time diagrams clearly show a curve that increases in frequency before disappearing at time 0. This corresponds to two inspiraling objects orbiting one another faster and faster before merging finally occurs and the signal stops. In the Virgo diagram, the same line is not very visible, but further analysis of the data nevertheless extracted the same signal from the noise. The gravitational wave event, designated GW170817, was genuine.

Having three detectors at different points on the Earth measure the event allowed a better triangulation of the location of the source than had occurred previously (when LIGO and Virgo were not simultaneously active).


The above figure shows a visualization of the celestial sphere (representing all possible directions in the sky from which the signal could have come) and locations from which the signal data suggest the signal originated. The green zone is the highest probability region taking all three instruments into account. This area is still 31 square degrees, quite large by astronomical standards. Fortunately, corroboration of the event came immediately from an entirely separate source.



The above figure (click to enlarge) shows at the bottom the same gravitational wave signal from before. The rest of the data come from the Fermi Gamma-ray Space Telescope and the International Gamma Ray Astrophysics Laboratory, both satellites in Earth orbit. As their names suggest, they search the cosmos for astrophysical sources of high-energy gamma rays. In particular, they monitor the cosmos for gamma-ray bursts (GRBs), especially intense flashes of radiation that typically accompany only the most explosive events, such as supernovae. As the figure shows, less than two seconds after the gravitational wave signal stopped (indicating the merger of the two orbiting objects), there was an elevated count of gamma rays in each detector across the different photon energy levels. The source of this burst is indicated by a reticle in the celestial sphere figure above, lying right within the estimated location of the merger! It appeared that this merger had an electromagnetic counterpart! Further, analysis of the gravitational waves indicated that the masses of the two objects were around 1.36-2.26 and 0.86-1.36 solar masses (these were the uncertainty ranges), respectively, not heavy enough for black holes. What was going on?



The conclusion drawn from these events was that the merger was not of black holes, but of neutron stars, compact remnants of large stars that were yet not massive enough to collapse into black holes. An artist's conception of a binary neutron star merger is shown above. Following the initial identification of the event, countless telescopes around the world trained on the event the very same day after a notice was released around 13:00 UTC, hoping to observe more following the merger.



And they were not disappointed. Less than a day after the initial gamma ray burst had faded, the source began to appear at other frequencies, and remained bright for several weeks before fading. The above figure shows the Hubble image of the merger's host galaxy, NGC 4993. This galaxy is at a distance of roughly 130 million light-years, and even at this distance, the collision of the neutron stars was clearly visible against the billions of other stars. Finally, the chart below demonstrates just how well documented the event was:



Many different instruments took images in X-rays as well as ultraviolet, visible, infrared, and radio waves. The horizontal axis indicates the rough timeline of events (on a logarithmic scale) in each part of the electromagnetic spectrum, stretching from less than a day to several weeks after the merger. Several representative images of NGC 4993 and the source within are shown at bottom.

Without extensive collaboration within the astronomical community, collecting this wealth of data on this binary neutron star merger would not have been possible. This marked the first time in history that a single event was measured in both gravitational waves and electromagnetic waves, not to mention how thoroughly the merger was photographed across the spectrum. This coordinated observation is known as multi-messenger astronomy, and may have profound implications on our future understanding of the universe. Some of what we learned from the binary neutron star merger is discussed in the next post.

Note: Most of the figures above are taken from the open access papers detailing the discovery and analysis of the binary neutron star merger. For further reading on the event, links to these papers may be found in the sources below.

Sources: https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.119.161101, https://arxiv.org/pdf/1710.05834.pdf, http://iopscience.iop.org/article/10.3847/2041-8213/aa91c9/pdf