Storm Active: May 25-30
Beginning around May 20, a trough of low pressure located in the western Caribbean produced widespread thunderstorm activity as it interacted with an upper-level low. Over the next several days, the disturbance tracked generally northwestward. In the meantime, abundant moisture in the area caused sporadic rainfall from portions of Honduras to the Yucatan Peninsula to western Cuba. Even after a surface low formed, the system remained quite disorganized due to land interaction with the Yucatan and strong upper-level winds out of the west. Despite fairly hostile conditions, the low became better defined during the day of May 24. By the morning of the 25th, the surface low had emerged over water adjacent to the northeast Yucatan Peninsula, with a large area of strong thunderstorms to the north and east. In addition, buoy and ship reports suggested the presence of gale-force winds. Since the low was situated under an upper-level trough, rather than the upper-level high associated with traditional tropical storms, the system was designated Subtropical Storm Alberto late that morning.
During that day, the surface circulation of Alberto was far removed from the thunderstorm activity to the north and east. In fact, the overall circulation appeared to be moving northeast while the low-level swirl drifted just south of east. Nevertheless, heavy rains continued over much of Cuba and the outer bands began to affect southern Florida. Overnight, upper-level winds lessened considerably, and limited convection finally appeared near the surface center. The center also turned north and accelerated early on May 26, essentially "catching up" with the rest of the circulation. As a result, Alberto's satellite presentation improved considerably. A further reformation of the center took place later that day, this time to the northeast of the previous position. This and the system's generally northward movement brought Alberto into the eastern Gulf of Mexico, not far from the west coast of Florida. However, this coast was spared the heavier rainfall by a dry slot in the eastern semicircle; most of the convection was now in the western semicircle.
That evening and overnight, Alberto's pressure dropped considerably, its center became better defined, and it began to take on more tropical characteristics. The storm's maximum winds increased in turn during the day of May 27. The storm also turned toward the northwest briefly under the influence of an upper-level low. Despite these improvements in organization, dry air was taking its toll on Alberto, invading from the western side and eroding deep convection away from the center. Situated over relatively cold eastern Gulf waters, the system also did not develop the deep warm core needed for classification as a tropical storm. Nevertheless, Alberto reached its peak intensity that evening, with maximum winds of 65 mph and a central pressure of 991 mb, as it approached the Florida panhandle.
Continued dry air intrusion and proximity to land gradually decreased the storm's winds as bands of heavy rain swept across the Gulf coast early on May 28. The center of Alberto made landfall that afternoon in the Florida panhandle with maximum winds of 50 mph, bringing heavy rains and localized flooding to parts of the southeastern U.S. That night, it weakened to a subtropical depression over land as it continued northward over Alabama. Curiously, the system completed its transition to a tropical cyclone (becoming a tropical depression) over Tennessee late that evening. The circulation maintained its identity and continued to produce rainfall into May 30, when it finally became extratropical over Michigan. Alberto marked the fourth consecutive year in which a storm formed before the official start of the Atlantic hurricane season, only the second known occurrence of such a streak (the other being 1951-1954).
The above image shows Alberto in the eastern Gulf of Mexico on May 27.
Alberto was subtropical most of its life (square points) but transitioned over land to a tropical depression (blue circular points) and maintained this status remarkably far north.
Wednesday, May 16, 2018
Professor Quibb's Picks – 2018
My personal prediction for the 2018 North Atlantic hurricane season (written May 16, 2018) is as follows:
18 cyclones attaining tropical depression status,
16 cyclones attaining tropical storm status,
8 cyclones attaining hurricane status, and
4 cyclones attaining major hurricane status.
In the wake of the especially devastating 2017 season, it is difficult to predict with any certainty the outcomes for this year. Once again, models indicate that the El Niño-Southern Oscillation (ENSO) index will be near zero or slightly positive during this hurricane season. This index, a quantitative measure of sea surface temperature anomalies in the tropical Pacific Ocean, has some ability to predict Atlantic hurricane activity. A positive index indicates an El Niño event, which tends to correlate with higher wind shear across the Atlantic basin and less tropical cyclone development. This effect is especially pronounced in the Gulf of Mexico and Caribbean Sea. The image below shows the ENSO forecast for this season (image from the International Research Institute for Climate and Society):
However, last year's forecast was qualitatively similar, and the index ended up dipping back to negative values, leaving very favorable conditions for hurricane formation. Though consideration of the ENSO index alone would suggest an average hurricane season, there is significant uncertainty. Overall, I consider ENSO to be mainly a neutral factor this year.
Present ocean temperatures in the Atlantic are slightly above average in the Gulf of Mexico and Caribbean, and significantly above average in the subtropical Atlantic and near the U.S. east coast. However, there is a large area of below-average temperatures in the tropical Atlantic, which long-term models suggest may persist for a few months. The tropical Atlantic has also been dry and stable, in contrast to elevated storm activity in the Caribbean and Gulf of Mexico. These trends also show some signs of persisting into the beginning of hurricane season. I therefore expect a slow start to the season in the main development region of the tropical Atlantic (extending from Africa to the Caribbean) and a corresponding lack of Cape Verde or long-track hurricanes, though these could appear more often in late September and October. There is significant potential for formation in areas closer to land, so I expect some shorter-lived hurricanes in the northern Caribbean/Gulf of Mexico and U.S. east coast regions.
My estimated risks for different parts of the Atlantic basin are as follows (with 1 indicating very low risk, 5 very high, and 3 average):
U.S. East Coast: 5
The jet stream over the U.S. has been weaker than usual so far this season, and the Bermuda high stronger. However, with a weak El Niño possibly developing, long hurricane tracks westward into the Gulf still seem unlikely. The east coast, in contrast, is at greater risk. Ocean temperatures offshore are anomalously warm and the region will be very moist, suggesting a fairly high probability of tropical cyclone impacts.
Yucatan Peninsula and Central America: 3
The western Caribbean shows some signs of being a fertile area for cyclogenesis, but with prevailing upper-level patterns as they are, it is difficult to see strong systems taking due westward tracks into Central America. Compared to the last few years, strong hurricanes are less of a threat, though the potential for flooding rains may be equal or greater.
Caribbean Islands: 2
As discussed above, the main development region may remain quiet for at least the first half of hurricane season. This would insulate the Caribbean islands from westward-moving Cape Verde hurricanes, but does not preclude development occurring locally. Nevertheless, it is somewhat more likely this year that the islands will receive a break from intense hurricane landfalls, especially the easternmost islands.
Gulf of Mexico: 3
Factors in the Gulf point in different directions. Ocean waters are warm and will likely remain so, particularly in eddies originating in the northern Caribbean (which also happens to be a likely source region for Gulf hurricanes). On the other hand, if an El Niño does develop, the Gulf of Mexico is where the suppression of hurricane activity would be felt most. Putting this together suggests a near-average risk this year.
Overall, the 2018 season is expected to be a bit above average; it should not be a repeat of the devastating 2017 season, but many areas such as the U.S. east coast may still be at high risk. Further, this is just an informal forecast and uncertainty in the outcome remains significant. Everyone in hurricane-prone areas should still take due precautions as hurricane season approaches. Dangerous storms may still occur even in overall quiet seasons.
Sources: http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/lanina/enso_evolution-status-fcsts-web.pdf, https://www.tropicaltidbits.com/analysis/models/?model=cfs-avg, https://ocean.weather.gov/
Tuesday, May 15, 2018
Hurricane Names List – 2018
The name list for tropical cyclones forming in the North Atlantic basin for the year 2018 is as follows:
Alberto (used)
Beryl (used)
Chris (used)
Debby (used)
Ernesto (used)
Florence (used)
Gordon (used)
Helene (used)
Isaac (used)
Joyce (used)
Kirk (used)
Leslie (used)
Michael (used)
Nadine (used)
Oscar (used)
Patty
Rafael
Sara
Tony
Valerie
William
This list is the same as the list for the 2012 season, with the exception of Sara, which replaced the retired name Sandy.
Monday, May 7, 2018
Goodstein's Theorem and Non-Standard Models of Arithmetic
This is the final post in a four-part series on logic and arithmetic, with a focus on Goodstein's Theorem. For the first post, see here.
In the previous post, Goodstein's Theorem, a statement about the properties of certain sequences of natural numbers, was proven using infinite ordinals. The use of a method "outside" arithmetic makes it plausible that this proof cannot be encoded in the language of Peano Arithmetic (PA), the formal logical system for discussing the natural numbers. A stronger statement is also true: not only does this particular proof resist translation, but no proof of Goodstein's Theorem exists within PA at all, because the theorem cannot be deduced from the axioms of PA.
But how does one go about proving something unprovable? Certainly it is intractable to check every possible method, as the diversity of such attempts could be infinite. Mathematicians take a different approach, using tools from what is called model theory. In mathematical logic, a model of a collection of axioms is a specific structure within which the axioms (and all theorems derived from them) are interpreted to be true. Recall that the axioms of PA mention five specific objects, assumed to be given from the start: a set N, a specific member 0, a function S from N to itself, and two binary operations on N, + and *. Of course, to actually do arithmetic we interpret N as the set of natural numbers, 0 as the number 0, S as the "successor" function taking in a number n and returning n + 1, and + and * as the usual addition and multiplication. Until we provide an interpretation specifying what these objects refer to, namely a model, they are just symbols! We may prove statements about them, such as the fact that S(0) and S(S(0)) are distinct members of N, but such a statement is just a mathematical sentence produced as the end product of a series of formal deductive rules.
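To make the distinction between symbols and their interpretations concrete, here is a minimal sketch (my own illustration, not part of the original post) of the standard interpretation in Lean 4: the inductive type N plays the role of the set of natural numbers, its two constructors play the roles of 0 and S, and add and mul are defined by the usual recursions.

inductive N where
  | zero : N
  | succ : N → N
  deriving Repr

-- Addition and multiplication, defined by recursion on the second argument.
def add : N → N → N
  | n, N.zero   => n
  | n, N.succ m => N.succ (add n m)

def mul : N → N → N
  | _, N.zero   => N.zero
  | n, N.succ m => add (mul n m) n

-- S(0) and S(S(0)) are automatically distinct: different constructor
-- applications of an inductive type are never equal.
#eval add (N.succ N.zero) (N.succ (N.succ N.zero))
-- N.succ (N.succ (N.succ N.zero)), i.e. 1 + 2 = 3

In this interpretation the formal sentence "S(0) and S(S(0)) are distinct members of N" becomes a true statement about actual objects, rather than just a string produced by deductive rules.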
Any collection A = (N_A, 0_A, S_A, +_A, *_A) consisting of a set N_A, a member 0_A of the set, a function S_A: N_A → N_A, and two binary operations +_A and *_A that satisfies the axioms is a model of PA. Of course, we know fairly well what we mean by "natural numbers", namely {0,1,2,...} with 0 the first element, S sending 0 to 1, 1 to 2, etc., and the usual addition and multiplication. The entire point of selecting axioms for PA is to study ℕ = (N,0,S,+,*), the standard natural numbers. A natural question (called the question of categoricity) arises: is the standard model the only model of PA, or are there others? The answer is that there are others: non-standard models A that still satisfy every axiom of PA. These were first discovered by Norwegian mathematician Thoralf Skolem in 1934. To be clear, they are not the natural numbers, at least, not as we intend them to be. Their existence exemplifies another limitation of first-order logic: axiom systems often fail to specify structures uniquely and hence fail to capture some features of the field to be studied.
Non-standard models often serve as an essential tool in independence proofs. First, we know from the previous post that the standard model ℕ of PA does satisfy Goodstein's Theorem (the standard model has the properties the natural numbers possess within the larger field of set theory, the methods of which were used in the proof). This means that the negation of Goodstein's Theorem cannot be a theorem of PA, since there is a model satisfying both the axioms and the theorem. If one could find a model of PA in which the negation of Goodstein's Theorem were true, then this would prove independence, because there would be models in which it is true and others in which it is false! Kirby and Paris used precisely this method in their 1982 proof of the result.
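Schematically, both halves of the argument are applications of the soundness theorem. Writing G as shorthand (introduced here, not in the original) for Goodstein's Theorem, in LaTeX notation:

\mathbb{N} \models \mathrm{PA} \cup \{G\} \;\Longrightarrow\; \mathrm{PA} \nvdash \neg G, \qquad \mathcal{A} \models \mathrm{PA} \cup \{\neg G\} \;\Longrightarrow\; \mathrm{PA} \nvdash G.

If models of both kinds exist, then neither G nor its negation is a theorem of PA, which is exactly what independence means.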
But what do non-standard models of natural numbers actually look like? First, we may infer what they have in common. PA axiom 1 guarantees the existence of a number 0. Axiom 2 gives it successors S(0), S(S(0)), etc. Axiom 3 says that S(n) = S(m) implies m = n. Therefore, all the successors generated from 0 are distinct from one another. This means that any model A has a set of natural numbers N_A containing the analogues of 0, 1, 2, and so on. The set of standard natural numbers N is thus contained in N_A for every A. The difference is that non-standard models have extra numbers!
At first brush, having additional "non-standard" numbers seems to contradict the Peano axioms, specifically the fifth, the axiom schema of induction. It states that if 0 has some property, and if any n having the property implies that n + 1 does as well, then all natural numbers have the property. The spirit of this axiom schema, if not the letter, is that beginning at 0 and knocking down the inductive dominoes will eventually reach every natural number. If we could choose the property to be "is in the set {0,1,2,...} (the standard natural numbers N)", then this would immediately rule out non-standard models: 0 is in this set, and for any n in the set, its successor is also standard, so all of N_A would be contained in {0,1,2,...} and hence we would have N_A = {0,1,2,...}. Unfortunately, it is impossible to define the set {0,1,2,...} inside the first-order logic formulation. It is also impossible to simply add an axiom "there are no other numbers besides 0, 1, 2, etc." for the same reason. Both approaches require infinitely long logical sentences to formulate, which are forbidden in the finitary system of first-order logic.
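For reference, the induction schema supplies one axiom for each property φ(n) that can be written as a first-order formula in the language of PA; in LaTeX notation:

\big(\varphi(0) \;\land\; \forall n\,(\varphi(n) \rightarrow \varphi(S(n)))\big) \;\rightarrow\; \forall n\,\varphi(n)

Only properties expressible as such formulas φ may be substituted, and "n is a standard natural number" is not among them; this is precisely the loophole described above.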
Though the axioms of PA cannot rule out non-standard natural numbers, such numbers are forced by the axioms to satisfy some strange conditions. Any non-standard number c must be greater than all standard numbers. Further, PA can prove that 0 is the only number that is not the successor of anything, so a "predecessor" to c, which we may call c - 1, must exist. Similarly, c - 2, c - 3, etc. must exist, as must, of course, c + 1, c + 2, etc. These must all be new non-standard numbers. Therefore, the existence of one non-standard number guarantees the existence of a whole non-standard "copy" of the integers: {..., c - 2, c - 1, c, c + 1, c + 2, ...}. However, it gets much, much worse. The operation of addition is part of Peano Arithmetic, so there must be a number c + c, which can be proven to be greater than all of the numbers c + 1, c + 2, and so on. From here, we get another infinite collection of non-standard numbers {..., c + c - 2, c + c - 1, c + c, c + c + 1, c + c + 2, ...}. A similar story occurs for c + c + c = c*3 and larger multiples as well, but we can also go in reverse. One can prove in PA that every number is either even or odd; that is, for any n, there is an m satisfying either m + m = n (if n is even) or m + m + 1 = n (if n is odd). This theorem means that c is even or odd, so there must be a smaller non-standard d with d + d = c or d + d + 1 = c. This d has its own infinite set of non-standard neighbors. The reader may continue this type of exercise and eventually derive the type of picture illustrated above: any non-standard model of the natural numbers must contain the standard numbers plus (at least) an infinite number of copies of the integers, ℤ, one for each member of the set of rational numbers, ℚ.
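For countable non-standard models, this picture can be stated precisely as an order type; in LaTeX notation:

\mathbb{N} + \mathbb{Z} \cdot \mathbb{Q}

that is, the standard numbers followed by copies of the integers arranged densely, in the order pattern of the rationals. Note that this describes only the ordering; the addition and multiplication of a non-standard model are far harder to pin down explicitly.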
As strange as these models are, they cannot be ruled out in PA, nor is there a natural addition to the axioms that can do so. Rather than being just a defect of first-order logic, however, non-standard models are a useful tool for examining the structure of different theories. Now that we have a non-standard model at our disposal, it seems reasonable that Goodstein's Theorem should fail in some non-standard models: "Goodstein sequences" beginning at non-standard natural numbers do not seem likely to terminate at zero. After all, they have infinitely many copies of the integers to move around in! These sequences often cannot be computed explicitly, but using other logical machinery, one can prove that they do not necessarily terminate. This establishes the independence of the theorem from PA.
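By contrast, Goodstein sequences starting at ordinary (standard) natural numbers can be computed directly from the definition given earlier in the series: rewrite the current term in hereditary base b, replace every b by b + 1, and subtract 1. The following is a minimal Python sketch (my own illustration; the function names are not from the original posts):

def bump_base(n, b):
    # Rewrite n in hereditary base b, then replace every occurrence of b with b + 1.
    if n == 0:
        return 0
    total, power = 0, 0
    while n > 0:
        n, digit = divmod(n, b)
        if digit:
            # Exponents are themselves rewritten hereditarily.
            total += digit * (b + 1) ** bump_base(power, b)
        power += 1
    return total

def goodstein(n, steps):
    # Return the Goodstein sequence starting at n, for at most the given number of steps.
    seq, base = [n], 2
    for _ in range(steps):
        if n == 0:
            break
        n = bump_base(n, base) - 1
        base += 1
        seq.append(n)
    return seq

print(goodstein(3, 8))  # [3, 3, 3, 2, 1, 0]
print(goodstein(4, 8))  # [4, 26, 41, 60, 83, 109, 139, 173, 211]

The sequence starting at 3 reaches zero almost immediately, while the one starting at 4 keeps growing; the theorem guarantees that it, too, eventually terminates, but only after a number of steps far too large to write out in ordinary notation.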
Goodstein sequences, interesting in their own right for their rapid growth, offer a revealing perspective on Peano Arithmetic and its limitations. The questions of independence and non-standard models arise frequently in the foundations of mathematics, as we seek to define precisely the scope of our mathematical theories.
Sources: http://www.cs.tau.ac.il/~nachumd/term/Kirbyparis.pdf, http://blog.kleinproject.org/?p=674, http://www.ams.org/journals/proc/1983-087-04/S0002-9939-1983-0687646-0/S0002-9939-1983-0687646-0.pdf, http://settheory.net/model-theory/non-standard-arithmetic, http://www.columbia.edu/~hg17/nonstandard-02-16-04-cls.pdf, http://boolesrings.org/victoriagitman/files/2015/04/introToPAModels.pdf, http://lesswrong.com/lw/g0i/standard_and_nonstandard_numbers/