Retiring President's Lecture
Aspects of one century and prospects for the next
John S. Garavelli
Associate Director, Protein Information Resource
About the Lecture
On August 8, 1900, David Hilbert, one of the preeminent mathematicians of the day, delivered a lecture to the International Congress of Mathematicians being held in conjunction with the Paris Exposition. In the lecture and more fully in the published text, Hilbert presented 23 outstanding problems in mathematics and physics that he thought should and might be resolved in the 20th century. This agenda sparked a research program taken up by his associates and students, and by their associates and students, that continues in many fields to the present day. The results of that program include the most profound principles of modern physics, the most startlingly unexpected results of mathematical logic, and the most influential aspects of computation, communications, economics and molecular biology in modern life. These connections to David Hilbert will be traced through the 20th Century. A list of proposed problems will be considered, some of which might and some of which must be resolved in the 21st Century.
About the Speaker
John S. Garavelli received a B.Sc. in chemistry from Duke University in 1969 and, after service at the Walter Reed Army Institute of Research in 1970 and 1971, earned a Ph.D. in biochemistry at Washington University, Saint Louis, in 1975. He did post-doctoral work at the Duke University Marine Laboratory and taught at the University of Delaware and Texas A&M University. He was a National Research Council Senior Research Fellow at the Extraterrestrial Research Division, NASA Ames Research Center. Since 1989 he has been a Senior Research Scientist at the National Biomedical Research Foundation, and has been Associate Director of the Protein Information Resource since 1997. He has conducted research in biotechnology database operation and bioinformatics, computational chemistry, molecular evolution, information theory, and space biology.
[Slide 1] When I knew two years ago that I was probably going to be giving the Retiring President's Address, I started asking people what aspect of my work in the Protein Sequence Database they would be most interested in hearing about. Their responses made me decide to talk about something else.
During the year 1900 various members of the Philosophical Society of Washington prepared reports on developments in the fields of science in which they had expertise. Most of these were rather dry reports on geology and geography, so I thought I would look at a very influential speech given in 1900 by [Slide 2] David Hilbert. Hilbert, born and educated in Königsberg, where he then taught, first came to international prominence in 1888 when he produced a proof of the general form of Gordan's Theorem in the field of algebraic invariants. Hilbert's characteristic existence proof, a demonstration that a certain solution logically must exist without actually producing an explicit solution, established him as an innovative thinker who would rely on formal rules to obtain elegant results rather than laboriously search for complicated results embedded in elaborate calculations. Paul Gordan himself was so disappointed in this Alexandrian solution of the Gordian knot that he declared, "Das ist nicht Mathematik. Das ist Theologie!" ["That is not mathematics. That is theology!"]
Building on his existence proof and using constructions from algebraic number fields, Hilbert in 1892 was able to demonstrate how solutions could be obtained in a finite number of steps for any specific case under Gordan's Theorem. This demonstration of the utility of existence proofs finally led Gordan to concede "I have convinced myself that theology also has its merits."
In this period Hilbert also worked in number theory along with his childhood friend, [Slide 3] Hermann Minkowski, and he was drawn into mathematical physics under Minkowski's influence for the brief time they were together at the University of Königsberg. In 1895 Hilbert took a position at the University of Göttingen at the invitation of [Slide 4] Felix Klein, then the leading mathematician in Germany. In 1902 Minkowski left a teaching position in Zurich to accept a position at Göttingen, and Hilbert remained there for the rest of his life working with these two close personal friends and an astounding number of talented students and associates.
[Slide 5] In 1898 Hilbert turned to geometry, publishing in 1899 The Foundations of Geometry, in which he introduced the use of metamathematics, the formal logical definition of not just mathematical objects but also the mathematics of the axiomatic system. He introduced the principles that in developing an axiomatic system mathematicians should strive to produce a set of axioms that is logically independent, mutually consistent, and complete, that is, that all true theorems should be logically derivable from the set of initial axioms. Earlier in the nineteenth century it had been established that there were alternative non-Euclidean geometries that were as consistent as Euclidean geometry; if any inconsistency were found to exist in a non-Euclidean geometry, then a corresponding inconsistency must also exist in Euclidean geometry. In his book Hilbert now proved by using analytic geometry that geometry was as consistent as the arithmetic of real numbers.
The clarity and logical power of Hilbert's book established his reputation. He was invited to give one of the opening addresses to the Second International Congress of Mathematicians at the Paris Exposition the following year. On Wednesday August 8, 1900 Hilbert addressed the International Congress on "Mathematical Problems". Saying that "the close of a great epoch not only invites us to look back into the past but also directs our thoughts to the unknown future", Hilbert in the published text presented 23 outstanding problems in mathematics and physics that he thought should and might be resolved in the 20th century.
[Slide 6] At Minkowski's suggestion, Hilbert presented only 10 of the 23 problems in the lecture. The problems included:
(Problem 2) establishing the independence, consistency and completeness of the axioms of arithmetic,
(Problem 6) establishing an axiomatic system for mathematical physics,
(Problem 8) proving the Riemann hypothesis concerning the zeros of the zeta function,
(Problem 10) producing an algorithm with a finite number of steps which would determine the solvability of any polynomial Diophantine equation (Fermat's Last Theorem was mentioned in Hilbert's introduction, but was not presented in the list except in this completely general form), and
(Problem 18) filling various spaces of various dimensions with congruent polyhedra.
[For a complete listing of all 23 problems and their current status, see http://aleph0.clarku.edu/~djoyce/hilbert/ .]
Many mathematicians who attended the talk were excited by Hilbert's boldness in proposing, for example, not just solving Fermat's Last Theorem but finding whether there was an algorithm to solve all polynomial Diophantine equations. The lecture was eagerly translated and reprinted in the journals of many mathematical societies. But, as with most developments in mathematics, the rest of the world took little notice; there were many other, more interesting things to see at the fair.
[Slide 7] The world of science and technology represented at the fin-de-siècle Paris Exposition was truly marvelous. Earlier in the year, on February 7, Count [Graf] Ferdinand von Zeppelin had flown his first airship.
[Slide 8] In 1895 Guglielmo Marconi had transmitted the first radio signals as wireless telegraph messages, and in 1901 he demonstrated the power of this new technology for the British navy by transmitting radio signals across the Atlantic ocean.
[Slide 9] In 1902, Josiah Willard Gibbs (who in 1863 had received from Yale University the first Ph.D. in engineering granted in the United States) published Elementary Principles in Statistical Mechanics, a pioneering book that established a firm mathematical foundation for Boltzmann's thermodynamics by basing it not on the behavior of continuous media, but on the statistical behavior of molecules assuming that they obeyed Newtonian laws of mechanics. This was to be critical for several later developments.
[Slide 10] Much to the surprise of everyone, especially Count von Zeppelin, on December 17, 1903, Orville and Wilbur Wright flew the first successful, powered, heavier-than-air craft. Their success depended on three things in particular: (1) on the construction of wind-tunnels to test airfoil aerodynamics, (2) on the fabrication of a lightweight engine cast in a special aluminum alloy (as was discussed here not long ago) and (3) on their both being thoroughly experienced glider pilots.
But, there were two other developments in 1900 that are important for my story. One was the rediscovery of Gregor Mendel's obscure 1865 laws of heredity by Hugo de Vries, Carl Correns and Erich Tschermak. The other was the publication by [Slide 11] Max Planck of a paper explaining the theoretical interaction of matter with light known as black body radiation. The prevailing theory at the time was that light was a wave of electromagnetic energy. The wave nature of light was supported by the observation of interference, diffraction and polarization (phenomena all difficult to explain under Newton's corpuscular theory) as well as by Maxwell's theory of electromagnetism. However, the medium supposedly supporting this wave, the luminiferous ether, was under serious reconsideration because of the failure in 1887 of Albert A. Michelson and Edward W. Morley to measure the speed of the earth relative to it. Their observations suggested that movement through the ether could not be detected. The Dutch physicist Hendrik A. Lorentz had attempted to explain the failure by assuming that matter moving through ether experienced a contraction. Planck now deepened everyone's perplexity by reviving the long discredited corpuscular theory of light.
[Slide 12] Max Planck found he could explain the properties of black body radiation only by assuming that light is emitted and absorbed in discrete packets of energy he called quanta. The energy of these quanta was proportional to their frequency, but the proportionality constant was incredibly small, on the order of 10⁻³⁴ joule-seconds, and it was difficult to understand its significance. Five years later, in 1905, a former student of Minkowski's in Zurich, [Slide 13] Albert Einstein, published three papers in the German Annals of Physics [Annalen der Physik]. The first paper explained the previously unexplained photoelectric effect by invoking Planck's 1900 theory of the quantization of light energy.
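Planck's relation between the energy of a quantum and the frequency of the light, in modern notation, is:

```latex
E = h\nu, \qquad h \approx 6.626 \times 10^{-34}\ \mathrm{J\,s}
```

The constant h is now called Planck's constant; its minuteness is why the granularity of light escapes everyday notice.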
The second paper explained the null result of the 1887 Michelson-Morley experiment through the theory that the Lorentz-Fitzgerald transformation was not a contraction of a moving mass, but the transformation in four-dimensional space-time of a moving reference frame, and that the speed of light in a vacuum is a constant in all inertial reference frames. This theory became known as the special theory of relativity.
The third paper explained Brownian motion and provided the first direct experimental evidence for atomic theory based on the statistical thermodynamics of Gibbs.
Although Einstein, a 26 year old Swiss patent clerk, was confident of the importance of these papers, he was not immediately aware of any response from the scientific or general public. In fact the response in Göttingen was electric. Just that spring Minkowski and Hilbert had begun a seminar on mathematical physics. When Einstein's second paper appeared, Minkowski remarked of his former student
[Slide 14] "Ach, der Einstein, der schwänzte immer die Vorlesungen — dem hätte ich das gar nicht zugetraut." ["Oh, that Einstein, always cutting lectures — I would never have believed him capable of it."]
While the papers had little immediate impact, by 1909 the importance of his work was recognized; he became professor of theoretical physics at the University of Zurich in 1909, at the German University of Prague in 1911, and at the Federal Institute of Technology in Zurich in 1912. He was elected to the Prussian Academy of Sciences in Berlin in 1913, and in 1914 he became professor of physics at the University of Berlin and was named director of the new Kaiser Wilhelm Institute for Physics.
[Slide 15] In a follow-up to the second 1905 paper Einstein showed that not only was four-dimensional space-time transformed in a moving reference frame, but that mass and energy were related in a similar transformation, the famous mass-energy equivalence equation that as he said …
["The equation 'E' is equal 'm' 'c' square showed that very small amount of mass may be converted into a very large amount of energy."]
Hilbert's 6th problem, the proposal that the axioms of mathematical physics should be developed, had paid off for Einstein. By applying the formal rules of the Lorentz transformation he had revolutionized physics.
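The scale of the mass-energy equivalence can be illustrated with a line of arithmetic (a hypothetical illustration, not from the lecture):

```python
# Mass-energy equivalence, E = m c^2: even a tiny rest mass corresponds
# to an enormous energy.
c = 299_792_458.0          # speed of light in vacuum, m/s (exact by definition)

def rest_energy(mass_kg: float) -> float:
    """Energy in joules equivalent to a given rest mass."""
    return mass_kg * c ** 2

# One gram of matter:
e = rest_energy(1e-3)
print(f"{e:.3e} J")        # prints 8.988e+13 J, roughly 21 kilotons of TNT
```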
Another person who was attempting to apply Hilbert's program was [Slide 16] Bertrand Russell at Cambridge. First in his book Principles of Mathematics, which he began writing in 1900, and then together with [Slide 17] Alfred North Whitehead in their 1910-1913 books Principia Mathematica, Russell pursued Hilbert's program and attempted to derive a consistent and complete arithmetic from a very small number of axioms in mathematical logic and set theory. They followed Hilbert in constructing a metamathematical description of their work. The metamathematics is so rigorous it takes 90 pages just to get to the first statement of primitive concepts and propositions. In his work Russell found that the concept of number could be based on something the Greeks (but not Nature) so abhorred they never even considered it a number, [Slide 18] zero. The number zero is defined as "the class of all classes with no members", the empty set. One of the definitions of the number one is "the class which when a member is removed results in the number zero." [Here it is defined as the class of all things that exist as unique individuals.] It had taken Western logic 2600 years to reach this epiphany, but still there was a problem. Russell realized that set theory carried with it a special paradox, which became known as the Russell Paradox. If a set is defined to consist of all the sets that do not contain themselves as members, does the set contain itself, or not? Russell attempted to exclude such behavior by adopting special rules against it in the metamathematics, but the problem would return to haunt him. (Also remember how these equations look; it will come up later.)
[Slide 19] A younger colleague of Russell's at Cambridge who followed Hilbert's work in geometry and number theory and who worked on some of his proposed problems was G.H. Hardy. Hardy worked at Cambridge from 1906 to 1919, at Oxford from 1919 until 1931 when he returned to Cambridge as Sadleirian professor of pure mathematics. (The biographer Robert Kanigel said that only someone educated in a British public school could sit in a chair like that.)
Hardy is remembered for his long series of publications with J.E. Littlewood and many others delving into numerous topics in number theory, like Diophantine analysis, summation of divergent series, Fourier series, the Riemann zeta function, and the distribution of primes.
He took particular pride in being a "pure" mathematician whose work would probably never have any practical benefit. In his autobiography he said "I have never done anything 'useful'. No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world." Despite this boast, at the beginning of his career in 1908 he derived the laws of genetics, now known as the Hardy-Weinberg law, that govern how the proportions of dominant and recessive traits are propagated through large populations.
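The content of that law can be sketched in a few lines (my illustration, not Hardy's notation): with random mating and no selection, the allele frequencies, and hence the genotype proportions p², 2pq, q², are unchanged from one generation to the next.

```python
# A minimal sketch of the Hardy-Weinberg principle. With allele A at
# frequency p and allele a at frequency q = 1 - p, random mating gives
# genotype proportions p^2 : 2pq : q^2, and the allele frequency in the
# next generation is again p -- no drift toward the dominant trait.
def next_generation(p: float) -> float:
    """Frequency of allele A among offspring, given frequency p in parents."""
    q = 1.0 - p
    AA, Aa = p * p, 2 * p * q
    # Each AA parent transmits only A; each Aa parent transmits A half the time.
    return AA + Aa / 2

print(round(next_generation(0.7), 12))   # 0.7 -- the proportions are stable
```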
But the brightest event in Hardy's career was his discovery of [Slide 20] Ramanujan. Born in a district capital of rural India, Ramanujan was almost completely self-taught in mathematics. Through the efforts of a devoted local teacher he received a fellowship to the University of Madras in 1903, but lost it the following year because he devoted all his time to mathematics to the neglect of other subjects.
On his own, Ramanujan worked out the Riemann series, elliptic integrals, and zeta function equations, and independently rediscovered the results of Gauss and others on hypergeometric series. On the other hand, because of his lack of formal training he had only a vague concept of what constituted mathematical proof. Underived results tumble out of his notebooks as though received in divine inspiration. But despite many brilliantly insightful results, some of these unsupported conjectures have been found to be wrong.
He managed to gain recognition in India after publishing a paper on Bernoulli numbers in 1911. His papers were sent to several English mathematicians, and by sheer chance this remote genius was recognized by Hardy who arranged to bring him to Cambridge in 1914. One especially notable discovery of Ramanujan working with Hardy was an asymptotic formula for the enumeration of partitions which I will mention later.
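For the curious, the Hardy-Ramanujan result can be sketched as follows (my illustration, with an exact computation alongside the asymptotic estimate):

```python
# The Hardy-Ramanujan asymptotic formula for the partition function p(n),
# the number of ways of writing n as a sum of positive integers:
#     p(n) ~ exp(pi * sqrt(2n/3)) / (4 * n * sqrt(3))
# Below, p(n) is computed exactly by a standard dynamic program and
# compared with the asymptotic estimate.
import math

def partitions(n: int) -> int:
    """Exact number of partitions of n (order of summands ignored)."""
    ways = [1] + [0] * n
    for part in range(1, n + 1):          # admit parts of size `part`
        for total in range(part, n + 1):
            ways[total] += ways[total - part]
    return ways[n]

def hardy_ramanujan(n: int) -> float:
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

print(partitions(100))                    # 190569292
print(f"{hardy_ramanujan(100):.3e}")      # about 2.0e8 -- within ~5% already
```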
[Slide 21] His strict vegetarianism and the scarcity of fresh fruits and vegetables in war-time England probably resulted in a relapse of tuberculosis in 1917. Sick and lonely, after the resumption of passenger ocean transport and a slight improvement in his health, Ramanujan returned to India in 1919 and died there the following year.
Ramanujan left notebooks filled with conjectures that Hardy consulted for many years, and that mathematicians still dig through for discoveries. While we may regret what might have been if he had been born in a more nurturing environment, or if he had been brought to a more hospitable land in peacetime, keep in mind that it was nearly pure chance that he was allowed to express his genius at all.
[Slide 22] There are a few other events from the period around World War I that I want to note. Before the war broke out, Henry Ford in 1912 had realized that the production methods of the day could be more efficiently organized to speed production, take advantage of standardized parts, and incidentally make him a lot more money, by converting the Ford Motor Company factory to the first modern assembly line. The efficient organization of production was to play a significant role in affecting the outcome of the war, and afterwards in reshaping the economies of capitalist and communist countries alike.
[Slide 23] After the war in 1919 John Maynard Keynes, who had studied economics and mathematics at Cambridge with Russell, published "The Economic Consequences of the Peace" in which he used economic theory to argue against requiring Germany to pay huge war reparations to France and England which would then be used to repay huge war loans to the United States. He had to resign from the civil service to publish his opinions, but no one in the government listened to him then, or later after the economies of Germany and then England were ruined.
By the end of World War II Keynes' theories of economics were valued much more highly.
[Slide 24] Returning to physics, by 1907 Einstein knew there were problems with his relativity theory. The theory did not encompass accelerating reference frames, forces or gravity. He finally realized he had missed too many of Minkowski's lectures; he did not know enough math to solve the problem. In considering how Newtonian gravitation might be modified to fit special relativity, he proposed the Equivalence Principle, that a gravitational field is equivalent to a uniformly accelerated reference frame. He realized that the bending of light by gravity would be a consequence of the equivalence principle that could be tested by astronomical observation. In 1912 Einstein found that the Lorentz transformations did not apply in this more general situation and that the gravitational field equations must be non-linear infinitesimal transformations. After consulting a friend who brought him up to speed in 1913, Einstein published a paper where he employed the tensor calculus of M.M.G. Ricci and T. Levi-Civita to describe gravitation. In June 1915 Einstein spent a week at Göttingen where he lectured on his (incorrect) 1914 version of general relativity. Hilbert attended his lectures and the race was on. In one paper submitted by Einstein on November 25 and published on December 2, and in a second paper submitted by Hilbert on November 20 (5 days before Einstein) but revised on December 6 (4 days after Einstein's publication) and published on March 31, the full field equations of the general theory of relativity were finally presented. The race between Hilbert and Einstein had been very close indeed, but Hilbert always acknowledged that Einstein had the original inspiration for the Equivalence Principle.
[Slide 25] Using the general theory of relativity Willem de Sitter in 1917 predicted an expanding universe. The prevailing theory at the time was that the universe was static, so Einstein modified one of his equations to keep the universe from expanding. With the general theory of relativity Einstein had been able to explain the anomalous advance of the perihelion of Mercury. In the first experimental test of the general theory, two British expeditions to observe the solar eclipse of 1919 led by Arthur S. Eddington confirmed the predicted bending of star light by the sun's gravity.
By the early 1920's it was known that some spiral nebulae contained individual stars, but it was not known whether they were relatively small collections of stars within our own galaxy, or were separate galaxies as big as our own. In 1924 Edwin Hubble measured the distance to the Andromeda nebula at about a hundred thousand times as far away as the nearest stars. Over the next few years Hubble was able to measure the distances to a handful of other galaxies, using apparent brightness as a rough indication of distance. The relative velocities of the galaxies were measured through their Doppler shifts, and in 1929 Hubble showed that galaxies are moving away from us with a speed proportional to their distance.
Under the general theory of relativity the inescapable conclusion was that all the galaxies in the universe had originated in a big bang billions of years ago, and that the universe was expanding in space-time. Einstein admitted that his modification was a mistake, but some new theories may yet revive it.
[Slide 26] While I am discussing space I will mention Robert Goddard. In 1926 largely with his own funds he succeeded in launching the first liquid-fuel rocket. The rocket reached an altitude of 56 meters and a speed of 97 kilometers per hour. At the same time the scientist Hermann Oberth and the young Wernher von Braun were beginning their development of rockets in German rocket clubs being organized and supported by the private military organizations funded by the National Socialist Party.
[Slide 27] Now I need to go back and pick up the threads of the story for the quantum and atomic theories. In 1913 Niels Bohr proposed a model for the atom with a small, massive, positively charged nucleus and electrons moving in orbits of definite energy around it. However, this model did not agree at all with the classical laws of mechanics and electrodynamics.
[Slide 28] Diffraction and interference were well-understood wave properties of light, and considering that light under some conditions seemed to behave like a particle, Louis de Broglie in 1924 proposed that matter on the atomic scale might also have a dual wave-particle nature. This led physicists and mathematicians to attempt applying different mathematical treatments for these wave-particles in the new model of the atom.
[Slide 29] In 1925 Werner Heisenberg developed a quantum mechanics using matrix formulas that was formalized by Max Born and Pascual Jordan, associates of Hilbert at Göttingen. In 1926 Born gave a statistical interpretation of the matrix quantum mechanics, but the mathematics remained hard to understand, difficult to use, and nearly impossible to solve.
[Slide 30] Using this quantum matrix mechanics Heisenberg was able to show that there were limits on determining certain properties of quantum systems. It would not be possible, for example, to determine simultaneously both the position and momentum of a quantum particle. Likewise, the energy could be determined only within limits determined by the period of time in which the measurement was made. Although there were many philosophical opinions on these results, the prevailing interpretation is that these are not merely observational limits, that is limits on what an observer might be able to measure, but phenomenological limits on the existence of the particles themselves.
[Slide 31] Shortly after Heisenberg's matrix mechanics appeared in 1925, Austrian physicist Erwin Schrödinger produced a version of quantum mechanics that employed the differential equations developed to describe the physics of waves along the lines proposed by de Broglie the year before.
[Slide 32] Believe it or not, Schrödinger more or less guessed this equation and found that when it could be solved it did provide what seemed to be the correct answers for many of the atomic problems of the day. Of course to say he guessed it might convey a misconception; Schrödinger understood classical 19th century dynamics and wave mechanics. But he really had no a priori reason to use these constants or to believe that these methods could be applied to the phenomena being measured in the quantum domain. It met the only real requirement of a scientific theory – it worked.
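The equation Schrödinger arrived at, in its modern time-dependent form, is:

```latex
i\hbar \,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^2}{2m}\,\nabla^2 \psi + V(\mathbf{r})\,\psi
```

Here ψ is de Broglie's matter wave, V the potential energy, and ħ is Planck's constant divided by 2π, the same tiny constant of 1900 now governing the dynamics of the atom.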
Now, many physicists were perplexed by something else; they had two apparently very different mathematical methods which (when they were applied to the same systems and could be solved) gave the same results. As related by E.U. Condon, this development gave David Hilbert "a great laugh". Around 1900 he had been working on integral equations that led, in 1903-1904, to work on eigenfunctions used in the wave mechanics. In 1905-1906 he had worked on functional theory and developed a mathematics of infinite dimensional spaces for the transformation of functions. In 1924 the book Methods of Mathematical Physics by Richard Courant with Hilbert as coauthor provided the mathematical tools used by both matrix and wave mechanics. When Heisenberg and Born went to Hilbert for help in working with the difficult matrix mechanics, he had told them that the matrices were the results of eigenvalue solutions for boundary-value problems of differential equations, and that they should look for those differential equations. Schrödinger beat them to it.
[Slide 33] Now with both pieces in place, it was shown that the matrix mechanics and the wave mechanics would give the same results by a simple transformation process, but why this was so was not immediately obvious. Using functional theory and the infinite dimensional spaces for the transformation of functions, developed by Hilbert and now called Hilbert spaces, Hilbert's student John von Neumann in his 1932 book Mathematische Grundlagen der Quantenmechanik [Mathematical Foundations of Quantum Mechanics] provided the full mathematical apparatus for proving the equivalence of the two methods.
In 1930 when he was 68 and just retired Hilbert was to be presented a civic honor during a scientific meeting in his home town, Königsberg. He chose to make his farewell speech there, and at the end of it he said, "…there is no such thing as an unsolvable problem. We must know. We shall know." ("Wir müssen wissen. Wir werden wissen.")
Another of Hilbert's associates who worked on the principles of physics was [Slide 34] Emmy Noether. She had a loud, disagreeable voice, and a mind of first rate brilliance. One of her collaborators remarked that "she looked like an energetic and very nearsighted washerwoman." More charitably, Hermann Weyl said in eulogy "the graces did not preside at her cradle." Yet what she did not possess, she created when she proved the most profoundly beautiful theorem of mathematical physics.
Her father, a professor of mathematics at the University of Erlangen, encouraged her interest in the subject and arranged for her to audit classes there. She passed the general German university entrance exams but had to go to the University of Göttingen in 1903 because Erlangen would not admit women. She received her Ph.D. in 1907. And because women could not hold regular university faculty positions, she worked from 1908 to 1915 without pay or title at Erlangen where she occasionally taught her father's classes. In 1915 after the retirement of her father, the death of her mother and the conscription of her brother, she joined the Mathematical Institute in Göttingen. She was nominated by Hilbert and Klein as Privatdozent (lecturer). Hilbert argued before the University Senate "Meine Herren [Gentlemen], I do not see that the sex of the candidate is an argument against her admission as a Privatdozent. After all, the Senate is not a bathhouse." In spite of this argument, she was not admitted, and lectures announced under Hilbert's name were often delivered by Emmy Noether. Finally after the fall of the German government in 1919 she was officially permitted to lecture, but still without pay until 1922.
[Slide 35] Working with Klein, Hilbert and Einstein on the mathematical foundations of physics and relativity she proved this beautiful theorem. "If the first integral of a function of generalized coordinates and first derivatives is invariant under an infinitesimal transformation, then the first integral of the related Euler-Lagrange equation is a constant." That may not sound beautiful, and the corresponding equations may not look all that beautiful, but it can also be stated like this. "For every observable symmetry in Nature there is a corresponding entity that is conserved. And for every conservation law there is a corresponding symmetry." Because the laws of physics do not change from place to place, or because of relative motion, there is an entity called "momentum" that is conserved. "Energy" is the entity that is constant because the laws of physics do not change with time. The symmetric coordinates and the conserved entities are conjugate quantities in the Heisenberg Uncertainty Relations, but this can be very obscure; Richard Feynman says, for example, that electric charge is the entity that is conserved because physics is symmetric with respect to changes in phase of the quantum wave equation. Go figure.
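In modern Lagrangian notation the theorem can be sketched, for a single coordinate, like this: if the action is unchanged by an infinitesimal transformation q → q + ε δq, then along any trajectory obeying the Euler-Lagrange equation the quantity (∂L/∂q̇) δq is constant.

```latex
\delta \int L(q, \dot q, t)\, dt = 0
\quad\Longrightarrow\quad
\frac{d}{dt}\!\left( \frac{\partial L}{\partial \dot q}\, \delta q \right) = 0
```

With δq a spatial translation this conserved quantity is the momentum; with a time translation it is the energy.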
In 1933 she was denied permission to teach by the Nazi government. She accepted a guest professorship at Bryn Mawr. During the next two years she lectured there and at the Institute for Advanced Study in Princeton. She died unexpectedly after cancer surgery in April 1935.
[Slide 36] The list of people leaving Göttingen at this time is staggering; besides Emmy Noether, there were James Franck, Max Born, Richard Courant, Edward Teller, Edmund Landau, John von Neumann and Hermann Weyl. At a dinner the new Reichsminister for Education asked the distinguished Professor Hilbert, "How is mathematics in Göttingen now that it has been freed of Jewish influence?" Hilbert replied coldly, "There is really none any more." Weyl had arrived at Princeton with Einstein who had left Berlin. After spring semester 1934 Hilbert never returned to the Mathematics Institute he and Klein had built. He died in Göttingen with only his wife beside him in February 1943.
[Slide 38] Another physicist who got away was Enrico Fermi. He studied with Max Born in Göttingen in 1923, returning to Italy in 1924. In 1926, Fermi discovered the statistical laws, now known as Fermi statistics, governing particles with half-integer intrinsic spin subject to Pauli's exclusion principle. In 1934, he proposed the theory of β-decay by combining his work on radiation theory with Pauli's neutrino theory. Following the discovery by Curie and Joliot of artificial radioactivity, he demonstrated that nuclear transformation occurs in almost every element subjected to neutron bombardment, except uranium, which produced results he did not understand. This work resulted in the discovery of slow neutrons, and led soon after to the discovery of nuclear fission by Otto Hahn and Fritz Strassmann, interpreted by Lise Meitner and Otto Frisch. In 1938 Fermi got permission from the Fascist government of Italy to accept the Nobel prize. After picking up the prize he prudently declined to return to Italy and accepted a position in the United States. He worked with Leo Szilard on assembling the first nuclear reactor in Chicago on December 2, 1942. He was one of the leaders of the Manhattan Project at Los Alamos.
My favorite story about Fermi involves extraterrestrials. Fermi believed there must be extraterrestrial life and reasoned that advanced civilizations should have had enough time to evolve and spread through the galaxy. At Los Alamos one day he asked what has become known as Fermi's Paradox, "Where are they?" After looking around the room Leo Szilard is supposed to have replied, "They are among us, but they call themselves Hungarians."
[Slide 39] Leo Szilard was another of the great intellectual peripatetics of this era. He left Hungary in 1919 to escape a repressive and anti-Semitic regime, arriving in Berlin in 1920. [Shrug.] At the University of Berlin he studied under Einstein, Planck, and von Laue. In 1928 he taught seminars on quantum theory with John von Neumann. In 1929 he filed the German patent for the cyclotron. In that same year he published "On the decrease of entropy in a thermodynamic system by the intervention of intelligent beings". In this case he is not talking about extraterrestrials but Maxwell's demon. In the paper he shows that the basic thermodynamic unit of information is k ln 2. (k is Boltzmann's constant.)
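Szilard's result can be put in concrete numerical terms. The sketch below (an illustration I have added, not anything from Szilard's paper) computes the entropy k ln 2 of a single bit and the corresponding minimum energy, kT ln 2, required to erase it at temperature T, the bound later named for Landauer.

```python
import math

k = 1.380649e-23  # Boltzmann's constant, J/K

def bit_entropy():
    """Szilard's thermodynamic entropy of one bit, k ln 2, in J/K."""
    return k * math.log(2)

def min_erasure_energy(T):
    """Minimum energy (J) to erase one bit at temperature T (K): kT ln 2."""
    return bit_entropy() * T

print(f"Entropy of one bit: {bit_entropy():.3e} J/K")
print(f"Cost of erasing one bit at 300 K: {min_erasure_energy(300.0):.3e} J")
```

At room temperature the bound is a few zeptojoules, many orders of magnitude below what any real computer of the twentieth century dissipated per bit.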
In July 1939, Leo Szilard and Eugene Wigner visited Albert Einstein at his vacation home in Peconic, Long Island. They drafted a letter, which Einstein signed on August 2 and which was delivered to President Franklin Roosevelt on October 11. In it they explained the significance of uranium fission and urged development of a bomb before Germany. It led to the "Uranium Committee", and Szilard, as I said, worked with Enrico Fermi to build the first nuclear reactor in 1942. It was Szilard who originated the method of arranging the graphite and uranium which made the controlled nuclear reaction possible.
In 1948 he, along with a number of other physicists as I will explain later, turned to molecular biology. He invented the chemostat, an apparatus for the continuous production of bacterial cultures under controlled conditions.
[Slide 40] George Pólya had been a student of Hilbert's in 1912 and of Hardy in 1924. In an extremely long and productive career he wrote no paper more important, or more widely referenced in so many different fields, than his 1937 "Combinatorial Enumeration of Groups, Graphs, and Chemical Compounds", in which he developed the cycle index polynomial in what is known as the Pólya Enumeration Theorem. He also wrote, in 1945, How to Solve It, an immensely readable little book for the mathematically challenged.
On a personal note, in 1985 while I was a Senior Research Fellow at the NASA Ames Research Center in Mountain View I published a paper on an algorithm for the multiplication of symmetric polynomials arising from a molecular isomer counting application of the Pólya Enumeration Theorem. This algorithm relies on the representation of symmetric polynomials by partitions as enumerated by the Hardy-Ramanujan formula. Knowing that Pólya still maintained an office at Stanford in nearby Palo Alto, I sent him a copy of the paper with an acknowledgement. I did not receive a reply, but was saddened a short time later to read his obituary in the newspapers. Later I learned from G. L. Alexanderson, who wrapped up Pólya's affairs at Stanford, that my paper was the last one Pólya had read before he died.
I didn't really think my paper was that bad.
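The flavor of Pólya's counting theorem can be conveyed with a toy case (my own illustration, not the symmetric-polynomial algorithm from my paper): counting the distinct k-colorings of an n-bead necklace under rotation. The cycle index of the cyclic group reduces the count to a sum over the divisors of n weighted by Euler's totient.

```python
from math import gcd

def phi(d):
    """Euler's totient: count of integers in 1..d coprime to d."""
    return sum(1 for i in range(1, d + 1) if gcd(i, d) == 1)

def necklaces(n, k):
    """Distinct k-colorings of an n-bead necklace up to rotation,
    via the cycle index of the cyclic group C_n:
    (1/n) * sum over d | n of phi(d) * k**(n/d)."""
    return sum(phi(d) * k ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

print(necklaces(4, 2))  # 6 distinct two-color necklaces of 4 beads
print(necklaces(6, 3))  # 130 three-color necklaces of 6 beads
```

The same machinery, with the appropriate symmetry group in place of C_n, is what counts chemical isomers.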
[Slide 41] This is where the story of Hilbert's program takes an unexpected and melancholy turn. In 1930 at the meeting where Hilbert said "Wir müssen wissen", Kurt Gödel, a young mathematician at the University of Vienna, announced a result that he published in a 1931 paper "On Formally Undecidable Propositions of Principia Mathematica and Related Systems" ["Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme"]. In this paper Gödel settled the question of completeness and consistency (which you will remember lay at the heart of Hilbert's 2nd problem), but in the negative; that is, for any consistent logical system comprehensive of arithmetic there are undecidable propositions that are true but formally unprovable. Gödel achieved this logical tour-de-force by applying Hilbert's formalist treatment of mathematics with a vengeance. Gödel said, in effect: take all the different symbols that Russell and Whitehead use in Principia Mathematica (do you remember those equations for '0' and '1'?) and assign each of them a unique number, in such a way that every formula can be analyzed to provide a unique identifying number, called the Gödel number, that encodes the original formula. Each metamathematical statement about these Gödel numbers, such as "The formula with Gödel number 'x' is not true", would also have its own unique Gödel number. Using this system Gödel then proved that it was possible to construct the metamathematical statement equivalent to "This statement is unprovable"; neither the statement nor its negation could be proven if the original axioms were consistent. He left Austria a few years later and worked at the Institute for Advanced Study in Princeton.
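The arithmetization trick itself is simple enough to demonstrate. Here is a toy version (the symbol table is my own invention, not Gödel's): each symbol gets a code number, and a formula becomes the product of successive primes raised to those codes, so that unique factorization guarantees the encoding can be reversed.

```python
# Toy symbol table for a fragment of arithmetic (hypothetical, for illustration).
SYMBOLS = {'0': 1, 'S': 2, '=': 3, '+': 4, '(': 5, ')': 6}

def primes():
    """Generate primes 2, 3, 5, ... by trial division."""
    found, n = [], 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(formula):
    """Encode a formula as the product of p_i ** code(symbol_i)."""
    g = 1
    for p, ch in zip(primes(), formula):
        g *= p ** SYMBOLS[ch]
    return g

def decode(g):
    """Recover the formula by stripping each prime's exponent in turn."""
    inv = {v: s for s, v in SYMBOLS.items()}
    out = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(inv[e])
    return ''.join(out)

g = godel_number('S0=S0')
print(g)          # 808500 = 2^2 * 3^1 * 5^3 * 7^2 * 11^1
print(decode(g))  # S0=S0
```

Because statements about numbers are themselves strings of symbols, they too receive Gödel numbers, and that is the hinge on which the self-referential construction turns.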
[Slide 42] Alan Turing entered Cambridge in 1930 and began the mathematics curriculum in 1931, the same year that Hardy returned from Oxford. One of his dissertation advisors was A.S. Eddington, who had confirmed the gravitational bending of light predicted by Einstein. In 1937 Alan Turing published "On Computable Numbers, with an Application to the Entscheidungsproblem". Building on Gödel's 1931 result, Turing showed that no finite-step algorithm could be constructed which would be able to determine whether an arbitrary algorithm would ever halt. More devastatingly, he showed that there was no finite method for deciding all mathematical statements; there were incomputable numbers and unsolvable problems. Turing was able to prove this by developing the mathematical concept of a machine with a finite set of internal states that read and wrote characters on an unbounded tape. He then applied to this mathematical model the same procedures that Gödel had used in constructing Gödel numbers. This abstract "Turing" machine is the basis of the mathematical field of computability. The "Turing" machine was also the model for the electronic computers that were to follow.
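The abstraction is small enough to write down. This sketch (my own minimal rendering, not Turing's notation) is a transition table plus a tape; the example program simply flips every bit and halts on the first blank. Note the step cap: by Turing's own result there is no general procedure to decide in advance whether such a run will terminate, so a practical guard is the best we can do.

```python
def run(table, tape, state='start', blank='_', max_steps=1000):
    """Run a transition table until the 'halt' state; return the final tape.
    table maps (state, symbol) -> (symbol_to_write, 'L' or 'R', next_state)."""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        if state == 'halt':
            break
        symbol = cells.get(pos, blank)
        write, move, state = table[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == 'R' else -1
    return ''.join(cells[i] for i in sorted(cells)).strip(blank)

# Example program: complement every bit, halt at the blank.
FLIP = {
    ('start', '0'): ('1', 'R', 'start'),
    ('start', '1'): ('0', 'R', 'start'),
    ('start', '_'): ('_', 'R', 'halt'),
}

print(run(FLIP, '10110'))  # 01001
```

Everything a modern computer does reduces, in principle, to tables of exactly this kind.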
[Slide 43] Just before the outbreak of the second world war, the Polish army secretly managed to obtain the highest-level German coding machine, code-named Enigma. It was an electromechanical machine capable, if properly used, of generating a continuously changing substitution cipher with an embedded initialization sequence. Eventually the machine was obtained by the British Code and Cypher School, which had been evacuated from London and set up at Bletchley Park, a large Victorian mansion on the railway line midway between Oxford and Cambridge. This group had the mission of receiving all intercepted Enigma radio transmissions, deciphering them, providing an initial estimation of their usefulness, and passing the information — in protected form — to the most appropriate recipient. Since they had the encoding machine, they thought they should be able to make a decoding machine, but it was not obvious how to do it. In 1939 the British recruited Alan Turing to undertake Enigma code decipherment at Bletchley Park. This whole story is immensely complex, of the greatest importance in determining the outcome of the war in Europe, and as thrilling as the best British spy novel — it is the best British spy novel. But it was also the best-kept secret of the war (the Manhattan project never had such good cover) and the story was not told publicly for thirty years. It was Alan Turing who designed the first electromechanical decoding machines and the procedures used to find the correct decipherment settings. By 1943 digital computers were being constructed to do the decipherment and Turing aided in their design.
[Slide 44] After the war Turing took positions at the National Physical Laboratory and Manchester University where he worked on the project to build the first British programmable electronic digital computer. He did research on models of the brain and machine intelligence. In 1950 he proposed the definitive test for determining if a machine might be said to "think", referred to as the Turing test. Nothing was ever publicly reported in his lifetime about his wartime role.
In 1952, after a police investigation into a minor burglary of his home, he was charged with and, to avoid a public trial, pleaded guilty to a charge of gross indecency for an act which some years later would be decriminalized in Britain. He was placed on probation on condition that he receive an experimental course of steroid hormone treatments. The effect of these treatments was so devastating that he committed suicide.
[Slide 45] Vannevar Bush joined the faculty of the Massachusetts Institute of Technology as an electrical engineer in 1919. In 1932 he became an MIT vice president and dean of the engineering school. Bush invented the differential analyzer, the first reliable analog computer that solved differential equations.
In 1940 he left MIT to become President of the Carnegie Institution. He was able to convince the government to establish the National Defense Research Committee, and later the Office of Scientific Research and Development (OSRD). It was as the director of the OSRD that he guided much of the US weapons research during World War II.
In 1945 Vannevar Bush published two important papers. One was "Science, the Endless Frontier" which was important for establishing the post-war system for funding science research and in setting up the NSF in 1950. The other was a popular magazine article "As We May Think" in which he described the future with computers.
Bush later lost favor as a shaper of science policy when he opposed the development of ballistic missiles as weapons, opposed military reconnaissance satellites, and resisted the abandonment of analog computers in favor of digital ones.
[Slide 46] Norbert Wiener earned a Ph.D. from Harvard at the age of 18 with a dissertation on mathematical logic. He went to Cambridge to study with Russell and Hardy, then to Göttingen to study with Hilbert. He taught at the Massachusetts Institute of Technology from 1919 to 1960. He worked with Vannevar Bush in constructing the differential analyzer before the war. Wiener also studied how the nervous system and machines perform the functions of communication and control and made some of the earliest contributions to the field of information theory. He (re)originated the term "cybernetics" using it as the title of a book he published in 1948.
[Slide 47] Grace Murray Hopper earned an MA in 1930 and a Ph.D. in 1934 from Yale under Oystein Ore. (Ore had studied under Hilbert and attended his Königsberg lecture in 1930.) She became an Associate Professor at Vassar, and in 1941 received a research fellowship at the Courant Institute of Mathematics at New York University. In 1943 she resigned her position at Vassar, joined the WAVES, was commissioned a lieutenant, and was assigned to the Bureau of Ordnance Computation Project at Harvard University under Howard H. Aiken. She was one of the first two programmers for the Mark I computer, and wrote for it the first computer manual. After the war she retained her commission and remained at Harvard as a research fellow. In 1949 she joined the Eckert-Mauchly Corporation, which became Remington-Rand, which became Sperry-Rand, which became UNIVAC. She is credited with inventing the compiler in the early 1950's and directed work that led to COBOL. She "retired" from business in 1971. The Navy retired her, realized their mistake and recalled her to active duty, not just once but several times. In 1983 she assumed command of the Ada Project and was promoted to the flag rank of Commodore (a rank later restyled Rear Admiral). When she retired from the Navy for the last time, she was the oldest commissioned officer on active duty, and had more time in service than Admiral Rickover had at his retirement.
[Slide 48] Although she had the right stuff to command, she was not above using feminine charms when required. Her favorite advice was "It's usually easier to do what needs to be done and apologize later, than it is to get permission beforehand." She would then put on a coy look, bat her eyes and say "Oh, I'm sorry. I didn't know." Hopper finally retired from the Navy in 1986. She died on January 1, 1992 and was buried with full military honors in Arlington.
I met her on several occasions, and once naively asked for her opinion of John von Neumann. She gave me a look that would have made the barnacles fall off my hull, and she very quietly said, "I think he was somewhat over-rated." Only several years later did I realize that she and von Neumann had been on opposite sides of Eckert and Mauchly's unsuccessful EDVAC patent case.
[Slide 49] Claude Shannon graduated with a B.S. from the University of Michigan in 1936. Shannon earned both a master's and a doctorate at the Massachusetts Institute of Technology in 1940. For his master's degree in electrical engineering, he applied George Boole's binary logic algebra found in Russell and Whitehead's Principia Mathematica to the problem of telephone switching circuits. At that time Boolean arithmetic was little known or used outside the field of mathematical logic; now it is the basic arithmetic used by every computer in the world. For his doctorate, he applied mathematics to genetics.
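Shannon's thesis insight can be shown in a few lines (a sketch of the idea, not anything from the thesis itself): switches wired in series behave like Boolean AND, switches in parallel like Boolean OR, so relay networks can be designed and simplified with Boole's algebra instead of trial and error.

```python
def series(a, b):
    """Two switches in series: current flows only if both are closed."""
    return a and b

def parallel(a, b):
    """Two switches in parallel: current flows if either is closed."""
    return a or b

def circuit(s1, s2, s3):
    """A small relay network: (s1 AND s2) OR s3."""
    return parallel(series(s1, s2), s3)

print(circuit(True, True, False))   # True: the series pair conducts
print(circuit(True, False, False))  # False: no path conducts
```

Every digital circuit since is, at bottom, an elaboration of these two compositions.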
Shannon joined Bell Telephone Laboratories as a research mathematician in 1941. There, while working on the problem of efficiently transmitting communications, he formulated a theory quantifying information. "A Mathematical Theory of Communication" (1948) extended the concept of entropy by demonstrating that it is equivalent to the information content (a degree of uncertainty) in a message, and created the field of Information Theory. His work drew on the treatment of "information" by John von Neumann in 1932, Leo Szilard in 1929, and R.V.L. Hartley in 1928, and ultimately on Gibbs' 1902 work in statistical mechanics.
[Slide 50] The most important concept of Shannon's theory is entropy, which is defined as the lower limit on the expected number of symbols required to code for the outcome of an event regardless of the method of coding, and is thus the unique measure of the quantity of information. In this discrete form and in the continuous form, it is equivalent to the physical entropy of the second law of thermodynamics. This is the fundamental equation of information theory and has applications not only in computer science, communications, and biological information systems (including nucleic acids, proteins, and metabolic signaling), but also to subjects in which language as communication is important such as linguistics, phonetics, cognitive psychology, and cryptography.
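In symbols, Shannon's entropy of a discrete distribution is H = -Σ pᵢ log₂ pᵢ, measured in bits per symbol. A short sketch (my illustration, using an arbitrary example string) computes it both for a given distribution and empirically from text:

```python
import math
from collections import Counter

def entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution:
    H = -sum(p * log2(p)); terms with p = 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def text_entropy(text):
    """Empirical per-symbol entropy of a string."""
    counts = Counter(text)
    n = len(text)
    return entropy(c / n for c in counts.values())

print(entropy([0.5, 0.5]))            # 1.0 bit: one fair coin toss
print(round(text_entropy('AAAB'), 3)) # 0.811 bits per symbol
```

No code, however clever, can represent the source's output in fewer bits per symbol on average; that is the sense in which entropy is the unique measure of information.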
[Slide 52] Linus Pauling received his Ph.D. in 1925 and went to study with Arnold Sommerfeld in Munich. In Germany Pauling met Schrödinger, Bohr, Born, Heisenberg, and J. Robert Oppenheimer. He returned to California, worked on quantum chemistry, and published The Nature of the Chemical Bond in 1939. He began studying proteins, hemoglobin in particular, in 1934.
In 1944 Erwin Schrödinger published What Is Life? based on a lecture he had delivered the previous year. This little book is regarded as the beginning of molecular biology because it seems to have convinced so many physicists that there was something interesting enough going on in biology to change fields. You will recall that Leo Szilard followed this path. [In fact, at about this time the cost of doing any "interesting" physics, except theoretical physics, was becoming so prohibitive that many people thought it would be easier to make a living in the new field. The various techniques of chromatography made the previously difficult tasks of isolation and identification much easier. The use of radioisotope labeling made it possible to work out the steps of metabolism, something previously only dreamed of. So the field was probably ripe.] Pauling, of course, was already there. In 1948 he worked out the structure of the protein alpha helix. In 1949 he identified Sickle Cell Anemia as a genetic disease arising from a point mutation resulting in the substitution of a single amino acid in a protein sequence. In the early 1940's it had been established by Salvador Luria, Max Delbrück and others that, not proteins, but nucleic acids (in particular polydeoxyribonucleic acid) carried the genetic information. Now Pauling set out to determine its structure. However, [Slide 53] James Watson and Francis Crick got there first in another tight scientific race.
Today of course, we are on the verge of having the entire human genome, but I want to defer discussing this for a few minutes.
[Slide 55] Richard Feynman earned a Ph.D. degree from Princeton in 1942. From there he went directly to work on the Manhattan project at Los Alamos. After the war he was one of three people who, working independently, developed an improved theory of quantum electrodynamics in the late 1940's. This theory holds the scientific record for being the most precisely verified physical theory.
[Slide 56] In 1964 some sense was finally made of the bewildering zoo of subatomic particles when Murray Gell-Mann and George Zweig independently proposed the quark theory of meson and baryon structure. The proposal was based ultimately on the Noether Theorem because of the conservation of certain quantum properties and the observed symmetries of the particles. Indeed, most high-energy physics since 1950 has relied on the observation of symmetry conservation or symmetry breaking in one way or another. The "unified field theory" that Hilbert and Einstein sought has been replaced as the Holy Grail by the "theory of everything".
One of the modern knights in this quest is [Slide 57] Stephen Hawking. (Born on January 8, 1942, exactly 300 years to the day after the death of Galileo.) When he was diagnosed with Lou Gehrig's disease in 1963 during his first term at Cambridge, doctors predicted that he would not live long enough to complete his doctorate in general relativity and cosmology. In 1984 he caught pneumonia; an emergency tracheotomy saved his life but took his voice. For sixteen years he has continued to make valuable scientific contributions by using special robotic wheelchairs and a computer-synthesized electronic voice.
Between 1965 and 1970 Hawking developed new mathematical techniques to study gravitational singularities in general relativity. In 1970 Hawking began applying these techniques to study black holes. By using principles from quantum mechanics, general relativity and thermodynamics he showed that black holes must emit radiation. In 1971 Hawking predicted that, following the big bang, large numbers of "mini" black holes would have been produced and subsequently "evaporated" by the quantum radiation process. He became the Lucasian Professor of Mathematics at Cambridge in 1979. In 1983 he proposed that space-time was unbounded, "… that both time and space are finite in extent, but they don't have any boundary or edge. … there would be no singularities, and the laws of science would hold everywhere, including at the beginning of the universe."
[Slide 58] Although Hilbert mentioned Fermat's Last Theorem in the introduction to his 1900 talk, he did not explicitly include it in the list of problems. [The Tenth Problem is a generalization of it.] For many years Hilbert was the judge who would have decided whether a large prize would be awarded for its solution, and he confided to a friend that he hoped it would not be solved because he had discretionary use of the interest on the prize for departmental projects.
In June 1993 Andrew Wiles gave a series of lectures in which, using Mazur's deformation theory of Galois representations, Serre's conjecture on the modularity of Galois representations, and certain arithmetical properties of Hecke algebras, he succeeded in proving that all semistable elliptic curves defined over the rational numbers are modular, thereby proving Fermat's Last Theorem. [This is paraphrased from http://www-history.mcs.st-andrews.ac.uk/history/Mathematicians/Wiles.html .] Soon after the lectures a small but very real gap in the proof was found. Working with R. Taylor, a former student, over the next year he was able to complete the proof and publish it in 1995, thus dashing the hopes for fame of many amateur mathematicians.
But certainly there are some problems to be resolved in the next century, some old problems still hanging around and some new ones we just learned we had.
[Slide 59] Most societies have constructed belief systems that placed humanity as the "beloved only child" at the center of a universe obviously brought into being expressly for their conscious existence. Science for the past four hundred years has remorselessly revealed the deluded conceit of those beliefs.
First, astronomy showed that the earth was not the center of the universe and that our solar system was almost certainly not either.
Then in the first quarter of this century, we came to realize that our own stellar galaxy, where we found ourselves in a trailing outer fringe, was just one of an almost unimaginably vast number of galaxies, most of which were larger and more spectacular than ours.
And within the last quarter century we became aware that the very matter we consisted of could not be the principal matter of the universe. Somewhere out there, we don't yet know where or what, there must be ten to one hundred times as much "stuff" and it is probably not the same as the stuff we are made of. Our planet, our star, our galaxy and our substance have one-by-one been evicted from the center of the universe.
The last vestige of human centrality in the physical universe has been granted through the theory of relativity and the big bang hypothesis. Everything else we can see in the universe must appear to be moving away from us, and the farther away it is, the faster it will be fleeing. No matter where intelligence emerges, it will always find itself in the middle with everything else running away.
With the realization of the myriad vastness of stellar systems in the universe came the increasingly strengthened conviction that we were most certainly not on the only planet harboring life and (what we liked to regard as) intelligence.
Although the theory of evolution by natural selection was proposed a century and a half ago, it was and still is too embarrassingly shocking for many belief systems. Some trained scientists, mentioning in particular Teilhard de Chardin, have sought to recover some dignity … But Stephen Jay Gould convincingly argues in Full House that the emergence of intelligence was, given only enough time, very likely but not certain, and certainly not directed. Within the last ten years we have come to realize that the most significant form of life on earth two billion years ago is still the most significant form of life, the bacteria. Mankind is at best a complex multicellular nucleate organism that is probably over-specialized and therefore doomed. (One reassuring thought to emerge from this is that even if mankind were to commit nuclear suicide and incinerate all life on the surface of the planet, life would still go on and eventually reemerge from the floor of the sea and the bowels of the earth.)
Science during the nineteenth century had succeeded in advancing the principle of uniformitarianism, that the geologic record of change in nature generally does not reflect wide-spread catastrophic changes but rather very gradual changes spanning extremely long stretches of time on the scale of millions of years. This principle was set up in opposition to the fundamentalist Biblical doctrines of a rapid and comparatively recent creation and a global, catastrophic flood within historic times. While uniformitarianism acknowledged local catastrophic events, such as floods and volcanic eruptions, they acted on short time scales; global scale events acted only gradually and over very long periods. During the twentieth century there were several revisions of this accepted principle. It was realized that the universe including the space-time continuum must have had an instantaneous and ultimately dramatic beginning. In addition it came to be realized that the major geological epochs may have been marked by sudden and world-wide events, such as very large asteroid impacts.
Here are the problems, some of which I think might and some of which must be resolved in the 21st Century.
What about those zeros of the Riemann Zeta function?
In each century man has found ever more efficient ways to kill. In this century we have succeeded in finding several ways in which we would be able to achieve complete self-annihilation. We must in the next century find a way in which we may survive. There is only one alternative, permanent expansion of man beyond the Earth.
Within the last quarter century we became aware that the very matter we consist of could not be the principal matter of the universe. Somewhere out there, we don't yet know where or what, there must be ten to one hundred times as much "stuff" and it is probably not the same as the stuff we are made of. What is it?
We need to pursue the manned exploration of space, but I cannot be hopeful that we will be going anywhere very soon. Analogies with the European exploration and domination of the New World suggest that the process could take several hundred years unless there is an immediate pay-off, and it is doubtful that quick financial rewards can be found in constructing space stations, or setting up bases on the Moon and Mars that would drive the process. We will probably get back to the moon and have a permanent base, and go to Mars, but we probably will not have colonized by the end of the century.
Three related environmental questions:
Did we cause global warming and can we stop it?
Can we restore the ozone layer?
The accuracy of predicting weather is bound to improve; can we control it?
Two of the questions involve communication.
Are there other species on earth that have developed communication of abstract thought, and is there another species capable of communicating abstract thought with us? It is almost certain these questions will be resolved, possibly within the next twenty years.
Is there life elsewhere in the universe?
Is there intelligence somewhere else in the universe?
I think there is only a slight probability that in the next century we will learn whether intelligent life capable of communicating with us has evolved elsewhere in the universe. The reason why is a conundrum related to the old saw about "the Cabots and the Lodges". For a young civilization that has spread only over its home world (referred to in one classification system as a "type I civilization") it would usually be highly advantageous to detect an ET signal because it is most probable that any signal detected will be coming from a more advanced civilization. (A less advanced civilization would not be transmitting, and an approximately equally advanced one would not be transmitting loudly enough to be heard. Lemma: the farther away a transmitting civilization, the more advanced it is likely to be.) Having detected an ET signal, they will then be in a position to acquire a great deal of "free" information. However, that information should be carefully evaluated to determine whether it would be best to attempt to establish contact by sending a signal in turn. Assuming the advanced civilization is either not exploitatively inclined or otherwise too far distant to present a threat (Lemma: return contact should only be attempted if the relative state of advancement is less than the distance divided by the speed of light.), then "the Cabots and the Lodges" conundrum comes into play. Just as it is in the best interest of a relatively young civilization to learn — at a distance — from a relatively advanced one, a relatively advanced one will gain nothing from a relatively younger one but the knowledge that it exists. Directed high-power transmission is expensive, and a younger civilization may not be worth talking to unless exploitation is a serious option. The reason why any concern about exploitation can be discounted is that, so far as we are aware, there is nothing we have that would be worth the time and expense for someone to come here and exploit.
Four related biochemical questions:
Knowing the sequence of a gene, can we accurately and reliably predict the sequence of a protein? This is essentially done, but is not yet perfected sufficiently to be an infallible tool.
Knowing the sequence of a protein, can we reliably predict its structural conformations? This should be doable within the century, and even sooner if George Rose has his way.
Knowing the structure of a protein, can we predict the mechanism of its function? This should be answered within a few years of the previous, and they may be only solvable together.
Knowing the functional mechanism of a protein, the timing and location of its expression, and the ensemble of molecules with which it coexists, can we predict its metabolic role? I suspect this may not be solved for quite a few years.
The social problem and the role of science in society. The development of science in this century has increasingly depended upon fostering the education and training of scientists from among those who have all too frequently been ignored, marginalized, or victimized because of societal problems irrelevant to the pursuit of science. It will probably be critical to the survival of our civilization that these barriers be completely eliminated in the next century.
Consider how even thirty years ago severely handicapped people like Stephen Hawking may not have been able to express their genius despite our best intentions.
Consider how many geniuses like Ramanujan we have lost through neglect in ignorance and poverty, and then think of how little Hardy had to spend in nurturing him for us to gain as much as we did.
Consider how many geniuses like Emmy Noether have been suppressed by gender stereotyping and ethnic prejudice, and then think of how little Hilbert had to expend in fighting to keep her productive.
Consider how many geniuses like Alan Turing have been destroyed by intolerance and hatred, and how many times there has been no one brave enough and wise enough to care.
Created by John S. Garavelli
Last Modified on 2 March 2005
[Thanks to Tom Schneider and his father for catching several typos.]
[Thanks to Richard McCracken for catching a typo in a googlewhack.]
[Thanks to Alexander v. Daniels for catching twelve errors and defenestrating another googlewhack.]
[Thanks to Gunther Rudenberg for catching an incorrect photo identification.]