The Limits of Science
Can Scientific Discovery Be Automated?
Professor, Computing and Information Science
& Mechanical and Aerospace Engineering
About the Lecture
For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena in nature. Can this discovery process be automated? Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. A key challenge to finding analytic relations automatically is defining algorithmically what makes a correlation in observed data important and insightful. By seeking dynamical invariants, however, we can go from finding merely predictive models to finding deeper conservation laws. This approach has been demonstrated by automatically searching motion-tracking data captured from various physical systems, ranging from simple harmonic oscillators to chaotic double pendula. Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the “alphabet” used to describe those systems. Applications to modeling physical and biological systems will be shown.
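The invariant search described above can be caricatured in a few lines. The sketch below is a deliberately minimal stand-in: it scores a tiny hand-picked set of candidate expressions on simulated harmonic-oscillator data, whereas the actual system evolves symbolic expressions and uses predicted partial-derivative ratios, not raw variance, to rule out trivial invariants.

```python
import math

# Toy invariant search: a quantity conserved along the trajectory
# should barely vary over the observed data.

# Simulated "motion-tracking" data for a unit harmonic oscillator:
# x(t) = cos(t), v(t) = -sin(t).
ts = [0.01 * i for i in range(1000)]
data = [(math.cos(t), -math.sin(t)) for t in ts]

# Hand-picked candidate invariants f(x, v); illustrative only.
candidates = {
    "x":       lambda x, v: x,
    "v":       lambda x, v: v,
    "x*v":     lambda x, v: x * v,
    "x^2+v^2": lambda x, v: x**2 + v**2,  # total energy, up to a factor
}

def drift(f):
    """Variance of a candidate along the trajectory (0 = conserved)."""
    vals = [f(x, v) for x, v in data]
    mean = sum(vals) / len(vals)
    return sum((u - mean) ** 2 for u in vals) / len(vals)

best = min(candidates, key=lambda name: drift(candidates[name]))
print(best)  # -> x^2+v^2, the conserved energy-like combination
```

In this toy setting the energy-like expression wins because cos²(t) + sin²(t) is exactly constant, while every other candidate oscillates.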
Recognizing the implications of this work, the New York Times said, “Theoretical physicists are not yet obsolete, but scientists have taken steps toward replacing themselves.” But there’s a catch. While the computer can discover new laws, will we still understand them? Our ability to have insight into science may not keep pace with the rate and complexity of automatically generated discoveries. Are we entering a post-singularity scientific age, where computers not only discover new science, but now also need to find ways to explain it in a way that humans can understand?
About the Speaker
HOD LIPSON is an Associate Professor of Mechanical & Aerospace Engineering and Computing & Information Science at Cornell University in Ithaca, NY. He directs the Computational Synthesis group, which focuses on novel ways for automatic design, fabrication and adaptation of virtual and physical machines. He has led work in areas such as evolutionary robotics, multi-material functional rapid prototyping, machine self-replication and programmable self-assembly. Lipson received his Ph.D. from the Technion – Israel Institute of Technology in 1998, and did postdoctoral work at Brandeis University and MIT. His research focuses primarily on biologically-inspired approaches, as they bring new ideas to engineering and new engineering insights into biology. He has authored and presented over 160 papers, edited three books and contributed chapters to many others. His work has been recognized by a variety of technical awards; he is the recipient of the DARPA MTO Young Faculty Award and the Merrill Educator Award, among others. His work has been discussed in the national press, such as the New York Times, and in the popular press. Discover Magazine named his work one of the 25 most important discoveries of 2009. Popular Mechanics awarded his work the Breakthrough Award in 2007. And he was picked as one of Esquire’s Best and Brightest in 2007. In addition, he is an inventor on several patents and the co-founder of two wireless GPS companies. For more information about Mr. Lipson, visit his website at http://www.mae.cornell.edu/lipson.
President Robin Taylor called the 2,271st meeting to order at 8:24 pm September 10, 2010 in the Powell Auditorium of the Cosmos Club, after the hardware trials. Ms. Taylor introduced three new members of the Society. She announced the deaths of two members. She delivered some tidbits about the Fall Program.
The minutes of the 2,269th meeting were read and approved.
Ms. Taylor then introduced the speaker of the evening, Mr. Hod Lipson of Cornell University. Mr. Lipson spoke on “The Limits of Science – Can Scientific Discovery Be Automated?”
Mr. Lipson began by describing some of the projects he and his colleagues have been working on. Most robots, millions of them, are in factories. They are superhuman in almost any way you measure. They are faster, they are very accurate, they are very powerful, they work in any environment, and so on. However, they do not adapt.
He has five of those Roombas, robot vacuum cleaners, in his house, and he says they work less well when there are five of them. A house is challenging; there is a different floor plan every day; there are more objects around, and his kids like to ride on the Roombas.
They decided to bring in the mother of designers, evolution. They set up processes to try to get robots to evolve.
They considered machines of different forms, and all of the shapes were interesting, curious things. The first one they tried was an amalgamation of triangles. The first one they got to work, that is, to move by itself, was a collection of parts that, put together, looked vaguely like a cross between an airplane and a crayfish or lobster. It moved in a kind of jerky slither, which, I guess, is pretty good for a first-draft robot.
Then there was one that had many parts, so it could move in many ways. It had nine parts sticking out so it could rest on the floor in different orientations, and its parts could be moved in different ways. The goal was to have it find a way to change the movements of its parts so the whole thing would move. Data input from a camera provided feedback on the movement to the computer that controlled the parts. In short, they were able to build and program robots that were able to evolve to “walk.” And, with additional time, they “learned” to “walk” better.
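The evolve-to-walk loop amounts to a mutate-and-select search over movement parameters. A minimal sketch follows; the fitness function here is an invented stand-in (the real robots were scored by camera-tracked displacement, and the genome encoded actual joint motions).

```python
import random

random.seed(1)

# Toy gait evolution: mutate a genome of nine movement amplitudes,
# keep mutants that score better. The fitness function is a stand-in
# that pretends displacement peaks when every amplitude equals 0.5.

GENES = 9  # one parameter per movable part

def fitness(genome):
    return -sum((g - 0.5) ** 2 for g in genome)

genome = [random.random() for _ in range(GENES)]
for generation in range(1000):
    child = genome[:]                       # copy the parent
    i = random.randrange(GENES)             # mutate one gene
    child[i] += random.gauss(0, 0.05)
    if fitness(child) > fitness(genome):    # keep improvements only
        genome = child

print(fitness(genome) > -0.05)  # True: evolved close to the optimum
```

Even this crude hill climber illustrates the cost the talk notes next: each improvement requires a physical trial, which is why the raw approach was slow.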
However, the evolution was slow. It took many trials, and there were many errors as the machines learned to move. It would not be possible to get NASA to let them send these things to Mars; it would take too long for their abilities to develop there, and the risks to the mission would be too high. On Mars, one problem is too many.
So they tried another approach. They took one of the machines and programmed it to work as a simulator. It did not work very well, but they began to collect data on how well it worked and used the data to develop better simulators. At this point, they had not just a simulator, they had a simulator that “learned” to simulate better based on the data. The development of a better robot and a better simulator proceeded in tandem.
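The tandem loop can be sketched in miniature: plan in the simulator, execute on the robot, then refit the simulator to the accumulated trial data. Everything below is an invented toy stand-in (a "robot" that moves a distance proportional to effort, with a hidden gain the simulator must learn), not the actual dynamics from the talk.

```python
import random

random.seed(0)

# Toy robot/simulator co-improvement: each physical trial refines the
# simulator's model, and each plan uses the current simulator.

TRUE_GAIN = 0.37   # hidden physical property of the "robot"
est_gain = 1.0     # simulator's initial (wrong) model
trials = []        # (effort, observed_distance) pairs

def real_trial(effort):
    # Physical trial with a little measurement noise.
    return TRUE_GAIN * effort + random.gauss(0, 0.01)

target = 2.0       # desired distance per step
for step in range(10):
    effort = target / est_gain      # plan using the simulator
    moved = real_trial(effort)      # execute on the robot
    trials.append((effort, moved))
    # Refit the simulator to all data so far (least squares, no intercept).
    est_gain = sum(e * m for e, m in trials) / sum(e * e for e, _ in trials)

print(abs(est_gain - TRUE_GAIN) < 0.01)  # True: simulator matches reality
```

After a handful of trials the simulator's gain estimate converges, so plans made in simulation start working on the "real" machine, which is the point of the tandem development.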
Following this paradigm, they built a basic machine composed of four legs projecting at right angles from the “body,” a “brain,” and a motor. This machine did not “know” what it looked like, and it did not seem like it could possibly walk. It did have sensors to tell it which way it was tilting.
It was programmed to produce models of itself and to select actions that would cause the most disagreement between models. Mr. Lipson called this “... thinking like a scientist.” Then it tested the models and used the data to repeat the process. The hope was that it would improve its ability to walk as the poor models were eliminated.
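This act-to-disagree strategy can be sketched as follows: keep several candidate self-models, pick the test action whose predicted outcomes differ most, then discard models the observation contradicts. The models and actions below are invented toy stand-ins, not the actual self-models of Lipson's robot.

```python
# Toy "thinking like a scientist": choose experiments that
# discriminate best between competing self-models.

TRUE_MODEL = lambda a: 2 * a          # the robot's real (unknown) response
models = {
    "m1": lambda a: 2 * a,            # correct hypothesis
    "m2": lambda a: -2 * a,           # sign-flipped hypothesis
    "m3": lambda a: 2 * a if a >= 0 else -2 * a,  # agrees only for a >= 0
}
actions = [-1.0, 0.0, 1.0]

while len(models) > 1:
    def spread(a):
        # How much the surviving models disagree about this action.
        preds = [m(a) for m in models.values()]
        return max(preds) - min(preds)

    a = max(actions, key=spread)      # most informative test
    observed = TRUE_MODEL(a)          # run the physical test
    # Keep only models consistent with the observation.
    models = {k: m for k, m in models.items()
              if abs(m(a) - observed) < 1e-9}

print(sorted(models))  # the correct model survives
```

Note that a = 0, where all models agree, is never chosen: actions that cause no disagreement teach the machine nothing, which is why the poor models are weeded out so quickly.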
The machine did learn to walk.
To further test it, they removed one of its legs. It quickly learned to walk with three legs. What happened, Mr. Lipson said, was that it changed its self-model and learned to walk again.
He listed many analogies to humans and animals. It learned to walk without an initial self-model. It adapted to the loss of the leg, using self-models that it changed in response to events.
After the talk, Ms. Taylor presented a plaque commemorating the occasion and thanked Mr. Lipson on behalf of the Society.
Ms. Taylor made the usual housekeeping announcements and invited guests to apply for membership. Finally, at 9:34 pm, she adjourned the 2,271st meeting to the social hour.
The weather: Beautiful
The temperature: 21°C
Ronald O. Hietala,