Opening a Window of Discovery on the Dynamic Universe



Above, section of the Bayeux Tapestry: “They marvel at the star.” Harold imagines his doom as the fleet of William the Conqueror arrives on his shores (lower right of figure).

The Large Synoptic Survey Telescope

Harold II had recently been crowned king of England despite the late King Edward’s promise to cede the throne to William, Duke of Normandy. Now King Harold awaited the powerful Duke’s response. His astronomers scanned the heavens for some portent of their sovereign’s success or a harbinger of his doom. On the 24th of April 1066, they noticed a bright new star, in fact an apparition of Halley’s Comet in its 76-year orbit about the sun. Cognizant of massing Norman forces across the channel, the astronomers foretold Harold’s defeat. In a depiction from the Bayeux Tapestry, he is shown receiving this news amid visions of William’s invading fleet arriving on his shores.

In modern times, though astronomers have traded augury for insight, scanning the heavens for change is increasingly proving key to understanding a dynamic universe. Daily brightness fluctuations first revealed the existence of supermassive black holes in the cores of quasars. Stellar explosions, lasting only a few weeks yet visible across much of the universe, have recently provided evidence for a previously unknown force of nature. And though searching the heavens for signs of doom fell into disrepute after the Enlightenment, even this practice has gained new urgency with the realization that Earth’s collisions with other solar system bodies continue to play a major role in its evolution, even posing a threat, on human time scales, to civilization itself.

Observations of change in the universe are often difficult to obtain. The most fundamental obstacle is that much of this change is so slow it could never be observed directly. Much as geological change is inferred on Earth, long-term change is perceived in the heavens by recognizing a temporal progression among seemingly disparate objects. Astronomers have become adept at this process and have built up a remarkably detailed picture of the evolution of stars, galaxies, and even the universe as a whole.

Despite the slow progress of cosmic evolution, many of the most remarkable astronomical events occur on human, and even daily, time scales, yet these changes have proven the most difficult to observe. The impediment to observing rapid change and to the more detailed insight it engenders lies in the nature of the tools currently available to astronomers. Modern large telescopes are truly marvels of design, with light-gathering power improving on the naked eye more than a million-fold.

Yet remarkable as they are, they have all been designed to look very deeply at very small parts of the sky. Their small field of view means that any one observation is not likely to catch a transient event in the act — we are always looking somewhere else. A small field also means that an impractically large number of separate observations is required to map the entire sky to the depth these telescopes permit and reveal the rare missing links among more common objects. These facilities are few in number and viewing time on them is in great demand worldwide. With the assignment of only a few nights per year to each astronomer, it is difficult to make progress on a wide variety of fronts.

This lack of continuous access and a global view means we are almost certainly missing most of what’s going on in the universe. Our all-sky maps are made with small telescopes, inexpensive enough to be dedicated to a single purpose but limited in the depth and detail they can achieve. Their limited light-gathering capability also means that such maps take years to complete, making it nearly impossible to detect change. Such slow progress across the sky gives serendipity little chance; most of what we know about transient events was discovered accidentally. Since cosmic cartography is limited not by distance but by the amount of light which can be collected on Earth, we are as ignorant of faint nearby objects as we are of bright objects at the edge of the observable universe.

Current large telescopes are the high temples of astronomy, the inner sancta to which only the Initiated have access. Most observatories practice some sort of outreach, and remarkable images are widely available, but there is no way schools and the general public can look deeply and at will into the heavens.

LSST: A New Telescope Concept

“If we knew what the discoveries were likely to be, it would make no sense to build such a telescope.”

The Large Synoptic Survey Telescope (LSST) has been designed to overcome many of these difficulties. It will open up the “time domain” to astronomy by mapping the entire sky deeply, rapidly, and continuously. It will provide all-sky maps of unprecedented depth and detail, and keep doing so frequently and for years to come. By providing immediate public access to all the data it obtains, it will give everyone, the professional and the “just curious” alike, a deep and frequent window on the entire sky. Cosmic cartography will become cosmic cinematography, forever changing the way we view the heavens.

This change will be much like the paradigm shift from predicting the weather with a single ground-based station to observing it from geosynchronous satellites. LSST will change the way we observe the sky. Rather than relying on other telescopes to follow up its discoveries, LSST, with its unique capability of frequent, deep imaging of the entire visible sky, will provide its own follow-up.

Whenever we look upon the world in a new way, it reminds us of its inexhaustible richness. Some new discoveries lead immediately to better understanding, while others provide hints of new wonders to explore. In 1928 George Ellery Hale proposed to build a new telescope with the unprecedented aperture of 200 inches. A member of the Rockefeller Foundation’s International Education Board asked Hale: “What discoveries will you make?” Hale answered: “If we knew what the discoveries were likely to be, it would make no sense to build such a telescope.”

Serendipity is the life blood of science, but we must plan for serendipity, building new ways to encourage it and preparing ourselves to recognize it when it appears. LSST will make the unusual commonplace and the singular observable. The greatest advances to come from LSST are thus almost surely unanticipated.

Such new capabilities are made possible by the confluence of several technological developments. New fabrication techniques for large optics developed for the most recent generation of large telescopes can be extended to novel optical designs which allow large fields of view. New detector technologies allow the construction of cameras which can capture these wide-angle images on focal planes paved with billions of high-sensitivity pixels (picture elements). Recent phenomenal advances in microelectronics and data storage technologies provide greatly enhanced facilities for digital computation, storage, and communication, and new software innovations enable fast and efficient searches of billions of megabytes of data.

It is now possible to fabricate large mirrors of very deep curvature accurately and inexpensively. Large mirrors collect more light, enabling detection of fainter sources. Deep curvature brings light to a focus only a short distance above the mirror’s surface. This short focal length significantly decreases the overall length of the telescope. A shorter telescope is lighter, stiffer, and thus more resistant to image-blurring vibration. It is also less expensive to construct and fits in a smaller building, further reducing costs. 

LSST’s innovative science relies upon another aspect of short focal lengths well known to photographers: for a given image size, shorter focal lengths provide wider fields of view. Combining a large diameter with a wide field leads to an optimal design for surveying the cosmos. With the effective light-collecting area of a telescope seven meters in diameter, LSST will have a field of view encompassing ten square degrees of sky, roughly 50 times the area of the full moon. Such a field is over a thousand times that of existing large telescopes, yet the light-gathering capability will be among the largest in the world. This wide field will be achieved through a three-mirror design; light gathered by the 8.4-meter primary mirror will be reflected back up to a 3.4-meter convex secondary mirror, and down to a 5.2-meter tertiary mirror before being directed upward again to a camera at the center of the secondary. This triple-folded design is even more compact than traditional telescopes; the 8.4-meter LSST would fit comfortably in an enclosure like that of the 6.5-meter Magellan telescopes in Chile.




Moore’s Law

In 1965, Gordon Moore, co-founder of Intel, noted that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. He predicted that this trend would continue well into the future. To date, the density of circuits has doubled about every 18 months, and most experts expect this to continue for at least another two decades. Since integrated circuits are the basis of both computer memory and processors, Moore’s Law is roughly equivalent to saying that the capability of computers will double every 18 months for the foreseeable future. Remarkably, disk storage technology has outpaced even Moore’s Law over the past decade.
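The arithmetic of this doubling trend is easy to sketch. The following toy illustration simply compounds the 18-month doubling described above; it is an extrapolation, not a prediction:

```python
# Toy illustration of Moore's Law growth (assumed doubling every 18 months).
def capability_factor(years, doubling_months=18):
    """Multiplicative growth in circuit density after `years` years."""
    return 2 ** (years * 12 / doubling_months)

print(round(capability_factor(10)))  # roughly a hundred-fold gain per decade
```

Two decades of such growth, the horizon the experts cite, would multiply capability by about ten thousand.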


Astronomers, like oceanographers, have stumbled onto their discoveries by taking samples. We look at the sky with blinders: narrow fields of view which often require many hours of exposure. Occasionally we find something new. With luck we later follow up these chance single discoveries on other telescopes, if the objects are still emitting light. We are thus guaranteed to overlook a vast area of exploration: all objects which are faint, rare, change their brightness, or move. This is an enormous discovery space to miss. LSST will change all this. It will have the unique capability of imaging the entire visible sky to unprecedented faint limits multiple times per month. In a radical shift in paradigm, LSST will follow up its own discoveries.

The proposed explorations which will drive the development of the novel LSST facility have one requirement in common: the need to image wide swaths of the sky faint and fast. In optics there is a figure of merit for this capability, called “throughput” or “etendue”, the product of the telescope capture area in square meters and the camera field of view in square degrees. Previous attempts to maximize throughput have focused on one or the other of these quantities.

Building telescopes of huge aperture has resulted in great light-gathering power, but at the expense of limited fields of view. Smaller telescopes with very wide fields have been constructed, but with limited apertures. These limitations are imposed by optics. The requirement of crisp images makes it impossible in traditional optical systems to achieve large throughput by simultaneously having large aperture and large field of view. LSST breaks this logjam. With its novel three-mirror optics and gigapixel camera, LSST will have a throughput of 319 meter^2 deg^2. This represents a fifty-fold increase over the best wide-field capability currently available, and makes possible the novel astronomy LSST will pursue. LSST is thus far more than a telescope. At the core of LSST is a camera with over three billion pixels. Driven by the need to capture faint, crisp images over the entire ten-square-degree field of one exposure, this will be the world’s largest imager. While feasible with present technology, no imager this large has ever been attempted.
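As a rough illustration of this figure of merit, one can multiply the collecting area of a seven-meter-diameter aperture by a ten-square-degree field. This is a deliberately crude sketch: it assumes an unobstructed circular aperture, whereas the quoted 319 meter^2 deg^2 additionally accounts for real-world losses such as obscuration and vignetting:

```python
import math

# Simplified throughput (etendue) estimate: collecting area times field of view.
# Assumes an unobstructed aperture; real designs lose some area to obscuration.
def throughput(diameter_m, field_sq_deg):
    """Figure of merit in m^2 deg^2."""
    area_m2 = math.pi * (diameter_m / 2) ** 2
    return area_m2 * field_sq_deg

print(round(throughput(7.0, 10.0)))  # ~385 m^2 deg^2 from this crude estimate
print(round(throughput(4.0, 0.5)))   # ~6 for a 4 m telescope with a half-degree field
```

The second figure, for an assumed four-meter survey telescope with half a square degree of field, shows where the roughly fifty-fold advantage comes from.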

THE ORIGIN OF TODAY’S pervasive electronic imager — from the familiar digital snapshot camera to the hundred-megapixel, wide-field imagers used in astronomy — was a device invented in 1970 for the purpose of storing an audio message. Within hours of hearing of the need for a solid-state scrolling memory, George Smith and Willard Boyle at Bell Labs invented the charge-coupled device (CCD) using silicon integrated-circuit technology: a “bucket brigade” for electrons.

While fifty times more efficient at detecting light, the first CCDs covered 4000 times less area than the previous detectors: photographic plates. Only in the last decade have electronic imagers, in the form of mosaics of many CCDs, grown large enough to be useful for LSST-prototype wide-area surveys. The development in the 1990s of the four- to eight-megapixel CCDs used in these mosaics depended on decades of R&D and improvements in microelectronics. This R&D has continued, enabling new types of sensitive electronic detectors. In addition to a new generation of panchromatic-sensitive CCDs, we now have CMOS (Complementary Metal-Oxide Semiconductor) arrays of equal sensitivity. These low-noise, self-shuttering CMOS devices clock the photo-electrons down through several transistors under each pixel, rather than the bucket brigade used in CCDs.

Existing wide-field telescopes and cameras are stalled at a two- to four-meter telescope aperture and a fraction of a square degree per exposure. There are several reasons for this. Larger fields of view have proven impossible using the traditional one- or two-mirror plus multilens-corrector telescope optics. Hundred-megapixel CCD mosaics using early-1990s technology have hit a size limit imposed by wiring and electronics complexity and heat dissipation.

The LSST camera will use a different approach, similar to state-of-the-art microelectronics. All the control and processing electronics will be co-located with the individual detectors, thus avoiding a wiring nightmare. These hybrid building blocks will employ either new-generation CCD or CMOS photodetector arrays hybridized to an underlying CMOS ASIC (Application-Specific Integrated Circuit). These megapixel imager modules will dissipate very little heat into the cooled camera body and will be easy to replace if they fail. Up to one thousand of these modules populating the 64-centimeter-diameter focal plane will supply a parallel data stream. LSST’s camera will produce some 20 million megabytes of data every night. By 2006, Moore’s Law (see sidebar) suggests that data processing and analysis hardware will routinely handle this data rate. Equally exciting is the progress made by recent surveys in automatic data-pipeline software. Breakthroughs in large optics, microelectronics, and software have come together. A new view of our universe will be the result.



A primary goal of LSST is to detect change. Objects in the sky can vary both in brightness and in position, and searching for these changes presents design challenges. LSST must go faint fast. The ability to detect faint sources is directly proportional to the amount of light that can be captured at the source’s image on the telescope focal plane. The total amount of light captured is the product of the intensity of the light and the exposure time. One can increase the intensity by harvesting light over a larger area, by building an optical system with larger aperture. When observing objects which do not change, one can go fainter by exposing for a longer time, adding up the light falling on the detector until enough is accumulated to show the object against the background brightness of the sky.

Transient objects, however, may not linger long enough for extended exposures. To catch these objects before they fade, the only recourse is increased collecting area. Long exposures also limit the amount of information which can be obtained about a transient source. A faint blip on a long exposure might have been caused by a faint, persistent source. It might equally well have come from a bright flash which lasted only a small fraction of the exposure time. To discriminate between these possibilities, exposure times must be as short as possible. Again, to go faint fast, over all of the sky, a large value for the product of telescope aperture and field-of-view is the only recourse.

Objects which move during an exposure spread their light across the image. More exposure time results in a longer, but no brighter, streak. The faster the object moves, the less time is spent at any one point in the image and the less light is accumulated there, making the trail fainter. The only way faint, fast-moving objects can be seen is again to increase the intensity through increased collecting area. The objects of interest to astronomers with the fastest motion in the sky are the Near-Earth Objects, or NEOs, asteroids whose orbits carry them close to, and even into, the Earth. For some of these objects, an exposure of 15 seconds leads to trailing in the image; longer exposures will not result in detecting fainter objects. It will take two seconds to transfer the image from the camera to the image-processing computers and five more seconds to re-point the telescope whenever necessary. Taking shorter exposures means that an increasing fraction of the telescope’s time is spent reading the camera or re-pointing, not looking at the sky. Fifteen seconds is thus an appropriate compromise between making efficient use of telescope time and detecting as many transient objects as possible. Telescopes with less than LSST’s throughput would require longer exposures.
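The trade-off described above can be sketched numerically. Using the overhead figures given in the text (two seconds of readout per exposure, five seconds to re-point after a pair of exposures), the fraction of time actually spent collecting light rises with exposure length. This is a simplified model that ignores scheduling details:

```python
# Open-shutter efficiency for a visit: two exposures, 2 s readout after each,
# plus 5 s to re-point the telescope (figures taken from the text).
def open_shutter_fraction(exposure_s, readout_s=2.0, slew_s=5.0):
    """Fraction of a visit spent actually collecting light."""
    visit_s = 2 * exposure_s + 2 * readout_s + slew_s
    return 2 * exposure_s / visit_s

print(round(open_shutter_fraction(15.0), 2))  # ~0.77 with 15 s exposures
print(round(open_shutter_fraction(5.0), 2))   # drops to ~0.53 if exposures were 5 s
```

Pushing exposures shorter to freeze fast-moving objects thus costs an ever-larger share of the night in overheads, which is why 15 seconds is the chosen compromise.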

These same data will be a treasure trove for breakthroughs in other areas of astronomy. Because modern detectors simply count the number of photons striking their photosensitive surface, images are represented by the number of photons detected at each point. Two or more images of the same location on the sky can be combined simply by adding these numbers pixel-by-pixel so that the result is virtually identical to that of a single, longer exposure. By “co-adding” images in this way, the 15-second exposure time required for catching transient sources in the act will not limit LSST’s ability to detect very faint, persistent sources through long exposures.
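Co-adding is conceptually simple, as this toy sketch illustrates (a real pipeline must also align the images and match their image quality before summing):

```python
# Toy co-addition: sum photon counts pixel-by-pixel across repeat exposures,
# mimicking the effect of one longer exposure.
def coadd(images):
    """Combine equally sized images (2-D lists of photon counts) by summing."""
    total = [row[:] for row in images[0]]
    for image in images[1:]:
        for i, row in enumerate(image):
            for j, count in enumerate(row):
                total[i][j] += count
    return total

# Two 2x2 exposures of the same patch of sky:
first  = [[3, 1], [0, 2]]
second = [[2, 2], [1, 1]]
print(coadd([first, second]))  # [[5, 3], [1, 3]] -- as if exposed twice as long
```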


Counting Bytes ...

Today’s large disk drives have capacities measured in hundreds of gigabytes, but LSST will generate terabytes of data every night and eventually store more than 50 petabytes. To keep these numbers straight and give some sense of scale, here is a glossary of storage terms:

- Megabyte (MB) = 10^6 bytes = a Ph.D. thesis’ worth of text;

- Gigabyte (GB) = 10^9 bytes = forty (four-drawer) file cabinets full of text, or two compact discs’ worth of music;

- Terabyte (TB) = 10^12 bytes = forty thousand file cabinets of text, or a feature film stored in digital form;

- Petabyte (PB) = 10^15 bytes = forty million file cabinets of text, or all of CNN’s news footage for five years.


LSST will tile the sky repeatedly (each “visit” is a pair of 15-second exposures) with overlapping images of approximately ten square degrees. It will take two bytes of data to represent the amount of light falling on each of LSST’s 3.2 billion pixels. The telescope will make pairs of 15-second exposures, each requiring an additional two seconds to read the image from the detector. While the second exposure is being read out, the telescope moves to the next position on the sky in an average of five seconds. Current estimates indicate LSST will create 12.8 gigabytes (GB) of data every 39 seconds, a sustained data rate of 330 megabytes (MB) per second. While such a rate is not unheard of by modern internet standards, it represents a dramatic increase for astronomy. The highest data rate in current astronomical surveys is approximately 4.3 MB per second, in the Sloan Digital Sky Survey (SDSS).

Over a ten-hour winter night, LSST will thus collect up to 13 terabytes (TB) of 16-bit image data. While this seems a daunting amount of data to process, examine, store, and disseminate, its magnitude is not unprecedented. A feature-length High-Definition Television (HDTV) movie, before editing, requires several terabytes to store in raw form; by the time LSST is in operation, Hollywood and others will routinely be dealing in similar amounts of data!
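The figures quoted above can be checked with back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope check of the data rates quoted in the text.
PIXELS = 3.2e9           # camera pixels
BYTES_PER_PIXEL = 2      # 16-bit samples
VISIT_SECONDS = 39       # two 15 s exposures plus readout and slew overheads

gb_per_visit = 2 * PIXELS * BYTES_PER_PIXEL / 1e9   # two exposures per visit
mb_per_second = gb_per_visit * 1000 / VISIT_SECONDS
tb_per_night = mb_per_second * 10 * 3600 / 1e6      # ten-hour night

print(gb_per_visit)            # 12.8 GB per visit
print(round(mb_per_second))    # ~328 MB/s sustained
print(round(tb_per_night, 1))  # ~11.8 TB, consistent with "up to 13 TB"
```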

The data reduction and analysis for LSST will be done in a way unlike that of most current observing programs. The data from each visit will be analyzed and new sources detected in the minute before the next pair of exposures is ready. This will allow interrupting the normal schedule of operations to follow any new, rapidly-varying events as they occur. It will also allow nearly-instantaneous notification to other observing resources such as radio and infrared telescopes and X-ray and gamma-ray observatories in space. 



Two images of a cluster of galaxies, taken three weeks apart, are subtracted to reveal that a supernova has exploded in one of the galaxies. All of the persistent information in the two images is removed by the subtraction. LSST will detect events as faint as 24th magnitude in a single 15-second exposure, equivalent to the brightness of a golf ball at the distance of the Moon. (Images courtesy of the ESSENCE project.)

As each image becomes available, it will be corrected for geometric distortions and any small variations in sensitivity across the detectors. Ambient light from the night sky will be removed. The image will then be added to data previously collected from the same location in the sky to build up a very deep master image. The collection of these master images will become a key data product of LSST: a very deep map of the entire sky visible from its remote mountain site.

The master image will also be subtracted from each individual image as it comes in. The result will be an image which contains only the difference between the sky at that time and its average state: a picture containing only what has changed. Objects in this difference image will be classified according to their appearance and, by looking into a database of all previous classifications and images, according to their evolution in time. These data, and the individual exposures themselves, will then be added to the database.
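Stripped of the registration and calibration steps a real pipeline requires, the subtraction itself is a pixel-by-pixel difference:

```python
# Toy difference image: new exposure minus the deep master image.
# A real pipeline must first register the images and match their sharpness.
def difference(new, master):
    """Pixel-by-pixel subtraction of two equally sized 2-D images."""
    return [[n - m for n, m in zip(new_row, master_row)]
            for new_row, master_row in zip(new, master)]

master = [[10, 10], [10, 10]]   # average state of this patch of sky
new    = [[10, 10], [10, 46]]   # a transient has appeared in one pixel
print(difference(new, master))  # [[0, 0], [0, 36]] -- only the change survives
```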

Quality control will be an important aspect of data processing. Major problems will be relatively simple to diagnose automatically from the data stream. These include the effects of atmospheric blurring and of the mechanical and electronic health of the system. Experience with current automated surveys, which can be seen as precursor projects to LSST, has shown that such simple measures are not enough to ensure that data quality remains at the highest possible level. Subtle problems manifest themselves only through using the data to do science. Rather than playing the passive role of providing data to the community, the LSST team will engage in several key scientific projects to guarantee data quality.

A single exposure will detect sources at 24th magnitude. This is much fainter than the faintest sources detectable on the photographic plates, exposed for many hours on the Mt. Wilson 100-inch telescope, used by Edwin Hubble to discover the expansion of the universe. At this level of brightness, the most common objects in the sky are not stars but galaxies — 60,000 of them per square degree on the sky. In one pass across the visible sky (20,000 square degrees, or about three nights of observation), LSST will detect and classify 840 million persistent sources. Over time, LSST will survey 31,000 square degrees. By adding together the first five years of data, the all-sky map will reach 27th magnitude, and its database will contain over three billion sources, not counting transient events.
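The jump from 24th to 27th magnitude is consistent with simple noise arithmetic, assuming background-limited noise that averages down as the square root of the number of co-added visits (a simplification that ignores varying observing conditions):

```python
import math

# Depth gained by co-adding N visits, assuming noise falls as sqrt(N).
def depth_gain_mag(n_visits):
    """Magnitudes gained over a single visit by co-adding n_visits exposures."""
    return 2.5 * math.log10(math.sqrt(n_visits))

# Reaching 27th magnitude from a 24th-magnitude single visit needs a gain of
# three magnitudes, i.e. roughly 250 co-added visits per field:
n_needed = 10 ** (3 / 1.25)
print(round(n_needed))                # ~251 visits
print(round(depth_gain_mag(251), 2))  # ~3.0 magnitudes
```

Five years of repeated visits to each field comfortably supply a few hundred exposures, which is why the co-added map can go so much deeper than any single image.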

Information about the color of each source allows, for example, an estimate of the distance to each galaxy, or of the mass and evolutionary state of most stars. LSST will provide this data by observing in five colors, using filters in front of the camera. Further properties such as brightness, size, orientation and shape will be measured for each object in each color, allowing a much more detailed object classification. If 100 parameters are measured, after a single pass over the entire sky, the database will contain about 150 TB of data. In order to study change, however, such data will be retained from each pass over the sky, leading to over 5 PB of classification data in five years.

In addition to this object database, the individual images will themselves be retained, in an image database of over 150 TB for each individual pass, or 30 petabytes (PB, a thousand million megabytes) in five years. The image database will be a movie of the entire sky visible from the site of LSST...true cosmic cinematography.

Changes discovered by image subtraction will be compared against the database of known objects, allowing the type of change to be classified. Is this a new source? If not, how is it changing? Notification of specific types of events will automatically be sent to a variety of research programs, some of them automated in themselves. For example, a new source brightening over a period of a few days with a particular color signature will be identified as one of the several hundred thousand supernovae LSST is expected to discover each year.

If a series of new sources can be recognized by software as a single object moving across the sky, it will be tagged as a potential solar system object. New and archived data for these objects will be combined and a preliminary orbit determined. If the orbit satisfies certain criteria, the object will be classified as a Near-Earth Object and the orbital data will be sent on automatically to several projects currently in progress, which will assess the risk it may pose to Earth.

The object and image databases themselves will become a powerful tool for observational astronomy. One will be able to ask new questions and perform new surveys without needing to perform new observations. By retaining a time-dependent picture of the whole sky, one need not anticipate every sort of change to be discovered before observations are made. LSST will make unusual events commonplace and the rarest of events observable. Such “data mining” allows exploiting LSST to its fullest potential, but implementing the software to accomplish this stands as a daunting challenge.

THE ABILITY TO DO FAST analyses on petabytes of data will revolutionize how we detect faint moving objects or probe the underlying dark mass-energy of our universe. Weak gravitational lensing, the deflection of light by intervening clumps of dark matter, causes distortions in the observed shapes of galaxies. LSST’s high throughput and multiple short exposures will enable unprecedented control and rejection of systematic errors in image-shape distortion. These data may then be processed to yield a mass map of the intervening universe. Closer to home, potentially devastating near-Earth objects now go undetected. New techniques for extracting relevant source parameters can be used on the imaging data to find such objects automatically. Similar image-probing techniques are important to other areas of science as well (satellite observations, biology, oceanography, etc.), and the software tools developed to mine the LSST data resource will find wide application.

The high data rate, combined with the need for real-time analysis and later data exploration, requires a fresh approach, making use of the best technology and developing innovative software for optimal data management. While much headway can be made in efficient algorithms and associated software, there will also be hardware challenges in processing and storing this much data. It will be particularly effective to have data analysis innovations in place when the telescope and camera systems are first put into use.

Current projects show that approximately 5000 mathematical operations are required per pixel of the image to process and classify survey data. Scaling this to the size of the LSST data stream shows that approximately a thousand of today’s high-end processors will be required — a feasible proposition. Advances in processor power over the next five years will reduce this number to a few hundred, by which time the required LSST computer system will seem quite pedestrian. Storing this data is also well within even today’s technology. At current prices, a one-petabyte disk storage system costs less than $1 million; in five years this price should drop to well below $100,000. Keeping all of the LSST data online will certainly be affordable.
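The “thousand processors” estimate follows from simple scaling of the figures given above:

```python
# Scaling behind the processing estimate: operations per second needed to keep
# up with the incoming data stream (figures from the text).
OPS_PER_PIXEL = 5000
PIXELS = 3.2e9
VISIT_SECONDS = 39      # a new visit (two exposures) arrives every ~39 s

ops_per_second = OPS_PER_PIXEL * 2 * PIXELS / VISIT_SECONDS
print(f"{ops_per_second:.1e}")  # ~8.2e11 operations per second
# Spread over a thousand processors, that is under a billion operations per
# second each -- demanding but within reach of the high-end hardware described.
```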

More interesting challenges are presented by data mining. We now need to discover ways to search for correlations in such a massive database, an ability which will be key to extracting unanticipated science. While the software required by LSST science programs presents challenges, assuring opportunity for unanticipated science using such huge databases presents far greater ones. Designing optimal data-handling and search routines will be an exciting aspect of this project, for many science programs may need access to the full imaging data archive. One example is a search for what appear to be collections of faint point sources of light but which in reality are all parts of a single, extended, low-surface-brightness object. Another is the search for patterns in the appearance of objects transient in time and space.

THE GOAL OF THE LSST project is to make all of the data available to anyone who is interested, anywhere in the world. How much of the data is interesting to how many users is a question which must guide the way data is distributed. The overwhelming majority of users will probably not be professional astronomers. They will be interested in browsing deep color images, the most recently acquired images, an all-sky map, or what changed last night; such requests could range from several GB to 100 TB of data. These users will not have the very high-speed internet access available to research institutions, so they will need tools designed to browse the sky at low resolution before “zooming in” on a particularly interesting area. LSST can accommodate these users by deploying one or more large but otherwise conventional web sites. Most research applications of the LSST databases will likewise require relatively small amounts of data at a time. Searching catalogs of objects and sporadic downloads of images by the professional community can also be served by web-based access to one or more LSST data centers and will become the cornerstone of the National Virtual Observatory, a project to make all astronomical data widely available.

When a project requires data more rapidly than internet access can provide, “sneaker net” — writing data to disks and physically shipping them to the project — may provide the most cost-effective solution. For example, an astronomy department may wish to have a copy of the databases and the deep all-sky map for local use. Planetariums might wish to keep large quantities of data on-site for use in developing exhibits and shows. Storing such data sets would require disk space costing a few tens of thousands of dollars. This would be a small fraction of the cost associated with providing the computing power necessary to make use of the information. Large institutions, or even countries, might provide their scientists access to copies of the entire data set. Projects which require access to all of the data at once will almost certainly also require significant computational resources to achieve their goals. For these projects, the cost of acquiring a complete copy will be quite small compared with their overall budget.
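The case for “sneaker net” is a matter of arithmetic. With illustrative (assumed) numbers, a crate of shipped disks can outrun even a fast network link:

```python
# Effective bandwidth of physically shipping disks. The shipment size and
# transit time below are illustrative assumptions, not project figures.
def shipped_bandwidth_gbps(terabytes, transit_hours):
    """Gigabits per second delivered by a shipment of disks."""
    bits = terabytes * 8e12
    return bits / (transit_hours * 3600) / 1e9

# A 100 TB crate of disks delivered overnight (24 hours):
print(round(shipped_bandwidth_gbps(100, 24), 1))  # ~9.3 Gbps sustained
```

Sustaining that rate over the public internet of the day would be far more expensive than the postage.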

Finally, one might envision an investigation which requires rapid access to new data as well as the data set as a whole. A small number of these projects will be accommodated at the LSST data center. Investigators will be able to bring their own computers to the center and tap into the primary LSST data stream. Space for this will be limited, so a national committee will judge these projects competitively based on their scientific merit. This is the way limited telescope resources are allocated today. The only difference is that the sky will be down here on Earth and the telescope is now the data connection.

Achieving this will require the efforts not only of astronomers but also of experts on statistics and algorithm development, computer science, and data mining and visualization. The effort invested in software, data system design, tools for visualizing and analyzing data, and, of course, making sense of the data, may be comparable to that spent on the telescope hardware itself.

Enabling New Science

Image of asteroid 951 Gaspra taken from the Galileo spacecraft. The object is about 19x12x11 kilometers in size, similar to those whose impacts cause mass extinctions on Earth. 

Image Credit: NASA/JPL/USGS

The opportunity for scientific discovery enabled by this confluence of technologies is enormous. Panels of astronomers and physicists have identified key scientific questions which LSST will answer in virtually all areas of astronomy. LSST has been recommended as a high national priority by the National Academy of Sciences in decadal surveys in astronomy, astrophysics, and solar system exploration. LSST’s contribution to fundamental physics (dark energy) was emphasized by the recent NAS report “Connecting Quarks with the Cosmos,” which strongly recommended construction of LSST. The breadth and scope of the science LSST will address is too great to cover in any detail here. As with any significant advance in science, it is likely that the most transformative discoveries LSST will make are not on our current list of compelling science needs.

It is possible, for example, that by opening the time window on the universe, LSST will discover energetic events which we cannot now imagine. Here we describe two of the many areas of scientific discovery in which LSST will certainly play a major role. By exploring the near-Earth environment, LSST will provide insight into the formation of the solar system and play a major role in protecting Earth from the threat of asteroid collisions. By providing very deep images of the entire sky, LSST will enable new understanding of the nature and origin of the universe itself.



Supercomputer simulation of a 1.4 kilometer asteroid striking Earth a glancing blow 25 kilometers south of Brooklyn, N.Y. The first image is 0.4 seconds after impact, the next 2.4 seconds, and the final is at 8.4 seconds. In these images, material heated to over 5000 degrees is bright orange and water vapor is colored white. After 2 seconds, a fireball has swept across much of Long Island, vaporizing everything in its path. After 8 seconds, vast amounts of the Atlantic have been lifted into suborbital trajectories, and some material has achieved escape velocity from Earth. (Calculations performed at Sandia National Laboratory. Images courtesy of David Crawford.)



Meteor Crater in northern Arizona, excavated by the 50-megaton impact of an iron meteoroid 30 meters in diameter. (Photo by D. Roddy, Lunar and Planetary Institute.)

Above: Frequency of asteroid impact as a function of object diameter. Impacts with energies comparable to the largest hydrogen bombs have occurred many times in human history.




Below: Comet Hyakutake came within nine million miles of Earth in 1996. Comets occasionally strike the Earth, but since they spend most of their time at large distances from the sun, they contribute only about a ten-percent additional threat compared with Near-Earth Objects. Cometary debris accounts for most terrestrial meteor showers. (Image courtesy of Chris Shur.) 


Despite great progress over the past 50 years, there is yet much to learn about our solar system. While the great majority of its mass is contained in the sun and giant planets, by number the solar system consists overwhelmingly of relatively small, dark bodies like comets and asteroids.

Such objects are intact samples of the material from which the solar system formed and thus hold important clues to the origin of the sun, planets and, indeed, of life itself. Asteroids and comets reflect little light, and thus, despite their proximity, much about their origin and dynamics remains uncertain and difficult to study. Indeed, it has proven impossible even to provide a reasonably complete census of these objects. Such uncertainty is naturally troubling to scientists. Over the past twenty years, however, this uncertainty has taken on new importance with the recognition that Earth suffers collisions with these objects.

Comets consist primarily of water and carbon monoxide ice that is converted to gas (sublimated) as they approach the Sun. Asteroids are composed primarily of more refractory materials: rocky, solid objects that vaporize only at the high temperatures they experience when well within the orbit of Mercury. Both comets and asteroids may collide with the Earth at extremely high velocities — on the order of 20 to 30 kilometers per second. At such speeds, the kinetic energy of ordinary materials vastly exceeds the chemical energy of an equivalent amount of high explosive. Upon impact, the bulk of this kinetic energy is immediately dissipated in the form of heat and an associated shock, or pressure, wave that can cause devastation on continental and even global scales.
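That comparison with high explosive is simple arithmetic. A minimal sketch, assuming a mid-range impact speed of 25 km/s and the standard 4.184 MJ/kg energy content of TNT:

```python
# Kinetic energy per kilogram of an impactor, compared with TNT.
v = 25e3                       # assumed impact speed, m/s (mid-range of 20-30 km/s)
ke_per_kg = 0.5 * v ** 2       # kinetic energy per kilogram of impactor, J/kg
tnt_per_kg = 4.184e6           # chemical energy released by TNT, J/kg
ratio = ke_per_kg / tnt_per_kg # how many times more energetic than explosive

print(f"{ke_per_kg:.2e} J/kg of impactor, roughly {ratio:.0f} times TNT")
```

At this speed each kilogram of rock carries several tens of times the energy of a kilogram of high explosive, which is why impact energy, not composition, dominates the devastation.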

It is now widely believed that such an impact caused the mass extinctions which mark the transition from the Cretaceous to the Tertiary period, the so-called K-T boundary, 65 million years ago.

An object about ten kilometers in diameter struck the Earth in what are now coastal waters off Mexico’s Yucatan Peninsula. Numerical simulations of this event suggest that the impact created a shock front which spread out across the North and South American continents, heating the atmosphere to incandescence. Directly above the impact site, a hole was blown in the atmosphere, and a vast quantity of the Earth’s crust was thrown into space. The heat from this material re-entering the atmosphere caused forest fires throughout the world. Soot from these fires and the dust generated by the explosion probably caused a significant global drop in temperature, and oxides of nitrogen formed during the shock wave’s passage through the atmosphere led to widespread acid rain.

The results were calamitous for life, and not only for the dinosaurs: as much as 85 percent of all species on Earth became extinct within a short time. The geological record shows many such impacts over the past billion years, some considerably larger than the K-T boundary event. Some 250 million years ago, an impact occurred which likely destroyed more than 95 percent of all living species.

Impacts which cause destruction on a global scale are, of course, rare, occurring roughly every 50 to 100 million years in the geological record. This translates to a one-in-a-million chance of one occurring within our lifetime. Smaller, more frequent impacts can still have global consequences, however. There is a one-in-a-thousand chance of a one- to two-kilometer meteoroid striking the Earth within the century. Such an impact would release ten to one hundred times the explosive energy of all of the nuclear weapons ever produced. The devastation from such an event would be continental in scale; its effects on climate would be global and would likely last for centuries.

AS THE SIZE OF THE BODY DECREASES, the frequency of events continues to rise. Fifty thousand years ago, a 30-meter object composed mostly of iron struck northern Arizona. Because of its high density, the meteoroid penetrated the atmosphere intact. The resulting 50-megaton (MT) explosion excavated a crater more than a kilometer in diameter and some 170 meters deep, and devastated hundreds of square kilometers. Similar craters are found throughout the world, testifying to the frequency of such events. In 1908, a larger (50-meter) but less-dense object arrived over Tunguska, Siberia. The object disintegrated high in the atmosphere, but the resulting 10-20 MT airburst burned and flattened over 1000 square kilometers of forest. Had the collision happened a few hours later, it could well have devastated northern Europe.

Events of this magnitude happen every two to three centuries, so the probability of an impact of this size within the next century is thus 30 to 50 percent. There is a one percent chance of a 250-meter impact during the same period. Such an object would certainly penetrate to ground level. The resulting 1000 MT explosion would cause catastrophic devastation over a large area. If the impact were to occur on land, the resulting crater would be three to five kilometers across. At sea, such an impact would result in a tsunami of unprecedented magnitude, most likely devastating coastal populations.
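These odds follow from the recurrence intervals by treating impacts as a constant-rate random process, an assumption this sketch makes explicit (the 200- and 300-year intervals bracket the “two to three centuries” quoted above):

```python
import math

def chance_of_impact(recurrence_years, window_years):
    """Probability of at least one impact within the window, treating
    impacts as independent events with a constant average rate."""
    return 1.0 - math.exp(-window_years / recurrence_years)

# A Tunguska-class event recurs roughly every two to three centuries:
for tau in (200, 300):
    print(f"one per {tau} yr -> {chance_of_impact(tau, 100):.0%} chance per century")
```

For recurrence intervals of 200 and 300 years, this gives a chance per century of roughly 40 and 30 percent respectively, consistent with the figure quoted above.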

Numerically, and somewhat paradoxically, the overall risk to human life is greatest from the largest impacts. The cost in human life rises more rapidly with the size of the event than its frequency declines; there would be few if any survivors of a ten-kilometer impact. Thus, the odds of dying in an impact of global proportion are about equal to that of dying in an earthquake or an airplane crash, while those of dying in a city-sized event are somewhat less. Nonetheless, the probability of regional or city-scale calamities is far from zero and merits serious attention.

The U.S., British, and other governments have recognized the threat from asteroid and comet impacts. Congress held hearings to study the NEO impact hazard in 1993, 1998, and 2002, and NASA has formed a NEO program office. To date, however, the funding from governmental sources has been limited. Professional asteroid and comet astronomers have hoped that recognition of the impact threat might spur the U.S. government to provide funds for an early-warning system to identify objects which could be on a collision course with Earth. While Hollywood has responded with popular and spellbinding simulations of collisions in films like “Deep Impact” and “Armageddon,” the public and the U.S. government have so far done little to identify such potential hazards.

Contemporary efforts to survey the night sky for comets and asteroids are limited because instruments for this purpose do not have the required light-gathering capacity or area coverage to find all the dangerous asteroids in a reasonable time. It is estimated that half of the most dangerous asteroids capable of striking Earth, those larger than about a kilometer in diameter, have been identified. At current rates of discovery it will probably be two decades until more than 90 percent of these one-kilometer objects are found.

Smaller objects are just too faint for small telescopes to detect with any efficiency, and this has concentrated attention on the largest bodies. The focus on NEOs larger than one kilometer ignores the threat from more numerous smaller objects. While there are thousands of one-kilometer objects, there are estimated to be more than a million objects with diameters greater than 50 meters, and more than ten thousand larger than 250 meters. The vast majority of these are uncharted. Most of those which are detected are subsequently “lost” as they move out of range of current surveys before accurate orbits can be determined.

With its ability to go faint fast, LSST will find virtually all one-kilometer NEOs in less than a year. In a decade of operation, it will find 90 percent of all NEOs down to 140 meters in diameter.

DETECTING NEOs IS ONLY the first stage of the early-warning process. When current surveys detect an object, they provide only a crude estimate of the object’s trajectory. This in turn allows only a crude estimate of how closely the object will approach the Earth. At this stage, many NEOs seem potentially dangerous. Follow-up observations over time, usually with larger telescopes, are necessary to provide better orbital data. Better data allow a more accurate risk assessment, and this usually shows that the object is not, in fact, headed for Earth. If the initial orbit is not sufficiently accurate or if time on a larger telescope cannot be obtained, the object is lost before it can be ruled out as a threat.

Because LSST will survey a much larger volume of space more frequently than other systems, its repeated visits will automatically reobserve detected NEOs, refining our knowledge of their orbits to an extent not currently possible for most objects. LSST thus does its own follow-up, allowing more accurate risk assessments even for objects detectable by other surveys.

LSST can also contribute to understanding what might be done when an object is eventually found on a collision course with Earth. Little is known about the mechanical properties of asteroids, yet these properties are crucial to understanding how to deal with such a threat. For example, if asteroids are solid bodies with high mechanical strength, a rocket motor might be attached and the object’s orbit nudged away from collision. If asteroids are instead piles of dust and rubble with little cohesive strength, adding a motor might break the object into several parts, potentially transforming a bullet into grapeshot.

LSST will find a significant number of much smaller bodies, down to a few meters in diameter. Some of these will be found shortly before they collide with Earth to become meteors. Knowing the size and initial velocity of an object and then observing its destruction in the atmosphere will allow us to determine its mechanical properties, knowledge essential to plans for countering larger threats.

The large aperture, wide field, and full-time observing schedule of the LSST system provide a capability unrivaled by any other project for detecting and assessing the risk to Earth presented by NEOs and for the studies of these objects crucial to mitigation efforts. This aspect of its mission will be accomplished automatically and with the same observations that will bring new scientific insights to a wide variety of other fields.



The vast areas between the stars and galaxies appear empty and dark, but this impression is misleading. We now know that 96 percent of the universe consists of unknown and unseen forms of mass and energy: dark matter and dark energy. Determining the composition of the matter and energy in the universe is one of the fundamental goals of cosmology. It appears that the simplest expectation, a universe made of ordinary matter (so-called “baryons,” the familiar neutrons and protons), is wrong. Instead, we appear to live in a universe which challenges our understanding of physics.

Modern cosmology has been built on two pillars of radiation: the residue from the big bang and the distribution and spectra of stars and galaxies. Yet it has long been known that mass, not luminosity, is the key to the structure of the universe, because gravity plays the central role in the formation of structure. Over cosmic time, “over-dense” regions become still denser. Tiny ripples of density present 300,000 years after the big bang have grown into the complexity of mass structure — galaxies to clusters to superclusters of galaxies — we see in today’s universe some 14 billion years later. On the largest scales, the overall expansion history of the universe is governed by its mass-energy (Einstein taught us that mass and energy are related). Since mass cannot be seen directly, astronomers have until recently used the luminosity of the trace amounts of ordinary matter as a proxy for total mass in cosmological studies. Yet this ordinary matter, the baryons we are made of, cannot be the chief component of the mass in the universe.

A huge amount of dark matter — roughly ten times as much mass as there is in all the stars and gas and dust — controls the early evolution of structure in the universe. The dark matter is thought to be some very different kind of particle created during the hot big bang that interacts only weakly with the familiar particles of “normal” matter. In the earliest moments of the universe, corresponding to temperatures and energies far higher than any attainable or even imaginable on Earth, a legacy in the form of dark-matter particles was created. This legacy is detectable today in its cumulative gravitational effects on large-scale structures in the universe.

OVER THE LAST FEW years the composition of the universe has become even more puzzling, as the observed luminosity of Type Ia supernovae at high redshift, among other observations, appears to imply that the universe’s expansion has accelerated in recent times. To explain such an acceleration, we need dark energy with large negative pressure to generate a repulsive gravitational force. Even more puzzling is the fine-tuning of parameters which seems to be required to explain why the dark-energy density today is about the same as that of dark matter, even though the two evolve very differently with time. Moreover, this density is only a factor of ten larger than that of baryons and neutrinos. This may imply, as the Ptolemaic epicycles did, that we lack a sufficiently deep understanding of fundamental physics. It is possible that what we call dark matter and dark energy arise from some unknown aspect of gravity. Thus, the highest energies and the universe on the largest scales are connected.

Today the worlds of particle physics and cosmology are coming together in a transformed world view. Copernicus dethroned Earth from a central position in the cosmos, and Edwin Hubble demoted our galaxy from any significant location in space. Now, even the notion that galaxies and stars comprise most of our universe is being abandoned. Emerging is a universe largely governed by dark matter and, we are beginning to think, by an even stranger dominance of a smoothly distributed and pervasive dark energy.

HOW DO WE STUDY dark matter if we cannot see it? Concentrations of dark matter bend light rays from background galaxies like a lens, creating “cosmic mirages.” This tool was made possible by the discovery of a population of distant galaxies so numerous on the sky that they can serve as sources for a statistical reconstruction of an image of the foreground dark matter that acts as the lens. From their warping of the visible matter behind them, we can now see the dark-matter clumps, map them, and chart their development over cosmic time. To see the dark matter, we have to “invert” the cosmic mirage it produces. We have to look deep enough and wide enough into the background universe that there are thousands of galaxies projected near every foreground lens. Exploring the full range of mass structures will require a new facility which can image billions of distant galaxies.
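The statistical reconstruction works because the intrinsic shapes of galaxies are randomly oriented: averaged over thousands of background galaxies, they cancel, leaving only the small coherent distortion imprinted by the foreground mass. A toy sketch of this idea (the distortion of 0.02 and shape scatter of 0.3 are illustrative numbers, not measured values):

```python
import random

random.seed(1)

true_distortion = 0.02   # illustrative coherent warp imposed by a foreground lens
sigma_shape = 0.3        # illustrative scatter of intrinsic galaxy ellipticities
n_galaxies = 100_000     # background galaxies projected near the lens

# Observed ellipticity = random intrinsic shape + coherent lensing distortion.
observed = [random.gauss(0.0, sigma_shape) + true_distortion
            for _ in range(n_galaxies)]

# The intrinsic shapes average away, leaving an estimate of the distortion.
estimate = sum(observed) / n_galaxies
print(f"recovered distortion ~ {estimate:.3f} (true value {true_distortion})")
```

Each individual galaxy is far noisier than the signal; only by averaging huge numbers of faint galaxies does the lensing distortion emerge, which is why billions of galaxy images are needed.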

How much dark matter in our universe resides outside clusters of galaxies? If most of the mass in clusters is in a smooth distribution extending out millions of light years, perhaps most of the dark matter of the universe is distributed more broadly than clusters. To make an analogy closer to home, clusters are the Mount Everests of the universe. But most of the mass in Earth’s mountains resides in the more numerous foothills. Though we are currently limited to fields of view less than a degree across, telescopes are nevertheless being used to probe this universal dark-matter distribution.

Our new observational mass maps cover a limited range of scales. There are big and small clusters of mass, and there are what appear to be filaments of mass — enough mass to add up to about a third of the density that, in the absence of dark energy, would be required to ultimately slow the expansion of the universe to zero. Through these first probes, we have glimpsed a complex universe of mass: dilute filaments of mass coexisting with piles of dark matter centered on clusters of galaxies. This complex dark-matter structure took billions of years to grow. Its growth rate is predictable for a given model of the expanding universe. Probes of other features of the universe — from primeval deuterium to tiny fluctuations in the heat left over from the big bang to supernovae at large distances — suggest that some form of dark energy, when combined with the gravity of the dark matter, creates a flat cosmic geometry in which parallel light rays remain parallel. To determine what this dark energy is and how we can probe its physics, we look at its influence on the expansion rate of the universe. Dark energy acts against gravity, tending to accelerate the expansion.

To test theories of dark energy, we would like to measure how a volume co-moving with the Hubble flow changes with cosmic time. For a universe with a given mass density, the time history of the expansion encodes information on the amount and nature of dark energy. Measuring this change in co-moving volume by taking “snapshots” of mass clusters at different cosmic times would provide clues to the nature of dark energy. In turn, this would tell us something about physics at the earliest moments of our universe, physics which set the course for its future evolution.

The world of quantum gravity at a fraction of a second after the big bang, when the universe was so hot and dense that even protons and neutrons were broken up into a hot soup of quarks, connects to the world as we now see it — a vast expanding cosmos extending out 14 billion light-years. Dark energy and dark matter are relics of those first moments, when the unfamiliar physics of quantum gravity ruled. One route to understanding dark matter and probing the nature of dark energy is to count the number of mass clusters over the last half of the age of the universe — the era when dark energy apparently had its greatest influence. LSST will do this in several independent ways. These LSST probes of the nature of dark energy are complementary to those of space missions measuring the cosmic microwave background and very distant supernovae. Indeed, since we understand so little about dark energy, it is prudent to pursue all these lines of investigation.


The faintest galaxies have a range of colors, each one’s color depending on its type and its distance from us. The most distant galaxies have their spectra shifted to longer, redder wavelengths by the Hubble expansion, and their light has taken up to ten billion years to travel to us. Using the colors of the galaxies, it is possible to gauge the distance to the background galaxies. Mirages also rely on distance. This is the clue that unlocks the universe of mass in three dimensions; the more distant the source, the more warped its image. If there is a foreground mass, the mirage effect on the background galaxies is stronger for more distant galaxies.

By measuring both the warp and the distances to the background galaxies, it is possible to reconstruct the mass map and also to place the mass at its correct distance. This enables the exploration of mass in the universe, independent of light, since only the light from the background galaxies is used. By exploring mass in the universe in three dimensions we are also exploring mass at various cosmic ages. This is because mass seen at great distance is mass seen at a much earlier time. So we can chart the evolution of dark matter structure with cosmic time.

Surveying the numbers of cosmic mass clusters in our universe will ultimately lead to precision tests of theories of dark energy. To fully open this novel window of the three-dimensional universe of mass history, we need a new telescope and camera very unlike what we have now. We need LSST. Advances in technology have equipped us to mine the distant galaxies for data — in industrial quantity. LSST’s wide-angle gravitational lens survey will generate millions of gigabytes of data and intriguing opportunities for unique understanding of the development of cosmic structure. Our challenge is twofold. These galaxies are faint, and we need to capture images of billions of them. LSST’s combination of large light-collecting capability and unprecedented field of view will for the first time open this unique window on the physics of our universe. LSST will provide a wide and deep view of the universe, allowing us to conduct full 3-D mass tomography to chart not only dark matter, but the presence and influence of dark energy.

Do we trust our current view of the universe? Combining these results with other cosmic probes will lead to multiple tests of the foundations of our model for the universe. What will our concept of the universe be when those answers are in? Perhaps the most interesting outcome will be the unexpected: a clash between different precision measurements might prove to be a hint of a grander structure, possibly in higher dimensions. LSST provides that opportunity.


How can gravity be repulsive? In Newtonian gravity, the gravitational force exerted by an element of a massive medium is proportional to its mass density. Since this density is always positive, the force never changes sign, and classical gravity is always attractive. Any relativistic generalization of the gravitational force must involve not only the energy density (in place of the mass density) but also the momentum density, since energy and momentum can be transformed into each other by changing the reference frame. Within Einstein’s framework of General Relativity, the gravitational force exerted by an element of an isotropic medium is proportional to the sum of its energy density and three times its local pressure (which measures the momentum flow).

A medium can have a negative pressure: a common example is a rubber ball forced to expand beyond its equilibrium radius. If this negative pressure is large enough (greater in magnitude than a third of the energy density), it produces a repulsive gravitational force! In particular, vacuum energy, in which the pressure is equal and opposite to the energy density (Einstein’s cosmological constant is an example), produces such a repulsive force. If vacuum energy dominates, it generates an accelerated expansion of the universe. Another important example is a particle field that is far out of equilibrium; this is the mechanism believed to have driven inflation in the early universe.
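This criterion can be stated compactly. In General Relativity, the acceleration of the cosmic scale factor $a(t)$ of a homogeneous, isotropic universe with energy density $\rho$ and pressure $p$ obeys (taking the speed of light $c = 1$):

```latex
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right)
```

The expansion therefore accelerates ($\ddot a > 0$) exactly when $p < -\rho/3$; vacuum energy, with $p = -\rho$, comfortably satisfies this condition.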

A cluster of galaxies with its huge mass of invisible dark matter makes an easily identifiable gravitational lens: it has so much mass that it bends light rays from more distant galaxies by up to an arcminute or so, producing a strong warp in their images. This artist’s rendering of the process shows how multiple images of the same distant galaxy can be formed. This is often called “strong gravitational lensing.” A straight-through image forms, as well as several arrayed around an “Einstein ring.” Note how the images of the source galaxy which appear around the Einstein ring are distorted — stretched along the ring. This tangential distortion is readily visible in strong lensing, but it also occurs in the general case where the lensing is so weak that it merely moves background galaxy images to a new place on the sky. In either case, computer analysis of the resulting background-galaxy distortion patterns enables the mapping of the foreground mass concentrations. (NASM, Smithsonian Institution. Artwork by Keith Soares/Bean Creative.)

Gravity at work. Shown above in yellow is a deep Hubble Space Telescope image of the cluster of galaxies, CL0024+1654, some four billion light-years distant. Also seen in this picture are multiple images of a single background blue galaxy. The huge mass of the invisible dark matter in the cluster has bent the light rays from the background galaxy whose image now appears clearly five times (one in the center and four arrayed around the Einstein ring). Mass clusters are scattered throughout our universe, deflecting and distorting the images of the background forest of distant galaxies. (Courtesy of J. A. Tyson, W. Colley, E. L. Turner, and NASA.)

Analysis of all the distorted images in the HST picture above produced a unique map of the space-time warp caused by the unseen dark matter in the foreground cluster. Here this warp can be visualized by its effect on an imaginary background sheet of ordinary graph paper placed far behind the cluster.

Here the distribution of mass from the previous image is shown as a two-dimensional surface, where the height of the orange surface represents the amount of mass at that point in the image. This mass distribution shows that most of the dark matter is not clinging to the galaxies in the cluster (the narrow, high peaks), but instead is smoothly distributed. After years of study we still do not know what makes up the dark matter, but observations of the way it clumps can rule out theories of what it is. LSST will image large-scale dark matter and will chart the distribution of mass in the universe with precision.

A glimpse of a universe of mass. Shown here is a map of mass obtained by gravitational lens mass tomography. This 2x2 degree field of mass, obtained in 15 hours of 4-meter telescope exposures in the Deep Lens Survey, would fit easily inside LSST’s single snapshot field of view.

LSST will find hundreds of thousands of these massive clusters in a stunning 3-D view of the universe of mass extending over 20,000 square degrees of sky and back to half the age of the universe. Sharp constraints on cosmology and the physics of dark energy will result.


An International Facility and Collaboration

The effort to build the LSST is overseen by the LSST Corporation, a partnership between Research Corporation and several other institutions. The Corporation is actively seeking additional member institutions who can make major contributions to the project.


In addition to our institutional partners, the LSST is actively supported and developed by more than one hundred astronomers, physicists, and engineers throughout the country who see the LSST as the next big leap in charting the heavens, an exciting technological challenge, and a new model for doing big science.


Inset is a three-color image of spiral galaxy M109 taken with the 6.5 meter MMT telescope, an example of the superb image quality attainable by modern large telescopes. The background image, from the Palomar Sky Survey, was taken in the 1950s with a 48 inch telescope using photographic plates. Each LSST exposure will cover the same area of the sky as the background image to the same high quality as the inset. This coupling of high resolution, wide aperture, and very wide field makes the LSST unique.


LSST represents a new astronomy unlike what we now have. The changing universe will be unveiled, and people everywhere will derive new meaning and understanding from it. In principle, all of the data taken by LSST will be immediately available to the public, and access through the internet will make it a truly international project. It will allow astronomers everywhere access to high-quality scientific data, without regard to their nationality or the wealth of their home institution. Internet access to the master maps, to the catalogues of objects discovered, and to the most recent few weeks of images is possible with present internet technology. Accessing petabytes of data over the internet will have practical limitations, however; some users will want disks containing many terabytes of data.

Full access to the database and the processing power to perform the most complex queries will be accommodated by a grants program similar to that used to allocate telescope time today. Scientists will place their own computers next to LSST disk farms located around the world, for data mining and analysis. The LSST team of scientists will pursue several key scientific investigations with their own deliverables, ensuring LSST data quality control.

A movement toward a Global Virtual Observatory is underway — an attempt to join all astronomical data archives in one great astronomical web. Such a vision will both benefit from, and contribute to, LSST, whose database will become a large part of such a system. Conversely, the data sources and software developed for the Virtual Observatory will contribute in an important way to the LSST project.

For non-scientists, LSST will become a remarkable window on the universe. The public will have web access to derived data products such as up-to-date digital movies of the changing sky. “LSST at Home” software could allow the home PC of the future to process time streams of data from one patch of sky. Museums and planetariums will have “video walls” showing LSST’s wide-sky dynamic view of the changing universe. A web-based atlas of the universe would show not only the latest images, but also identifications of as many objects as the database provides, linked to descriptions of these objects and suggestions for further reading. This would prove an invaluable resource for teaching science at all levels. A fifth-grade class will be able to choose part of the sky to “observe” periodically, searching for change and discovering for themselves new supernovae, asteroids, or comets. High-school students will be able to “fly” through a three-dimensional map of our solar neighborhood with tens of thousands of new asteroids.

OVER 100 scientists and engineers, with expertise ranging from optics to computer science, are collaborating on the LSST design, which is driven by the requirements of LSST's key science missions. These disparate science programs — from surveys of the near-Earth environment to cosmic dark energy to cataclysmic explosions at the edge of the visible universe — all require a facility that can image very faint objects quickly. Indeed, they each require the same data, for different reasons: multiple short exposures of every patch of visible sky with frequent revisits. Thus, a single optimized data acquisition strategy will supply data to these key programs for specialized analysis. The LSST collaboration is committed to minimizing risk and cost while maximizing the science. Run more like an industrial production line than like current multi-user observatories, LSST will be staffed by multiple shifts of operations and data professionals, and will deliver a steady stream of processed data to the key projects and the LSST data archive. By distributing these data immediately to the community, LSST will ensure the widest opportunity for great science and serendipitous discovery.

Financial support for LSST comes from the National Science Foundation (NSF) through Cooperative Agreement No. 1258333, the Department of Energy (DOE) Office of Science under Contract No. DE-AC02-76SF00515, and private funding raised by the LSST Corporation. The NSF-funded LSST Project Office for construction was established as an operating center under the management of the Association of Universities for Research in Astronomy (AURA). The DOE-funded effort to build the LSST camera is managed by the SLAC National Accelerator Laboratory (SLAC).
