Friday, February 26, 2010

Star formation and black hole formation

In 1920, astronomers Heber Curtis and Harlow Shapley famously debated whether certain faint nebulae in the sky were external galaxies or structures within our own Milky Way [1]. This was resolved a few years later by Edwin Hubble, who determined that the Andromeda nebula was an external galaxy by identifying known types of pulsating stars therein [2]. Although Curtis’s view that the Universe contained many galaxies was vindicated, Shapley’s insistence that our Milky Way was larger than previously thought also turned out to be correct, once stellar distance measurements improved.

We have long since learned [3] that the Universe contains many billions of galaxies, each a gravitationally bound collection of billions of stars with a central supermassive black hole. And, if we accept conventional gravity theory, the total mass in galaxies is actually dominated by a dark-matter halo that reveals itself only through the motion of the stars and gas on which it pulls [4]. Understanding how galaxies acquire their constituents has been a longstanding goal. Focusing on spiral galaxies, Larson proposes a simple paradigm [5] that could prove essential for progress in this field.

Spiral galaxies (Fig. 1) display an extended disc of stars and gas orbiting at speeds of the order 150–250 km s⁻¹. That the rotation speed remains constant over most of a galaxy’s disc is thought to result from the gravitational effect of a dark-matter halo [4]. Towards the centre is a dense quasispherical bulge with less prominent rotation, as its stars move on more randomly oriented orbits. In the Milky Way, for example, this bulge extends out to one quarter the distance of the Sun’s location in the Galaxy and contains about one quarter of the Galaxy’s total 80 billion solar masses of stars. At the bulge core is a supermassive black hole [6] of less than a billionth of the 10¹⁷ km bulge diameter, but with 1/5,000 of the mass. Many galaxies, including Andromeda, have more massive black holes [7].

Observations have established two empirical constants common to spiral galaxies that astronomers have struggled to explain. First, the ratio of total stellar mass to the fourth power of the disc rotation speed is a constant [8] (the so-called Tully–Fisher relation). Second, the ratio of their inferred central black-hole mass to the fourth power of the mean random stellar velocities in the bulge is a second constant [9] (the so-called M_H–σ relation).

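In symbols, and only as a rough summary (the exponents are close to 4 and the proportionality constants are fitted to data such as those of refs 8 and 9), the two relations read:

\[
M_{\ast} \;\propto\; v_{\rm rot}^{4} \quad \text{(Tully–Fisher)}, \qquad
M_{\rm H} \;\propto\; \sigma^{4} \quad (M_{\rm H}\text{–}\sigma),
\]

where v_rot is the disc rotation speed, σ the typical random stellar velocity in the bulge, M_* the total stellar mass and M_H the central black-hole mass.
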
Larson argues that the two constants may be determined by understanding the conditions in which gravitationally bound gas in a proto-galaxy forms stars and when it would instead form a black hole. A proto-galaxy forms out of the expanding, cooling Universe when a region of gas with sufficient density stops expanding owing to its own self-gravity. By assuming that proto-galaxies are quasispherical and that the gas at all radii is a fixed fraction of the dark-matter mass, Larson uses elementary mechanics to relate the total gas mass inside a given radius to the fourth power of the gas velocity and inversely to the surface density at that radius, as required for the Tully–Fisher and M_H–σ relations. The proportionality has been derived before, but Larson gives important new insight into how the surface density determines the proportionality constants.

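The flavour of the elementary mechanics can be sketched as follows (my notation, not Larson’s detailed derivation, which fixes the numerical coefficients): for a self-gravitating region of mass M and radius R with characteristic velocity v and mean surface density Σ ~ M/R²,

\[
v^{2} \sim \frac{GM}{R}, \qquad \Sigma \sim \frac{M}{R^{2}}
\;\;\Longrightarrow\;\;
M \sim \frac{v^{4}}{G^{2}\,\Sigma}.
\]

The mass thus scales with the fourth power of the velocity and inversely with the surface density, so the value of Σ at the relevant radius sets the proportionality constant in each relation.
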
Much as regions of the expanding Universe that become proto-galaxies must achieve a critical density, regions within a galaxy that become stars must be dense enough for gravity to overcome a competing force of thermal expansion. At the required densities for star formation, ions interact, radiate away heat and combine to form molecules. Star-forming regions are cooler, denser and predominantly molecular compared to the progenitor gas from which they form. Remarkably, Larson finds that the critical density required for the gas to form stars equals that required to explain the Tully–Fisher relation if the gas within the galactic radius of critical density turns into the stellar mass entering that relation.

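The usual textbook way to express "dense enough for gravity to overcome thermal expansion" is the Jeans criterion (quoted here only as background; it is not Larson’s specific threshold, which is framed in terms of surface density): a clump of gas with sound speed c_s and density ρ collapses once its mass exceeds the Jeans mass,

\[
M_{\rm J} \;\sim\; \frac{c_{s}^{3}}{\sqrt{G^{3}\rho}} \;\propto\; \frac{T^{3/2}}{\sqrt{\rho}},
\]

so cooling (lowering T) and compression (raising ρ) both push gas across the threshold, which is why the cool, dense, molecular phase is where stars form.
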
Larson argues that a second critical density arises in the very inner galactic core. Here the dusty gas is so dense and opaque that it traps heat, which at lower densities would escape by radiative cooling. Where cooling is inhibited, star formation is quenched. Larson estimates that this occurs when the surface density exceeds a second critical value. In an extraordinary coincidence, if all the gas that is as dense as or denser than this critical value forms a black hole, Larson estimates that this is just what is needed to explain the M_H–σ relation.

Larson has thus potentially explained the two hitherto unexplained Tully–Fisher and M_H–σ empirical relations by identifying two critical surface densities that distinguish the formation of stars from that of a central black hole. The paradigm also predicts a maximum stellar surface density in galaxies, owing to the derived critical density above which star formation is quenched. The predictions are in general agreement with present observational trends.

But even if Larson has pinpointed a ‘necessary’ condition for distinguishing star formation from black-hole formation, the complete set of ‘necessary and sufficient’ conditions remains to be determined. A key issue, acknowledged by Larson, is how the gas that forms the black hole loses its angular momentum: the orbital angular speed of gas that moves inward and conserves angular momentum would increase inversely with the square of the distance from the centre. A small orbital speed at large distances would become Keplerian, and thus prohibitive for further inward motion, well before the gas could form a black hole. The dominant angular-momentum transport mechanism is not yet known but likely involves some combination [11,12] of gravitational and magnetohydrodynamic instabilities or outflows.

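To make the scaling explicit (a back-of-the-envelope restatement of the constraint just described, not a result from Larson’s paper): if a gas parcel conserves its specific angular momentum j as it spirals inward, then

\[
j = \Omega r^{2} = \text{const}
\;\;\Longrightarrow\;\;
\Omega \propto r^{-2}, \qquad v_{\phi} = \frac{j}{r},
\]

and the rotational speed v_φ = j/r reaches the local Keplerian value, roughly √(GM(<r)/r), at a radius far larger than the black-hole scale unless some torque removes angular momentum along the way.
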
There are other uncertainties. Larson’s numbers assume a standard dust-to-gas mass ratio but, as dust determines the opacity of the gas, tighter constraints on its properties and mass fraction in individual sources are needed. Furthermore, if gravitational instabilities make the distribution of dusty gas clumpy, more free streaming of radiation for a given surface density might occur, increasing the predicted critical density above which a black hole forms. The extent to which very massive stars may then be allowed to form, even if low-mass-star formation is suppressed, remains to be determined. It is also natural to wonder whether Larson’s paradigm applies to elliptical galaxies, which have larger surface densities.

Finally, there remains an often disdained but lingering possibility that our understanding of gravity on galactic scales and beyond is incomplete, and that the rotation curves in galaxies usually cited as evidence for dark matter may instead partly highlight an incomplete understanding of Newtonian gravity at low accelerations. Among the predictions of the most developed alternative [12] to conventional gravity is the Tully–Fisher relation, independent of the gas surface density. This contrasts directly with Larson’s approach, in which the surface density is crucial and a conventional role of dark matter is assumed.

Future study of the complex interactions between gravity, magnetized gas dynamics and radiation transport in the formation of stars and black holes in galaxies continues to benefit from computer simulations [11]. Simulations are often the closest tool astronomers have to experiments, but simple paradigms such as the one proposed by Larson are essential in guiding the construction and interpretation of these simulations, and the comparison of the results with observations.

Eric G. Blackman is in the Department of Physics and Astronomy, University of Rochester, Rochester, New York 14627-0171, USA.
e-mail: blackman@pas.rochester.edu

References
1. Trimble, V. Publ. Astron. Soc. Pac. 107, 1133–1144 (1995).
2. Hubble, E. Astrophys. J. 64, 321–369 (1926).
3. Sparke, L. S. & Gallagher, J. S. Galaxies in the Universe: An Introduction 26–46 (Cambridge Univ. Press, 2007).
4. Sofue, Y. & Rubin, V. Annu. Rev. Astron. Astrophys. 39, 137–174 (2001).
5. Larson, R. B. Nature Phys. 6, 96–98 (2010).
6. Ghez, A. M. et al. Astrophys. J. 689, 1044–1062 (2008).
7. Bender, R. et al. Astrophys. J. 631, 280–300 (2005).
8. Kassin, S. A. et al. Astrophys. J. 660, L35–L38 (2007).
9. Gültekin, K. et al. Astrophys. J. 698, 198–221 (2009).
10. Larson, R. B. Rep. Prog. Phys. 73, 014901 (2010).
11. Hopkins, P. F. & Quataert, E. Preprint at (2009).
12. Bekenstein, J. D. Contemp. Phys. 47, 387–403 (2006).

Monday, January 25, 2010

Climate chief admits error over Himalayan glaciers

The head of the Intergovernmental Panel on Climate Change (IPCC) has been forced to apologise for including in its 2007 report the claim that there was a "very high" chance of glaciers disappearing from the Himalayas by 2035.

Rajendra Pachauri, the chairman of the IPCC, conceded yesterday that "the clear and well-established standards of evidence required by the IPCC procedures were not applied properly" when the claim was included in the 900-page assessment of the impacts of climate change.

The paragraph at issue reads: "Glaciers in the Himalaya are receding faster than in any other part of the world and, if the present rate continues, the likelihood of them disappearing by the year 2035 and perhaps sooner is very high."

Single source

The report's only cited source was a 2005 report by the environment group WWF, which in turn cited a 1999 article in New Scientist.

The New Scientist article quoted senior Indian glaciologist Syed Hasnain, the then vice-chancellor of Jawaharlal Nehru University in New Delhi, who was writing a report on the Himalayas for the International Commission for Snow and Ice. It said, on the basis of an interview with Hasnain, that his report "indicates that all the glaciers in the central and eastern Himalayas could disappear by 2035". The claim did not, however, appear in the commission's report, which was only made available late last year.

This week a group of geographers, headed by Graham Cogley of Trent University at Peterborough in Ontario, Canada, have written to the journal Science pointing out that the claim "requires a 25-fold greater loss rate from 1999 to 2035 than that estimated for 1960 to 1999. It conflicts with knowledge of glacier-climate relationships, and is wrong."

The geographers add that the claim has "captured the global imagination and has been repeated in good faith often, including recently by the IPCC's chairman". The IPCC's errors "could have been avoided had the norms of scientific publication, including peer review and concentration upon peer-reviewed work, been respected", they say.

Several of those involved in the IPCC review process did try to question the 2035 date before it was published by the IPCC. Among them was Georg Kaser, a glaciologist from the University of Innsbruck, Austria, and a lead author of another section of the IPCC report. "I scanned the almost final draft at the end of 2006 and came across the 2035 reference." Kaser queried the reference but believes it was too late in the day for it to be reassessed.

Publicly available IPCC archives of the review process show that during the formal review, the Japanese government also questioned the 2035 claim. It commented: "This seems to be a very important statement. What is the confidence level/certainty?" Soon afterwards, a reference to the WWF report was added to the final draft. But the statement otherwise went unchanged.

Grey literature

One of the IPCC authors, Stephen Schneider of Stanford University, California, this week defended the use of so-called "grey" literature in IPCC reports. He told New Scientist that it was not possible to include only peer-reviewed research because, particularly in the chapters discussing the regional impacts of climate change, "most of the literature is not up to that gold standard".

The Himalaya claim appeared in the regional chapter on Asia. "There are only a few authors in each region, so it narrows the base of science," Schneider says.

This week Hasnain has claimed, for the first time, that he was misquoted by New Scientist in 1999.

What is gravity?

Newton said gravity is a force acting at a distance, which was refuted by Einstein, who stated that gravity is no more than a manifestation of warped space-time. Now the story goes on. Below is a description of a new proposal put forward by a string theorist, who suggests that gravity might be a thermodynamic-type force rooted in probability theory.

WHAT exactly is gravity? Everybody experiences it, but pinning down why the universe has gravity in the first place has proved difficult.

Although gravity has been successfully described with laws devised by Isaac Newton and later Albert Einstein, we still don't know how the fundamental properties of the universe combine to create the phenomenon.

Now one theoretical physicist is proposing a radical new way to look at gravity. Erik Verlinde of the University of Amsterdam, the Netherlands, a prominent and internationally respected string theorist, argues that gravitational attraction could be the result of the way information about material objects is organised in space. If true, it could provide the fundamental explanation we have been seeking for decades.

Verlinde posted his paper to the pre-print physics archive earlier this month, and since then many physicists have greeted the proposal as promising (arxiv.org/abs/1001.0785). Nobel laureate and theoretical physicist Gerard 't Hooft of Utrecht University in the Netherlands stresses the ideas need development, but is impressed by Verlinde's approach. "[Unlike] many string theorists Erik is stressing real physical concepts like mass and force, not just fancy abstract mathematics," he says. "That's encouraging from my perspective as a physicist."

Newton first showed how gravity works on large scales by treating it as a force between objects (see "Apple for your eyes"). Einstein refined Newton's ideas with his theory of general relativity. He showed that gravity was better described by the way an object warps the fabric of the universe. We are all pulled towards the Earth because the planet's mass is curving the surrounding space-time.

Yet that is not the end of the story. Though Newton and Einstein provided profound insights, their laws are only mathematical descriptions. "They explain how gravity works, but not where it comes from," says Verlinde. Theoretical physics has had a tough time connecting gravity with the other known fundamental forces in the universe. The standard model, which has long been our best framework for describing the subatomic world, includes electromagnetism and the strong and weak nuclear forces - but not gravity.

Many physicists doubt it ever will. Gravity may turn out to be delivered via the action of hypothetical particles called gravitons, but so far there is no proof of their existence. Gravity's awkwardness has been one of the main reasons why theories like string theory and loop quantum gravity have been proposed in recent decades.

Verlinde's work offers an alternative way of looking at the problem. "I am convinced now, gravity is a phenomenon emerging from the fundamental properties of space and time," he says.

To understand what Verlinde is proposing, consider the concept of fluidity in water. Individual molecules have no fluidity, but collectively they do. Similarly, the force of gravity is not something ingrained in matter itself. It is an extra physical effect, emerging from the interplay of mass, time and space, says Verlinde. His idea of gravity as an "entropic force" is based on these first principles of thermodynamics - but works within an exotic description of space-time called holography.

Like the fluidity of water, gravity is not ingrained in matter itself. It is an extra physical effect

Holography in theoretical physics follows broadly the same principles as the holograms on a banknote, which are three-dimensional images embedded in a two-dimensional surface. The concept in physics was developed in the 1970s by Stephen Hawking at the University of Cambridge and Jacob Bekenstein at the Hebrew University of Jerusalem in Israel to describe the properties of black holes. Their work led to the insight that a hypothetical sphere could store all the necessary "bits" of information about the mass within. In the 1990s, 't Hooft and Leonard Susskind at Stanford University in California proposed that this framework might apply to the whole universe. Their "holographic principle" has proved useful in many fundamental theories.

Verlinde uses the holographic principle to consider what is happening to a small mass at a certain distance from a bigger mass, say a star or a planet. Moving the small mass a little, he shows, means changing the information content, or entropy, of a hypothetical holographic surface between both masses. This change of information is linked to a change in the energy of the system.

Then, using statistics to consider all possible movements of the small mass and the energy changes involved, Verlinde finds movements toward the bigger mass are thermodynamically more likely than others. This effect can be seen as a net force pulling both masses together. Physicists call this an entropic force, as it originates in the most likely changes in information content.

This still doesn't point directly to gravity. But plugging in the basic expressions for information content of the holographic surface, its energy content and Einstein's relation of mass to energy leads directly to Newton's law of gravity. A relativistic version is only a few steps further, but again straightforward to derive. And it seems to apply to both apples and planets. "Finding Newton's laws all over again could have been a lucky coincidence," says Verlinde. "A relativistic generalisation shows this is far deeper than a few equations turning out just right."
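
A compressed version of the argument as it is usually summarised (my paraphrase of the standard steps, not a quotation from the paper): surround a mass M with a spherical holographic screen of radius R carrying N ~ Ac³/(Għ) bits over its area A = 4πR²; let those bits share the energy E = Mc² by equipartition, E = ½Nk_BT; and use Bekenstein's entropy change ΔS = 2πk_B(mc/ħ)Δx for a small test mass m displaced by Δx near the screen. The entropic force then comes out as

\[
F \;=\; T\,\frac{\Delta S}{\Delta x}
\;=\; \frac{2Mc^{2}}{N k_{B}}\cdot\frac{2\pi k_{B} m c}{\hbar}
\;=\; \frac{4\pi M m c^{3}}{N\hbar}
\;=\; \frac{G M m}{R^{2}},
\]

which is Newton's law of gravitation, recovered without putting a gravitational force in by hand.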

Verlinde's paper has prompted praise from some physicists. Robbert Dijkgraaf, a prominent mathematical physicist also at the University of Amsterdam, says he admires the elegance of Verlinde's concepts. "It is amazing no one has come up with this earlier, it looks just so simple and yet convincing," he says.

The jury is still out for many others. Some believe that Verlinde is using circular reasoning in his equations, by "starting out" with gravity. Others have expressed concern about the almost trivial mathematics involved, leaving most of the theory based on very general concepts of space, time and information.

Stanley Deser of Brandeis University in Waltham, Massachusetts, whose work has expanded the scope of general relativity, says Verlinde's work appears to be a promising avenue but adds that it is "a bombshell that will take a lot of digesting, challenging all our dogmas from Newton and Hooke to Einstein."

Verlinde stresses his paper is only the first on the subject. "It is not even a theory yet, but a proposal for a new paradigm or framework," he says. "All the hard work comes now."

Editorial: A gravity story to take us out of Newton's orchard

Apple for your eyes

"We went into the garden and drank thea, under some apple trees... he told me he was just in the same situation, when the notion of gravitation came into his mind. 'Why should that apple always descend perpendicularly to the ground,' thought he to himself."

So wrote archaeologist and biographer William Stukeley in 1752, recounting the famous story as Isaac Newton had told it to him. Newton went on to show that, on a large scale, two masses are attracted in proportion to their individual mass, and the force between them falls off with the square of their distance.

Now the original manuscript featuring the story, entitled Memoirs of Sir Isaac Newton's Life, is available for all to read. As part of its 350th anniversary celebration, London's Royal Society has published a digital version of the document, which is tucked away in their archives. See royalsociety.org/turning-the-pages.

Plasmons: a review

Here is a review of plasmons; some references can be found in it for digging deeper.

IT'S a laser, but not as we know it. For a start, you need a microscope to see it. Gleaming eerily green, it is a single spherical particle just a few tens of nanometres across.

Tiny it might be, but its creators have big plans for it. With further advances, it could help to fulfil a long-held dream: to build a super-fast computer that computes with light.

Dubbed a "spaser", this minuscule lasing object is the latest by-product of a buzzing field known as nanoplasmonics. Just as microelectronics exploits the behaviour of electrons in metals and semiconductors on micrometre scales, so nanoplasmonics is concerned with the nanoscale comings and goings of entities known as plasmons that lurk on and below the surfaces of metals.

To envisage what a plasmon is, imagine a metal as a great sea of freely moving electrons. When light of the right frequency strikes the surface of the metal, it can set up a wavelike oscillation in this electron sea, just as the wind whips up waves on the ocean. These collective electron waves - plasmons - act to all intents and purposes as light waves trapped in the metal's surface. Their wavelengths depend on the metal, but are generally measured in nanometres. Their frequencies span the terahertz range - equivalent to the frequency range of light from the ultraviolet right through the visible to the infrared.
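
For orientation, the characteristic frequency of such collective oscillations in a simple free-electron picture (a textbook estimate, not a figure specific to any device in this article) is the plasma frequency,

\[
\omega_{p} \;=\; \sqrt{\frac{n e^{2}}{\varepsilon_{0} m_{e}}},
\]

where n is the free-electron density. For n of order 10^28 to 10^29 per cubic metre, typical of metals, ω_p/2π comes out at around 10^15 Hz, that is, hundreds to thousands of terahertz; surface plasmons at an ideal metal–dielectric interface oscillate somewhat below this bulk value, at ω_p/√(1+ε_d).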

Gleaming eerily green, this laser is a single spherical particle just tens of nanometres across

In 2003, their studies of plasmons led theorists Mark Stockman at Georgia State University in Atlanta and David Bergman at Tel Aviv University in Israel to an unusual thought. Plasmons behaved rather like light, so could they be amplified like light, too? What the duo had in mind was a laser-like device that multiplied single plasmons to turn them into powerful arrays of plasmons all oscillating in the same way (see "From laser to spaser").

The mathematics of it seemed to work. By analogy with the acronym that produces the word laser, they dubbed their brainchild "surface plasmon amplification by the stimulated emission of radiation" - spaser - and published a paper about it (Physical Review Letters, vol 90, p 027402).

The spaser might have remained just a theoretical curiosity. Around the same time, however, physicists were waking up to the potential of plasmonics for everything from perfect lenses to sensitive biosensors (see "What have plasmons ever done for us?"). The spaser idea was intriguing enough that Mikhail Noginov, an electrical engineer at Norfolk State University in Virginia, and some of his colleagues set out to build one.

It was not an easy task. Light is long-lived, so it is relatively easy to bounce it around in a mirrored chamber and amplify it, as happens inside a laser. Plasmons, by contrast, are transient entities: they typically live for mere attoseconds, and cannot travel more than a few plasmon wavelengths in a metal before their energy is absorbed by the ocean of non-oscillating electrons around them. It was not at all clear how we might get enough of a handle on plasmons to amplify them at all.

In August 2009, Noginov and his colleagues showed how. Their ingenious solution takes the form of a circular particle just 44 nanometres across. It consists of a gold core contained within a shell of silica, speckled with dye molecules that, excited initially by an external laser, produce green light. Some of that light leaks out to give the nanoparticles their characteristic green glow; the rest stimulates the generation of plasmons at the surface of the gold core.

In the normal way of things, these plasmons are absorbed by the metal almost as soon as they are produced. But their tickling influence also stimulates the dye molecules in the silica shell to emit more light, which in turn generates more plasmons, which excites more light and so on. With a sufficient supply of dye, enough plasmons can exist at the same time that they start to reinforce each other. The signature of a laser-like multiplication of plasmons within the device is a dramatic increase in green laser light emitted from the nanoparticle after only a small increase in the energy supplied from the external laser - the signature Noginov and his colleagues reported last year (Nature, vol 460, p 1110).

And they were not the only ones. In October 2009, Xiang Zhang, a mechanical engineer at the University of California, Berkeley, and his colleagues unveiled a similarly tiny device that exploits plasmons to produce laser light (Nature, vol 461, p 629).

These innovations generated headlines at the time as an entirely new type of lasing device more compact than any yet seen and which, in theory, required a lot less power than a conventional device. That's an exciting development in its own right, but just one in a list of promising advances in the bustling business of laser technology.

Crucially, though, the development of spasers has sparked the hope that one of the great scientific disappointments of the past decades - the unfulfilled promise of optical computing - may yet be turned into triumph.

On the face of it, optical computers, which use light rather than currents of electrons to process information, are a great idea. Electrons are easy to manipulate and process, but they tend to get bogged down as they pass through metals and semiconductors, colliding with atoms and bouncing off them in ways that limit the speed and fidelity of information transmission. Photons, by contrast, can withstand interference, and are above all fast, in theory zipping around a chip at close to the cosmic speed limit.

In the 1990s, various groups claimed to be getting close to making the dream of optical computing a reality. That included a concerted effort at the world-famous Bell Laboratories in Murray Hill, New Jersey, where the building block of microelectronic circuits, the transistor, was invented in 1947. Researchers there and elsewhere hit a snag, however. The very fleet-footedness that made photons perfect for high-speed communications made them almost impossible to pin down and use for sensible processing of data.

"Optical computing has a chequered history, particularly the boondoggle at Bell Labs," says Harry Atwater, a physicist at the California Institute of Technology in Pasadena. All the efforts foundered when it came to producing anything like a transistor: a tiny, low-power device that could be used to toggle light signals on and off reliably.

In theory, a controllable laser would do this trick, if not for one problem - lasers devour power. Even worse, they are huge, relatively speaking: they work by bouncing photons around a mirrored cavity, so the very smallest they can be is about half the wavelength of the light they produce. For green light, with a wavelength of 530 nanometres, that means little change from 300 nanometres. Electrical transistors, meanwhile, are approaching one-tenth that size.

You see where this is leading. Spasers are a tiny source of light that can be switched on and off at will. At a few tens of nanometres in size, they are just slightly bigger than the smallest electrical transistors. The spaser is to nanoplasmonics what the transistor is to microelectronics, says Stockman: it is the building block that should make optical information-processing possible.

The spaser is to plasmonics what the transistor is to microelectronics

Inevitably, there will be many hurdles to overcome. For a start, Noginov's prototype spaser is switched on and off using another laser, rather than being switched electrically. That is cumbersome and means it cannot capitalise on the technology's low-power potential. It is also unclear, when it comes to connecting many spasers together to make a logic gate, how input and output signals can be cleanly separated with the resonant spherical spasers that have so far been constructed.

Mutual benefit

The most intriguing aspect of spasers, however, is the one that could make or break them as the basis of a future computing technology: they are made of metal. In one sense, that is a bad thing, because making a plasmonic chip would require a wholly different infrastructure to that used to make silicon chips - an industry into which billions in research money has been poured.

Silicon's predominance has not necessarily been a bar to other technologies establishing themselves: the radio signals used for cellphone communication, for example, are of a frequency too high for silicon chips to cope with, so an entirely separate manufacturing process grew up to make the gallium arsenide chips that can. To justify the initial investment costs, another upstart chip-architecture needs a similar "killer application": something it can do that silicon cannot.

Stockman reckons the extra processing speed promised by plasmonic devices will generate such applications in areas like cryptography. "Having faster processors than everyone else will be a question of national security," he says. And he points to another reason why the spooks might be interested. One problem with semiconductors is that their delicate conduction capabilities are vulnerable to ionising radiation. Such rays can send avalanches of electrons streaming through delicate electronic components. At best, this corrupts data and halts calculations. At worst, it fries transistors, permanently disabling them.

This is where the metallic nature of a plasmonic chip would come into its own. The extra electrons that ionising radiation can produce are mere drops in the ocean of free electrons from which plasmons are generated in a metal. A plasmonic device would be able to process and store information in the harshest radioactive environments: in orbiting satellites, in nuclear reactors, during nuclear conflict.

Perhaps the most likely outcome, though, is that rather than the one superseding the other, plasmonics and electronics come to coexist to mutual advantage in a single chip. As the transistors in chips become smaller, the wires that connect them over distances of just a few nanometres become a significant bottleneck for data. That is one reason why chips are currently spinning their wheels at speeds of about 3 gigahertz. "Wires limit the speed at which electrons can deliver information," says Atwater. "So an obvious solution is to replace them with photonic connections."

The problem with such connections to date has been converting electronic signals into photonic ones and back again with a speed and efficiency that makes it worthwhile. Plasmons, which owe their existence to the easy exchange of energy between light and electrons, could be just the things for the job, making a hybrid electrical-optical chip a genuine possibility.

As well as that, says Atwater, we should work out how to manipulate plasmons using devices that can be made in the same way, and on the same fabrication lines, as ordinary silicon chips. Early last year, he and his colleagues at Caltech revealed an electrically controlled device dubbed the plasmostor that can vary the intensity of plasmons as they pass through it, and which has an architecture very similar to that of conventional transistors (Nano Letters, vol 9, p 897). Just this month, a Dutch group has announced that they have produced an electrically powered source of plasmons fully compatible with existing silicon chip fabrication technology (Nature Materials, vol 9, p 21).

It's very early days, so such innovations have yet to match the performance of purely electronic components. The plasmostor, for instance, flips between its on and off states more slowly than a conventional transistor, and the signals have an annoying tendency to leak out of the device and get lost. There is still a long way to go to a computer that runs on anything other than electrons. But it is a start, says Atwater. "You're challenging a hugely successful technology. It's audacious to think that you can just replace it."

But if a tiny round green light isn't a signal to go ahead and give it a try, what is?

From laser to spaser

This year marks the golden jubilee of a ruby trailblazer: it was on 16 May 1960 that Theodore Maiman of Hughes Research Laboratories in Malibu, California, coaxed a synthetic ruby to produce the first ever laser light. The first laser to produce light from gas - a mixture of helium and neon - followed later that same year.

Half a century later, and there's hardly an area of human endeavour that doesn't depend on lasers in some way or another: CD and DVD players, metal cutting and welding, barcode scanners and corrective eye surgery to name but a few.

Early lasers were essentially made up of a mirrored box containing a "gain medium" such as a crystal or gas. Zapped with light or an electric current, electrons in this medium absorb energy, releasing it again as photons. These photons bounce around the box and stimulate further electrons to emit more photons. This self-reinforcing increase in light energy is "light amplification by the stimulated emission of radiation" - laser action, for short.

Spasers use the same principle, except rather than amplifying light directly, they amplify surface plasmons - the wavelike movements of free electrons on and near the surfaces of metals - using that in turn to emit light.

What have plasmons ever done for us?

Plasmons might sound esoteric, but it is not just with spasers (see main story) that they are seeing practical application.

Take molecular sensing. The amount and colour of light absorbed by a plasmonic nanoparticle is extremely sensitive to the surrounding molecular environment. This property has been exploited to build sensing devices that detect levels of anything from the protein casein, an indicator of the quality of milk products, to glucose in the blood.

What's significant about these plasmonic sensors is that they can make continuous measurements, unlike chemical tests which usually give a single snapshot. A plasmonic implant could one day help diabetics to monitor and control their blood glucose levels in real time.

Plasmons should also be useful for increasing the efficiency of certain kinds of flat-screen displays. In June 2009, Ki Youl Yang and his colleagues at the Korea Advanced Institute of Science and Technology in Daejeon showed how silver nanoparticles deposited onto the organic light-emitting diodes used in some displays increase the amount of light they emit.

More impressive yet, plasmonic devices might also help to tackle cancer, if tests in mice are anything to go by. Plasmonic nanoparticles laced with antibodies can be made to latch onto tumours. When blasted with a focused beam of infrared light precisely tuned to the plasmon frequency, the nanoparticles heat up, killing the attached cancer cells while leaving the surrounding healthy tissue unharmed (Accounts of Chemical Research, vol 41, p 1842).

Critical Casimir effect

Sticky situations

Illustration: A. Gambassi et al., Phys. Rev. E (2009)

Critical Casimir effect in classical binary liquid mixtures

A. Gambassi, A. Maciołek, C. Hertlein, U. Nellen, L. Helden, C. Bechinger, and S. Dietrich

Phys. Rev. E 80, 061143 (Published December 31, 2009)




When two conducting plates are brought in close proximity to one another, vacuum fluctuations in the electromagnetic field between them create a pressure. This effective force, known as the Casimir effect, has a thermodynamic analog: the “critical Casimir effect.” In this case, thermal fluctuations of a local order parameter (such as density) near a continuous phase transition can attract or repel nearby objects when they are in confinement.
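
For scale, the textbook quantum Casimir result for two ideal, perfectly conducting parallel plates in vacuum at separation d (quoted here only for comparison; it is not the sphere-plate geometry of the colloid experiment) is an attractive pressure

\[
\frac{F}{A} \;=\; -\frac{\pi^{2}\hbar c}{240\,d^{4}},
\]

whereas the critical Casimir force is set by the thermal scale k_BT and by the diverging correlation length of the near-critical liquid, which is what makes it tunable with temperature.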

In 2008, a team of scientists in Germany presented direct experimental evidence for the critical Casimir effect by measuring the femtonewton forces that develop between a colloidal sphere and a flat silica surface when both are immersed in a liquid near a critical point [1]. Now, writing in Physical Review E, Andrea Gambassi, now at SISSA in Trieste, Italy, and collaborators at the Max Planck Institute for Metals Research, the University of Stuttgart, and the Polish Academy of Sciences, follow up on this seminal experiment and present a comprehensive examination of their experimental results and theory for the critical Casimir effect.

Success in fabricating MEMS and NEMS (micro- and nanoelectromechanical systems) made it possible to explore facets of the quantum Casimir effect that had for many years only been theoretical curiosities. With the availability of tools to track and measure the minute forces between particles in suspension, scientists are able to do the same with the critical Casimir effect. In fact, it may be possible to tune this thermodynamically driven force in small-scale devices so it offsets the attractive (and potentially damaging) force associated with the quantum Casimir effect. Given its detail, Gambassi et al.’s paper may well become standard reading in this emerging field. – Jessica Thomas

[1] C. Hertlein et al., Nature 451, 172 (2008).

Saturday, January 23, 2010

Black holes formed of colliding particles

Surprise! So interesting!

Colliding Particles Can Make Black Holes

By Adrian Cho
ScienceNOW Daily News
22 January 2010

You've heard the controversy. Particle physicists predict the world's new highest-energy atom smasher, the Large Hadron Collider (LHC) near Geneva, Switzerland, might create tiny black holes, which they say would be a fantastic discovery. Some doomsayers fear those black holes might gobble up Earth--physicists say that's impossible--and have petitioned the United Nations to stop the $5.5 billion LHC. Curiously, though, nobody had ever shown that the prevailing theory of gravity, Einstein's theory of general relativity, actually predicts that a black hole can be made this way. Now a computer model shows conclusively for the first time that a particle collision really can make a black hole.

"I would have been surprised if it had come out the other way," says Joseph Lykken, a physicist at the Fermi National Accelerator Laboratory in Batavia, Illinois. "But it is important to have the people who know how black holes form look at this in detail."

The key to forming a black hole is cramming enough mass or energy into a small enough volume, as happens when a massive star collapses. According to Einstein's theory of general relativity, mass and energy warp space and time, or spacetime, to create the effect we perceive as gravity. If a large enough mass or energy is crammed into a small enough space, that warping becomes so severe that nothing, not even light, can escape. The object thus becomes a black hole. And two particles can make a minuscule black hole in just this way if they collide with an energy above a fundamental limit called the Planck energy.
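
The rough criterion behind that statement (a heuristic estimate, not the simulation's result): a collision of centre-of-mass energy E concentrates that energy within roughly its quantum wavelength ħc/E, and a black hole forms once this size falls inside the corresponding Schwarzschild radius ~ GE/c⁴, which happens when

\[
\frac{\hbar c}{E} \;\lesssim\; \frac{G E}{c^{4}}
\;\;\Longrightarrow\;\;
E \;\gtrsim\; E_{P} = \sqrt{\frac{\hbar c^{5}}{G}} \approx 1.2\times10^{19}\ \mathrm{GeV} \approx 2\ \mathrm{GJ}.
\]

For comparison, the LHC's design collision energy is 1.4×10^4 GeV, which is why black-hole production there would require extra dimensions that lower the effective Planck scale, as discussed below.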

Or so physicists have assumed. Researchers have based that prediction on the so-called hoop conjecture, a rule of thumb that indicates how much an object of a given mass has to be compressed to make a black hole, says Matthew Choptuik of the University of British Columbia in Vancouver, Canada. A calculation from the 1970s also suggested a particle collision could make a black hole, Choptuik notes, but it modeled the particles themselves as black holes and thus may have been skewed to produce the desired result.

Now Choptuik and Frans Pretorius of Princeton University have simulated such collisions, including all the extremely complex mathematical details from general relativity. For simplicity and to make the simulations generic, they modeled the two particles as hypothetical objects known as boson stars, which are similar to models that describe stars as spheres of fluid. Using hundreds of computers, Choptuik and Pretorius calculated the gravitational interactions between the colliding particles and found that a black hole does form if the two particles collide with a total energy of about one-third of the Planck energy, slightly lower than the energy predicted by the hoop conjecture, as they report in a paper in press at Physical Review Letters.

Does that mean the LHC will make black holes? Not necessarily, Choptuik says. The Planck energy is a quintillion times higher than the LHC's maximum. So the only way the LHC might make black holes is if, instead of being three dimensional, space actually has more dimensions that are curled into little loops too small to be detected except in a high-energy particle collision. Predicted by certain theories, those extra dimensions might effectively lower the Planck energy by a huge factor. "I would be extremely surprised if there were a positive detection of black-hole formation at the accelerator," Choptuik says. Physicists say that such black holes would harmlessly decay into ordinary particles.

"It's a real tribute to their skill that they were able to do this through a computer simulation," says Steve Giddings, a gravitational theorist at the University of California, Santa Barbara. Such simulations could be important to study particle collisions and black hole formation in greater detail, he says. Indeed, they may be the only way to study the phenomenon if space does not have extra dimensions and the Planck energy remains hopelessly out of reach.

Wednesday, January 20, 2010

Influential physicists

If you ask a bunch of people who the most influential physicist has been since, let's say, the beginning of 1900, you would get the usual answers: Einstein, Feynman, Bohr, Heisenberg, Dirac, etc... all the big names. However, consider this: there is only ONE person who has won the Nobel Prize for Physics twice; this person is a co-inventor of the most important device that is now the foundation of our modern society and that we use every day; and this person is not on that list above.
Although I agree that Bardeen is a great and influential physicist, I don't like using the number of Nobel Prizes as the measure. It is reasonable to say that many other physicists may deserve even more than two Nobel Prizes, but that does not count for much. Surely, these are great contributions to humanity. Nonetheless, I think it is rare to see physicists who reach the height of Einstein, who has refreshed not only our daily life but also how we perceive the world. Not only technically, but also spiritually.

But anyway, it is pointless to make such comparisons, because everyone has his own taste and judgment. My hero is Einstein, for sure.