Monday, January 25, 2010

Climate chief admits error over Himalayan glaciers

The head of the Intergovernmental Panel on Climate Change (IPCC) has been forced to apologise for including in its 2007 report the claim that there was a "very high" chance of glaciers disappearing from the Himalayas by 2035.

Rajendra Pachauri, the chairman of the IPCC, conceded yesterday that "the clear and well-established standards of evidence required by the IPCC procedures were not applied properly" when the claim was included in the 900-page assessment of the impacts of climate change.

The paragraph at issue reads: "Glaciers in the Himalaya are receding faster than in any other part of the world and, if the present rate continues, the likelihood of them disappearing by the year 2035 and perhaps sooner is very high."

Single source

The report's only cited source was a 2005 report by the environment group WWF, which in turn cited a 1999 article in New Scientist.

The New Scientist article quoted senior Indian glaciologist Syed Hasnain, the then vice-chancellor of Jawaharlal Nehru University in New Delhi, who was writing a report on the Himalayas for the International Commission for Snow and Ice. It said, on the basis of an interview with Hasnain, that his report "indicates that all the glaciers in the central and eastern Himalayas could disappear by 2035". The claim did not, however, appear in the commission's report, which was only made available late last year.

This week a group of geographers, headed by Graham Cogley of Trent University at Peterborough in Ontario, Canada, have written to the journal Science pointing out that the claim "requires a 25-fold greater loss rate from 1999 to 2035 than that estimated for 1960 to 1999. It conflicts with knowledge of glacier-climate relationships, and is wrong."

The geographers add that the claim has "captured the global imagination and has been repeated in good faith often, including recently by the IPCC's chairman". The IPCC's errors "could have been avoided had the norms of scientific publication, including peer review and concentration upon peer-reviewed work, been respected", they say.

Several of those involved in the IPCC review process did try to question the 2035 date before it was published by the IPCC. Among them was Georg Kaser, a glaciologist from the University of Innsbruck, Austria, and a lead author of another section of the IPCC report. "I scanned the almost final draft at the end of 2006 and came across the 2035 reference." Kaser queried the reference but believes it was too late in the day for it to be reassessed.

Publicly available IPCC archives of the review process show that during the formal review, the Japanese government also questioned the 2035 claim. It commented: "This seems to be a very important statement. What is the confidence level/certainty?" Soon afterwards, a reference to the WWF report was added to the final draft. But the statement otherwise went unchanged.

Grey literature

One of the IPCC authors, Stephen Schneider of Stanford University, California, this week defended the use of so-called "grey" literature in IPCC reports. He told New Scientist that it was not possible to include only peer-reviewed research because, particularly in the chapters discussing the regional impacts of climate change, "most of the literature is not up to that gold standard".

The Himalaya claim appeared in the regional chapter on Asia. "There are only a few authors in each region, so it narrows the base of science," Schneider says.

This week Hasnain has claimed, for the first time, that he was misquoted by New Scientist in 1999.

What is gravity?

Newton said gravity is a force acting at a distance; this picture was overturned by Einstein, who held that gravity is no more than a manifestation of warped space-time. Now the story goes on. Below is a description of a new proposal put forward by a string theorist, who suggests that gravity might be a thermodynamic-type force rooted in probability theory.

WHAT exactly is gravity? Everybody experiences it, but pinning down why the universe has gravity in the first place has proved difficult.

Although gravity has been successfully described with laws devised by Isaac Newton and later Albert Einstein, we still don't know how the fundamental properties of the universe combine to create the phenomenon.

Now one theoretical physicist is proposing a radical new way to look at gravity. Erik Verlinde of the University of Amsterdam, the Netherlands, a prominent and internationally respected string theorist, argues that gravitational attraction could be the result of the way information about material objects is organised in space. If true, it could provide the fundamental explanation we have been seeking for decades.

Verlinde posted his paper to the pre-print physics archive earlier this month, and since then many physicists have greeted the proposal as promising (arxiv.org/abs/1001.0785). Nobel laureate and theoretical physicist Gerard 't Hooft of Utrecht University in the Netherlands stresses the ideas need development, but is impressed by Verlinde's approach. "[Unlike] many string theorists Erik is stressing real physical concepts like mass and force, not just fancy abstract mathematics," he says. "That's encouraging from my perspective as a physicist."

Newton first showed how gravity works on large scales by treating it as a force between objects (see "Apple for your eyes"). Einstein refined Newton's ideas with his theory of general relativity. He showed that gravity was better described by the way an object warps the fabric of the universe. We are all pulled towards the Earth because the planet's mass is curving the surrounding space-time.

Yet that is not the end of the story. Though Newton and Einstein provided profound insights, their laws are only mathematical descriptions. "They explain how gravity works, but not where it comes from," says Verlinde. Theoretical physics has had a tough time connecting gravity with the other known fundamental forces in the universe. The standard model, which has long been our best framework for describing the subatomic world, includes electromagnetism and the strong and weak nuclear forces - but not gravity.

Many physicists doubt it ever will. Gravity may turn out to be delivered via the action of hypothetical particles called gravitons, but so far there is no proof of their existence. Gravity's awkwardness has been one of the main reasons why theories like string theory and loop quantum gravity have been proposed in recent decades.

Verlinde's work offers an alternative way of looking at the problem. "I am convinced now, gravity is a phenomenon emerging from the fundamental properties of space and time," he says.

To understand what Verlinde is proposing, consider the concept of fluidity in water. Individual molecules have no fluidity, but collectively they do. Similarly, the force of gravity is not something ingrained in matter itself. It is an extra physical effect, emerging from the interplay of mass, time and space, says Verlinde. His idea of gravity as an "entropic force" is based on these first principles of thermodynamics - but works within an exotic description of space-time called holography.

Like the fluidity of water, gravity is not ingrained in matter itself. It is an extra physical effect

Holography in theoretical physics follows broadly the same principles as the holograms on a banknote, which are three-dimensional images embedded in a two-dimensional surface. The concept in physics was developed in the 1970s by Stephen Hawking at the University of Cambridge and Jacob Bekenstein at the Hebrew University of Jerusalem in Israel to describe the properties of black holes. Their work led to the insight that a hypothetical sphere could store all the necessary "bits" of information about the mass within. In the 1990s, 't Hooft and Leonard Susskind at Stanford University in California proposed that this framework might apply to the whole universe. Their "holographic principle" has proved useful in many fundamental theories.

Verlinde uses the holographic principle to consider what is happening to a small mass at a certain distance from a bigger mass, say a star or a planet. Moving the small mass a little, he shows, means changing the information content, or entropy, of a hypothetical holographic surface between both masses. This change of information is linked to a change in the energy of the system.

Then, using statistics to consider all possible movements of the small mass and the energy changes involved, Verlinde finds movements toward the bigger mass are thermodynamically more likely than others. This effect can be seen as a net force pulling both masses together. Physicists call this an entropic force, as it originates in the most likely changes in information content.

This still doesn't point directly to gravity. But plugging in the basic expressions for information content of the holographic surface, its energy content and Einstein's relation of mass to energy leads directly to Newton's law of gravity. A relativistic version is only a few steps further, but again straightforward to derive. And it seems to apply to both apples and planets. "Finding Newton's laws all over again could have been a lucky coincidence," says Verlinde. "A relativistic generalisation shows this is far deeper than a few equations turning out just right."
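For readers who want the skeleton of the argument, the chain of steps in the paper (arxiv.org/abs/1001.0785) can be compressed into a few lines. This is a rough sketch of Verlinde's reasoning, not a substitute for the derivation itself:

```latex
% Entropic sketch of Newton's law, following arXiv:1001.0785
\begin{align*}
\Delta S &= 2\pi k_B \frac{mc}{\hbar}\,\Delta x
  && \text{entropy change as mass } m \text{ approaches the screen} \\
F\,\Delta x &= T\,\Delta S
  && \text{definition of an entropic force} \\
N &= \frac{Ac^3}{G\hbar}, \qquad A = 4\pi r^2
  && \text{information bits on a spherical holographic screen} \\
\tfrac{1}{2} N k_B T &= Mc^2
  && \text{equipartition fixes the screen temperature} \\
\Rightarrow\quad F &= \frac{2\pi k_B T\,mc}{\hbar} = \frac{GMm}{r^2}
  && \text{Newton's law of gravity}
\end{align*}
```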

Verlinde's paper has prompted praise from some physicists. Robbert Dijkgraaf, a prominent mathematical physicist also at the University of Amsterdam, says he admires the elegance of Verlinde's concepts. "It is amazing no one has come up with this earlier, it looks just so simple and yet convincing," he says.

The jury is still out for many others. Some believe that Verlinde is using circular reasoning in his equations, by "starting out" with gravity. Others have expressed concern about the almost trivial mathematics involved, leaving most of the theory based on very general concepts of space, time and information.

Stanley Deser of Brandeis University in Waltham, Massachusetts, whose work has expanded the scope of general relativity, says Verlinde's work appears to be a promising avenue but adds that it is "a bombshell that will take a lot of digesting, challenging all our dogmas from Newton and Hooke to Einstein."

Verlinde stresses his paper is only the first on the subject. "It is not even a theory yet, but a proposal for a new paradigm or framework," he says. "All the hard work comes now."

Editorial: A gravity story to take us out of Newton's orchard

Apple for your eyes

"We went into the garden and drank thea, under some apple trees... he told me he was just in the same situation, when the notion of gravitation came into his mind. 'Why should that apple always descend perpendicularly to the ground,' thought he to himself."

So wrote archaeologist and biographer William Stukeley in 1752, recounting the famous story as an elderly Isaac Newton had told it to him. Newton went on to show that, on a large scale, two masses attract each other in proportion to the product of their masses, with a force that falls off with the square of the distance between them.
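In modern notation, the law Newton arrived at is the familiar inverse-square formula:

```latex
F = G\,\frac{m_1 m_2}{r^2}, \qquad G \approx 6.674 \times 10^{-11}\ \mathrm{N\,m^2\,kg^{-2}}
```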

Now the original manuscript featuring the story, entitled Memoirs of Sir Isaac Newton's Life, is available for all to read. As part of its 350th anniversary celebration, London's Royal Society has published a digital version of the document, which is tucked away in their archives. See royalsociety.org/turning-the-pages.

Plasmons: a review

Here is a review of plasmons; the references in it give places to dig deeper.

IT'S a laser, but not as we know it. For a start, you need a microscope to see it. Gleaming eerily green, it is a single spherical particle just a few tens of nanometres across.

Tiny it might be, but its creators have big plans for it. With further advances, it could help to fulfil a long-held dream: to build a super-fast computer that computes with light.

Dubbed a "spaser", this minuscule lasing object is the latest by-product of a buzzing field known as nanoplasmonics. Just as microelectronics exploits the behaviour of electrons in metals and semiconductors on micrometre scales, so nanoplasmonics is concerned with the nanoscale comings and goings of entities known as plasmons that lurk on and below the surfaces of metals.

To envisage what a plasmon is, imagine a metal as a great sea of freely moving electrons. When light of the right frequency strikes the surface of the metal, it can set up a wavelike oscillation in this electron sea, just as the wind whips up waves on the ocean. These collective electron waves - plasmons - act to all intents and purposes as light waves trapped in the metal's surface. Their wavelengths depend on the metal, but are generally measured in nanometres. Their frequencies span the terahertz range - equivalent to the frequency range of light from the ultraviolet right through the visible to the infrared.
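For a rough feel for the numbers, a standard Drude-model estimate (textbook material, not taken from the article) puts the natural oscillation frequency of the electron sea at

```latex
\omega_p = \sqrt{\frac{n e^2}{\varepsilon_0 m_e}}
```

where n is the free-electron density. For noble metals (n of order 10^28 to 10^29 per cubic metre) this lands in the ultraviolet, and plasmons bound to an ideal flat metal-vacuum interface resonate somewhat lower, near ω_p/√2.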

Gleaming eerily green, this laser is a single spherical particle just tens of nanometres across

In 2003, their studies of plasmons led theorists Mark Stockman at Georgia State University in Atlanta and David Bergman at Tel Aviv University in Israel to an unusual thought. Plasmons behaved rather like light, so could they be amplified like light, too? What the duo had in mind was a laser-like device that multiplied single plasmons to turn them into powerful arrays of plasmons all oscillating in the same way (see "From laser to spaser").

The mathematics of it seemed to work. By analogy with the acronym that produces the word laser, they dubbed their brainchild "surface plasmon amplification by the stimulated emission of radiation" - spaser - and published a paper about it (Physical Review Letters, vol 90, p 027402).

The spaser might have remained just a theoretical curiosity. Around the same time, however, physicists were waking up to the potential of plasmonics for everything from perfect lenses to sensitive biosensors (see "What have plasmons ever done for us?"). The spaser idea was intriguing enough that Mikhail Noginov, an electrical engineer at Norfolk State University in Virginia, and some of his colleagues set out to build one.

It was not an easy task. Light is long-lived, so it is relatively easy to bounce it around in a mirrored chamber and amplify it, as happens inside a laser. Plasmons, by contrast, are transient entities: they typically live for mere attoseconds, and cannot travel more than a few plasmon wavelengths in a metal before their energy is absorbed by the ocean of non-oscillating electrons around them. It was not at all clear how we might get enough of a handle on plasmons to amplify them at all.

In August 2009, Noginov and his colleagues showed how. Their ingenious solution takes the form of a spherical particle just 44 nanometres across. It consists of a gold core contained within a shell of silica, speckled with dye molecules that, excited initially by an external laser, produce green light. Some of that light leaks out to give the nanoparticles their characteristic green glow; the rest stimulates the generation of plasmons at the surface of the gold core.

In the normal way of things, these plasmons are absorbed by the metal almost as soon as they are produced. But their tickling influence also stimulates the dye molecules in the silica shell to emit more light, which in turn generates more plasmons, which excites more light and so on. With a sufficient supply of dye, enough plasmons can exist at the same time that they start to reinforce each other. The signature of a laser-like multiplication of plasmons within the device is a dramatic increase in green laser light emitted from the nanoparticle after only a small increase in the energy supplied from the external laser - the signature Noginov and his colleagues reported last year (Nature, vol 460, p 1110).

And they were not the only ones. In October 2009, Xiang Zhang, a mechanical engineer at the University of California, Berkeley, and his colleagues unveiled a similarly tiny device that exploits plasmons to produce laser light (Nature, vol 461, p 629).

These innovations generated headlines at the time as an entirely new type of lasing device more compact than any yet seen and which, in theory, required a lot less power than a conventional device. That's an exciting development in its own right, but just one in a list of promising advances in the bustling business of laser technology.

Crucially, though, the development of spasers has sparked the hope that one of the great scientific disappointments of the past decades - the unfulfilled promise of optical computing - may yet be turned into triumph.

On the face of it, optical computers, which use light rather than currents of electrons to process information, are a great idea. Electrons are easy to manipulate and process, but they tend to get bogged down as they pass through metals and semiconductors, colliding with atoms and bouncing off them in ways that limit the speed and fidelity of information transmission. Photons, by contrast, can withstand interference, and are above all fast, in theory zipping around a chip at close to the cosmic speed limit.

In the 1990s, various groups claimed to be getting close to making the dream of optical computing a reality. That included a concerted effort at the world-famous Bell Laboratories in Murray Hill, New Jersey, where the building block of microelectronic circuits, the transistor, was invented in 1947. Researchers there and elsewhere hit a snag, however. The very fleet-footedness that made photons perfect for high-speed communications made them almost impossible to pin down and use for sensible processing of data.

"Optical computing has a chequered history, particularly the boondoggle at Bell Labs," says Harry Atwater, a physicist at the California Institute of Technology in Pasadena. All the efforts foundered when it came to producing anything like a transistor: a tiny, low-power device that could be used to toggle light signals on and off reliably.

In theory, a controllable laser would do this trick, if not for one problem - lasers devour power. Even worse, they are huge, relatively speaking: they work by bouncing photons around a mirrored cavity, so the very smallest they can be is about half the wavelength of the light they produce. For green light, with a wavelength of 530 nanometres, that means little change from 300 nanometres. Electrical transistors, meanwhile, are approaching one-tenth that size.
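Working through the arithmetic in that sentence:

```latex
L_{\min} \approx \frac{\lambda}{2} = \frac{530\ \mathrm{nm}}{2} = 265\ \mathrm{nm}
\qquad \text{vs.} \qquad
L_{\text{transistor}} \approx \frac{300\ \mathrm{nm}}{10} = 30\ \mathrm{nm}
```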

You see where this is leading. Spasers are a tiny source of light that can be switched on and off at will. At a few tens of nanometres in size, they are just slightly bigger than the smallest electrical transistors. The spaser is to nanoplasmonics what the transistor is to microelectronics, says Stockman: it is the building block that should make optical information-processing possible.

The spaser is to plasmonics what the transistor is to microelectronics

Inevitably, there will be many hurdles to overcome. For a start, Noginov's prototype spaser is switched on and off using another laser, rather than being switched electrically. That is cumbersome and means it cannot capitalise on the technology's low-power potential. It is also unclear, when it comes to connecting many spasers together to make a logic gate, how input and output signals can be cleanly separated with the resonant spherical spasers that have so far been constructed.

Mutual benefit

The most intriguing aspect of spasers, however, is the one that could make or break them as the basis of a future computing technology: they are made of metal. In one sense, that is a bad thing, because making a plasmonic chip would require a wholly different infrastructure to that used to make silicon chips - an industry into which billions in research money has been poured.

Silicon's predominance has not necessarily been a bar to other technologies establishing themselves: the radio signals used for cellphone communication, for example, are of a frequency too high for silicon chips to cope with, so an entirely separate manufacturing process grew up to make the gallium arsenide chips that can. To justify the initial investment costs, another upstart chip-architecture needs a similar "killer application": something it can do that silicon cannot.

Stockman reckons the extra processing speed promised by plasmonic devices will generate such applications in areas like cryptography. "Having faster processors than everyone else will be a question of national security," he says. And he points to another reason why the spooks might be interested. One problem with semiconductors is that their delicate conduction capabilities are vulnerable to ionising radiation. Such rays can send avalanches of electrons streaming through delicate electronic components. At best, this corrupts data and halts calculations. At worst, it fries transistors, permanently disabling them.

This is where the metallic nature of a plasmonic chip would come into its own. The extra electrons that ionising radiation can produce are mere drops in the ocean of free electrons from which plasmons are generated in a metal. A plasmonic device would be able to process and store information in the harshest radioactive environments: in orbiting satellites, in nuclear reactors, during nuclear conflict.

Perhaps the most likely outcome, though, is that rather than the one superseding the other, plasmonics and electronics come to coexist to mutual advantage in a single chip. As the transistors in chips become smaller, the wires that connect them over distances of just a few nanometres become a significant bottleneck for data. That is one reason why chips are currently spinning their wheels at speeds of about 3 gigahertz. "Wires limit the speed at which electrons can deliver information," says Atwater. "So an obvious solution is to replace them with photonic connections."

The problem with such connections to date has been converting electronic signals into photonic ones and back again with a speed and efficiency that makes it worthwhile. Plasmons, which owe their existence to the easy exchange of energy between light and electrons, could be just the things for the job, making a hybrid electrical-optical chip a genuine possibility.

As well as that, says Atwater, we should work out how to manipulate plasmons using devices that can be made in the same way, and on the same fabrication lines, as ordinary silicon chips. Early last year, he and his colleagues at Caltech revealed an electrically controlled device dubbed the plasmostor that can vary the intensity of plasmons as they pass through it, and which has an architecture very similar to that of conventional transistors (Nano Letters, vol 9, p 897). Just this month, a Dutch group has announced that they have produced an electrically powered source of plasmons fully compatible with existing silicon chip fabrication technology (Nature Materials, vol 9, p 21).

It's very early days, so such innovations have yet to match the performance of purely electronic components. The plasmostor, for instance, flips between its on and off states more slowly than a conventional transistor, and the signals have an annoying tendency to leak out of the device and get lost. There is still a long way to go to a computer that runs on anything other than electrons. But it is a start, says Atwater. "You're challenging a hugely successful technology. It's audacious to think that you can just replace it."

But if a tiny round green light isn't a signal to go ahead and give it a try, what is?

From laser to spaser

This year marks the golden jubilee of a ruby trailblazer: it was on 16 May 1960 that Theodore Maiman of Hughes Research Laboratories in Malibu, California, coaxed a synthetic ruby to produce the first ever laser light. The first laser to produce light from gas - a mixture of helium and neon - followed later that same year.

Half a century later, and there's hardly an area of human endeavour that doesn't depend on lasers in some way or another: CD and DVD players, metal cutting and welding, barcode scanners and corrective eye surgery to name but a few.

Early lasers were essentially made up of a mirrored box containing a "gain medium" such as a crystal or gas. Zapped with light or an electric current, electrons in this medium absorb energy, releasing it again as photons. These photons bounce around the box and stimulate further electrons to emit more photons. This self-reinforcing increase in light energy is "light amplification by the stimulated emission of radiation" - laser action, for short.
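That self-reinforcing increase can be captured in a minimal rate-equation sketch (a generic textbook form, not from the article): with n photons in the cavity, gain g from the excited medium, and loss rate κ through the mirrors,

```latex
\frac{dn}{dt} = (g - \kappa)\,n \;\;\Longrightarrow\;\; n(t) = n(0)\,e^{(g-\kappa)t}
```

so once pumping pushes the gain above the loss, the photon number grows exponentially until the gain saturates: laser action.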

Spasers use the same principle, except rather than amplifying light directly, they amplify surface plasmons - the wavelike movements of free electrons on and near the surfaces of metals - using that in turn to emit light.

What have plasmons ever done for us?

Plasmons might sound esoteric, but it is not just with spasers (see main story) that they are seeing practical application.

Take molecular sensing. The amount and colour of light absorbed by a plasmonic nanoparticle is extremely sensitive to the surrounding molecular environment. This property has been exploited to build sensing devices that detect levels of anything from the protein casein, an indicator of the quality of milk products, to glucose in the blood.

What's significant about these plasmonic sensors is that they can make continuous measurements, unlike chemical tests which usually give a single snapshot. A plasmonic implant could one day help diabetics to monitor and control their blood glucose levels in real time.

Plasmons should also be useful for increasing the efficiency of certain kinds of flat-screen displays. In June 2009, Ki Youl Yang and his colleagues at the Korea Advanced Institute of Science and Technology in Daejeon showed how silver nanoparticles deposited onto the organic light-emitting diodes used in some displays increase the amount of light they emit.

More impressive yet, plasmonic devices might also help to tackle cancer, if tests in mice are anything to go by. Plasmonic nanoparticles laced with antibodies can be made to latch onto tumours. When blasted with a focused beam of infrared light precisely tuned to the plasmon frequency, the nanoparticles heat up, killing the attached cancer cells while leaving the surrounding healthy tissue unharmed (Accounts of Chemical Research, vol 41, p 1842).

Critical Casimir effect

Sticky situations

Illustration: A. Gambassi et al., Phys. Rev. E (2009)

Critical Casimir effect in classical binary liquid mixtures

A. Gambassi, A. Maciołek, C. Hertlein, U. Nellen, L. Helden, C. Bechinger, and S. Dietrich

Phys. Rev. E 80, 061143 (Published December 31, 2009)


Subjects: Statistical Mechanics, Soft Matter


When two conducting plates are brought in close proximity to one another, vacuum fluctuations in the electromagnetic field between them create a pressure. This effective force, known as the Casimir effect, has a thermodynamic analog: the “critical Casimir effect.” In this case, thermal fluctuations of a local order parameter (such as density) near a continuous phase transition can attract or repel nearby objects when they are in confinement.
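For scale, the textbook result for the original quantum Casimir effect, quoted here for context rather than taken from the synopsis, gives the attractive pressure between two ideal parallel plates separated by a distance a as

```latex
P = -\,\frac{\pi^2 \hbar c}{240\,a^4}
```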

In 2008, a team of scientists in Germany presented direct experimental evidence for the critical Casimir effect by measuring the femtonewton forces that develop between a colloidal sphere and a flat silica surface when both are immersed in a liquid near a critical point [1]. Now, writing in Physical Review E, Andrea Gambassi, now at SISSA in Trieste, Italy, and collaborators at the Max Planck Institute for Metals Research, the University of Stuttgart, and the Polish Academy of Sciences, follow up on this seminal experiment and present a comprehensive examination of their experimental results and theory for the critical Casimir effect.

Success in fabricating MEMS and NEMS (micro- and nanoelectromechanical systems) made it possible to explore facets of the quantum Casimir effect that had for many years only been theoretical curiosities. With the availability of tools to track and measure the minute forces between particles in suspension, scientists are able to do the same with the critical Casimir effect. In fact, it may be possible to tune this thermodynamically driven force in small-scale devices so it offsets the attractive (and potentially damaging) force associated with the quantum Casimir effect. Given its detail, Gambassi et al.’s paper may well become standard reading in this emerging field. – Jessica Thomas

[1] C. Hertlein et al., Nature 451, 172 (2008).

Saturday, January 23, 2010

Black holes formed of colliding particles

Surprise! So interesting!

Colliding Particles Can Make Black Holes

By Adrian Cho
ScienceNOW Daily News
22 January 2010

You've heard the controversy. Particle physicists predict the world's new highest-energy atom smasher, the Large Hadron Collider (LHC) near Geneva, Switzerland, might create tiny black holes, which they say would be a fantastic discovery. Some doomsayers fear those black holes might gobble up Earth--physicists say that's impossible--and have petitioned the United Nations to stop the $5.5 billion LHC. Curiously, though, nobody had ever shown that the prevailing theory of gravity, Einstein's theory of general relativity, actually predicts that a black hole can be made this way. Now a computer model shows conclusively for the first time that a particle collision really can make a black hole.

"I would have been surprised if it had come out the other way," says Joseph Lykken, a physicist at the Fermi National Accelerator Laboratory in Batavia, Illinois. "But it is important to have the people who know how black holes form look at this in detail."

The key to forming a black hole is cramming enough mass or energy into a small enough volume, as happens when a massive star collapses. According to Einstein's theory of general relativity, mass and energy warp space and time, or spacetime, to create the effect we perceive as gravity. If a large enough mass or energy is crammed into a small enough space, that warping becomes so severe that nothing, not even light, can escape. The object thus becomes a black hole. And two particles can make a miniscule black hole in just this way if they collide with an energy above a fundamental limit called the Planck energy.

Or so physicists have assumed. Researchers have based that prediction on the so-called hoop conjecture, a rule of thumb that indicates how much an object of a given mass has to be compressed to make a black hole, says Matthew Choptuik of the University of British Columbia in Vancouver, Canada. A calculation from the 1970s also suggested a particle collision could make a black hole, Choptuik notes, but it modeled the particles themselves as black holes and thus may have been skewed to produce the desired result.
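Stated compactly, the hoop conjecture says that a mass-energy M forms a black hole if it can be confined, in every direction, within a hoop whose circumference is set by the Schwarzschild radius:

```latex
R_s = \frac{2GM}{c^2}, \qquad C = 2\pi R_s
```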

Now Choptuik and Frans Pretorius of Princeton University have simulated such collisions, including all the extremely complex mathematical details from general relativity. For simplicity and to make the simulations generic, they modeled the two particles as hypothetical objects known as boson stars, which are similar to models that describe stars as spheres of fluid. Using hundreds of computers, Choptuik and Pretorius calculated the gravitational interactions between the colliding particles and found that a black hole does form if the two particles collide with a total energy of about one-third of the Planck energy, slightly lower than the energy predicted by hoop conjecture, as they report in a paper in press at Physical Review Letters.

Does that mean the LHC will make black holes? Not necessarily, Choptuik says. The Planck energy is a quintillion times higher than the LHC's maximum. So the only way the LHC might make black holes is if, instead of being three dimensional, space actually has more dimensions that are curled into little loops too small to be detected except in a high-energy particle collision. Predicted by certain theories, those extra dimensions might effectively lower the Planck energy by a huge factor. "I would be extremely surprised if there were a positive detection of black-hole formation at the accelerator," Choptuik says. Physicists say that such a black hole would harmlessly decay into ordinary particles.
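The Planck energy invoked here is the natural scale built from the constants of quantum mechanics, relativity, and gravity:

```latex
E_{\mathrm{P}} = \sqrt{\frac{\hbar c^5}{G}} \approx 2 \times 10^{9}\ \mathrm{J} \approx 1.2 \times 10^{19}\ \mathrm{GeV}
```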

"It's a real tribute to their skill that they were able to do this through a computer simulation," says Steve Giddings, a gravitational theorist at the University of California, Santa Barbara. Such simulations could be important to study particle collisions and black hole formation in greater detail, he says. Indeed, they may be the only way to study the phenomenon if space does not have extra dimensions and the Planck energy remains hopelessly out of reach.

Wednesday, January 20, 2010

Influential physicists

If you ask a bunch of people who the most influential physicist has been since, let's say, the beginning of 1900, you would get the usual answers: Einstein, Feynman, Bohr, Heisenberg, Dirac, etc... all the big names. However, consider this: there is only ONE person who has won the Nobel Prize for Physics twice; this person is a co-inventor of the most important device at the foundation of our modern society, a device we use every day; and this person is not on the list above.
Although I agree Bardeen is a great and influential physicist, I don't like letting the number of Nobel Prizes do the talking. It is reasonable to say that many other physicists may deserve even more than two Nobel Prizes, but that does not count for much. His contributions to humanity are surely great. Nonetheless, I think it is rare to see a physicist reach the height of Einstein, who refreshed not only our daily life but also how we perceive the world: not only technically, but also spiritually.

But anyway, it is inappropriate to make such comparisons, because everyone has his own tastes and judgments. My hero is Einstein, for sure.

Tuesday, January 19, 2010

Where does dark matter hide?

I'm much interested in so-called dark matter, though I know only a little about it. It does not interact with light. Neutrinos do not interact with light either, yet they can still be detected through, e.g., the weak interaction; dark matter does not even lend itself to the weak interaction. Nevertheless, it is supposed to take part in gravitation, which is exactly why it is postulated. I'm not even sure whether this matter really exists, or whether we simply need a new theory: dark matter is required by present physical theory, but it might be dispensable in a new one. In any case, it is theory, our language, that decides what can be observed and what we can speak of.

That said, a recent work suggests that large galaxies hold on to a larger share of their normal (baryonic) matter than small ones do, seemingly a result of their stronger gravitational attraction.

Astrophysicists know that 83% of the matter in the universe is dark matter—invisible stuff as yet undetected. The other 17% is detectable "baryonic matter," the atoms and ions that make up stars, planets, dust, and gas. To astronomers' surprise, the ratio of baryonic matter to dark matter seems to vary from galaxy to galaxy like the ratio of chocolate chips to dough in different batches of home-baked cookies. Now, a team led by Stacy McGaugh at the University of Maryland, College Park, has determined that the proportion varies by scale: The largest galaxies have the highest percentage of baryonic matter, although not quite 17%; whereas the smallest galaxies have less than 1%.

McGaugh and colleagues compiled the ratios for more than 100 galaxies ranging from supermassive ones to dwarfs. Researchers infer the amount of dark matter in a galaxy from the motion of its stars. They estimate its baryonic mass from the amount of light the galaxy emits, which can be converted to the total mass of its stars, and a measure of atomic hydrogen in the galaxy, which provides an estimate of the interstellar gas.

"What we find is that there is a very systematic variation in the ratio with scale," says McGaugh, who presented the findings at the American Astronomical Society meeting in Washington, D.C., last week.* "When you go to the very large galaxies, the baryonic matter can be as much as 14%. As you go down in size, you see that galaxies fall short of the cosmic fraction [17:83] by an ever-increasing amount." In galaxies the size of the Milky Way, "all the stars and gas add up to only a third of the baryonic matter you would expect," which is about 5%. And in the smallest dwarfs, baryonic matter is a hundredth of what's expected—as minuscule as 0.2%. "These are very interesting results" that quantify the "missing baryonic matter problem," says Joel Bregman, an astronomer at the University of Michigan, Ann Arbor.

Where is all the missing baryonic matter lurking? One hypothesis is that its particles are interspersed within the galaxy's dark matter halo in the form of undetectable hot gas. Another is that supernova explosions have blown it into intergalactic space. This second idea would square with McGaugh's findings: Large galaxies, with stronger gravitational pulls, would be able to retain more of their baryonic matter, whereas smaller galaxies would let more escape. But so far, McGaugh says, that explanation is just one of several lines of speculation.

Monday, January 18, 2010

Steven Weinberg, a physicist as he is

Steven Weinberg

Steven Weinberg, Author
Photograph by Jeff Wilson

The Nobel Prize-winning physicist’s second collection of essays, Lake Views, covers a range of topics, from religion and Armageddon to the “Future of Science, and the Universe.” A professor in the physics and astronomy departments at the University of Texas at Austin, he is widely published in the scientific community yet writes for the general public with well-reasoned clarity and a good deal of grace and humor.

How did you begin writing on topics outside of physics? Even though most of the essays in Facing Up and Lake Views are not about theoretical physics, many of them did come out of my experiences as a scientist. I used to moonlight as a consultant for the Department of Defense and the Arms Control and Disarmament Agency, and that, along with my amateur interest in military history and technology, led me to write about missile defense, nuclear security, and glory-seeking in war. Later, in the late eighties and early nineties, I lobbied for the construction of the Superconducting Super Collider in Texas. That forced me to explain to myself and to others what we are trying to do in elementary particle physics. It also brought me into conflict with trendy philosophers and sociologists who took a skeptical view of the pretensions of science, so I wrote about that. Over the years, I’ve been led to write about all sorts of things, from the latest discoveries in cosmology to the ancient tension between science and religion.

Speaking of that friction, do scientists experience personal conflicts between fact and faith? There are some very good scientists who are quite religious, like Francis Collins, who led the U.S. Human Genome Project, and Charles Townes, who invented the maser. But polls indicate that religious belief is much less common among scientists than in the whole U.S. population. When you spend your life seeking and sometimes finding naturalistic explanations for facts about the world, you get out of the habit of relying on supernatural explanations.

In 1979 you won the Nobel Prize in physics. Can you explain in layman’s terms the work that led to the award? In one sentence, this work unified two of the four fundamental forces of nature. But I should say a little more than that. There are four kinds of force that act on elementary particles. One is familiar: gravity. Another is almost as familiar: the electromagnetic force. There are two less familiar kinds: the strong nuclear force, which keeps particles together inside atomic nuclei, and the weak nuclear force, which allows one kind of particle in a nucleus to change into another kind and is responsible for the nuclear reactions that heat the sun. In 1967 I proposed a theory of the weak nuclear force, independently suggested a year later by [Pakistani physicist] Abdus Salam. It turned out it was a theory that unified the weak and electromagnetic forces—that is, it described them as different aspects of what came to be called an electroweak force. Together with a theory of the strong nuclear forces that was developed a little later, the electroweak theory is now known as the Standard Model. Right now I’m working on gravity.

Give us a glimpse into your writing routine. I do all my research and writing at home. If you see me on the UT campus, it’s because I’m giving a class or meeting with colleagues or students. But even when I’m at my desk at home, I often just spin my wheels, so I need something to keep me sitting there. My desk looks over Lake Austin, and I have a television set that I keep on while I’m working. Between watching old movies and enjoying the view of the lake, I generally manage to stay at my desk until I think of something worth doing.

Harvard University Press, $25.95

Friday, January 15, 2010

Supersonic jet produced when a marble is dropped in a liquid

For imaginative and inquiring minds, even very familiar daily events may contain totally unexpected surprises, which give great pleasure. Everyone may think he knows perfectly well what happens when he throws a stone into a basin of water. But if you read this interesting piece of work, you will still be surprised and may exclaim, 'Oh, I did not anticipate this at all!' Work of this kind is precious in itself: it challenges common knowledge, refreshes the mind, and refuels one's passion and curiosity. Now read a review of it.

When scientists speak of a “jet” they are usually referring to a fast flowing column of material, typically air or water. These jets range from the mundane, like water rushing out of a hose, to the exotic, such as the relativistic plasma jets that beam out from quasars or monster black holes. A turbo-jet, for example, pushes an aircraft forward using the supersonic thrust of air that streams out of the back of the engine.

Writing in Physical Review Letters, Stephan Gekle and colleagues at the University of Twente, The Netherlands, in collaboration with the Universidad de Sevilla, Spain, have found a supersonic jet in a surprising place: the collapsing splash from an object falling into water [1]. Their general setup is easy to reproduce by dropping a marble into a deep bowl of water (a billiard ball into a full bucket works even better). This effort is rewarded with not one, but three jets (see Fig.1): First, one of upward streaming supersonic air, followed by an obvious upward jet of water, along with a less evident downward jet of water toward the marble. Although Gekle et al. perform a more controlled experiment—they pull a disk downward through the liquid surface at a controlled speed—the general features of what they find are the same. Moreover, the disk enables Gekle et al. to have good control of the experimental conditions.

In the kitchen version of the experiment, the marble creates a crown-shaped splash and crater as it falls into the liquid. The crater deepens to the point at which the walls start to contract. This is due to both the weight of the water outside and possibly surface tension, both of which create pressure gradients that force the collapse. Air inside this collapsing neck must escape upward or downward as the neck approaches pinch-off. It is in this escaping air that Gekle et al. found supersonic velocities—the first jet in this simple experiment (see Video 1).

The shape of the neck plays an interesting role. As the air escapes through the neck, right before the neck closes, it is accelerated to supersonic speeds as it is driven by high pressures from the collapsing cavity below to low pressures in the air above the water’s surface. Engineers have designed a similar process and shape into the convergent-divergent nozzle (or, De Laval nozzle) that is used as the exhaust port in many rocket engines. In our situation, however, the nozzle forms naturally and is quickly changing shape.
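The physics behind that nozzle shape can be summarized by the standard area-velocity relation of quasi-one-dimensional compressible flow (classic gas dynamics, included here as background):

```latex
\frac{dA}{A} = \left(M^2 - 1\right)\frac{dU}{U}
```

For Mach number M < 1 the flow speeds up where the channel narrows, but once M = 1 is reached at the throat it can keep accelerating only if the channel widens again, which is exactly the convergent-divergent geometry described above.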

At the moment of pinch-off, very large pressures accompany the impact of the water surface on itself. In fact, the pinch-off is a type of near singularity where nearly all observables—velocities, surface curvatures, pressure gradients, etc.—become very large [2]. Consequently, just after the pinch-off, large pressures accelerate the water upward and downward to very high velocities [3, 4]. This drives two spikes of liquid—called rebound jets—up above the water surface and another one downward into the cavity following the marble. These are our second and third jets produced by this experiment. In a low viscosity fluid such as water, the jets often quickly break up into a spray of droplets. Each of these fission events also involves a near singular pinch-off of a fluid neck [2]. In fact, there are cases where the upward spray goes higher than the initial height of the dropped marble [5].

In each case these jets are the consequence of the kinetic energy density locally rapidly rising. The analysis of these processes involves the interplay of inertia and in some cases surface tension. The root cause of these self-focusing events is that the surface is changing its topology. When the crater collapses, the surface cleaves from one sheet to one sheet plus a bubble. These high velocities (and high surface curvatures) all occur right around the time of pinch-off. Other examples where a change in topology is accompanied by divergent observables include the reconnection of quantized vortices in a superfluid [6], the formation of black holes in numerical studies of general relativity [7], crater collapse and jet formation in capillary waves [2], and the coalescence of droplets [8].

In many of these situations the rapid divergence has a small length scale cutoff because there is a crossover to a different balance of forces (e.g., at really small length scales, capillary action and the molecular structure of the fluid matters). In such cases, we speak of a near singularity rather than a singularity. For instance, viscosity, the speed of sound, or the speed of light might cap some near singularities. One is tempted to generalize that near-singular behavior may usually be present in any change in topology. In some of the examples above it is clear that the system, if forced, produces a local region of high curvature (corners) on the interface that is undergoing topology change. It is fascinating that such an ordinary event as dropping a stone into a pond holds such richness.
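To make 'divergent observables' concrete, one standard example from the pinch-off literature (see, e.g., Refs. [2, 3]; this scaling is background, not a result of Gekle et al.) is the inertial-capillary collapse of a liquid neck with surface tension σ and density ρ, whose minimum radius vanishes in finite time as

```latex
h_{\min}(t) \sim \left(\frac{\sigma}{\rho}\right)^{1/3} (t_0 - t)^{2/3}
```

so the collapse speed grows like (t_0 - t)^{-1/3} and the surface curvature blows up as t approaches t_0, until viscosity or molecular scales finally cut the near singularity off.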

References

  1. S. Gekle, I. R. Peters, J. M. Gordillo, D. van der Meer, and D. Lohse, Phys. Rev. Lett. 104, 024501 (2010).
  2. B. W. Zeff, B. Kleber, J. Fineberg, and D. P. Lathrop, Nature 403, 401 (2000).
  3. J. Eggers and E. Villermaux, Rep. Prog. Phys. 71, 036601 (2008).
  4. S. Gekle, J. M. Gordillo, D. van der Meer, and D. Lohse, Phys. Rev. Lett. 102, 034502 (2009).
  5. J. E. Hogrefe, N. L. Peffley, C. L. Goodridge, W. T. She, H. G. E. Hentschel, and D. P. Lathrop, Physica D 123, 183 (1998).
  6. G. P. Bewley, M. S. Paoletti, K. R. Sreenivasan, and D. P. Lathrop, Proc. Nat. Acad. Sci. USA 105, 13707 (2008).
  7. M. W. Choptuik, Phys. Rev. Lett. 70, 9 (1993).
  8. J. Eggers, J. R. Lister, and H. A. Stone, J. Fluid Mech. 401, 293 (1999).


What interests you most in science?

Making A Supersonic Jet In Your Home

As always, I'm a sucker for articles like this. While it may not have earth-shattering ramifications, I always love reading about curious but common phenomena like this that produce something highly unexpected.

The paper shows that when you drop, say, a marble, into a liquid, what happens next can actually produce a supersonic jet of air! A review of this work can be found here, and you can also get access to the actual paper in the link.

In the kitchen version of the experiment, the marble creates a crown-shaped splash and crater as it falls into the liquid. The crater deepens to the point at which the walls start to contract. This is due to both the weight of the water outside and possibly surface tension, both of which create pressure gradients that force the collapse. Air inside this collapsing neck must escape upward or downward as the neck approaches pinch-off. It is in this escaping air that Gekle et al. found supersonic velocities—the first jet in this simple experiment.


A video of this also accompanies the review article.

I often wonder if the fun and fascinating tidbits of apparently "mundane" things like this are the reason why I got into physics in the first place. I know many people cite trying to understand the universe, or wanting to find the meaning of life, etc., as the reason they study physics. I often find that I don't have such grand ambitions. Instead, I find delightful pleasure in figuring out whether quantum effects cause a pencil balanced on its tip to fall over, or whether warm water freezes faster than cold water! Maybe I have a small mind....

Wednesday, January 13, 2010

Tipping time of a quantum rod

Here is a physicist who wrote a blog post on a very pedagogical question in quantum mechanics: calculating the tipping time of a vertical rod.

Tipping Time of a Quantum Pencil

I ran across this article in Eur. J. of Phys. and it reminded me of several other articles that I've read on this very topic. This is, of course, a rather familiar problem to many physics students. It involves a pencil balanced vertically on its tip, so classically it is in an unstable equilibrium. The problem is to use quantum mechanics, or the Heisenberg Uncertainty Principle (HUP) in particular, to find the tipping time for the pencil. The application of the HUP invokes the fact that the exact position of the top of the pencil can have a natural fluctuation that will tip it off the vertical axis.
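For concreteness, here is a minimal sketch of the naive HUP estimate, the kind of 'few seconds' answer discussed below; the pencil's dimensions and the order-of-magnitude initial-condition argument are assumptions of the sketch, not results from the papers:

```python
import math

# Naive Heisenberg-uncertainty estimate of the tipping time of a pencil
# balanced on its tip, modelled as a uniform rod pivoting about the tip.
hbar = 1.055e-34     # reduced Planck constant, J*s
g = 9.81             # gravitational acceleration, m/s^2
L = 0.15             # assumed rod length, m
m = 0.005            # assumed rod mass, kg

I = m * L**2 / 3                      # moment of inertia about the tip
omega = math.sqrt(3 * g / (2 * L))    # inverted-pendulum growth rate, 1/s

# Linearized motion grows as theta(t) ~ theta_0 * exp(omega * t), and the
# HUP forbids starting with exactly theta_0 = 0 and zero angular momentum.
# Minimizing the escape time over initial conditions constrained by
# delta_theta * delta_p ~ hbar/2 gives the rough tipping time below.
t_tip = math.log(2 * I * omega / hbar) / (2 * omega)

print(f"growth rate omega ~ {omega:.1f} 1/s")
print(f"naive tipping time ~ {t_tip:.1f} s")   # about 3-4 seconds
```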

The latest paper that I'm aware of on this topic presents a very detailed calculation of the tipping time of a quantum rod [1]. In this calculation, the author shows that the classical problem is recovered when the Planck constant goes to zero, and draws the conclusion that:

.. the tipping of the quantum rod can be understood as having been triggered by the uncertainty in angular momentum engendered by localization of the initial state...


The article is a bit difficult to follow, and I didn't get any direct value of the tipping time.

The more interesting papers that I found earlier on the same topic are much more illuminating than this one. A paper by Don Easton offers a caution for people who try to apply QM as the basis of the tipping time [2]. His calculation of the tipping time, using QM, gives a humongous number: 0.6 million years. He examines why some posted solutions give a balancing time of the order of 3 seconds, and why those treatments may be faulty.

Another paper that cautions against using the HUP in calculating the tipping time is by Shegelski et al. [3] Here, they warn that one can't just use the HUP alone, and they also compare this with the faulty application of the WKB approximation to this problem.

Fascinating! Certainly something that I read in bed before going to sleep! :)

Tuesday, January 12, 2010

What is special about CNT?

It was reported in an experiment [1] that the two components of a Cooper pair can tunnel into two different arms made of carbon nanotubes. But why are carbon nanotubes (CNTs) picked for this work? What is special about CNTs?

Can measurement of one quantum system instantaneously affect the measurement outcome of another, even if the systems are spatially separated? This question has never been clearly answered for solid-state materials. Now, a new experiment by L. G. Herrmann, working with colleagues in France, Spain, and Germany, published in Physical Review Letters [1] demonstrates that electrons entangled in a superconducting Cooper pair can be spatially separated into different arms of a carbon nanotube, a material thought favorable for the efficient injection and transport of split, entangled pairs. This work may help pave the way for tests of nonlocal effects in solid-state systems, as well as applications such as quantum teleportation and ultrasecure communication.

The question of nonlocal quantum mechanics plagued physicists for decades, as it seemed to violate the rule of special relativity that information cannot travel faster than the speed of light. In fact, in 1935, Albert Einstein, in collaboration with Boris Podolsky and Nathan Rosen, hoped to disprove nonlocal quantum mechanics by publishing a famous thought experiment describing what is now called “the EPR paradox” [2]. In a simple example of this paradox, two particles, A and B, are entangled in a spin singlet, $|\psi\rangle_{AB} = \frac{1}{\sqrt{2}}\left(|\uparrow\rangle_A|\downarrow\rangle_B - |\downarrow\rangle_A|\uparrow\rangle_B\right)$, where $|\uparrow\rangle$ and $|\downarrow\rangle$ refer to spin up and spin down, respectively. If the singlet is separated into two noninteracting particles, any subsequent measurement of the spin of particle A (e.g., found to be up spin) should immediately identify the state of particle B (e.g., required to be down spin). Later, John Bell derived a set of inequalities based on correlations between measurements of particles A and B, and showed that breaking these inequalities would prove quantum nonlocality [3].
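For reference, the most commonly tested of Bell's inequalities is the CHSH form (standard background, not spelled out in the Viewpoint): for detector settings a, a' on particle A and b, b' on particle B, any local hidden-variable theory obeys

```latex
\left| E(a,b) + E(a,b') + E(a',b) - E(a',b') \right| \le 2
```

whereas quantum mechanics on the singlet state can push this combination of correlations up to 2√2.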

The EPR paradox was finally resolved experimentally in the early 1980s, when violations of Bell’s inequalities were verified via polarization-entangled photons [4]; nonlocality has more recently been verified in systems such as ions in optical traps [5] and atom/photon hybrids [6]. However, despite recent advances in the manipulation of entangled electron states [7], Bell’s inequalities have not yet been tested in solid-state systems. Besides the fundamental importance of verifying Bell’s inequalities in materials, spatially separated entangled states could potentially form the basis of advanced technologies such as quantum cryptography, teleportation, or information processing, all of which could be integrated with existing solid-state technology and infrastructure.

In the experiment by Herrmann et al., the entangled electrons are formed naturally in an s-wave superconductor, which consists of spin singlets (Cooper pairs) correlated over the superconducting coherence length. Superconductors have been previously proposed and experimentally verified as an excellent source of EPR pairs [8, 9, 10, 11]. Typical charge transfer between a superconductor and a normal metal occurs via a process called “Andreev reflection,” where two electrons in the normal metal pair up to enter the superconductor, and a hole is reflected at the interface to conserve energy (see Fig.1, panel a). In the case of nonlocal transport, also termed “crossed Andreev reflection” (CAR), the incoming electron enters from one normal metal wire, and the hole is reflected in a different, spatially separated wire; this process is equivalent to electrons of a superconducting singlet splitting into two different wires (Fig.1, panel b).

Evidence of CAR was previously demonstrated in groundbreaking experiments that used superconducting pairs injected into two normal metal [11] or ferromagnetic [10] wires. However, these measurements were complicated by the difficulty of distinguishing between contributions from CAR and contributions from pairs tunneling into a single arm (Fig.1, panel c) or from electrons bypassing Andreev reflection and directly tunneling between arms (elastic co-tunneling, see Fig.1, panel d). In addition, CAR signals can be obscured by nonequilibrium effects due to the injection of quasiparticles into the superconductor [12]. A more ideal case in which to test EPR pairs would be if the pair splitting were preferentially enhanced over these other processes. In particular, strong electron interactions—such as Coulomb charging in a quantum dot [8] or correlated states in a one-dimensional wire (e.g., a Luttinger liquid) [9]—could enhance single-particle tunneling over pair tunneling in superconductor-normal injection, thereby enhancing the CAR signal in a split-wire configuration.

Herrmann et al. measure tunneling from a superconductor into two separated quantum dots formed on a carbon nanotube. Quantum dots are isolated puddles of charges where a capacitive charging energy as well as a “particle-in-a-box” quantization energy are required to add additional charges from the leads. Thus simultaneous tunneling of multiple charges into a quantum dot is strongly suppressed. Using this configuration, Herrmann et al. observe a strong CAR contribution to the conductance between the superconductor and each of the quantum dots. The conductance is measured at the point of resonance between the dots (where their energy levels line up, so simultaneous tunneling into each dot is enhanced) and then compared to calculations for the superconductor/double-quantum-dot system to extract CAR contributions of up to 55%. In addition, the asymmetry in tunneling between the superconductor and the left or right dot is shown to be consistent with what is expected for a large CAR contribution.

This work is similar to very recent experiments by Hofstetter et al. [13], which also demonstrated enhanced CAR via tunneling from a superconductor into quantum dots; in this case the dots were formed on semiconducting nanowires. However, the experiment on carbon nanotubes not only substantiates the work on nanowires, but also may have significant material advantages. For example, strong electron interactions in the effectively one-dimensional nanotubes are predicted to further enhance pair-splitting processes [9]. Pair splitting is also enhanced by the large quantized energy level spacings in carbon nanotube quantum dots, an effect of their tiny diameters. In addition, metallic carbon nanotubes are predicted to exhibit ballistic transport and long spin-flip scattering lengths, both relevant to the coherent transport of EPR pairs.

The work by Herrmann et al. is important in that it demonstrates that CAR is likely occurring in carbon nanotube quantum dot systems. It sets the stage for future work, in which ideally an experimental parameter can be tuned to separate CAR signals from those of the other tunneling processes. This could be achieved by modifying the gate voltages [13] or various interface transparencies [14], for example. It would also be valuable to clarify the role of the double-dot resonance, as the work on nanowires, in agreement with some theories [8], demonstrated that the relative value of the nonlocal signal was diminished at resonance, likely because the electron number on each dot was not well defined. Finally, a test of Bell’s inequalities requires not just the creation of EPR pairs, but the transport and measurement of them [15]. These measurements entail (1) determining the spin of the electrons in each arm, via spin filters such as polarized ferromagnets, and (2) performing time-resolved spin correlation measurements on currents between the superconductor and each dot. The latter task is quite difficult, due to the large numbers of charge carriers in solid-state systems. A somewhat simpler intermediate step would be to determine noise correlations for transport between the superconducting interface and each quantum dot; correlated signals in this case would be strong evidence of nonlocality [15, 16].

Tests of nonlocality in a solid-state system would be a major breakthrough, enabling not only a greater understanding of entanglement in materials, but also the possibility of using separated entangled states for applications. The observation of enhanced nonlocal transport in carbon nanotubes, a material uniquely favorable for the injection and transport of split, entangled charges, offers an exciting new possibility for the study and use of nonlocality.




[1] L. G. Herrmann, F. Portier, P. Roche, A. Levy Yeyati, T. Kontos, and C. Strunk, Phys. Rev. Lett. 104, 026801 (2010) – Published January 11, 2010

Pulsar bursts move 'faster than light'

Don't be confused by this claim. It implies nothing in violation of causality or relativity, according to which no physical signal travels faster than the vacuum speed of light. A physical signal is a physical object in a particular state: something that can be detected and that carries energy which can be exchanged with matter. Many unphysical things (such as a shadow) can move faster than light, but no physical signal, in this sense, can.

Every physicist is taught that information cannot be transmitted faster than the speed of light. Yet laboratory experiments done over the last 30 years clearly show that some things appear to break this speed limit without overturning Einstein's special theory of relativity. Now, astrophysicists in the US have seen such superluminal speeds in space – which could help us to gain a better understanding of the composition of the regions between stars.

Superluminal speeds are associated with a phenomenon known as anomalous dispersion, whereby the refractive index of a medium (such as an atomic gas) increases with the wavelength of transmitted light. When a light pulse – which is composed of a group of light waves at a number of different wavelengths – passes through such a medium, its group velocity can be boosted to beyond the velocity of its constituent waves. However, the energy of the pulse still travels at the speed of light, which means that information is transferred in agreement with Einstein's theory.
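In textbook terms (a standard dispersion result, not specific to this study), the group velocity of a pulse in a medium of refractive index $n(\omega)$ is

\[
v_g \;=\; \frac{c}{n_g},\qquad n_g \;=\; n(\omega) + \omega\,\frac{dn}{d\omega},
\]

so in a band of anomalous dispersion, where $dn/d\omega < 0$ (equivalently, $n$ rising with wavelength), the group index $n_g$ can drop below 1 or even turn negative, making $v_g$ exceed $c$; the front velocity of the pulse, which is what carries new information, remains at $c$.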

Now, astrophysicists claim to have witnessed this phenomenon in radio pulses that have travelled from a distant pulsar.

Modified pulses

The discovery has been made at the University of Texas at Brownsville, where Frederick Jenet and colleagues have been monitoring a pulsar – a rapidly spinning neutron star – more than 10,000 light years away. As pulsars spin, they emit a rotating beam of radiation that flashes past distant observers at regular intervals like a lighthouse. Because the pulses are modified as they travel through the interstellar medium, astrophysicists can use them to probe the nature of the cosmos.

Several factors are known to affect the pulses. Neutral hydrogen can absorb them, free electrons can scatter them, and an intervening magnetic field can rotate their polarization. Plasma in the interstellar medium also causes dispersion: pulses at longer wavelengths see a smaller refractive index, travel at a lower group velocity, and therefore arrive later.
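For reference, the standard cold-plasma formulas behind this ordinary dispersion (textbook results with the usual radio-astronomy scaling, not figures from the paper):

\[
n(\omega) \;=\; \sqrt{1-\frac{\omega_p^2}{\omega^2}},\qquad
\Delta t \;\approx\; 4.15\ \mathrm{ms}\times\left(\frac{\mathrm{DM}}{\mathrm{pc\ cm^{-3}}}\right)\left(\frac{\nu}{\mathrm{GHz}}\right)^{-2},
\]

where $\omega_p$ is the plasma frequency and the dispersion measure DM is the column density of free electrons along the line of sight. Lower-frequency pulses see a smaller $n$, propagate at the lower group velocity $v_g = c\,n$, and arrive later by the delay $\Delta t$.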

Timing is off

Jenet's group thinks that anomalous dispersion should be added to this list. Using the Arecibo Observatory in Puerto Rico, they recorded radio data from the pulsar PSR B1937+21 at 1420.4 MHz, with a 1.5 MHz bandwidth, over three days. Oddly, pulses close to the centre frequency arrived earlier than expected given the pulsar's normal timing, and therefore appeared to have travelled faster than the speed of light.

The cause of the anomalous dispersion for these pulses, according to the Brownsville astrophysicists, is the resonance of neutral hydrogen, which lies at 1420.4 MHz. But like anomalous dispersion seen in the lab, the pulsar's superluminal pulses do not violate causality or relativity because, technically, no information is carried in the pulse. Still, Jenet and colleagues believe that the phenomenon could be used to pick out the properties of clouds of neutral hydrogen in our galaxy.

'Solid result'

"It seems to be very interesting indeed...a solid and rather nice result," says Michael Kramer, an astrophysicist at the University of Manchester who was not involved with the study.

Andrew Lyne, a pulsar astrophysicist who is also based at Manchester, thinks it is an "interesting, if not unexpected result". However, he has doubts that it could help in the understanding of neutral-hydrogen clouds because there are often several clouds in the same line of sight. "It is not clear from the paper quite what extra information will be obtained," he adds.

The research will be published in the Astrophysical Journal. A preprint is available at arXiv:0909.2445v2.

Saturday, January 9, 2010

Simulation of Dirac equation



This is an experimental work [1] on mimicking the motion of a particle governed by a Dirac-type Hamiltonian. It managed to detect a quivering motion, the so-called Zitterbewegung, which is understood to arise from interference between the positive-energy and negative-energy components of the wavefunction. For a free electron in vacuum this quivering occurs at a very high frequency (about 10^21 Hz) and with a very tiny amplitude (of the order of the Compton wavelength), making it extremely difficult to observe. In this work, the authors realized the Dirac equation in 1+1 dimensions with a single trapped ion; for such a 'relativistic' ion, the oscillation period is a couple of microseconds.
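For orientation, the textbook scales involved (standard free-electron estimates, not numbers from the paper): the 1+1-dimensional Dirac Hamiltonian is

\[
H_D \;=\; c\,\hat{p}\,\sigma_x + m c^2\,\sigma_z,
\]

and interference between its positive- and negative-energy branches produces Zitterbewegung at frequency $\omega_{\mathrm{ZB}} \simeq 2mc^2/\hbar \approx 1.6\times10^{21}\ \mathrm{s^{-1}}$ with amplitude $x_{\mathrm{ZB}} \simeq \hbar/(2mc) \approx 2\times10^{-13}\ \mathrm{m}$. In the simulation, the effective $c$ and $m$ are set by laser couplings to the ion, which is what stretches the oscillation period into the microsecond regime.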

Experimental recipe:
(1) trap a single 40Ca+ ion in a linear Paul trap [22], with axial trapping frequency ω_ax = 2π × 1.36 MHz and radial trapping frequency ω_rad = 2π × 3 MHz;
(2) prepare the ion in the axial motional ground state and in the internal state |S_1/2, m_J = 1/2⟩ (m_J, magnetic quantum number);
(3) identify the spinor states.

How to measure the average position of the trapped ion without a full reconstruction of the state: to measure the position for a motional state ρ_m, we have to
(1) prepare the ion's internal state in an eigenstate of σ_y;
(2) apply a unitary transformation, U(t), that maps information about ρ_m onto the internal states;
(3) record the changing excitation as a function of the probe time t, by measuring fluorescence [22].

Results are displayed in the figures of the paper [1].


[1] R. Gerritsma, G. Kirchmair, F. Zähringer, E. Solano, R. Blatt, and C. F. Roos, Nature 463, 68 (2010), doi:10.1038/nature08688 – Published January 7, 2010

Tuesday, January 5, 2010

LHC, a grand odyssey

With its successful test run at the end of 2009, the Large Hadron Collider near Geneva seized the world record for the highest-energy particle collisions created by mankind. We can now reflect on the next questions: What will it discover, and why should we care?

Despite all we have learned in physics -- from properties of faraway galaxies to the deep internal structure of the protons and neutrons that make up an atomic nucleus -- we still face vexing mysteries. The collider is poised to begin to unravel them. By colliding protons at ultra-high energies and allowing scientists to observe the outcome in its mammoth detectors, the LHC could open new frontiers in understanding space and time, the microstructure of matter and the laws of nature.

We know, for example, that all the types of matter we see, that constitute our ordinary existence, are a mere fraction -- 20% -- of the matter in the universe. The remaining 80% apparently is mysterious "dark matter"; though it is all around us, its existence is inferred only via its gravitational pull on visible matter. LHC collisions might produce dark-matter particles so we can study their properties directly and thereby unveil a totally new face of the universe.

The collider might also shed light on the more predominant "dark energy," which is causing the universe's expansion to accelerate. If the acceleration continues, the ultimate fate of the universe may be very, very cold, with all particles flying away from one another to infinite distances.

More widely anticipated is the discovery of the Higgs particle -- sometimes inaptly called the God particle -- whose existence is postulated to explain why some matter has mass. Were it not for the Higgs, or something like it, the electrons in our bodies would behave like light beams, shooting into space, and we would not exist.

If the Higgs is not discovered, its replacement may involve something as profound as another layer of substructure to matter. It might be that the most elementary known particles, like the quarks that make up a proton, are made from tinier things. This would be revolutionary -- like discovering the substructure of the atom, but at a deeper level.

More profound still, the LHC may reveal extra dimensions of space, beyond the three that we see. The existence of a completely new type of dimension -- what is called "supersymmetry" -- means that all known particles have partner particles with related properties. Supersymmetry could be discovered by the LHC producing these "superpartners," which would make characteristic splashes in its detectors. Superpartners may also make up dark matter -- and two great discoveries would be made at once.

Or, the LHC may find evidence for extra dimensions of a more ordinary type, like those that we see -- still a major revolution. If these extra dimensions exist, they must be wound up into a small size, which would explain in part why we can't see or feel them directly. The LHC detectors might find evidence of particles related to the ones we know but shooting off into these dimensions.

Even more intriguing, if these extra dimensions are configured in certain ways, the LHC could produce microscopic black holes. As first realized by Stephen Hawking, basic principles of quantum physics tell us that such black holes evaporate in about a billionth of a billionth of a billionth of a second -- in a spectacular spray of particles that would be visible to LHC detectors.

This would let us directly probe the deep mystery of reconciling two conflicting pillars of 20th century physics: Einstein's theory of general relativity and quantum mechanics. This conflict produces a paradox -- related to the riddle of what happens to stuff that falls into a black hole -- whose resolution may involve ideas more mind-bending than those of quantum mechanics or relativity.

Other possible discoveries include new forces of nature, similar to electric or magnetic forces. Any of these discoveries would represent a revolution in physics, though one that has already been contemplated. We may also discover something utterly new and unexpected -- perhaps the most exciting possibility of all. Even discovering nothing would be important: it would tell us where the phenomena we believe must exist are not to be found.

Such talk of new phenomena has worried some -- might ultra-high-energy particle collisions be dangerous? The simple answer is no. Though it will be very novel to produce these conditions in a laboratory, where they can be carefully studied, nature is performing similar experiments all the time, above our heads. Cosmic ray protons with energies over a million times those at the LHC regularly strike the protons in our atmosphere, and in other cosmic bodies, without calamity. Also, there are significant indications that nature performed such experiments early in the universe, near the Big Bang, without untoward consequences. Physicists have carefully investigated these concerns on multiple occasions.

All this may seem like impractical and esoteric knowledge. But modern society would be unrecognizable without discoveries in fundamental physics. Radio and TV, X-rays, CT scans, MRIs, PCs, iPhones, the GPS system, the Web and beyond -- much that we take for granted would not exist without this type of physics research and was not predicted when the first discoveries were made. Likewise, we cannot predict what future discoveries will lead to, whether new energy sources, means of space travel or communication, or amazing things entirely unimagined.

The cost of this research may appear high -- about $10 billion for the LHC -- but it amounts to less than a ten-thousandth of the gross domestic product of the U.S. or Europe over the approximately 10 years it has taken to build the collider. This is a tiny investment when one accounts for the continuing value of such research to society.
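As a quick sanity check on that fraction (using round GDP figures of our own, not numbers from the article): taking US output alone at roughly $14 trillion per year,

\[
\frac{\$10\times10^{9}}{(\$14\times10^{12}/\mathrm{yr})\times 10\ \mathrm{yr}} \;\approx\; 7\times10^{-5} \;<\; 10^{-4},
\]

and folding in Europe's comparable output only pushes the fraction further below a ten-thousandth.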

But beyond practical considerations, we should ponder what the value of the LHC could be to the human race. If it performs as anticipated, it will be the cutting edge for years to come in a quest that dates to the ancient Greeks and beyond -- to understand what our world is made of, how it came to be and what will become of it. This grand odyssey gives us a chance to rise above the mundane aspects of our lives, and our differences, conflicts and crises, and try to understand where we, as a species, fit in a wondrous universe that seems beyond comprehension, yet is remarkably comprehensible.

Steve Giddings is a physics professor at UC Santa Barbara and an expert in high-energy and gravitational physics. He coauthored the first papers predicting that black hole production could be an important effect at the LHC and describing certain extradimensional scenarios that the LHC might explore.