Sunday, July 31, 2011

Another simple and universal rule in high Tc?

These authors presented a very simple rule that seems validated by their analysis of experimental data [J. Phys.: Condens. Matter 23 (2011) 295701 (17pp)]. In this rule, the Tc of optimal compounds is essentially set by two length scales and the electron charge, i.e., Tc ~ e²/(ℓ × ℓ′). What is striking is that this rule was argued to cover a wide range of materials, including cuprates, pnictides and ruthenates. They proposed a pairing mechanism via Compton scattering: e.g., the holes in the conducting layer are scattered by the electrons in the charge-reservoir layer; instead of forming excitons, a superfluid forms. The following is a brief tour of this work [http://iopscience.iop.org/0953-8984/labtalk-article/46706]:

High-Tc superconductors have layered crystal structures, where Tc depends on bond lengths, ionic valences, and Coulomb coupling between electronic bands in adjacent, spatially separated layers. Analysis of 31 high-Tc materials—cuprates, ruthenates, rutheno-cuprates, iron pnictides and organics—has revealed that the optimal transition temperature Tco is given by the universal expression e²Λ/(kB ℓζ). Here, ℓ is the spacing between interacting charges within the layers, ζ is the distance between interacting layers, Λ is a universal constant, equal to about twice the reduced electron Compton wavelength, kB is Boltzmann's constant and e is the elementary charge. Non-optimum compounds in which sample degradation is evident typically exhibit Tc below Tco. Figure 1 shows Tco versus (ση/A)^(1/2)/ζ—a theoretical expression determining 1/ℓζ, where σ is the charge fraction, η is the layer number count and A is the formulaic area. The diagonal black line represents the theoretical Tco. Coloured data points falling within ±1.4 K of the line constitute validation of the theory.
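To make the numbers concrete, here is a minimal sketch (my own, in SI units; the length scales fed in at the end are purely illustrative, not values quoted in the paper) of the relation Tco = e²Λ/(kB ℓζ), reading e² as the Coulomb factor e²/4πε₀:

```python
import scipy.constants as sc

# Lambda is stated to be about twice the reduced electron Compton wavelength.
LAMBDA = 2 * sc.hbar / (sc.m_e * sc.c)   # ~ 7.7e-13 m

def t_co(l, zeta):
    """Optimal Tc (kelvin) for in-plane charge spacing l and layer spacing zeta (metres)."""
    coulomb = sc.e**2 / (4 * sc.pi * sc.epsilon_0)   # e^2 in SI units (J*m)
    return coulomb * LAMBDA / (sc.k * l * zeta)

# Hypothetical sub-nanometre length scales, chosen only to show the magnitude:
print(t_co(7.5e-10, 2.0e-10))   # ~ 86 K
```

The pleasing point is that angstrom-scale ℓ and ζ naturally land Tco in the tens-of-kelvin range.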


The elemental building block of high-Tc superconductors comprises two adjacent and spatially separated charge layers. The factor e²/ℓζ determining Tco arises from Coulomb forces between them. Remarkably, an explicit dependence on phonons, plasmons, magnetism, spins, band structure, effective masses, Fermi-surface topologies and pairing-state symmetries in high-Tc materials is absent. The magnitude of Λ suggests a universal role of Compton scattering in high-Tc superconductivity, as illustrated in figure 2, which considers pairing of carriers (h) mediated by electronic excitation (e) via virtual photons (ν). Several other important predictions are given. A conducting charge sheet is non-superconducting without a second mediating charge layer next to it, and a charge structure representing a room-temperature superconductor yet to be discovered is presented.

The role of phase

This demonstration (you can watch a video there) comes from the Harvard Natural Sciences Lecture Demonstrations.

What it shows: Fifteen uncoupled simple pendulums of monotonically increasing lengths dance together to produce visual traveling waves, standing waves, beating, and random motion. One might call this kinetic art and the choreography of the dance of the pendulums is stunning! Aliasing and quantum revival can also be shown.

How it works: The period of one complete cycle of the dance is 60 seconds. The length of the longest pendulum has been adjusted so that it executes 51 oscillations in this 60 second period. The length of each successive shorter pendulum is carefully adjusted so that it executes one additional oscillation in this period. Thus, the 15th pendulum (shortest) undergoes 65 oscillations. When all 15 pendulums are started together, they quickly fall out of sync—their relative phases continuously change because of their different periods of oscillation. However, after 60 seconds they will all have executed an integral number of oscillations and be back in sync again at that instant, ready to repeat the dance.
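As a quick check of the design rule (a minimal sketch; the 60 s cycle and the 51-to-65 oscillation counts are from the text above, the small-angle formula L = g(T/2π)² is standard physics):

```python
import numpy as np

# Each pendulum completes an integer number N of oscillations in the common
# Gamma = 60 s cycle, so its period is T = Gamma/N and its (small-angle) length
# is L = g * (T / (2*pi))**2.
g = 9.81                      # m/s^2
Gamma = 60.0                  # s, period of the full "dance"
N = np.arange(51, 66)         # 51..65 oscillations for the 15 pendulums
L = g * (Gamma / (N * 2 * np.pi))**2
print(np.round(L, 3))         # lengths from ~0.344 m (N=51) down to ~0.212 m (N=65)
```

At t = 60 s every pendulum has completed a whole number of swings, so all relative phases simultaneously return to zero and the pattern repeats.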


Thursday, July 28, 2011

So you know ?

Sometimes I meet students majoring in theoretical physics who have no clear idea about the following important facts of quantum mechanics at the introductory level, based on the non-relativistic Schrodinger equation:
1. The single-valuedness and finiteness everywhere of physical wave functions are derived solely from the Born interpretation;
2. The matching conditions (or interface conditions) used, for example, in textbook problems such as a particle tunneling through a square potential barrier are derived solely from the Schrodinger equation and vary from case to case;
3. The sign of the energy E determines whether a state is localized or extended: localized for E<0, extended for E>0. The notions of 'localized' and 'extended' actually refer to what happens on the boundary at infinity, and there negative energy leads to an imaginary wave number while positive energy leads to a real one. One just needs to check the asymptotic behavior of the Schrodinger equation (see the sketch after this list).
4. The boundary conditions for localized states: wave functions vanish at infinity; for extended states: wave functions remain finite everywhere. Therefore, 'localized' and 'extended' are simply states with distinct boundary conditions.
5. Localized states form a discrete spectrum (whose values cannot be experimentally prepared in arbitrary fashion), while extended ones (whose values can be experimentally prepared in arbitrary fashion) form a continuum. A simple illustration: to get localized states, imagine a particle in a very large box; to get extended ones, use free-particle states (plane waves). Scattering states are typical extended states.
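The asymptotics behind point 3 fit in one line (assuming the potential vanishes at infinity):

```latex
% Far from the potential region, the stationary Schrodinger equation reduces to
% psi'' = -(2mE/hbar^2) psi, so the boundary behavior is fixed by the sign of E:
\[
\psi(x) \xrightarrow{|x|\to\infty}
\begin{cases}
e^{\pm i k x}, & k = \sqrt{2mE}/\hbar \in \mathbb{R}, \quad E>0 \ \text{(oscillatory: extended)},\\[4pt]
e^{-\kappa |x|}, & \kappa = \sqrt{-2mE}/\hbar \in \mathbb{R}, \quad E<0 \ \text{(decaying: localized)},
\end{cases}
\]
% where for E<0 the exponentially growing solution is discarded by the Born rule.
```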

Monday, July 25, 2011

Pseudogap does not twin with superconducting gap: another piece of evidence

For the moment I only have time to quickly glance over this interesting paper.
In underdoped cuprate superconductors, phase stiffness is low and long-range superconducting order is destroyed readily by thermally generated vortices (and anti-vortices), giving rise to a broad temperature regime above the zero-resistive state in which the superconducting phase is incoherent [1-4]. It has often been suggested that these vortex-like excitations are related to the normal-state pseudogap or some interaction between the pseudogap state and the superconducting state [5-10]. However, to elucidate the precise relationship between the pseudogap and superconductivity, it is important to establish whether this broad phase-fluctuation regime vanishes, along with the pseudogap [11], in the slightly overdoped region of the phase diagram where the superfluid pair density and correlation energy are both maximal [12]. Here we show, by tracking the restoration of the normal-state magnetoresistance in overdoped La2−xSrxCuO4, that the phase-fluctuation regime remains broad across the entire superconducting composition range. The universal low phase stiffness is shown to be correlated with a low superfluid density [1], a characteristic of both underdoped and overdoped cuprates [12-14]. The formation of the pseudogap, by inference, is therefore both independent of and distinct from superconductivity.

Cool Water Fountain

Here is a video showing a very cute water fountain in Japan.

Think about the physics in it !

Friday, July 22, 2011

Smectic Coexisting with nematic in cuprate

In the pseudogap phase of cuprate superconductors, incredibly rich and exotic things have been observed, among which are the checkerboard pattern that breaks the C4v symmetry within a unit cell and the stripes that break an additional translational symmetry. These are called electronic nematic and smectic phases, respectively. According to this study, there should be an interesting interplay between the two in cuprates, due to topological defects. The authors formalize the coupling in a gauge-invariant way.
Coupling to the smectic fields can then occur either through phase or amplitude fluctuations of the smectic. Here, we focus on the former, which means that [formula] couples to local shifts of the wave vectors [formula] and [formula]. Replacing the gradient in the x direction by a covariant-derivative-like coupling gives [Eq. 4], and similarly for the gradient in the y direction, to yield a GL term coupling the nematic to smectic states. The vector [formula] represents by how much the wave vector [formula] is shifted for a given fluctuation [formula]. Hence, we propose a GL functional (for modulations along [formula]) based on symmetry principles and [formula] and [formula] being small: [Eq. 5], where ... refers to terms we can neglect for the present purpose (SOM d). If we were to replace [formula] by [formula], where [formula] is the electromagnetic vector potential, Eq. 5 becomes the GL free energy of a superconductor; its minimization in the long-distance limit yields [formula] and thus quantization of its associated magnetic flux (22, 23). Analogously, minimization of Eq. 5 implies [formula] surrounding each topological defect (SOM e). Here, the vector [formula] is proportional to [formula] and lies along the line where [formula] = 0. The resulting key prediction is that [formula] will vanish along the line in the direction of [formula] that passes through the core of the topological defect, with [formula] becoming greater on one side and less on the other (Fig. 4B). Additional coupling to the smectic amplitude can shift the location of the topological defect away from the line of [formula] = 0 (SOM e).
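The equation images were lost in this excerpt, but the structure it describes (a smectic order parameter whose x-gradient is replaced by a covariant-derivative-like coupling to the nematic field) has the following schematic form. This is a hedged reconstruction from the prose above, not the paper's actual Eq. 5; λ, α, β are unspecified phenomenological coefficients of my own naming:

```latex
% Schematic GL term: smectic order parameter Phi_x (modulation along x) minimally
% coupled to a nematic field O(r), in analogy with a superconductor in a vector
% potential. Coefficients lambda, alpha, beta are placeholders, not the paper's values.
\[
F[\Phi_x, O] \sim \int d^2r \,\Big[
  \big| \left( \partial_x - i\,\lambda\, O(\mathbf{r}) \right) \Phi_x \big|^2
  + |\partial_y \Phi_x|^2
  + \alpha\, |\Phi_x|^2 + \tfrac{\beta}{2}\, |\Phi_x|^4 + \cdots \Big]
\]
% Under O -> A (the electromagnetic vector potential) this maps onto the GL free
% energy of a superconductor, which is why flux-quantization-like constraints appear
% around topological defects, as the excerpt notes.
```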

Wednesday, July 20, 2011

New upper limit on the electron dipole moment

This experiment sets a new limit on the possible electron dipole moment, and thus puts stringent constraints on candidate extensions of the Standard Model, such as supersymmetric models, that hope for new physics.
The electron is predicted to be slightly aspheric [1], with a distortion characterized by the electric dipole moment (EDM), de. No experiment has ever detected this deviation. The standard model of particle physics predicts that de is far too small to detect [2], being some eleven orders of magnitude smaller than the current experimental sensitivity. However, many extensions to the standard model naturally predict much larger values of de that should be detectable [3]. This makes the search for the electron EDM a powerful way to search for new physics and constrain the possible extensions. In particular, the popular idea that new supersymmetric particles may exist at masses of a few hundred GeV/c² (where c is the speed of light) is difficult to reconcile with the absence of an electron EDM at the present limit of sensitivity [2,4]. The size of the EDM is also intimately related to the question of why the Universe has so little antimatter. If the reason is that some undiscovered particle interaction [5] breaks the symmetry between matter and antimatter, this should result in a measurable EDM in most models of particle physics [2]. Here we use cold polar molecules to measure the electron EDM at the highest level of precision reported so far, providing a constraint on any possible new interactions. We obtain de = (−2.4 ± 5.7_stat ± 1.5_syst) × 10⁻²⁸ e·cm, where e is the charge on the electron, which sets a new upper limit of |de| < 10.5 × 10⁻²⁸ e·cm with 90 per cent confidence. This result, consistent with zero, indicates that the electron is spherical at this improved level of precision. Our measurement of atto-electronvolt energy shifts in a molecule probes new physics at the tera-electronvolt energy scale [2].

No consensus

A glance at how fierce the quarrels over the working mechanism of high-Tc superconductors are!
No one is predicting a full understanding of high-temperature superconductivity any time soon — not least because such an account would have to make sense of the huge number of papers. “A rich enough theory should explain everything and not just cherry pick,” says David Pines, a physicist from the University of Illinois at Urbana-Champaign.
But it’s not always clear exactly what needs to be explained. Roughly 15 years ago, for example, researchers discovered that some high-temperature superconductors allow electron pairs to form above the transition temperature. In this ‘pseudogap’ regime, the material spontaneously organizes itself into stripes: linear regions that act like rivers and carry electron pairs through the insulating landscape where electrons remain stuck in place. “It’s a precursor state to the superconducting state and is therefore fundamental to understanding this problem,” says Ali Yazdani, a physicist at Princeton University. Not so, says Pines, who thinks the pseudogap state “interferes with superconductivity but is not responsible for it”.
Much as physicists had to wait for highly developed quantum-mechanical tools to unlock the secret behind traditional superconductivity, researchers today may require future ideas to complete their task.
If nothing else, the field’s early quarrels have ensured that only the most determined researchers have stayed. Those remaining are perhaps humbled by their experiences. “I think our biggest problem has been human fallibility,” says Anderson. And perhaps these initial difficulties have helped to forge theories that can stand the test of time. “In the end, it’s your competitor that makes you strong,” says Shen.

spin charge separation in purple bronze

Spin-charge separation is quite an old concept, and it has actually been confirmed in various 1D systems. One manifestation allowing experimental detection is the violation of the Wiedemann-Franz law, which states that heat and electricity are carried by the same entities, i.e., electrons, so that the ratio of thermal to electrical conductivity is universal. This law breaks down in 1D systems.
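Quantitatively, the law says κ/(σT) = L₀, the Lorenz number, built only from fundamental constants (standard textbook physics, not from the paper discussed below):

```python
import scipy.constants as sc

# Wiedemann-Franz: kappa / (sigma * T) = L0 for ordinary metals, because the same
# electrons carry both the heat and the charge.
L0 = (sc.pi**2 / 3) * (sc.k / sc.e)**2
print(L0)   # ~ 2.44e-8 W*Ohm/K^2
```

A 1D system with separate spin and charge carriers has no reason to respect this ratio, which is what the experiment below exploits.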

http://bristol.ac.uk/news/2011/7777.html

The origin of this empirical observation did not become clear however until the discovery of the electron and the advent of quantum physics in the early twentieth century. Electrons have a spin and a charge. When they move through a metal they cause an electrical current because of the moving charge. In addition, the moving electrons also carry heat through the metal but now it is via both the charge and the spin. So a moving electron must carry both heat and charge: that is why the ratio does not vary from metal to metal.

For the past 150-plus years, the Wiedemann-Franz law has proved to be remarkably robust, the ratio varying at most by around 50 per cent amongst the thousands of metallic systems studied.

In 1996, American physicists C. L. Kane and Matthew Fisher made a theoretical prediction that if you confine electrons to individual atomic chains, the Wiedemann-Franz law could be strongly violated. In this one-dimensional world, the electrons split into two distinct components or excitations, one carrying spin but not charge (the spinon), the other carrying charge but not spin (the holon). When the holon encounters an impurity in the chain of atoms it has no choice but for its motion to be reflected. The spinon, on the other hand, has the ability to tunnel through the impurity and then continue along the chain. This means that heat is conducted easily along the chain but charge is not. This gives rise to a violation of the Wiedemann-Franz law that grows with decreasing temperature.

The experimental group, led by Professor Nigel Hussey of the Correlated Electron Systems Group at the University of Bristol, tested this prediction on a purple bronze material comprising atomic chains along which the electrons prefer to travel.

Remarkably, the researchers found that the material conducted heat 100,000 times better than would have been expected if it had obeyed the Wiedemann-Franz law like other metals. Not only does this remarkable capability of this compound to conduct heat have potential from a technological perspective, such unprecedented violation of the Wiedemann-Franz law provides striking evidence for this unusual separation of the spin and charge of an electron in the one-dimensional world.

Professor Hussey said: “One can create purely one-dimensional atomic chains on substrates, or free-standing two-dimensional sheets, like graphene, but in a three-dimensional complex solid, there will always be some residual coupling between individual chains of atoms within the complex that allow the electrons to move in three-dimensional space.

“In this purple bronze, however, nature has conspired to limit this coupling to such an extent that the electrons are effectively confined to individual chains and thus creating a one-dimensional world inside the three-dimensional complex. The goal now is to find a way, for example, using pressure or chemical substitution, to increase the ability of the electrons to hop between adjacent chains and to study the evolution of the spin and charge states as the three-dimensional world is restored within the material.”

Paper

‘Gross violation of the Wiedemann-Franz law in a quasi-one-dimensional conductor’ by Nicholas Wakeham, Alimamy F. Bangura, Xiaofeng Xu, Jean-Francois Mercure, Martha Greenblatt and Nigel E. Hussey in Nature Communications

Impact factor: the number misused

It was introduced to evaluate journals, but it is now widely and wrongly used to evaluate individuals. This exerts great pressure on young scientists; only the most resolute and strong-minded can withstand it. Just to point out: (1) the significance of a journal can hardly be fairly judged by its IF; (2) still less that of an individual. An individual can be judged only by perusing his papers, and only over time.

(http://www.aps.org/publications/apsnews/200604/impact.cfm)

The impact factor, a numerical score that claims to rank the importance of scientific journals, may be resulting in unnecessary pressure on researchers to publish in journals with high values for that score.

With some qualifications, the impact factor is a measure of the average number of citations for papers published in a particular journal. It is calculated by counting the total number of citations that papers in the journal receive, and dividing by the number of papers published in the journal. These statistics are compiled by the Institute for Scientific Information (ISI).

Does the impact factor provide an accurate measure of a journal’s importance? In counting citations, only papers published in the past two years are considered, though many research papers may be influential for much longer than two years. Also, items such as news articles and editorials that some journals publish are not counted in the denominator of the impact factor, but citations to those news articles may be included in the numerator, inflating the impact factor of journals that publish those types of articles.
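Put as pseudo-arithmetic (made-up numbers, only to pin down the definition above, including its two-year window):

```python
# Hypothetical journal, hypothetical counts -- purely illustrative.
citations_in_2011_to_2009_2010_papers = 12000
citable_items_published_2009_2010 = 3000

impact_factor_2011 = citations_in_2011_to_2009_2010_papers / citable_items_published_2009_2010
print(impact_factor_2011)   # 4.0
```

The numerator/denominator mismatch described above (news items and editorials cited in the numerator but absent from the denominator) is exactly what lets the ratio be inflated.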

Review articles, such as those published in Reviews of Modern Physics, are often much more highly cited than the average original research paper, so the impact factor of review journals can be quite high.

In some fields, there have been reports of journals that have raised their impact factors by such tactics as adding news articles, accepting papers preferentially that are likely to raise the journal’s impact factor, or even asking authors to add citations to other articles in the journal.

APS journals have not been much affected by these types of problems, said Martin Blume, APS Editor-in-Chief. In fact, Blume says he makes a point of trying not to pay attention to the impact factor.

Blume and others are more concerned that in some cases hiring and tenure committees or funding agencies may use the impact factor inappropriately as a way to evaluate individual researchers. “There is no quantitative metric of excellence. High impact factor journal publication is not a measure of excellence of the individual,” said Blume.

Ivan Schuller of UCSD says he likes to publish in the Physical Review journals, because he wants his work to be read by physicists. But some of his students feel that publishing in Physical Review instead of Science or Nature, which have higher impact factors, puts them at a disadvantage when applying for jobs. They believe some universities may simply look at the impact factors of journals they’ve published in, rather than carefully review the individual’s work.

Paul Kwiat of the University of Illinois recently co-authored a paper on quantum computation that was published in Nature. But the impact factor, which Kwiat had never heard of, wasn’t considered in the decision of where to publish.

"We chose Nature because we thought we had an item that might have some general public interest, while being novel science," Kwiat said. "I'm not sure I know any kind of quantitative 'impact factor', but surely scientists know that some journals are more prestigious than others, partly in view of the difficulty of getting published in them."


Theory sometimes goes ahead of experiments

Imagination goes ahead of reality, from time to time.

Surely, a lot of theoretical scientists (e.g., Abrikosov) admit that they follow experiments closely and are mostly motivated by them. In numerous cases, little interest is invested in a subject until some lab fellows get their hands on it. Graphene offers a typical example: before 2004, few paid attention to this seemingly abstract material. The situation changed drastically with Geim et al.'s fabrication. Often, only after a material becomes experimentally available and controllable do theorists come out to construct models, explain observations, or make predictions. Efforts are primarily concentrated on systems that already exist or, at least, are likely to exist. A great portion of work then surrounds the comparison between models and empirical data. This is especially true in particle physics and other arenas where fundamental laws are sought. In these fields, the objects are created by GOD, already there, waiting to be discovered.

But in materials science or, more generally, in condensed matter physics, one has to reset one's mind. Here, many things are invented rather than discovered. Experimentalists can be motivated by theorists, and models can run ahead of data. Novel phases may not exist in any currently known material, but only in models constructed purely from imagination. These models, although they seem irrelevant to reality when born, can drive some workers to look for ways of realizing them in the lab. On such occasions, it is not about verifying or falsifying a theory; it is about putting it into reality and into use. Nice examples are spin liquids, Z2 gauge theories, and string models. All of these were nothing but brain products when born, yet they have been behind a great deal of wonderful experimental work.

In short, theory can go ahead. And it is not always about explanation and prediction; it can foster new reality.

Tuesday, July 19, 2011

washboard road effects

There is a review of a PRL paper [PRL 99, 068003 (2007)] investigating the so-called washboard road effect. A washboard road features ridges that cause bumps, making driving on such roads quite annoying and uncomfortable. What do you think the relevant factors could be? The rotation of the wheel? The size of the sand grains? No. What appear involved are the wheel velocity, the wheel mass and gravity, in addition to the density of the road bed. This study looks mundane, but may have wide-ranging applications and complex physics behind it. As Zz said in his blog, "This is another one of those "mundane" stuff that perks up my interest and what got me into physics in the first place. Of course, these things APPEAR to be mundane, but the physics of these things have wide-ranging impact and application. It is just that the phenomena that manifest the principles looks so benign. Still these are the stuff that I find most fascinating. You can go solve the mysteries of dark matter and CP-violation. Just give me rippled roads and grapes that bounce up and down in sodas!"

Experimentation and analysis lead the physicists to conclude that the washboard effect is not, in fact, due to the suspension of the vehicles driving over it. As well, the size of the wheel and the size of the sand grains are irrelevant. Most surprising of all, the rotation of the wheel was also irrelevant, since the effect could be reproduced with a fixed, non-rotating object! In the end, all that mattered was the mass and velocity of the wheel, density of the road bed, and the acceleration of gravity. The fact that the velocity of the wheel was important also explains why the effect is worse on certain sections of road:

The speed of the wheel appears to be crucial. Indeed, there exists a critical velocity below which the road always remains flat and above which washboard bumps appear. Typically, for a car this critical velocity is around 5 mph or 8 km/h.

Monday, July 18, 2011

Subwavelength focus of sound

In focusing waves, one often faces the so-called diffraction limit, a consequence of their wave nature, which limits the resolution when imaging objects with waves. Now comes an interesting study that beats this limit by focusing sound into a spot 1/25th of a wavelength across. Remarkably, this is achieved with Coke cans!

Sound, like light, can be tricky to manipulate on small scales. Try to focus it to a point much smaller than one wavelength and the waves bend uncontrollably — a phenomenon known as the diffraction limit. But now, a group of physicists in France has shown how to beat the acoustic diffraction limit — and all it needs is a bunch of soft-drink cans.

Scientists have attempted to overcome the acoustic diffraction limit before, but not using such everyday apparatus. The key to controlling and focusing sound is to look beyond normal waves to 'evanescent' waves, which exist very close to an object's surface. Evanescent waves can reveal details smaller than a wavelength, but they are hard to capture because they peter out so quickly. To amplify them so that they become detectable, scientists have resorted to using advanced man-made 'metamaterials' that bend sound and light in exotic ways.

Some acoustic metamaterials have been shown to guide and focus sounds waves to points that are much smaller than a wavelength in size. However, according to Geoffroy Lerosey, a physicist at the Langevin Institute of Waves and Images at the Graduate School of Industrial Physics and Chemistry in Paris (ESPCI ParisTech), no one has yet been able to focus sound beyond the diffraction limit away from a surface, in the 'far field'. "Without being too enthusiastic, I can say [our work] is the first experimental demonstration of far-field focusing of sound that beats the diffraction limit," Lerosey says.

Lerosey and his colleagues took a similar approach to an experiment they performed in 2007 and later described theoretically for electromagnetic waves1,2. The group generated audible sound from a ring of computer speakers surrounding the acoustic 'lens': a seven-by-seven array of empty soft-drink cans. Because air is free to move inside and around the cans, they oscillate together like joined-up organ pipes, generating a cacophony of resonance patterns. Crucially, many of the resonances emanate from the can openings, which are much smaller than the wavelength of the sound wave, and so have a similar nature to evanescent waves.

To focus the sound, the trick is to capture these waves at any point on the array. For this, Lerosey and his team used a method known as time reversal: they recorded the sound above any one can in the resonating array, and then played the recording backwards through the speakers. Thanks to a quirk of wave physics, the resultant waveform cancels out the resonance patterns everywhere — except above the chosen can.
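As a toy numerical illustration of the time-reversal idea (my own sketch, not the authors' code, geometry, or acoustics): if each source plays back its own impulse response to the target reversed in time, the contributions add up coherently only at the target point.

```python
import numpy as np

# M sources, each with a (hypothetical, random) impulse response h[i] to the target.
rng = np.random.default_rng(0)
M, T = 49, 400
h = rng.standard_normal((M, T))

# Playing h[i] backwards yields the autocorrelation at the target: all M peaks align
# at the same lag. At any other point the responses differ, so cross terms stay small.
at_target = sum(np.convolve(h[i], h[i][::-1]) for i in range(M))
elsewhere = sum(np.convolve(h[i], h[(i + 1) % M][::-1]) for i in range(M))

print(at_target.max() / np.abs(elsewhere).max())   # >> 1: energy piles up at the target
```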

After the playback, the can continues to resonate by itself, scattering out the sound energy left inside. Normal waves scatter efficiently, so they disappear quickly. However, the evanescent-like waves are less efficient at scattering, and take roughly a second to make it out of the can — a prolonged emission that allows the build up of a narrow, focused spot. In fact, Lerosey's group found that the focused spot could be as small as just 1/25th of one wavelength, way beyond the diffraction limit. The results are due to be published in Physical Review Letters3.

There is some debate among acoustic scientists as to whether this is the first time anyone has truly beaten the acoustic diffraction limit. Mechanical engineer Nicholas Fang at the Massachusetts Institute of Technology in Cambridge thinks that the results are a first because the focal point is away from the lens, in the far field. But John Page, a physicist at the University of Manitoba in Winnipeg, Canada, who has published evidence for sub-wavelength focusing in the near field4, disagrees. "Super-resolution is super-resolution, no matter in what regime it is obtained," he says.

Still, Page calls the Lerosey group's work "a very important accomplishment" and believes it could find many applications, such as feeding energy to tiny electromechanical devices so they can operate.

Lerosey himself thinks that the simplicity of the apparatus is what bodes so well for applications. "To me, this experiment says, 'we can do it easily, even with Coke cans,' and it opens a door."

[http://www.nature.com/news/2011/110708/full/news.2011.406.html]

PRL Standards

Two years have now passed since PRL reinvigorated its standards for publication. By all measures the initiative has been successful, and we thank all authors and referees for their adherence to our more stringent criteria. As a reminder, a Letter should do at least one of the following: (i) substantially advance a particular field; or (ii) open a significant new area of research; or (iii) solve a critical outstanding problem, or make a significant step toward solving such a problem; or (iv) be of great general interest, based, for example, on scientific aesthetics.

In the first year of reinvigoration, receipts fell 9% and publications fell 20% relative to the previous year. In the second year, both numbers have crept upwards. Our receipts of 11,376 are nearly the level they were in 2008, but we published 3247 Letters, about the number of Letters published in 2004.

Of course, increases in receipts and in published Letters are not necessarily unwanted. If the quality of both groups is high, we welcome their growth, since this supports our mission to cover important results across all physics. We do ask, however, that authors and reviewers remain mindful of the reinvigorated standards at PRL, which are a crucial part of our ongoing efforts to maintain, and strengthen, PRL's place as the premier APS journal for current research.

[http://prl.aps.org/edannounce/PhysRevLett.107.020001]

Sunday, July 17, 2011

Event Cloak

I just bumped into this funny stuff [http://physicsworld.com/cws/article/indepth/46376]:

But imagine if we could make a cloak that operates not only in space but in time as well. To understand how such a "space–time" cloak might work, consider a bank housing a money-filled safe. Initially, all incoming light continuously scatters off the safe and its surroundings, revealing the rather dull scene of an undisturbed safe visible to surveillance cameras. But imagine, near some specified time, splitting all the light approaching the safe into two parts: "before" and "after", with the "before" part sped up, and the "after" part slowed down. This would create a brief period of darkness in the stream of illuminating photons. If the photons were a stream of cars on a motorway, it is as if the leading cars were to speed up and those trailing behind were to decelerate, creating a gap in the traffic edged by bunches of cars (a dark period with bright edges – see t3 in figure 1).

Now imagine that during the moment of darkness, a safe-cracker enters the scene and steals the money, being careful to close the safe door before he leaves. With the safe-cracker gone, the process of speeding up and slowing down the light is reversed, leading to an apparently untouched, uniform illumination being reconstituted. As far as the light reaching the surveillance cameras is concerned, everything looks the same as it did beforehand, with the safe door firmly shut. The dark interval when the safe was cracked has literally been edited out of visible history.

To complete our motorway analogy, it is as if the cars have acted to first open up and then close a gap in traffic, leaving no disturbance in the flow of vehicles. There is now no evidence of that temporary car-free interlude, during which the proverbial chicken may even have crossed the road without getting squashed. So by manipulating how light travels in time around a region of space, we can, at least in principle, make a space–time cloak that can conceal events – an "event cloak", if you will.

back from vacation

Oh, coming back from vacation, I find my inbox crammed with emails. Ah, a lot of new things have come out that will absorb me ...

Monday, July 4, 2011

Graphene age

In the past 20 years, we have gone through the copper age and the iron age, both in superconductivity, weighing in heavily on strongly correlated systems. Some people might even have the impression that little new physics can be found outside the U-regime, where U is the Hubbard repulsion. But imagination leads us beyond that horizon: we have found surprises and fostered cherished new babies within the usual band theory. We are now digging into topological insulators and graphene, which constantly offer exotic and practically important physics. The lesson here: imagination is more important than knowledge! (Einstein)

Sunday, July 3, 2011

Space is quite smooth at the Planck scale

This finding might be the most stunning and most fundamentally interesting and important of the past few decades. Many garish, dazzling yet gaudy speculations can hardly withstand it.
Space is just so smooth! [http://www.physorg.com/news/2011-06-physics-einstein.html]

Einstein’s General Theory of Relativity describes the properties of gravity and assumes that space is a smooth, continuous fabric. Yet quantum theory suggests that space should be grainy at the smallest scales, like sand on a beach.

One of the great concerns of modern physics is to marry these two concepts into a single theory of quantum gravity.

Now, Integral has placed stringent new limits on the size of these quantum ‘grains’ in space, showing them to be much smaller than some quantum gravity ideas would suggest.

According to calculations, the tiny grains would affect the way that light travels through space. The grains should ‘twist’ the light rays, changing the direction in which they oscillate, a property called polarisation.

High-energy gamma rays should be twisted more than the lower energy ones, and the difference in the polarisation can be used to estimate the size of the grains.
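For concreteness, a common parametrization of this effect in the quantum-gravity phenomenology literature (my addition, not necessarily the convention used in this study, and the O(1) prefactors vary between papers) makes the twist grow quadratically with photon energy:

```latex
% Linear-in-Planck-length birefringence ansatz: the two circular polarizations of a
% photon with wavenumber k travel at slightly different speeds, so after a distance d
% the polarization plane has rotated by roughly
\[
\Delta\theta(k) \sim \xi \, k^{2}\, \ell_{\mathrm{P}}\, d ,
\qquad
\ell_{\mathrm{P}} = \sqrt{\hbar G / c^{3}} \approx 1.6\times10^{-35}\,\mathrm{m},
\]
% so harder gamma rays twist more, and a non-detection over a huge distance d pushes
% the dimensionless "graininess" parameter xi far below unity.
```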

Philippe Laurent of CEA Saclay and his collaborators used data from Integral’s IBIS instrument to search for the difference in polarisation between high- and low-energy gamma rays emitted during one of the most powerful gamma-ray bursts (GRBs) ever seen.

GRBs come from some of the most energetic explosions known in the Universe. Most are thought to occur when very massive stars collapse into neutron stars or black holes during a supernova, leading to a huge pulse of gamma rays lasting just seconds or minutes, but briefly outshining entire galaxies.

GRB 041219A took place on 19 December 2004 and was immediately recognised as being in the top 1% of GRBs for brightness. It was so bright that Integral was able to measure the polarisation of its gamma rays accurately.

Dr Laurent and colleagues searched for differences in the polarisation at different energies, but found none to the accuracy limits of the data.

Some theories suggest that the quantum nature of space should manifest itself at the ‘Planck scale’: the minuscule 10⁻³⁵ of a metre, where a millimetre is 10⁻³ m.

However, Integral’s observations are about 10 000 times more accurate than any previous and show that any quantum graininess must be at a level of 10⁻⁴⁸ m or smaller.

“This is a very important result in fundamental physics and will rule out some string theories and quantum loop gravity theories,” says Dr Laurent.

Integral made a similar observation in 2006, when it detected polarised emission from the Crab Nebula, the remnant of a supernova explosion just 6500 light years from Earth in our own galaxy.

This new observation is much more stringent, however, because GRB 041219A was at a distance estimated to be at least 300 million light years.

In principle, the tiny twisting effect due to the grains should have accumulated over the very large distance into a detectable signal. Because nothing was seen, the grains must be even smaller than previously suspected.

“Fundamental physics is a less obvious application for the gamma-ray observatory Integral,” notes Christoph Winkler, ESA’s Integral Project Scientist. “Nevertheless, it has allowed us to take a big step forward in investigating the nature of space itself.”

Now it’s over to the theoreticians, who must re-examine their theories in the light of this new result.

Provided by the European Space Agency.