Friday, December 25, 2009

Observation of confinment in condensed matter systems

It is well known that baryons are made of quarks. However, these quarks cannot be observed in isolation, owing to a phenomenon called 'confinement': the effective force between them does not weaken with separation, so the energy cost of pulling them apart keeps growing. (Its counterpart, 'asymptotic freedom', refers to the weakening of the interaction at short distances.) Interestingly, such confinement is not exclusive to high energy physics; it was recently observed in a condensed matter system, namely spinons in the coupled spin chains of CaCu2O3. This gives another example of how concepts are shared between various fields of physics.

a, The region between two spinons (domain walls) on a chain consists of reversed spins (coloured in red); if this chain is coupled antiferromagnetically to another chain, as in a spin ladder, these reversed spins cost energy owing to their parallel alignment with the spins on the neighbouring chain. This energy cost, which is proportional to the separation of the spinons, acts to confine the spinons. b,c, The structure of CaCu2O3 for the ab plane (b) and the ac plane (c). CaCu2O3 has orthorhombic symmetry with space group Pmmn and lattice parameters a=9.949Å, b=4.078Å and c=3.460Å at T=10K. The magnetic Cu2+ ions have spin=1/2 and are represented by the red spheres; they are coupled to each other by superexchange interactions through the O2− ions (blue spheres) and the Cu–O bonds are represented by the solid black lines; the Ca2+ ions are not shown. The lattice parameters are shown in grey as well as the rung distance drung, which is approximately one third of the a lattice parameter. The structure consists of copper oxide layers stacked along the c direction, the ladders lie within this plane running parallel to b and neighbouring ladders are shifted by half a unit cell in a. The dotted black lines indicate the separate ladder units and the inter- and intraladder exchange interactions are labelled. The coupling along the legs, Jleg, occurs through superexchange interactions mediated by oxygen; the Cu–O–Cu bond angle is 180°, giving rise to strong antiferromagnetic coupling (according to the Goodenough–Kanamori–Anderson rules). In contrast, the Cu–O–Cu bond along the rungs is 123° and therefore Jrung is expected to be substantially weaker although still antiferromagnetic. In addition, a weak antiferromagnetic interaction, Jdiag, is predicted between opposite copper ions within each plaquette of the ladder. The ladders are coupled together by a number of weaker interactions. Within the ab plane, Cu2+ ions on neighbouring ladders are connected through Cu–O–Cu bonds that are 90°, giving rise to a weak ferromagnetic Jinter. Note that Jinter is frustrated and competes with the much stronger Jleg; thus, its energy cancels in the Hamiltonian to first order. Weak interladder couplings Jc1 and Jc2 are also expected between ladders in the c direction. Finally, in common with other planar copper oxide materials, CaCu2O3 is expected to have a four-spin cyclic exchange interaction, Jcyclic, coupling the four copper ions that form each plaquette. Quantum chemistry calculations give the following exchange constants for CaCu2O3: Jleg=−147 to −134meV; Jrung=−15 to −11.3meV; Jcyclic=4meV; Jinter<24meV; Jdiag=−0.2meV; Jc1=0.1meV; Jc2=0.8meV (refs 19, 20). Susceptibility data fitted to a spin-1/2 Heisenberg chain model without other interactions provide good agreement with the data and suggest that Jleg is indeed the dominant interaction and has a value of −168meV (ref. 22).
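To make the 'string' picture concrete, here is a minimal back-of-the-envelope sketch (mine, not the paper's calculation): two spinons separated by d rungs leave behind d misaligned spin pairs, each costing an energy of order |Jrung|, so the potential between them grows linearly with separation, whereas on an isolated chain the spinons move apart for free. The per-rung cost simply reuses one of the quoted Jrung values as an illustrative number.

```python
# Minimal sketch (not from the paper): the linear "string" potential that
# confines two spinons on a weakly coupled two-leg ladder.
# Assumption: each reversed spin between the spinons costs an energy of
# order |J_rung| relative to the neighbouring chain, so V(d) ~ lambda * d.

import numpy as np

J_rung_meV = -11.3             # illustrative value from the quoted range
lambda_meV = abs(J_rung_meV)   # assumed energy cost per rung of separation

separations = np.arange(1, 11)                       # spinon separation, in rungs
V_ladder = lambda_meV * separations                  # linear (confining) potential
V_chain = np.zeros_like(separations, dtype=float)    # isolated chain: no cost

for d, v_lad, v_ch in zip(separations, V_ladder, V_chain):
    print(f"d = {d:2d} rungs   ladder: {v_lad:6.1f} meV   single chain: {v_ch:4.1f} meV")
```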

Nature Physics 6, 50 - 55 (2009)

Curved space acts as gauge field in graphene


a, Distortion of a graphene disc which is required to generate uniform BS. The original shape is shown in blue. b, Orientation of the graphene crystal lattice with respect to the strain. Graphene is stretched or compressed along equivalent crystallographic directions ⟨100⟩. Two graphene sublattices are shown in red and green. c, Distribution of the forces applied at the disc’s perimeter (arrows) that would create the strain required in a. The uniform colour inside the disc indicates strictly uniform pseudomagnetic field. d, The shown shape allows uniform BS to be generated only by normal forces applied at the sample’s perimeter. The length of the arrows indicates the required local stress.

Among many remarkable qualities of graphene, its electronic properties attract particular interest owing to the chiral character of the charge carriers, which leads to such unusual phenomena as metallic conductivity in the limit of no carriers and the half-integer quantum Hall effect observable even at room temperature1, 2, 3. Because graphene is only one atom thick, it is also amenable to external influences, including mechanical deformation. The latter offers a tempting prospect of controlling graphene’s properties by strain and, recently, several reports have examined graphene under uniaxial deformation4, 5, 6, 7, 8. Although the strain can induce additional Raman features7, 8, no significant changes in graphene’s band structure have been either observed or expected for realistic strains of up to ~15% (refs 9, 10, 11). Here we show that a designed strain aligned along three main crystallographic directions induces strong gauge fields12, 13, 14 that effectively act as a uniform magnetic field exceeding 10T. For a finite doping, the quantizing field results in an insulating bulk and a pair of countercirculating edge states, similar to the case of a topological insulator15, 16, 17, 18, 19, 20. We suggest realistic ways of creating this quantum state and observing the pseudomagnetic quantum Hall effect. We also show that strained superlattices can be used to open significant energy gaps in graphene’s electronic spectrum.
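The abstract can be made more tangible with a small numerical sketch. Below I use the textbook strain-to-gauge-field relation A ∝ (uxx − uyy, −2uxy) and the trigonal displacement pattern ur ∝ r² sin 3θ, uθ ∝ r² cos 3θ discussed in the paper, then take the curl numerically. The prefactors (β, the C–C distance, and the strain amplitude c) are my own illustrative choices, and the overall sign depends on the valley, so treat the numbers as order-of-magnitude only.

```python
# Minimal numerical sketch (assumptions noted below) of how a designed strain
# produces a nearly uniform pseudomagnetic field B_S in graphene.
# Standard relation: A = (hbar*beta/(2*e*a)) * (u_xx - u_yy, -2*u_xy),
# B_S = dA_y/dx - dA_x/dy.  The trigonal displacement u_r = c*r^2*sin(3*theta),
# u_theta = c*r^2*cos(3*theta) is the pattern discussed in the paper; the
# numerical prefactors (beta, a, c) here are illustrative, not fitted values.

import numpy as np

hbar, e = 1.054571e-34, 1.602177e-19
beta, a = 3.0, 1.42e-10     # dimensionless coupling and C-C distance (assumed)
c = 1.8e5                   # strain-pattern amplitude, 1/m, chosen to give ~10 T

# grid over a 1-micron disc
x = np.linspace(-0.5e-6, 0.5e-6, 401)
X, Y = np.meshgrid(x, x, indexing="ij")
r, th = np.hypot(X, Y), np.arctan2(Y, X)

# displacement field, converted to Cartesian components
ur, uth = c * r**2 * np.sin(3*th), c * r**2 * np.cos(3*th)
ux = ur*np.cos(th) - uth*np.sin(th)
uy = ur*np.sin(th) + uth*np.cos(th)

dx = x[1] - x[0]
uxx = np.gradient(ux, dx, axis=0)
uyy = np.gradient(uy, dx, axis=1)
uxy = 0.5*(np.gradient(ux, dx, axis=1) + np.gradient(uy, dx, axis=0))

Ax = (hbar*beta/(2*e*a)) * (uxx - uyy)
Ay = (hbar*beta/(2*e*a)) * (-2*uxy)
Bs = np.gradient(Ay, dx, axis=0) - np.gradient(Ax, dx, axis=1)

inside = r < 0.4e-6
print(f"B_S inside the disc: {Bs[inside].mean():.2f} +/- {Bs[inside].std():.2f} T")
```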

Thursday, December 24, 2009

Is time in a hurry?

By Marcelo Gleiser

Well, 2009 is almost over. To me at least, and I bet to most of you, it went way too fast. On average, it was a year like any other, with some new things to celebrate and others to lament. (I'll abstain from listing them. Each person has her own list.) But it's hard to shake off the feeling that everything happened faster, that time seems to be in a hurry to get somewhere. Sometimes, people ask me if it's possible, from a physics perspective, for time to be passing faster. It can't.

According to the theory of relativity, time can slow down but not speed up. There are a few ways to do this. For example, you may move faster than other people. If you get to speeds close to the speed of light, time will slow down for you relative to the others. Hard to do, as the speed of light is a whopping 186,000 miles per second, in round numbers. Or, you may go live on the surface of the Sun. Time there would tick slower than here as well. But that's really not what people have in mind when they wonder about time. The question is about our psychological perception of time. And I am sure many of you would agree that sometimes it does feel like time is on a roller coaster.
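For the record, the 'slowing down' has a precise meaning: a clock moving at speed v ticks slower by the Lorentz factor γ = 1/√(1 − v²/c²). A tiny sketch (my own illustration, not part of the essay):

```python
# Quick sketch of the special-relativistic time-dilation factor:
# a moving clock ticks slower by gamma = 1/sqrt(1 - v^2/c^2).

import math

c = 299_792_458.0  # speed of light, m/s (about 186,000 miles per second)

def gamma(v):
    """Lorentz factor for speed v (m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for frac in (0.001, 0.1, 0.5, 0.9, 0.99):
    v = frac * c
    print(f"v = {frac:>5.3f} c  ->  1 second for the traveller "
          f"= {gamma(v):.4f} seconds for everyone else")
```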

Time is a measure of change. If nothing happens, time is unnecessary. So, at a personal level, we perceive the passage of time in the changes that happen around and within us. What's interesting is that--as anyone who has tried to meditate knows--even if you shut off all your senses, time keeps ticking away. As our thoughts unfold, our brains give us time. To "quiet the chatter" is the big challenge for going deeper into a meditative state, to be in the now.

The passage of time is about the ordering of events, things that happen one after another. Numbers, some say, are devices that were created to help us order time. Maybe, although counting chicks is also very useful if you are a hen. However, if we are to order events, we must remember them. Ergo, the perception of time is deeply related to memory. If our memories were to be erased, we would revert to the wonder of babyhood, where time extends forever. The more we have to learn, the more memories we make, the slower time passes. Routine, sameness, makes time speed up. Since routine is not usually equated with fun, this seems to go contrary to the "time flies when you're having fun" dictum. What's going on here?

The answer may be in the level of mindful engagement, that is, in how tuned-in your brain is to what you are doing. Newness, as in fun newness, works as a flood of information and places the focus on the immediate. There is no ordering between events yet and no sense of the passage of time. I have felt this disengagement when lost in a calculation for hours or trying out a new trout stream with my fly rod. This is the opposite of routine, where new memories are not being made and the now is all there is. But maybe someone will prove me wrong.

In physics, things are simpler. Time is a fundamental quantity, something that cannot be defined in terms of anything else. There are some issues with this, which we will address some other time. (Sorry...) The second is the universal unit, and it is defined as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between two hyperfine levels of the ground state of the cesium-133 atom. Very different from the tick-tock of old mechanical clocks, which are not very reliable.

Einstein had a colloquial definition of the relativity of time: by the side of a pretty girl an hour feels like a second; if you burn your hand on the stove, a second feels like an hour. His special theory of relativity showed that the simultaneity of two events depends on how they are observed: what may be simultaneous for one observer will not be for another moving with respect to the first. Be that as it may, even in physics the ordering of time is essential: that's causality, causes preceding effects so that the present vanishes into the past and the future becomes the present.

At the cosmic level, there is a well-defined direction of time: the expansion of the universe, which has been going on for 13.7 billion years, pointing resolutely forward. Link it to our own passage through life, and we have a well-defined asymmetry of time, what's sometimes called time's arrow . There is not much we can do to escape this at the physical level. But at the psychological level, to slow down time we have to engage our minds, create more memories, absorb knowledge. Perhaps I will leave my guitar aside for a while and start playing the piano.

Although we use clocks to count time, it is impossible to know whether a clock is in a hurry or not, because to know whether a clock runs differently you need to refer to another clock. Time is, in practice, defined by uniform periodic motions. But how do you know whether a motion is uniform? You would need another motion to define 'uniform'. So this is circular logic: time itself must be used to define time. In the end, it seems to fall upon our senses to decide what counts as 'uniform'!

Tuesday, December 22, 2009

bastardization of physics, an example

Are Human Ideas And Opinions Quantum Objects?

Are human ideas and opinions quantum objects? Do they show properties of superposition? Do they have a wavefunction and observable operators? Really!

In the continuing effort to "bastardize" quantum mechanics, many people who don't have a clue what it is routinely cite various aspects of quantum mechanics and then apply them to situations where they may not even apply. Crackpots do this all the time, especially in areas of pseudoscience where QM has been used as a justification for all sorts of new-age mumbo jumbo. They do this while forgetting that the various aspects of QM have been experimentally tested and verified, whereas their applications to other things have not.

And that brings us to this "delightful" discussion. The writer applied both QM and SR (the physics-bastardization double coupon) to make an amazing justification for holding opposite opinions.

So what's physics got to do with it? First, it allows two contradictory descriptions of nature to be true. So both my friends could be (and were) right. As Niels Bohr put it: The opposite of a shallow truth is false, but the opposite of a deep truth is also true.

Particles are waves and waves are particles. Whether they show one face or the other depends on what you look for in your experiment, on what kind of question you ask. In other words, the context.

Each of my friends is a complex, warm, caring, passionate and much-loved individual. Each is also nothing but a bunch of quarks and electrons. Two contradictory statements. Both true. Different contexts.


This, of course, isn't new. Extreme post-modernists have done this already, with hysterical and nonsensical conclusions. One only needs to follow the situation surrounding the Sokal Hoax.

The problem in all of this is, of course, that if you understand only a very small and superficial portion of something and then apply it, you have essentially ignored the majority of what you applied. For example, if contradictory ideas like that can be represented or justified as "waves" or as having such duality, then ALL the other consequences of the analogy should also follow. What happens when they "interfere" with each other, or undergo rapid decoherence? If one makes an "observation", shouldn't the other ideas essentially go away, since the wavefunction has collapsed? Now what?

Bastardization of physics produces nonsensical results. I don't know why people need to grasp onto something they don't even understand to justify something else.

Sunday, December 20, 2009

Has dark matter been detected ?

For 80 years, it has eluded the finest minds in science. But tonight it appeared that the hunt may be over for dark matter, the mysterious and invisible substance that accounts for three-quarters of the matter in the universe.

In a series of coordinated announcements at several US laboratories, researchers said they believed they had captured dark matter in a defunct iron ore mine half a mile underground. The claim, if confirmed next year, will rank as one of the most spectacular discoveries in physics in the past century.

Tantalising glimpses of dark matter particles were picked up by highly sensitive detectors at the bottom of the Soudan mine in Minnesota, the scientists said.

Dan Bauer, head of the Cryogenic Dark Matter Search (CDMS), said the group had spotted two particles with all the expected characteristics of dark matter. There is a one in four chance that the result is due to some other effect in the underground detectors, Bauer told a seminar at the Fermi National Accelerator Laboratory, near Chicago.

Rumours that Bauer's group was on the verge of making an announcement surfaced on physicists' blogs a few weeks ago. Though tentative, tonight's results triggered an immediate wave of excitement in the science community.

"If they have a real signal, it's a seriously big deal. The scale on which people are looking for dark matter is vast," said Gerry Gilmore at Cambridge University's institute of astronomy. "Dark matter is what created the structure of the universe and is essentially what holds it together. When ordinary matter falls into lumps of dark matter it turns into galaxies, stars, planets and people. Without it, we wouldn't be here," Gilmore said.

Scientists have debated the existence of dark matter since 1933, when the Swiss astronomer Fritz Zwicky argued that a distant cluster of galaxies would fall apart were it not for the gravitational pull of some vast but invisible cosmic substance. It was named dark matter because it does not reflect or absorb light, making it impossible to observe with telescopes.

Last year, the Hubble telescope photographed indirect evidence in the form of a ghostly halo around a distant galaxy, caused by clumps of dark matter bending light from stars as it passed by. A year before that, scientists led by the British astronomer Richard Massey, at the California Institute of Technology, published the first 3D map of dark matter, which revealed how it clung around galaxies and held clusters of them together.

Dark matter is likely to be made up of a variety of invisible particles that not only explain the missing mass of the universe, but shed light on some of the most profound mysteries in science.

Some dark matter particles could explain why ordinary matter is not radioactive, while others may help scientists understand why time – so far as we know – always runs forward.

"The real impact of this is psychological, in that it shows we're getting close to being able to do a whole new kind of physics," Gilmore said. "We know there are properties of the universe that should correspond to new families of particles. One of the great mysteries is why time only goes in one direction, and one candidate to explain that is a dark matter particle."

Many scientists believe dark matter particles will turn out to be proof of a theory called supersymmetry, which predicts that every kind of particle in the universe is paired with a heavier twin. Finding evidence for supersymmetry is one of the major goals of the Large Hadron Collider at Cern, in Switzerland.

Dark matter particles are peculiar because they pass through objects as if they were not there. Their aloof nature has led scientists to name them weakly interacting massive particles, or Wimps. Vast amounts of these are thought to be constantly moving through the Earth and everything on it, us included, as the solar system spins around our galaxy.

The detectors at the Soudan mine are buried underground to shield them from other kinds of particles that bombard Earth from space. To detect dark matter, scientists have to wait for the extremely rare occasion when a dark matter particle knocks into an atomic nucleus in the detector and makes it vibrate.

Detectors in the mine will be upgraded in the new year before the search for more dark matter continues, Bauer said.

The hunt for dark matter

What is dark matter?

The night sky might seem full of stars and planets, but what we see is only 4% of the stuff of the universe. Some three-quarters of the matter in it is dark matter, an invisible substance that scientists believe is there because of the gravitational force it exerts.

What does dark matter do?

Dark matter stretches throughout space where it attracts ordinary matter that coalesces into galaxies of billions of stars and planets. It forms a kind of cosmic skeleton that gives the universe its structure. Many scientists believe they will find a family of invisible dark matter particles, each of which plays a different role in nature. Some may even explain why time always goes in the same direction.

Who came up with the idea?

The Swiss astronomer Fritz Zwicky postulated dark matter in 1933. He noticed that a distant cluster of galaxies would fall apart were it not for the extra gravitational pull of some mysterious unseen mass in space. Astronomers verified his prediction by showing that stars swirling around distant galaxies zipped around so fast they must be held in place by extra gravitational forces.

Does everyone believe in dark matter?

A minority of astronomers and physicists dismiss dark matter as a fudge. Instead, they suspect that the strength of gravity varies from place to place, in a way that explains why stars do not hurtle out of spinning galaxies. The theory is known as Modified Newtonian Dynamics (Mond).

• This article was amended on Friday 18 December 2009. We said dark matter accounts for three-quarters of the mass of the universe; we meant to say three-quarters of the matter of the universe. This has been corrected.

Tuesday, December 15, 2009

Covalency makes a smaller form factor !

Theories involving highly energetic spin fluctuations are among the leading contenders for explaining high-temperature superconductivity in the cuprates1. These theories could be tested by inelastic neutron scattering (INS), as a change in the magnetic scattering intensity that marks the entry into the superconducting state provides a precise quantitative measure of the spin-interaction energy involved in the superconductivity2, 3, 4, 5, 6, 7, 8, 9, 10, 11. However, the absolute intensities of spin fluctuations measured in neutron scattering experiments vary widely, and are usually much smaller than expected from fundamental sum rules, resulting in 'missing' INS intensity2, 3, 4, 5, 12, 13. Here, we solve this problem by studying magnetic excitations in the one-dimensional related compound, Sr2CuO3, for which an exact theory of the dynamical spin response has recently been developed. In this case, the missing INS intensity can be unambiguously identified and associated with the strongly covalent nature of magnetic orbitals. We find that whereas the energies of spin excitations in Sr2CuO3 are well described by the nearest-neighbour spin-1/2 Heisenberg Hamiltonian, the corresponding magnetic INS intensities are modified markedly by the strong 2p–3d hybridization of Cu and O states. Hence, the ionic picture of magnetism, where spins reside on the atomic-like 3d orbitals of Cu2+ ions, fails markedly in the cuprates.
A recent high-Tc model seems promising for resolving the underestimated INS intensity. This model explicitly includes the spin-spin interaction between the spins of O holes and the spins of Cu holes. Such an interaction effectively produces a smaller scattering form factor, which may give a good fit to the observations. Details remain to be worked out!
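A cartoon of why covalency shrinks the form factor (not the paper's actual calculation): if hybridization spreads the spin density over a larger region in real space, its Fourier transform, the magnetic form factor, falls off faster in Q, and the INS intensity |F(Q)|² drops. The Gaussian widths below are purely hypothetical.

```python
# Illustrative sketch: a more extended, covalent spin density falls off faster
# in Q than a compact, ionic Cu2+ 3d orbital, so the magnetic form factor --
# and hence the INS intensity |F(Q)|^2 -- shrinks.  Widths are hypothetical.

import numpy as np

def form_factor(Q, sigma):
    """Form factor of a Gaussian spin-density cloud of rms radius sigma (Angstrom)."""
    return np.exp(-0.5 * (Q * sigma) ** 2)

Q = np.linspace(0.0, 4.0, 9)    # momentum transfer, 1/Angstrom
sigma_ionic = 0.4               # compact atomic-like 3d orbital (assumed)
sigma_covalent = 0.6            # spin density spread onto O via 2p-3d mixing (assumed)

for q in Q:
    I_ion = form_factor(q, sigma_ionic) ** 2
    I_cov = form_factor(q, sigma_covalent) ** 2
    print(f"Q = {q:4.1f} 1/A   ionic |F|^2 = {I_ion:5.3f}   covalent |F|^2 = {I_cov:5.3f}")
```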

[1]Nature Physics 5, 867 - 872 (2009);
[2]J.Phys.:Condens.Matter, 21:075702(2009)

A review on cloaking theory

Scientists and novelists have been intrigued for centuries by the possibility of hiding an object so completely that neither trace of the object nor of its cloak is to be found. Recent theoretical developments show that cloaking is, in principle, possible for electromagnetic waves and to a limited extent for other types of wave, such as acoustic waves. An energetic program of experimental research has shown some of the schemes to be realizable in practice.

We have a touching faith in the ability of our eyes to tell the truth. No other sense has such confidence invested in it, so when our eyes deceive us the result is bewilderment, giving rise to appeals to magic or even the supernatural. This explains the enormous interest aroused by recent work on invisibility and the cloaking of objects from electromagnetic radiation. In this article we review the theories and experiments behind the hype and suggest what devices might realistically be expected in the near future and what is likely to prove impossible.

Hard wired into our brains is the expectation that light travels in straight lines. Mostly this is true, but there are well-known exceptions, such as mirages, which occur when a hot surface heats the air above, reducing its density and hence creating a refractive index gradient immediately above the surface (Fig. 1, top). Such a gradient bends the trajectories of light rays so that an observer misinterprets where the light is coming from. Typically, light from the sky is refracted by the gradient, giving the appearance of water shimmering in the distance—hence a cruel illusion seen by a thirsty traveler in the desert or, more prosaically, the appearance of a wet road on a hot day.

It is the ability of refractive index gradients to bend light that the invisibility engineer exploits. Light is steered around the hidden object by a cloaking device, and then returned to the same straight line trajectory, rather as a skier would make a chicane around a tree (Fig. 1, bottom). The observer’s brain is unaware of the possibility of chicanes and sees only that which is behind the cloak and nothing of the cloak itself or of its contents. The real challenge of cloaking lies in deriving a theoretical prescription for the optical properties of the cloak and, even more challenging, realizing these properties in a material. Transformation optics provides the theoretical background and metamaterials provide the means of achieving the prescribed parameters.
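For the simplest concrete prescription, the widely quoted ideal cylindrical cloak takes the linear radial map r' = R1 + r(R2 − R1)/R2, which squeezes the region r < R2 into the shell R1 < r' < R2, and turns it into anisotropic material parameters. The sketch below just tabulates them; the radii are arbitrary illustrative values, not a specific published design. Note how the angular component diverges at the inner surface, which is one reason real implementations resort to 'reduced' parameter sets.

```python
# Sketch of the textbook transformation-optics prescription for an ideal
# cylindrical cloak of inner radius R1 and outer radius R2:
#   eps_r  = mu_r  = (r - R1)/r
#   eps_th = mu_th = r/(r - R1)
#   eps_z  = mu_z  = ((R2/(R2 - R1))**2) * (r - R1)/r
# Radii below are illustrative, not a specific published design.

import numpy as np

R1, R2 = 1.0, 2.0                        # inner / outer cloak radii (arbitrary units)
r = np.linspace(R1 + 1e-3, R2, 8)        # radial positions inside the cloak shell

eps_r  = (r - R1) / r
eps_th = r / (r - R1)
eps_z  = (R2 / (R2 - R1)) ** 2 * (r - R1) / r

print("   r     eps_r   eps_theta   eps_z")
for ri, er, et, ez in zip(r, eps_r, eps_th, eps_z):
    print(f"{ri:5.3f}   {er:6.3f}   {et:9.3f}   {ez:6.3f}")
```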


This year's Nobel Prize: CCD

Getting a digital camera for Christmas? Before you fire it up to capture Uncle Wally's fateful fifth trip to the punch bowl, take a moment to picture this: You've got a genuine scientific marvel in your mitts. In fact, it took nothing less than two Nobel prizes and a revolution in physics in order for you to point and shoot.

Why? Because to take a filmless picture, your camera or camcorder relies on, um, quantum mechanics. In particular, it exploits the fact -- revealed by Albert Einstein himself -- that a beam of light, which behaves like a wave in some circumstances, acts like a bunch of separate particles in other circumstances. (If that seems infuriatingly contradictory, suck it up. It's just how we do things in this cosmos. Or go complain to the management.)

The individual particles, called photons, come in a wide range of energies. Visible light has enough so that when its photons slam into something, such as a sheet of specially fabricated semiconductor material in a digital camera, they kick electrons right out of the stuff, producing an electrical charge at the crash site. Explaining this phenomenon, known as the photoelectric effect, got Einstein his Nobel.

In most consumer cameras, the photoelectric action takes place back behind the lens, when the light reflected from Uncle Wally hits a "charge-coupled device," or CCD. A typical CCD contains a light-sensitive semiconductor rectangle, usually smaller than a fingernail, crisscrossed by a grid of tiny channels that divide it into several million separate picture elements, or pixels.

Each pixel emits a different number of electrons, depending on how many photons struck it, and it stores those electrons in a gizmo called a capacitor, which functions like a bucket. After the exposure is over, the CCD circuitry empties the millions of pixel buckets one by one, records the amount of charge in each, and transfers the resulting mosaic to a processor that converts it into digital form -- all in a fraction of a second. Not surprisingly, the guys from Bell Labs who invented the CCD won a 2009 Nobel Prize in Physics.

Of course, if that were all that happened, you'd only have a black-and-white picture. But to photograph your gift from Aunt Myrna, who somehow found a sweater so lurid that it can be seen from space, you want color. There are a few ways to get hues you can use, and they all rely on the convenient truth that all the shades we recognize can be represented by various proportions of red, green and blue, the "RGB" of computer monitor fame.

Unless you've got a high-end camcorder, your gear probably has a single CCD whose grid is covered by an exactly matching grid of color filters arranged in a repeating pattern. For every two-by-two set of four pixels, one is covered by a blue filter, one by a red filter and two (at opposite corners) by green filters. Doubling up on green is needed because the human eye evolved to be disproportionately sensitive to that color, which is right in the middle of the sun's visible spectrum.

The CCD records the electron count on each set of four pixels, and then the camera's on-board computer compares the value of each pixel in the foursome to that of its three neighbors to calculate the "true" color of each one. Considering that these are software-generated approximations and not actual measured colors, the accuracy is astonishing. And the range is equally impressive: customarily at least 256 levels of R, G and B in each pixel, for a total of 16.7 million different colors.
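The interpolation step can be illustrated with a toy demosaicing routine. This naive bilinear version (my own sketch, not what any particular camera uses) fills in the two missing colours at each pixel by averaging the nearest samples of that colour in the RGGB mosaic.

```python
# Minimal sketch of the "demosaicing" step described above: each pixel records
# only one of R, G, B behind a Bayer filter, and the two missing colours are
# estimated from the neighbours.  Naive bilinear version, for illustration only.

import numpy as np

def bayer_pattern(h, w):
    """Return per-pixel channel index for an RGGB Bayer mosaic (0=R, 1=G, 2=B)."""
    chan = np.ones((h, w), dtype=int)     # green everywhere...
    chan[0::2, 0::2] = 0                  # red on even rows, even columns
    chan[1::2, 1::2] = 2                  # blue on odd rows, odd columns
    return chan

def demosaic_bilinear(raw, chan):
    """Fill in the two missing colours at every pixel by averaging neighbours."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    for c in range(3):
        known = (chan == c).astype(float)
        vals = raw * known
        num = np.zeros_like(vals)
        den = np.zeros_like(vals)
        # average the available samples of channel c in a 3x3 neighbourhood
        for dy in (-1, 0, 1):
            for dxs in (-1, 0, 1):
                num += np.roll(np.roll(vals, dy, 0), dxs, 1)
                den += np.roll(np.roll(known, dy, 0), dxs, 1)
        rgb[..., c] = np.where(known > 0, raw, num / np.maximum(den, 1))
    return rgb

raw = np.random.randint(0, 256, size=(6, 8)).astype(float)   # fake sensor readout
rgb = demosaic_bilinear(raw, bayer_pattern(*raw.shape))
print(rgb.shape, rgb.min(), rgb.max())
```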

If you've got a still camera that cost more than a case of cat food, it probably has 6 million to 25 million pixels, or six to 25 megapixels in photo argot. How does that stack up to film? The finest 35-mm film in the best cameras using incomparable lenses produces images that can "resolve" (that is, show the difference between) somewhere around 90 million separate spots. A lot of that detail, however, would never be noticed by the human eye unless the photo was blown up to drive-in movie dimensions. A reasonable benchmark is that a good film picture is equivalent to about 20 megapixels.

But the whole megapixel mania that is used to market digital cameras can be awfully misleading, especially in the case of the pocket-size models. For one thing, if you don't have a good enough lens or a CCD sophisticated enough to capture fine differences in contrast and tone, it doesn't matter how many megapixels you've supposedly got. You'll just get a more expensive blur.

For another, most people don't enlarge their photos to the point at which the difference between six and 10, or 12 and 16 megapixels is important. And if you pass your pictures around on the Internet, they probably won't display at much over 100 dots per inch anyway -- about one-third the resolution of an ordinary print. For most folks, gross pixel count is more about self-image than photographic image. But who needs an ego boost when you've mastered quantum mechanics?

Wednesday, December 2, 2009

2DEG switchable by electric field ?



Perovskite materials are cool as they frequently exhibit exotic properties and thus offer opportunities to fabricate new electronic components.

Here I talk about a perovskite-based interface structure that traps electrons within a few atomic layers (a two-dimensional electron gas, 2DEG). 2DEGs have been the focus of extensive investigation for many years, with examples ranging from cuprate superconductors to transistors.

This structure consists of an NbO2 layer sandwiched between strontium titanate (STO) on one side and potassium niobate (KNO) on the other. Electrons pool around this NbO2 sheet. As we know, the d orbitals on every Nb atom in bulk KNO are nominally empty, as are those of an isolated NbO2 sheet. When the sheet is incorporated into this structure, the electronic reconstruction that often happens at interfaces causes the d orbitals to be partially filled, forming the so-called Hubbard layer. At partial filling these electrons conduct electricity, with a conductivity proportional to the electron density.

Now, since KNO is a ferroelectric (STO is only an incipient one), one may wonder whether the spontaneous polarization appearing in it affects the electron density and hence the conductivity. It does, as recently demonstrated by first-principles computations [1]. The physics is simple: the electric field produced by this polarization depletes or enriches the electrons (a screening effect), depending on the field direction, resembling what happens in a conventional p-n junction under an external electric field. Hence, by inverting the spontaneous polarization in KNO, one is able to switch the conduction state of the NbO2 layer.
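A back-of-the-envelope estimate (mine, not from the PRL) of why the effect should be large: reversing a polarization P changes the bound charge at the interface by 2P, which must be screened by a change in the sheet carrier density of roughly 2P/e. With a KNbO3-like polarization of a few tenths of C/m², that is of the order of 10^14 electrons per cm², comparable to or larger than typical oxide 2DEG densities.

```python
# Back-of-the-envelope sketch (my own, not from the PRL): the bound charge at
# a ferroelectric/2DEG interface is sigma = P.n, so flipping the polarization
# changes the sheet carrier density by roughly Delta n ~ 2P/e.
# The polarization value is an assumed, KNbO3-like number.

e = 1.602e-19          # elementary charge, C
P = 0.35               # assumed spontaneous polarization, C/m^2

delta_n_per_m2 = 2 * P / e               # change in sheet density on full reversal
delta_n_per_cm2 = delta_n_per_m2 * 1e-4

print(f"Delta n ~ {delta_n_per_cm2:.1e} electrons/cm^2")
```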

For the moment, it will be interesting to see how this prediction is confirmed experimentally, and to understand the switching time required for the polarization reversal. Obviously, this time will be crucial for applications.

[1]PRL, 103:016804(2009)


Monday, November 30, 2009

Ionic Liquid's Makeup Measurably Non-Uniform at the Nanoscale

Ionic liquids are liquids made of ions. The reason they stay liquid rather than solid is that the ions are bulky: their size hinders the crystallization that the electrostatic interactions would otherwise drive. Unlike conventional liquids such as water, which are uniform and homogeneous throughout, ionic liquids were predicted to be non-uniform at the nanoscale. Now this prediction has been confirmed.

Their findings were published online in the Journal of Physics: Condensed Matter. The article was selected for inclusion in the Institute of Physics' IOP Select, which is a special collection of articles chosen by IOP editors based on research showing significant breakthroughs or advancements, high degree of novelty and significant impact on future research.

Ionic liquids are a new frontier of research for chemists. Originally invented to replace volatile and toxic solvents such as benzene, they’re now used in high-efficiency solar cells, as cheaper, more environmentally friendly rocket fuel additives and to more effectively dissolve plant materials into biofuels. Since 1990, research on ionic liquids has grown exponentially.

“Their properties are strikingly different than those of most conventional liquids,” said Edward Quitevis, a professor of chemistry in the Texas Tech Department of Chemistry and Biochemistry. “A conventional liquid for the most part is composed of neutral molecules whereas an ionic liquid is composed entirely of ions.”

Because of their ability to be tailored and manipulated for specific applications, ionic liquids can be compared to a new form of Erector Set for chemists. By modifying the ions, scientists can create specific properties in the liquids to fit particular applications or discover new materials.

Each new discovery that adds to the understanding of ionic liquids leads to new possibilities for applications and materials, Quitevis said.

“An ionic liquid is basically a salt that happens to have a melting point at or about room temperature,” he said. “The reason why it’s a liquid and not a solid is because the ions are bulky and don’t crystallize readily. The more we learn about them, the more we can find new applications for them that we never could have imagined for conventional liquids.”

By using X-rays and lasers, researchers found that parts of the liquid at the nanoscopic level were not uniform. Some domains of the liquid may have had more or less density or viscosity compared to other domains. Also, these non-uniform domains could be measured.

“At the nanoscopic scale, these liquids are not uniform, compared to other liquids, such as water, where properties are all uniform throughout,” Quitevis said. “This non-uniformity is not random. These domains of non-uniformity are well defined and can be measured. And this nanoscopic non-uniformity was predicted in computer simulations, but never confirmed experimentally until recently.”

Understanding these types of attributes can lead to more breakthroughs in the future, Quitevis said.

Sunday, November 29, 2009

What does understanding mean ?

I'm attempting a definition of understanding, because I think it is as primary as it is useful. We need understanding when we feel baffled, i.e., when we feel that pure logical deduction cannot make an immediate connection between what we know (as part of our experience) and what we have just observed (the object to be understood). In other words, these two ends, our knowledge on one side and the object on the other, seem very distant, and our logical deductions seem helpless in ferrying us from one end to the other. As long as this ferrying is not finished satisfactorily, the understanding remains ongoing. The process of understanding is to build these logical steps (the causal chain), starting either from an existing model or from a new one, all the way to the object. So only when (1) the proper model has been found and (2) the causal chain has been forged can the understanding be settled, and can we be released from the bafflement. Our bafflement, in my opinion, is a gift. The sense of being baffled prompts us to raise questions, and raising questions plunges us into further bafflement. We are baffled when we have questions, and as long as we have questions we shall be baffled. This is a joyful voyage. That is why Einstein once remarked that the best one can have in life is the experience of mystery, and that he was content with a life of mysteries.

An example. If a man fucks a woman, the woman may get pregnant. So the question is, 'Why does she get pregnant?' 'Why can't it be otherwise?' is a question that urges one to answer. We know such a woman may get pregnant, but we don't know why this simple act could lead to a baby! If one asks oneself, one will be baffled and curious; it is never self-evident that the act should bring about a baby. An understanding may be achieved if one finds the chain: intercourse ——> ejaculation of semen ——> semen entering the womb ——> a sperm fusing with an egg in the womb ——> the fertilized egg dividing and growing ——> a baby forming in the womb. If this chain is found, the bafflement is more or less alleviated. However, only after one confirms every element of this chain completely will one be fully relaxed.

In the same spirit, I wish to talk about computer simulations, which have become a very important tool in theoretical physics and other fields. They often offer very important insight that may lead to ultimate understanding, although they are not an understanding by themselves. They help understanding, just as experiments do. Actually, computer simulations play the same role as experiments, I reckon. In an experiment, you set the experimental knobs, start the experiment, observe what happens, and make a record. In a computer simulation, you set and input the required parameters, let a computer execute the orders, and record the output. The only difference is that in the former it is Nature that composes and executes the orders, while in the latter it is you who write the code the computer executes. After the simulation or experiment, you get the output. But you don't know why the output looks like this and not like that. The causal chain between input and output is not clear and awaits building. Frequently, this chain can seldom be built exactly; many approximations have to be made, much the way one builds a bridge. For the bridge to be strong, perfect materials should be used. But perfect materials can hardly be found, so instead one uses the best at hand. 'The best' may not be perfect, but at least a bridge can be laid down. When better materials are found, an improved one can be built.

That is the way science is done, I think.

Friday, November 27, 2009

symmetry manifested in symmetry breaking

Spontaneous symmetry breaking has become a fundamental notion of condensed matter physics as well as high energy physics. The concept says that the low-temperature properties of a physical system need not share the symmetry of its underlying Hamiltonian. The reason is simple: for a system with a symmetry, the ground state is generically degenerate, i.e., there are a number of ground states with the same energy, and at low energy scales no perturbation suffices to take the system from one of its ground states to another; therefore, its physical properties bear features special to the particular ground state it happens to be in. Let us point out that the perturbations stem from the system's interaction with all the rest of the universe.

As well corroborated as this concept is, people tend to overlook an important point: many consequences of symmetry breaking actually reveal the original symmetry. One such consequence is the formation of domain structures. Roughly speaking, a domain is a region where the system is found in one of its degenerate ground states. Since there are many equally possible ground states (in the absence of an external field), the system, when its symmetry becomes broken, falls into a state with domains, each of which realizes one particular ground state. So one can in fact find a reminder of nearly every ground state in this symmetry-broken state.
Therefore, taken as a whole, the system actually respects its symmetry rather than simply breaking it! Of course, domain walls are high-energy regions that would disfavour domain formation but for two factors: (1) inter-domain interactions and (2) broken ergodicity.

An often-cited example is ferromagnets. A piece of a natural ferromagnet (such as iron) is usually found to be unmagnetized as a whole, because its domains cancel each other; as a result no global magnetism is found, although with very local probes (such as spin-polarized STM) the magnetism within a domain can be detected.
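The point is easy to see in the textbook toy model. The sketch below (a standard 2D Ising model quenched below Tc, nothing specific to iron) breaks the up/down symmetry locally, yet both kinds of domain survive, so the global magnetization stays far from saturation.

```python
# Toy illustration: quench a 2D Ising model from a random state to T < Tc.
# The symmetry is broken locally (up-domains and down-domains form), but both
# ground states coexist, so the global magnetization remains far from +/-1.

import numpy as np

rng = np.random.default_rng(0)
L, T, sweeps = 48, 1.5, 150          # lattice size, temperature (< Tc ~ 2.27), MC sweeps
spins = rng.choice([-1, 1], size=(L, L))

def metropolis_sweep(s, T):
    """One Monte Carlo sweep of single-spin-flip Metropolis updates."""
    for _ in range(s.size):
        i, j = rng.integers(0, L, size=2)
        nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2 * s[i, j] * nn
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1

for _ in range(sweeps):
    metropolis_sweep(spins, T)

print(f"global magnetization per spin: {spins.mean():+.3f}")
print(f"fraction of 'up' sites: {(spins == 1).mean():.2f}  (both domain types coexist)")
```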

Domain walls are currently an active research area. They display very unusual properties. For example, scientists have found conducting domain walls in bismuth ferrite, even though the material itself is insulating in the bulk:

Nature Materials 8, 229 - 234 (2009)
Published online: 25 January 2009 | doi:10.1038/nmat2373


Conduction at domain walls in oxide multiferroics

J. Seidel1,2,10, L. W. Martin2,3,10, Q. He1, Q. Zhan2, Y.-H. Chu2,3,4, A. Rother5, M. E. Hawkridge2, P. Maksymovych6, P. Yu1, M. Gajek1, N. Balke1, S. V. Kalinin6, S. Gemming7, F. Wang1, G. Catalan8, J. F. Scott8, N. A. Spaldin9, J. Orenstein1,2 & R. Ramesh1,2,3


Domain walls may play an important role in future electronic devices, given their small size as well as the fact that their location can be controlled. Here, we report the observation of room-temperature electronic conductivity at ferroelectric domain walls in the insulating multiferroic BiFeO3. The origin and nature of the observed conductivity are probed using a combination of conductive atomic force microscopy, high-resolution transmission electron microscopy and first-principles density functional computations. Our analyses indicate that the conductivity correlates with structurally driven changes in both the electrostatic potential and the local electronic structure, which shows a decrease in the bandgap at the domain wall. Additionally, we demonstrate the potential for device applications of such conducting nanoscale features.

Thursday, November 26, 2009

wave-corpuscle duality: weird or elegant ?

Wave-corpuscle duality is usually (and maybe best) illustrated in the famous double-slit experiment, where one lets a beam of particles pass through a wall with two slits to reach a screen, on which an intensity pattern is observed. Now, if these particles are what everyone understands from daily experience, such as bullets or grains of sand, one gets a pattern displaying all the features one finds with bullets. On the other hand, if these particles are light quanta, i.e., photons, a completely different pattern, usually called an interference pattern, shows up, exhibiting all the features one perceives with water waves passing two obstacles (e.g., two big stones). If the first pattern prevails over the latter, one may say the beam behaves more like a beam of particles; the opposite case corresponds to wave-like behavior. Now the 'weird' thing is that the same beam can display both patterns, depending on whether a slit is open or closed!
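For concreteness, here is the standard far-field (Fraunhofer) expression behind the two patterns, evaluated numerically: with coherent 'wave' behaviour the two-slit amplitudes add and cos² fringes appear; with incoherent 'bullet' behaviour one simply adds the two single-slit intensities and the fringes are gone. All dimensions are illustrative, and both curves are normalized to 1 at the centre.

```python
# Minimal sketch of the two double-slit patterns in the Fraunhofer limit:
# "wave" behaviour keeps the interference term, "particle"/incoherent behaviour
# is just the single-slit envelope.  Numbers are illustrative.

import numpy as np

wavelength = 500e-9     # m
slit_width = 20e-6      # m
slit_sep = 100e-6       # m (centre-to-centre)
theta = np.linspace(-0.02, 0.02, 9)   # viewing angle, rad

def sinc(x):
    return np.sinc(x / np.pi)         # numpy's sinc is sin(pi x)/(pi x)

beta = np.pi * slit_width * np.sin(theta) / wavelength
delta = np.pi * slit_sep * np.sin(theta) / wavelength

I_wave = (sinc(beta) ** 2) * np.cos(delta) ** 2   # coherent: fringes
I_particle = sinc(beta) ** 2                      # incoherent: envelope only

for th, iw, ip in zip(theta, I_wave, I_particle):
    print(f"theta = {th:+.4f} rad   with interference: {iw:5.3f}   without: {ip:5.3f}")
```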

Common sense would say a bullet is always a bullet, regardless of the experimental setup. However, the quantum world goes absolutely against common sense, since by operating a slit switch, without explicitly affecting the gun, the bullets become something else. Were such quantum effects dominant in the everyday world, one would be able to alter the moon by playing with something on earth. Weird! Completely weird!

Yes, it is weird relative to common sense, as weird as curved space-time! Einstein said our universe is curved, which is also counter-intuitive. Nevertheless, in spite of their weirdness, these ideas are elegant. Why? Because they seem to be the simplest notions one can have that solve all the puzzles in their own fields. One can hardly expel them without encountering awkwardness. They have a unique unifying power. In the eyes of theoretical physicists, the elegance of a concept consists largely in its unifying power: such a concept explains not one phenomenon, not two phenomena, but a dozen seemingly isolated phenomena. In the quantum world, it is hard to dispense with wave-corpuscle duality and at the same time explain everything. It is impossible to explain the double-slit experiment using a particle-only picture without invoking some very ugly assumptions. It is impossible to dispense with the notion of the relativity of simultaneity while at the same time accounting for all fast phenomena.

So, weird and elegant are likely to go hand in hand. Further, what is weird is constantly changing, because our common sense is constantly changing. It is never a good reason to reject an elegant idea just because it is weird!

P.S.: this post was prompted by a piece of research presented in PHYS.FORUM aiming at eradicating wave-particle duality. The author was motivated by the question, 'What goes through the slits?'. In my opinion, all existing answers to this question differ merely as a matter of semantics. You may use another name for wave-particle duality, but the content remains the same, because it lies at the heart of quantum mechanics.

Soft colloids make strong glasses

Despite our familiarity with it, many puzzles remain about glass. Here is a letter about glass formation in aqueous suspensions of tiny microgel particles, driven by varying concentration instead of temperature. A close resemblance was found between these two routes, providing a new material system for understanding glass. What is special about soft colloids is their deformability under concentration changes, which permits not only fragile glasses but also strong ones; this feature has not been seen in hard-sphere colloids.

Glass formation in colloidal suspensions has many of the hallmarks of glass formation in molecular materials1, 2, 3, 4, 5. For hard-sphere colloids, which interact only as a result of excluded volume, phase behaviour is controlled by volume fraction, φ; an increase in φ drives the system towards its glassy state, analogously to a decrease in temperature, T, in molecular systems. When φ increases above φ* ≈ 0.53, the viscosity starts to increase significantly, and the system eventually moves out of equilibrium at the glass transition, φg ≈ 0.58, where particle crowding greatly restricts structural relaxation1, 2, 3, 4. The large particle size makes it possible to study both structure and dynamics with light scattering1 and imaging3, 4; colloidal suspensions have therefore provided considerable insight into the glass transition. However, hard-sphere colloidal suspensions do not exhibit the same diversity of behaviour as molecular glasses. This is highlighted by the wide variation in behaviour observed for the viscosity or structural relaxation time, τα, when the glassy state is approached in supercooled molecular liquids5. This variation is characterized by the unifying concept of fragility5, which has spurred the search for a 'universal' description of dynamic arrest in glass-forming liquids. For 'fragile' liquids, τα is highly sensitive to changes in T, whereas non-fragile, or 'strong', liquids show a much lower T sensitivity. In contrast, hard-sphere colloidal suspensions are restricted to fragile behaviour, as determined by their φ dependence1, 6, ultimately limiting their utility in the study of the glass transition. Here we show that deformable colloidal particles, when studied through their concentration dependence at fixed temperature, do exhibit the same variation in fragility as that observed in the T dependence of molecular liquids at fixed volume. Their fragility is dictated by elastic properties on the scale of individual colloidal particles. Furthermore, we find an equivalent effect in molecular systems, where elasticity directly reflects fragility. Colloidal suspensions may thus provide new insight into glass formation in molecular systems.
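To see what 'strong' versus 'fragile' means quantitatively, here is a small sketch of an Angell-type comparison: a strong liquid follows an Arrhenius law, while a fragile one is commonly fitted by a Vogel-Fulcher-Tammann form, which is far more sensitive to temperature near the glass transition. All parameters are invented for illustration; this is not a fit to the paper's data.

```python
# Illustrative "strong" vs "fragile" relaxation times on an Angell-type plot:
# strong ~ Arrhenius, tau ~ exp(E/T); fragile ~ VFT, tau ~ exp(D*T0/(T - T0)).
# Tg is defined here by tau(Tg) = 1e13 * tau0; all parameters are invented.

import numpy as np

tau0 = 1.0                     # microscopic attempt time (arbitrary units)
Tg = 1.0                       # put the glass transition at T = 1

def tau_strong(T, E=np.log(1e13) * Tg):          # Arrhenius, tuned so tau(Tg) = 1e13
    return tau0 * np.exp(E / T)

def tau_fragile(T, D=5.0):                       # VFT, with T0 chosen so tau(Tg) = 1e13
    T0 = Tg / (1.0 + D / np.log(1e13))
    return tau0 * np.exp(D * T0 / (T - T0))

for x in (0.6, 0.7, 0.8, 0.9, 1.0):              # x = Tg / T, as on an Angell plot
    T = Tg / x
    print(f"Tg/T = {x:.1f}   strong: tau = {tau_strong(T):9.2e}   "
          f"fragile: tau = {tau_fragile(T):9.2e}")
```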

Tuesday, November 24, 2009

decoherence and collapse in quantum theory

The following news item seems to ignore the difference between decoherence and the collapse of the wave function. The former is governed by the Schrödinger equation and hence is, in principle, deterministic, whereas the latter is completely probabilistic. And never forget that a collapse takes no time, whereas decoherence does take time (the so-called decoherence time). The riddle is not about decoherence but about collapse. If collapse could be removed, Einstein would accept quantum theory!

WHY can't we be in two places at the same time? The simple answer is that it's because large objects appear not to be subject to the same wacky laws of quantum mechanics that rule subatomic particles. But why not - and how big does something have to be for quantum physics no longer to apply? Ripples in space-time could hold the answer.

The location of the boundary between the classical and quantum worlds is a long-standing mystery. One idea is that everything starts off as a quantum system, existing in a superposition of states. This would make an object capable of being, for example, in many places at once. But when this system interacts with its environment, it collapses into a single classical state - a phenomenon called quantum decoherence.

Brahim Lamine of Pierre and Marie Curie University in Paris, France, and colleagues say that gravitational waves may be responsible for this. These waves in the very fabric of the universe were generated by its rapid expansion soon after the big bang, as well as by violent astrophysical events such as colliding black holes. As a consequence, a background of ripples at very low amplitudes pervades space-time.

Gravitational waves may be responsible for collapsing quantum ambiguity into a single classical state

Lamine and colleagues calculated how this fluctuating space-time might contribute to quantum decoherence. They found that for systems with very large mass, such as the moon, decoherence induced by the gravitational waves would have caused any quantum superposition to dissipate immediately. At the other end of the scale, such waves would have a negligible effect on massless photons.

To test whether gravitational waves do in fact cause the decoherence seen in large objects, the researchers suggest using a set-up called a matter-wave interferometer in which molecules are made to pass through multiple gratings. The wave-like nature of the molecules causes them to diffract, and the diffracted waves interact to give rise to an interference pattern. Quantum decoherence destroys this pattern, so in principle this could provide a test for whether the decohering effect of background space-time fluctuations matches predictions. Such a system would have to be completely isolated to rule out other effects.

This is, however, impossible in practice - with today's interferometers, at least. Experiments pioneered by Anton Zeilinger, Markus Arndt and colleagues at the University of Vienna, Austria, have been able to generate interference with beams of 60-atom carbon buckyballs, but even with molecules of this size the effect of gravitational waves would be too small to be observed.

According to Lamine, who presented his work last month at the Gravitation and Fundamental Physics in Space meeting at Les Houches in the French Alps, the effect should be measurable in larger systems at high energy. Supersonic beams of about 3000 carbon atoms would do the trick if made to interfere over an effective area of about 1 square metre. This is far beyond the reach of any foreseeable technology.

Some speculative theories predict, however, that quantum decoherence will occur on a lower energy scale than that suggested by Lamine. If so, this could be within experimental reach. "That is why our experiments are pushing [up] the interference mass limit, step by step," says Arndt.

Friday, November 20, 2009

How do you interpret your results ?

Scientists should take care when they explain their findings to journalists. Here is an example of how misleading claims can be made. The article headline claims an 'innate correlation between various power law phenomena', which, however, lacks justification in the original work. In that work, the authors study the relation between donation and wealth, which they find to follow Zipf's power law. This law was originally used to describe the distribution of words in language. Because these two distant phenomena share the same mathematical structure, they claim their work demonstrates an innate correlation between them, which is quite inappropriate. Sharing the same mathematical structure does not equal having an innate relation. By 'innate relation', we mean that one phenomenon is a logical corollary of another. In this sense, one cannot see any innate relation between 'wealth-donation' and 'words-language'. They just share the same mathematical curve, which may be a coincidence. One can find many similar examples in physics and other fields. For example, physicists have found that gauge theory, which was originally designed to describe particle behavior, also emerges in condensed matter, but one cannot say that one can be deduced from the other.
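To drive the point home, the following sketch generates two completely independent random data sets and shows that both obey a Zipf-type rank-size law with essentially the same exponent. Sharing the curve proves nothing about a causal or 'innate' link; the data below are synthetic Pareto samples, not the donation data from the paper.

```python
# Two completely unrelated synthetic data sets can obey the same Zipf
# (rank-size) law, so a shared power-law form by itself does not demonstrate
# an "innate" connection.  Data are independent random Pareto samples.

import numpy as np

rng = np.random.default_rng(1)

def zipf_exponent(sample):
    """Fit log(size) vs log(rank) for the upper tail and return the slope."""
    sizes = np.sort(sample)[::-1]
    ranks = np.arange(1, len(sizes) + 1)
    top = slice(0, len(sizes) // 10)          # use the largest 10% only
    slope, _ = np.polyfit(np.log(ranks[top]), np.log(sizes[top]), 1)
    return slope

wealth_like = rng.pareto(1.0, 50_000) + 1     # e.g. "wealth" (arbitrary units)
donation_like = rng.pareto(1.0, 50_000) + 1   # e.g. "donations", generated independently

print(f"Zipf slope, data set A: {zipf_exponent(wealth_like):+.2f}")
print(f"Zipf slope, data set B: {zipf_exponent(donation_like):+.2f}")
# Both slopes come out near -1, yet the two data sets share no causal link at all.
```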

........................................................................................................................................................................

Researchers Find Innate Correlations Among Different Power Law Phenomena

November 17, 2009, by Lisa Zyga


The Zipf plots of wealth and donation are innately connected. The upper part of the donation distribution follows Zipf’s law, and the Zipf exponent is equal to that of the corresponding wealth distribution. Image credit: Q. Chen, et al.

(PhysOrg.com) -- Studying the patterns that emerge in natural and social phenomena is a popular area of research, although usually individual phenomena are studied separately from each other. In a recent study, researchers have found innate correlations among some of these phenomena, showing that the amount of money that individuals in a society donate to a charity can be used to determine the distribution of personal wealth in that society. The connection between these two topics can also be used for exploring the complexity of a society's economic system.

“The greatest significance of this work is showing that power law phenomena in different references may correlate with each other innately,” Yougui Wang of Beijing Normal University told PhysOrg.com. “Thus, this implies that some power law phenomena should be the derivatives of other basic ones.”

The key to using patterns from one data set to infer the patterns of a different set of data is realizing that both sets share a mathematical principle called Zipf’s law, explained Wang, along with coauthors Qinghua Chen and Chao Wang of Beijing Normal University. Although Zipf’s law was originally proposed in the field of linguistics to explain the distribution of words in a language, it has attracted much more attention because it also describes a wide variety of natural and social phenomena. Zipf’s law quantitatively describes how the most common entities of a set (such as the common word “the”) appear with a high frequency that logarithmically tapers off as entities become less common. The same power law holds true for the distribution of population sizes, Internet traffic, and other phenomena. As researchers have previously found, in some cases the law stems from a competition among individuals for a constraint resource.

In the current study, the scientists show that collective donations follow a particular pattern: the upper part (made of the larger monetary donations) follows Zipf’s law, while the lower part (made of smaller donations) exhibits a uniform distribution. The data comes from donations by Chinese to the Chinese Red Cross Foundation after an earthquake of magnitude 8.0 struck Sichuan province in southwest China in May 2008. The data includes more than 230,000 personal donations, with the donation amount ranging from 0.01 RMB to 2.79 million RMB. Significantly, 205,000 donations (87.5%) of the total sample were 100 RMB or more. This part of the data approximately followed Zipf’s law, while the distribution of donations of less than 100 RMB was basically uniform.

So far, the analysis is yet another phenomenon of human behavior that follows the regularity of Zipf’s law. But the researchers also developed a model to explain this pattern, taking into account the previous finding that wealth distribution has also been known to follow Zipf’s law. Their model shows that only a portion of the individuals in a society have a desire to donate, and of these, each individual donates a portion of his or her overall wealth that is random but uniformly distributed. Even though only a small sample of the donators in the case of the Sichuan earthquake was collected and analyzed, the researchers’ model could generate the distribution of personal wealth throughout China, which is consistent with what has been obtained from the data of the richest 500 individuals in China.

As the researchers explain, donation and wealth, like other power law phenomena, seem to coexist in systems. By showing that power law phenomena can be related to one another, the researchers’ work could be valuable for exploring the correlations among natural patterns in systems.

“Based on the results of our study, the distribution of wealth could be derived from that of donation,” said Qinghua Chen. “Once the link between two variables involved in a complex system is just like the relation between donation and wealth in our case, we can infer the distribution of one variable from the other.”

More information: Q. Chen, C. Wang, and Y. Wang. “Deformed Zipf’s law in personal donation.” Europhysics Letters, 88 (2009) 38001. doi: 10.1209/0295-5075/88/38001


Tuesday, November 17, 2009

Models and explanation

As I have said many times, the following remark, this time from Sergei Maslov, captures my credo in doing science:

Even more important than tools, theoretical physics has taught me about the power of simple models in revealing the essence of complex phenomena. Simple models are indispensable if one wants not to just reproduce the complexity of a system (e.g. by detailed computer simulations) but to truly understand it.
In my opinion, no comprehension can be achieved without simple models, however accurate a simulation may be, because understanding is not simply about accuracy. Rather, it is about establishing a clear connection between one's existing experience (including one's knowledge) and the phenomena under consideration. Only when this connection can be built in light of that experience can one be satisfied that one understands. Building this connection is in fact a process of constructing models from what one knows and reasoning with them.

P.S. Maslov does research in systems biology and complex networks.

An opaque fishing net?

Light, because of its wave–particle duality, shows many surprising behaviors. A recent PRL paper [1] adds one more. Imagine you fabricate a gold film on a glass substrate and punch a regular array of sub-wavelength holes in it. Now you shine light upon it. The film is so thin--about 20 nm--that it is already semi-transparent. Looking at the transmission, you find that it is unexpectedly smaller than without holes: the perforated film, contrary to expectations, gives a more obscure view. If the film is much thicker, say 100 nm, the scenario is the opposite: the transmission is greatly increased [2].

The authors think that a Fano analysis may lend an explanation. According to them, two interfering wavelets contribute to the transmitted wave: a resonantly scattered one and a nonresonantly scattered one. The former involves resonant excitation of surface plasmons, whereas the latter passes directly through the holes. It turns out that the Fano resonance hinges on a single parameter, essentially the ratio of the resonant amplitude to the directly transmitted amplitude, normalized by the line width. They argue that this parameter is large for thick films but rather small for extremely thin films, which may then give rise to the observations.
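For reference, the textbook Fano lineshape reads as follows; this is the generic formula, not anything taken from Ref. [1].

```latex
% Standard Fano lineshape (textbook form).  q is the ratio of the resonant
% to the nonresonant (direct) amplitude, E_0 the resonance energy and
% \Gamma the resonance line width.
\[
  \sigma(E) \;\propto\; \frac{(q + \epsilon)^2}{1 + \epsilon^2},
  \qquad
  \epsilon \equiv \frac{2\,(E - E_0)}{\Gamma}.
\]
% Large q: the resonant (plasmon-mediated) channel dominates and the line
% approaches a Lorentzian peak; small q: the direct channel dominates and
% the resonance shows up as a dip (an antiresonance), i.e. reduced
% transmission.
```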

It is worth noting that surface plasmons may play a central role here. The interaction between light and surface plasmons is obviously an interesting subject: it can carry light through very small holes, the light first coupling to the plasmons, which then carry it to the other side [2].

[1]Phys. Rev. Lett. 103, 203901 (2009)

[2]Nature (London) 391, 667 (1998)

Monday, November 16, 2009

Wield light

Here is a fascinating work on counter-intuitive facets of light [1].

If a metal film, thick enough to be totally opaque, is perforated by tiny subwavelength holes in an orderly fashion, the transmission will be enhanced extraordinarily [2]. Here, we investigate the transmission through an ultrathin semitransparent Au film with a square array of subwavelength holes and observe the opposite behavior: less light is transmitted through the pierced metal compared to the closed film.

These authors blame surface plasmons for the blocking of light, although not everyone agrees. Here is a comment.

The way light moves, with its fixed speed and its ability to act like either a wave or a particle, often leads to some of the most curious paradoxes of physics. A new one has just been found: Make holes in a film of gold so thin that it's already semitransparent, and less light gets through.

Because of its wave nature, light generally can't squeeze through a hole whose width is smaller than the wavelength of the light. In 1998, however, researchers discovered that light could zip through certain patterns of such holes punched into thin metal plates. Physicists figured out that the light created waves in the metal's electrons--called plasmons--that move across the material's surface in much the same way that ripples move through water. The plasmons, which have wavelengths much shorter than light, couple with each other across the tiny holes and pull the light along for the ride. One possible application is to use plasmons to build better light-based integrated circuits that would be as fast as fiber optics but less bulky.

Toward this end, researchers from the University of Stuttgart in Germany laid very thin films of gold onto pieces of glass and then used ion beams to etch the film with holes arranged in a regular, square array. These holes were smaller than the wavelength of light and, despite being so tiny, are just the kind of openings that have been shown to let light through the thicker, opaque film used in the 1998 experiment. But in the new experiment, the gold film was so thin--only 20 nanometers--that light could already shine through it. And surprisingly, less light went through the holey gold than through the original semitransparent film.

Why? The researchers blame the semitransparent nature of the gold film, which allows 40% of the light to flow directly through it, preventing it from stopping at the surface to help form plasmons. Plasmons are formed by the kick of energy they get from the incoming light, combined with how the electron waves of the plasmons skitter around the hole geometry, so the light needs to be tuned to the specific hole geometry to maximize plasmons. In this case, that leftover 60% of the light simply doesn't combine with the geometry to create plasmons that can cross through the gold holes, the team reports this week in Physical Review Letters.

Physicist Martin P. van Exter of Leiden University in the Netherlands says that interference between hole geometry and light transmission is expected, so the results shouldn't come as too much of a surprise. However, he also points out that gold always absorbs light in peculiar ways--indeed, this is what leads to its golden color instead of most metals' more typical silver--and it's possible that this contributed to the results.

Team member Bruno Gompf says that the next step is to see whether other hole patterns--hexagonal, rectangular, aperiodic--show the same effect. Perhaps a particular pattern could serve as a filter to block certain wavelengths of light in those future plasmonic integrated chips, he says.

The origin of this effect still seems elusive.

[1]Phys. Rev. Lett. 103, 203901 (2009)
[2]Nature (London) 391, 667 (1998)

Saturday, November 14, 2009

An interesting correspondence between magnetism and superconductivity

Magnetism and superconductivity are apt to be found together in a number of strongly correlated electron systems. Celebrated examples are the cuprate superconductors and their recent iron-based analogs. As doping is varied, these substances move between magnetism and superconductivity. It is thus no wonder that many surmise an inherent connection between these two phases; quite possibly the same electronic interaction controls them both.

This viewpoint discusses another aspect of the connection between magnetism and superconductivity. The discussion strongly suggests that a mathematical description generic to both may be established. If this were the case, two things would follow: (1) competition between magnetism and superconductivity must exist as a rule, and (2) understanding one could help in comprehending the other.

The unknown aspects of friction



Perhaps everyone has some knowledge of friction, gained through experience and/or science education. As a part of daily life, friction is as common as gravity, and many people assume that it must by now be thoroughly understood. The reality is not so satisfactory. Here [1] is a piece of work attempting to resolve an issue concerning the onset of sliding under a force applied at the trailing edge of a slider that rubs against a track.



The authors set up a phenomenological model and solved it numerically. The interaction between the slider and the track is modeled by spring contacts, whose rupture and formation are governed by a simple threshold-force law. Their study showed that sliding is always preceded by crack-like fronts, which signal the propagation of broken contacts.
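To make this kind of model concrete, here is a minimal one-dimensional sketch in the same spirit; the rules and parameters are my own (contacts only rupture and never re-form), so it illustrates the idea rather than the actual model of Ref. [1].

```python
import numpy as np

# Toy 1D slider: N blocks connected by internal springs K form the slider;
# each block is pinned to the track by a contact spring k that breaks
# irreversibly once stretched beyond u_c.  A slowly increasing force F pushes
# the trailing block (index 0).  Overdamped dynamics: gamma * du/dt = forces.
N, K, k, u_c, gamma = 200, 50.0, 1.0, 0.1, 1.0
dt, F_rate = 1e-3, 0.5

u = np.zeros(N)                    # block displacements
intact = np.ones(N, dtype=bool)    # which contacts are still attached
F = 0.0
front = []                         # rightmost broken contact at each step

for step in range(200_000):
    F += F_rate * dt
    # internal elastic forces from neighbouring blocks
    f_int = np.zeros(N)
    f_int[:-1] += K * (u[1:] - u[:-1])
    f_int[1:]  += K * (u[:-1] - u[1:])
    # pinning forces from the contacts that are still intact
    f_con = -k * u * intact
    f_ext = np.zeros(N)
    f_ext[0] = F                   # drive applied at the trailing edge
    u += dt / gamma * (f_int + f_con + f_ext)
    # rupture rule: a contact breaks when its stretch exceeds the threshold
    intact &= np.abs(u) < u_c
    broken = np.nonzero(~intact)[0]
    front.append(broken.max() if broken.size else -1)

print("final position of the broken-contact front:", front[-1], "of", N - 1)
```

Watching `front` grow with time shows a front of broken contacts sweeping along the chain from the trailing edge well before the slider as a whole starts to move, which is the kind of crack-like precursor the paper is about.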

[1]PRL, 103:194301(2009)

Wednesday, November 11, 2009

The following is a quotation from someone I highly respect. I agree with him very much on the matter discussed below:

Creationist Refutes Darwin’s Evolutionary Theory - A Rebuttal

I mentioned this talk, by a creationist attempting to falsify Darwin's theory of evolution, about a week ago. This morning, I found this response, written by a junior physics major, that easily throws a lot of doubt on the garbage that was spewed at that talk.

Carter used a wonderful scientific vocabulary and showed some facts that were true.

However, hiding behind scientific jargon, he put up facts and figures with little truth to them, no way to verify them (or, where there was, they were not accurate and are considered fraudulent in the scientific community), and little accuracy in the science actually used.

This man performed a wonderful show, and is an outstanding example of how the public will believe almost anything that has numbers and graphs in it with no scientific proof.


The writer listed several examples where Carter simply can't produce valid sources for his numbers.

I'm left to wonder how many people in the audience bought into what they were told. We often talk about the public needing to be scientifically literate. What we mean by that is NOT that the public knows all these "facts", but rather that they have the skill to analyze how one goes from A to B to C to D. How, for example, do you draw the conclusion that, say, "gay marriage" leads to "undermining traditional marriage"? People throw out those two phrases all the time, but no one seems to explain the mechanism by which "gay marriage" CAUSES the "undermining of traditional marriage". Not only that, if such a mechanism exists, one needs to publish it and have it scrutinized by experts in the field to ensure that the mechanism is valid and that it leads to that unique conclusion.

The same thing is occurring here. One simply can't throw out all of these numbers and conclusions (something commonly done in politics and economics) without any basis to show that they are valid. But the part of the public that isn't familiar with the scientific process is unaware of that. This is why I'm very proud of this young writer, who already has the skill (hopefully gained from his education) to analyze and question how such conclusions are made. So well done, Jim Eakins!

Making the public scientifically literate should mean making them able to analyze rationally how one draws a conclusion. That is why, when I proposed a revamping of the undergraduate intro physics labs, I tried to steer away from "textbook tests" of physics principles. Rather, I focused on how one can establish that A depends on B, and what the exact relationship between the two is. Our world has always been focused on how things are related, how they are interconnected, and so on. These types of lab exercises present precisely such tests.
It is a nice statement.

Wednesday, November 4, 2009

A grand unification

In science, there is one thing that always fills me with curiosity and awe. Roughly, I'd like to call it 'grand unification': a simple concept that connects various seemingly unrelated and independent phenomena occurring in distant disciplines. Such unification corroborates the conviction that the universe has a common underlying mathematical structure. Here I discuss one more example of this.

This example concerns fast symmetry-breaking phase transitions (FSPT), in contrast with the usual case of phase transitions near equilibrium, which may be described in hydrodynamic terms. To my knowledge, no general effective theory has yet been established for FSPT. Nevertheless, it is possible to derive, for some quantities, very generic constraints that follow from the basic structure (such as symmetry and causality) of the first-principles theory. Suppose a physical system is in equilibrium and one then changes an experimental knob (e.g., the pressure). The system will evolve away from its initial equilibrium state according to the corresponding dynamical theory and eventually reach a new equilibrium state; what this final state is, however, depends on the evolution and on how the knob is changed. Interestingly, during this evolution topological defects (textures that locally break the overall symmetry) will form. Generally, causality--no physical signal can travel faster than a velocity characteristic of the system--implies that two textures separated by a typical distance, say x, cannot develop significant correlations during the evolution. If each texture is characterized by a 'direction vector', causality thus dictates that distant textures choose their directions independently in the statistical sense, so that all direction vectors occur among the textures, by the symmetry respected by the first-principles model. An interesting question is then: how does x scale with the rate of the phase transition?

In 1976, Kibble made an estimate [1] based on a cosmological model. It deals with cosmic strings, inherently stable topological objects that are expected to survive to the present day. Experimental verification of his idea is, however, quite difficult, since it concerns the whole universe. A breakthrough came with Zurek [2], who applied Kibble's idea to helium, which undergoes a superfluid transition at very low temperatures. In such a system, topological defects--vortices and antivortices, distinguished by their winding numbers--can form as the transition is crossed. The helium system thus provides a remarkable table-top counterpart of the universe in this respect. To obtain the relation between x and the transition rate, Zurek employed time-dependent Landau theory, which was supposed to govern the temporal evolution. It turns out that the relation follows a simple power law. Verification of this law came afterward; the latest test was done on a superconductor [3]. For a good review see Ref. [4].
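For concreteness, the generic freeze-out estimate behind this power law (the standard textbook argument, not the specific calculations of Refs. [1-4]) can be summarized as follows.

```latex
% Standard Kibble--Zurek estimate.  The control parameter is ramped linearly
% through the transition, \epsilon(t) = t/\tau_Q, and near the critical point
% the equilibrium correlation length and relaxation time diverge as
%   \xi(\epsilon) = \xi_0 |\epsilon|^{-\nu},
%   \tau(\epsilon) = \tau_0 |\epsilon|^{-\nu z}.
% The system falls out of equilibrium ("freezes") when the remaining time
% equals the relaxation time, \tau(\epsilon(\hat t)) = \hat t, giving
\[
  \hat t = \bigl(\tau_0\,\tau_Q^{\nu z}\bigr)^{\frac{1}{1+\nu z}},
  \qquad
  x \sim \xi\bigl(\epsilon(\hat t)\bigr)
    = \xi_0 \left(\frac{\tau_Q}{\tau_0}\right)^{\frac{\nu}{1+\nu z}}.
\]
% Slower quenches (larger \tau_Q) therefore give fewer, more widely separated
% defects, with the power of the quench time set only by the equilibrium
% critical exponents \nu and z.
```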

It is exciting to see that something speculated about the cosmos has its image on earth. I'd like to pose another question concerning FSPT: how does x scale with the dimensionality of the system?

It may be worth pointing out, as Anderson emphasized in his 'More is different' address, that to deal with emergent phenomena it is not enough to know the first-principles model (e.g., the BCS model), which is usually of little use for building intuition and understanding. It is more useful to come up with an effective model (e.g., Landau's model) that concerns the quantities of immediate interest rather than those of the first-principles model.

[1]J.Phys.A, 9:1387(1976)
[2]Nature, 317:505(1985)
[3]PRB, 80:180501(R), 2009
[4]Phys.Today, 60:47(2007)

Tuesday, November 3, 2009

First order phase transition rounded by disorder

This is an interesting work [1], which concludes that adding an arbitrarily weak random perturbation erases the discontinuity that would otherwise exist in the order parameter (namely, the polarization) as a function of the conjugate field in a quantum spin system. In other words, the delta peak in the susceptibility is rounded in the presence of random perturbations. It turns out that the conditions are the same as for classical systems.

[1]PRL, 103:197201(2009)

Monday, November 2, 2009

How does decoherence take place ?

This is a very fundamental and profound question. I remember that, in a letter to Pauli, Einstein questioned the superposition principle of quantum mechanics, asking why a bullet is always in a definite place instead of everywhere. This is an old instance of the question posed in the title. Although it is now accepted by many authors that the answer should be closely related to the concept of decoherence, it is not clear how this happens, nor whether it is the case for every system.

I want to mention some other examples:
  • In statistical mechanics, it is assumed that the average of any observable is taken over all thermodynamically accessible energy eigenstates with the corresponding Boltzmann weights. This implies that all the interference that might occur during unitary evolution has been set aside (see the short formula sketch after this list). Usually the physical system of interest is bulk and immersed in a heat bath, so this assumption should work well. Nonetheless, violations may arise whenever the coherence time becomes discernible. The situation is comparable to light interference: for natural light, polarization has no significant effect in interference experiments, whereas for a laser it does. It is quite evident that such decoherence should be ascribed to the interaction with the heat bath, which acts as a stochastic source. A general statement relating system size, temperature and coherence time is, however, lacking.
  • The measurement theory has been debated since the discovery of quantum mechanics. How does a measurement lead to wave-function collapse? Does a measurement necessarily involve a classical object? Or does a measurement actually involve decoherence?
  • The third is usually called 'Hund's paradox', which
    concerns how to explain from first principles why molecules often appear as enantiomers, i.e., either in a left-handed configuration or in the right-handed mirror image.
    That is, the mixing of these two configurations disappears, contrary to the expectation based on parity symmetry.
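To illustrate the first item above with standard formulas (nothing specific to any particular system):

```latex
% The canonical (Boltzmann) average keeps only diagonal weights,
\[
  \langle A \rangle_{\mathrm{th}}
    = \mathrm{Tr}\!\left(\rho_{\mathrm{th}} A\right),
  \qquad
  \rho_{\mathrm{th}} = \frac{e^{-\beta H}}{Z}
    = \sum_n \frac{e^{-\beta E_n}}{Z}\,|n\rangle\langle n|,
\]
% whereas a coherent superposition |\psi> = \sum_n c_n |n> gives
\[
  \langle A \rangle_\psi
    = \sum_n |c_n|^2 A_{nn}
    + \sum_{n \neq m} c_n^{*} c_m\, A_{nm},
\]
% and it is precisely the second (interference) sum that the statistical
% average sets aside -- the implicit assumption being that decoherence has
% already suppressed it.
```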
The last question was recently addressed carefully in Ref. [1], where the authors used molecular scattering theory and a master equation to investigate D2S2, the simplest chiral molecule. They concluded that the dominant collisional decoherence is due to a parity-sensitive higher-order dispersive interaction term that is usually dropped when dealing with thermodynamic properties. They also made predictions on the conditions for experimental stabilization of enantiomers.
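The flavour of such a master-equation treatment can be conveyed by a generic two-state caricature, with left- and right-handed states coupled by tunnelling and dephased by collisions; this is only an illustration, not the actual equations of Ref. [1].

```latex
% |L>, |R>: enantiomer states; \omega: tunnelling frequency; \lambda:
% collisional decoherence (dephasing) rate in the chiral basis.
\[
  H = -\frac{\hbar\omega}{2}\,
      \bigl(|L\rangle\langle R| + |R\rangle\langle L|\bigr),
  \qquad
  \dot\rho = -\frac{i}{\hbar}[H,\rho]
             - \lambda\bigl(\rho - \Pi_L \rho\, \Pi_L - \Pi_R \rho\, \Pi_R\bigr),
\]
% with projectors \Pi_{L} = |L\rangle\langle L| and \Pi_{R} = |R\rangle\langle R|.
% For \lambda \gg \omega the off-diagonal elements \langle L|\rho|R\rangle decay
% before tunnelling can mix the two configurations, so the molecule stays
% effectively left- or right-handed: collisional decoherence stabilizes the
% enantiomers.
```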

[1]PRL, 103:023202(2009)