Monday, November 30, 2009

Ionic Liquid's Makeup Measurably Non-Uniform at the Nanoscale

Ionic liquids are liquids made entirely of ions. They remain liquid rather than solid because the ions are bulky: their size hinders the crystallization that the electrostatic interactions would otherwise drive. Unlike conventional liquids such as water, which are homogeneous throughout, ionic liquids were predicted to be non-uniform at the nanoscale. That prediction has now been confirmed.

The researchers' findings were published online in the Journal of Physics: Condensed Matter. The article was selected for inclusion in the Institute of Physics' IOP Select, a special collection of articles chosen by IOP editors for reporting a significant breakthrough or advance, a high degree of novelty and likely impact on future research.

Ionic liquids are a new frontier of research for chemists. Originally invented to replace volatile and toxic solvents such as benzene, they’re now used in high-efficiency solar cells, as cheaper, more environmentally friendly rocket fuel additives and to more effectively dissolve plant materials into biofuels. Since 1990, research on ionic liquids has grown exponentially.

“Their properties are strikingly different than those of most conventional liquids,” said Edward Quitevis, a professor of chemistry in the Texas Tech Department of Chemistry and Biochemistry. “A conventional liquid for the most part is composed of neutral molecules whereas an ionic liquid is composed entirely of ions.”

Because of their ability to be tailored and manipulated for specific applications, ionic liquids can be compared to a new form of Erector Set for chemists. By modifying the ions, scientists can create specific properties in the liquids to fit particular applications or discover new materials.

Each new discovery that adds to the understanding of ionic liquids leads to new possibilities for applications and materials, Quitevis said.

“An ionic liquid is basically a salt that happens to have a melting point at or about room temperature,” he said. “The reason why it’s a liquid and not a solid is because the ions are bulky and don’t crystallize readily. The more we learn about them, the more we can find new applications for them that we never could have imagined for conventional liquids.”

Using X-rays and lasers, the researchers found that the liquid is not uniform at the nanoscopic level: some domains are more or less dense, or more or less viscous, than others, and these non-uniform domains can be measured.

“At the nanoscopic scale, these liquids are not uniform, compared to other liquids, such as water, where properties are all uniform throughout,” Quitevis said. “This non-uniformity is not random. These domains of non-uniformity are well defined and can be measured. And this nanoscopic non-uniformity was predicted in computer simulations, but never confirmed experimentally until recently.”

Understanding these types of attributes can lead to more breakthroughs in the future, Quitevis said.

Sunday, November 29, 2009

What does understanding mean ?

I'm attempting a definition of understanding, because I think it is as fundamental as it is useful. We need understanding when we feel baffled, i.e., when our pure logical deductions cannot make an immediate connection between what we know (as a part of our experience) and what we have just observed (the object to be understood). In other words, these two ends, our knowledge on one side and the object on the other, seem very distant, and our logical deductions seem helpless in ferrying us from one end to the other. As long as this ferrying is not completed satisfactorily, the understanding remains unfinished. The process of understanding is to build these logical steps (the causal chain), starting either from an existing model or from a new one, all the way to the object. So only when (1) a proper model has been found and (2) the causal chain has been forged can the understanding be settled, and can we be released from the bafflement. Our bafflement, in my opinion, is a gift. The sense of being baffled prompts us to raise questions, and raising questions plunges us into new bafflement. We are baffled whenever we have questions, and as long as we have questions, we shall be baffled. This is a joyful voyage. That is why Einstein once remarked that the best thing one can have in life is the experience of mystery, and that he was content with a life of mysteries.

An example: if a man sleeps with a woman, she may become pregnant. So the question is, 'Why does she become pregnant ?' 'Why can't it be otherwise ?' That is a question that urges one to answer. We know that intercourse may lead to pregnancy, but we don't know why this simple act could lead to a baby ! If one asks himself this, he will be baffled and curious. It is never self-evident that intercourse should bring about a baby. An understanding may be achieved if he finds: intercourse ——> ejaculation of semen ——> semen entering the womb ——> a sperm fusing with an egg (fertilization) ——> the fertilized egg dividing and growing ——> a baby forming in the womb. If this chain is found, his bafflement is more or less alleviated. However, only after he confirms every link of this chain completely will he be fully at ease.

In the same spirit, I wish to talk about computer simulations, which have become a very important tool in theoretical physics and other fields. They often offer important insight that may lead to ultimate understanding, although a simulation is not an understanding in itself. It helps understanding, just as experiments do. Actually, computer simulations play the same role as experiments, I reckon. In an experiment, you set the experimental knobs, run the experiment, observe what happens and record it. In a computer simulation, you set and input the required parameters, let the computer execute the instructions, and record the output. The only difference is that in the former it is Nature that composes and executes the instructions, while in the latter it is you who write the code the computer executes. After the simulation or experiment you have the output, but you don't know why the output looks like this rather than that. The causal chain between the input and the output is not clear and awaits building. Frequently this chain can seldom be built exactly; many approximations have to be made, much the way one builds a bridge. For the bridge to be strong, perfect materials should be used, but perfect materials can hardly be found, so one uses the best at hand. 'The best' may not be perfect, but at least a bridge can be laid down. When better materials are found, an improved bridge can be built.

That is the way science is done, I think.

Friday, November 27, 2009

symmetry manifested in symmetry breaking

Spontaneous symmetry breaking has become a fundamental notion of condensed matter physics as well as high energy physics. The concept says that the low-temperature properties of a physical system need not share the symmetry of its underlying Hamiltonian. The reason is simple: for a system with a symmetry, the ground state is degenerate, i.e., there are a number of ground states with the same energy, and at low energy scales no perturbation suffices to turn the system from one of its ground states into another; therefore, its physical properties bear features specific to the particular ground state it is in. Let us note that the perturbation stems from the system's interaction with the rest of the universe.

As this concept has become well corroborated, people tend to overlook an important point: many consequences of this symmetry breaking actually reveal the original symmetry. One such consequence is the formation of domain structures. Roughly speaking, a domain is a region in which the system sits in one of its degenerate ground states. Since there are many equally possible ground states (in the absence of an external field), the system, when its symmetry breaks, falls into a state made of domains, each realizing one particular ground state. So one can in fact find a reminder of essentially every ground state in this symmetry-broken state.
Therefore, as a whole, the system actually respects its symmetry rather than simply breaking it ! Of course, domain walls are high-energy regions that would suppress domain formation but for two factors: (1) inter-domain interactions and (2) broken ergodicity.

An often-cited example is the ferromagnet. A natural ferromagnet (such as iron) is usually found to be non-magnetic as a whole, because its domains cancel each other and no global magnetization survives, even though with very local probes (like STM) local magnetism can be detected.
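
To make the domain picture concrete, here is a toy sketch of my own (not taken from the paper below): a 2D Ising model quenched from a random configuration breaks up into up and down domains, so the global magnetization nearly vanishes even though each small block is locally ordered. The lattice size, temperature and number of sweeps are arbitrary choices.

# Toy illustration (mine, not from the cited paper): a quenched 2D Ising model
# forms up and down domains, so the global magnetization nearly cancels while
# each local block is strongly ordered. All parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
L, T, sweeps = 48, 1.0, 150                      # lattice size, temperature (J = 1, Tc ~ 2.27), MC sweeps
spins = rng.choice([-1, 1], size=(L, L))         # random start = quench from infinite temperature

def sweep(s, T):
    for _ in range(s.size):                      # one Metropolis sweep
        i, j = rng.integers(L), rng.integers(L)
        nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2 * s[i, j] * nb                    # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1

for _ in range(sweeps):
    sweep(spins, T)

global_m = spins.mean()                                  # domains cancel each other
blocks = spins.reshape(L // 8, 8, L // 8, 8).mean(axis=(1, 3))
local_m = np.abs(blocks).mean()                          # but each 8x8 block is ordered
print(f"|global m| = {abs(global_m):.3f}   mean |local m| = {local_m:.3f}")

The point is simply that both ground states (all up, all down) are represented among the domains, so the broken-symmetry state as a whole still reflects the original up-down symmetry.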

Domain walls are currently an active research area, and they display very unusual properties. For example, scientists have found conducting domain walls in bismuth ferrite, despite the fact that the material itself is insulating in the bulk:

Nature Materials 8, 229 - 234 (2009)
Published online: 25 January 2009 | doi:10.1038/nmat2373

Subject Categories: Electronic materials | Magnetic materials

Conduction at domain walls in oxide multiferroics

J. Seidel, L. W. Martin, Q. He, Q. Zhan, Y.-H. Chu, A. Rother, M. E. Hawkridge, P. Maksymovych, P. Yu, M. Gajek, N. Balke, S. V. Kalinin, S. Gemming, F. Wang, G. Catalan, J. F. Scott, N. A. Spaldin, J. Orenstein & R. Ramesh


Domain walls may play an important role in future electronic devices, given their small size as well as the fact that their location can be controlled. Here, we report the observation of room-temperature electronic conductivity at ferroelectric domain walls in the insulating multiferroic BiFeO3. The origin and nature of the observed conductivity are probed using a combination of conductive atomic force microscopy, high-resolution transmission electron microscopy and first-principles density functional computations. Our analyses indicate that the conductivity correlates with structurally driven changes in both the electrostatic potential and the local electronic structure, which shows a decrease in the bandgap at the domain wall. Additionally, we demonstrate the potential for device applications of such conducting nanoscale features.

Thursday, November 26, 2009

wave-corpuscle duality: weird or elegant ?

Wave-corpuscle duality is usually (and maybe best) illustrated by the famous double-slit experiment, where one lets a beam of particles pass through a wall with two slits and reach a screen, on which an intensity pattern is observed. If these particles are what everyone understands from daily experience, such as bullets or grains of sand, one gets a pattern displaying all the features one finds with bullets. On the other hand, if these particles are light quanta, i.e., photons, a completely different pattern, usually called an interference pattern, shows up, exhibiting all the features one perceives with water waves passing around two obstacles (e.g., two big stones). If the first pattern prevails, one may say the beam behaves more like a beam of particles; the opposite case corresponds to wave-like behavior. Now the weird thing is that the same beam can display both patterns, depending on whether a slit is open or closed !

Common sense would say that a bullet is always a bullet, regardless of the experimental setup. However, the quantum world goes absolutely against common sense, since by operating a slit switch, without explicitly affecting the gun, the bullets become something else. Were such quantum effects dominant in the everyday world, one would be able to alter the moon by playing with something on earth. Weird ! Completely weird !
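
A tiny numerical sketch of the two patterns, with made-up numbers of my own: adding the amplitudes from the two slits gives fringes, whereas adding the intensities, as one would for classical bullets, gives a flat distribution.

# Far-field double-slit sketch (illustrative parameters only): interference shows
# up when amplitudes are added, not when intensities are added.
import numpy as np

wavelength = 0.5e-6          # 500 nm light
d = 10e-6                    # slit separation
distance = 1.0               # slit-to-screen distance in metres
x = np.linspace(-0.1, 0.1, 2001)                 # positions on the screen

phase = 2 * np.pi * (d * x / distance) / wavelength   # small-angle path difference -> phase
amp1 = np.ones_like(x, dtype=complex)            # amplitude from slit 1 (envelope ignored)
amp2 = np.exp(1j * phase)                        # amplitude from slit 2

I_wave = np.abs(amp1 + amp2) ** 2                # both slits open: 2 + 2 cos(phase), fringes
I_bullets = np.abs(amp1) ** 2 + np.abs(amp2) ** 2    # classical particles: flat, equal to 2

print(f"wave-like pattern:   min {I_wave.min():.3f}, max {I_wave.max():.3f}")
print(f"bullet-like pattern: min {I_bullets.min():.3f}, max {I_bullets.max():.3f}")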

Yes, it is weird relative to common sense, as weird as curved space-time! Einstein said our universe is curved, which is also counter-intuitive. Nevertheless, in spite of their weirdness, these ideas are elegant. Why ? Because they seem to be the simplest notions one can have that solve all the puzzles in their own fields. One can hardly expel them without running into awkwardness. They have a unique unifying power, and in the eyes of theoretical physicists the elegance of a concept consists largely in its unifying power. Such a concept solves not one phenomenon, not two, but a dozen seemingly isolated phenomena. In the quantum world, it is hard to dispense with wave-corpuscle duality and at the same time explain everything: it is impossible to explain the double-slit experiment with a particle-only picture without invoking some very ugly assumptions. Likewise, it is impossible to dispense with the relativity of simultaneity while still accounting for all fast phenomena.

So, weird and elegant are likely to go hand in hand. Further, what counts as weird keeps changing, because our common sense keeps changing. It is never a good reason to reject an elegant idea just because it is weird !

P.S.: this post was prompted by research presented in PHYS.FORUM aiming at eradicating wave-particle duality. The author was motivated by the question, 'what goes through the slits ?'. In my opinion, all existing answers to this question differ merely as a matter of semantics. You may use another name for wave-particle duality, but the content remains the same, because it lies at the heart of quantum mechanics.

Soft colloids make strong glasses

Despite our familiarity with it, many puzzles remain about glass. Here is a Letter on glass formation in aqueous suspensions of tiny microgel particles, driven by varying the concentration instead of the temperature. A close resemblance was found between these two routes, providing a new material for understanding glass. What is special about soft colloids is their deformability under concentration changes, which permits not only fragile glasses but also strong ones. This feature has not been seen in hard-sphere colloids.

Glass formation in colloidal suspensions has many of the hallmarks of glass formation in molecular materials. For hard-sphere colloids, which interact only as a result of excluded volume, phase behaviour is controlled by the volume fraction, φ; an increase in φ drives the system towards its glassy state, analogously to a decrease in temperature, T, in molecular systems. When φ increases above φ* ≈ 0.53, the viscosity starts to increase significantly, and the system eventually moves out of equilibrium at the glass transition, φ_g ≈ 0.58, where particle crowding greatly restricts structural relaxation. The large particle size makes it possible to study both structure and dynamics with light scattering and imaging; colloidal suspensions have therefore provided considerable insight into the glass transition. However, hard-sphere colloidal suspensions do not exhibit the same diversity of behaviour as molecular glasses. This is highlighted by the wide variation in behaviour observed for the viscosity or structural relaxation time, τ_α, when the glassy state is approached in supercooled molecular liquids. This variation is characterized by the unifying concept of fragility, which has spurred the search for a 'universal' description of dynamic arrest in glass-forming liquids. For 'fragile' liquids, τ_α is highly sensitive to changes in T, whereas non-fragile, or 'strong', liquids show a much lower T sensitivity. In contrast, hard-sphere colloidal suspensions are restricted to fragile behaviour, as determined by their φ dependence, ultimately limiting their utility in the study of the glass transition. Here we show that deformable colloidal particles, when studied through their concentration dependence at fixed temperature, do exhibit the same variation in fragility as that observed in the T dependence of molecular liquids at fixed volume. Their fragility is dictated by elastic properties on the scale of individual colloidal particles. Furthermore, we find an equivalent effect in molecular systems, where elasticity directly reflects fragility. Colloidal suspensions may thus provide new insight into glass formation in molecular systems.
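
To make 'fragility' a bit more concrete, here is a rough sketch of my own with arbitrary parameters: a strong liquid follows an Arrhenius law, tau_alpha ~ exp(A/T), while a fragile one is often fitted by a Vogel-Fulcher-Tammann form, tau_alpha ~ exp(B/(T - T0)), which is far more sensitive to temperature as T0 is approached.

# Strong vs fragile, schematically: how much the relaxation time grows on cooling
# over the same temperature interval. Parameters are arbitrary illustrations.
import numpy as np

def tau_strong(T, tau0=1e-12, A=10.0):
    return tau0 * np.exp(A / T)            # Arrhenius: 'strong' behaviour

def tau_fragile(T, tau0=1e-12, B=1.0, T0=0.9):
    return tau0 * np.exp(B / (T - T0))     # Vogel-Fulcher-Tammann: 'fragile', diverges at T0

T_hi, T_lo = 1.2, 1.0
for name, tau in (("strong", tau_strong), ("fragile", tau_fragile)):
    print(f"{name:7s}: tau grows by a factor {tau(T_lo) / tau(T_hi):8.1f} "
          f"between T = {T_hi} and T = {T_lo}")
# The fragile liquid slows down by a factor ~800 over this interval, the strong
# one by only ~5: that temperature sensitivity is what fragility quantifies.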

Tuesday, November 24, 2009

decoherence and collapse in quantum theory

The following news item seems to ignore the difference between decoherence and the collapse of the wave function. The former is governed by the Schrödinger equation and hence is, in principle, deterministic, whereas the latter is completely probabilistic. And never forget that a collapse takes no time, while decoherence does take time (the so-called decoherence time). The riddle is not about decoherence but about collapse. If collapse could be removed, Einstein would have accepted quantum theory !

WHY can't we be in two places at the same time? The simple answer is that it's because large objects appear not to be subject to the same wacky laws of quantum mechanics that rule subatomic particles. But why not - and how big does something have to be for quantum physics no longer to apply? Ripples in space-time could hold the answer.

The location of the boundary between the classical and quantum worlds is a long-standing mystery. One idea is that everything starts off as a quantum system, existing in a superposition of states. This would make an object capable of being, for example, in many places at once. But when this system interacts with its environment, it collapses into a single classical state - a phenomenon called quantum decoherence.

Brahim Lamine of Pierre and Marie Curie University in Paris, France, and colleagues say that gravitational waves may be responsible for this. These waves in the very fabric of the universe were generated by its rapid expansion soon after the big bang, as well as by violent astrophysical events such as colliding black holes. As a consequence, a background of ripples at very low amplitudes pervades space-time.

Gravitational waves may be responsible for collapsing quantum ambiguity into a single classical state

Lamine and colleagues calculated how this fluctuating space-time might contribute to quantum decoherence. They found that for systems with very large mass, such as the moon, decoherence induced by the gravitational waves would have caused any quantum superposition to dissipate immediately. At the other end of the scale, such waves would have a negligible effect on massless photons.

To test whether gravitational waves do in fact cause the decoherence seen in large objects, the researchers suggest using a set-up called a matter-wave interferometer in which molecules are made to pass through multiple gratings. The wave-like nature of the molecules causes them to diffract, and the diffracted waves interact to give rise to an interference pattern. Quantum decoherence destroys this pattern, so in principle this could provide a test for whether the decohering effect of background space-time fluctuations matches predictions. Such a system would have to be completely isolated to rule out other effects.

This is, however, impossible in practice - with today's interferometers, at least. Experiments pioneered by Anton Zeilinger, Markus Arndt and colleagues at the University of Vienna, Austria, have been able to generate interference with beams of 60-atom carbon buckyballs, but even with molecules of this size the effect of gravitational waves would be too small to be observed.

According to Lamine, who presented his work last month at the Gravitation and Fundamental Physics in Space meeting at Les Houches in the French Alps, the effect should be measurable in larger systems at high energy. Supersonic beams of about 3000 carbon atoms would do the trick if made to interfere over an effective area of about 1 square metre. This is far beyond the reach of any foreseeable technology.

Some speculative theories predict, however, that quantum decoherence will occur on a lower energy scale than that suggested by Lamine. If so, this could be within experimental reach. "That is why our experiments are pushing [up] the interference mass limit, step by step," says Arndt.
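
Coming back to the distinction I drew above, here is a minimal sketch of my own (with an arbitrary dephasing rate) of what decoherence does and does not do: the off-diagonal elements of a qubit's density matrix decay smoothly over a finite decoherence time, while the diagonal populations, between which a collapse would choose instantaneously and probabilistically, remain untouched.

# Decoherence vs collapse in miniature: a qubit prepared in (|0> + |1>)/sqrt(2).
# Pure dephasing damps the coherences exponentially and deterministically; it
# never selects one outcome, which is what a collapse would do. Rate is arbitrary.
import numpy as np

rho0 = 0.5 * np.array([[1, 1],
                       [1, 1]], dtype=complex)   # density matrix of the superposition
gamma = 1.0                                      # dephasing rate (sets the decoherence time)

for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    rho = rho0.copy()
    rho[0, 1] *= np.exp(-gamma * t)              # off-diagonal terms decay...
    rho[1, 0] *= np.exp(-gamma * t)
    print(f"t = {t:3.1f}:  populations {rho[0, 0].real:.2f} / {rho[1, 1].real:.2f},"
          f"  |coherence| = {abs(rho[0, 1]):.3f}")
# Populations stay 0.50 / 0.50 at all times; only the interference terms fade away.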

Friday, November 20, 2009

How do you interpret your results ?

Scientists should take caution when they explain their findings to journalists. Here is an example of how misleading claims can be made. The article headline claims an 'innate correlation between various power law phenomena', which however lacks justification in the original work. In that work, the authors study the relation between donation and wealth, which was found to follow Zipf's power law. This law was originally used to describe the distribution of words in a language. Since these two distant phenomena share the same mathematical structure, the authors claim that their work demonstrates an innate correlation between them, which is quite inappropriate. Obviously, sharing the same mathematical structure does not equal having an innate relation. By 'innate relation', I mean that one phenomenon is a logical corollary of another. In this sense, one cannot see any innate relation between 'wealth-donation' and 'words-language'. They just share the same mathematical curve, which may be a coincidence. One can find many similar examples in physics and other fields. For example, physicists have found that gauge theory, which was originally designed to describe the behavior of elementary particles, also emerges in condensed matter, but one cannot say that one can be deduced from the other.

........................................................................................................................................................................

Researchers Find Innate Correlations Among Different Power Law Phenomena

November 17, 2009, by Lisa Zyga


The Zipf plot of wealth and donation are innately connected. The upper part of the donation distribution follows Zipf’s law, and the Zipf exponent is equal to that of the corresponding wealth distribution. Image credit: Q. Chen, et al.

(PhysOrg.com) -- Studying the patterns that emerge in natural and social phenomena is a popular area of research, although usually individual phenomena are studied separately from each other. In a recent study, researchers have found innate correlations among some of these phenomena, showing that the amount of money that individuals in a society donate to a charity can be used to determine the distribution of personal wealth in that society. The connection between these two topics can also be used for exploring the complexity of a society's economic system.

“The greatest significance of this work is showing that power law phenomena in different references may correlate with each other innately,” Yougui Wang of Beijing Normal University told PhysOrg.com. “Thus, this implies that some power law phenomena should be the derivatives of other basic ones.”

The key to using patterns from one data set to infer the patterns of a different set of data is realizing that both sets share a mathematical principle called Zipf’s law, explained Wang, along with coauthors Qinghua Chen and Chao Wang of Beijing Normal University. Although Zipf’s law was originally proposed in the field of linguistics to explain the distribution of words in a language, it has attracted much more attention because it also describes a wide variety of natural and social phenomena. Zipf’s law quantitatively describes how the most common entities of a set (such as the common word “the”) appear with a high frequency that logarithmically tapers off as entities become less common. The same power law holds true for the distribution of population sizes, Internet traffic, and other phenomena. As researchers have previously found, in some cases the law stems from a competition among individuals for a constraint resource.

In the current study, the scientists show that collective donations follow a particular pattern: the upper part (made of the larger monetary donations) follows Zipf’s law, while the lower part (made of smaller donations) exhibits a uniform distribution. The data comes from donations by Chinese to the Chinese Red Cross Foundation after an earthquake of magnitude 8.0 struck Sichuan province in southwest China in May 2008. The data includes more than 230,000 personal donations, with the donation amount ranging from 0.01 RMB to 2.79 million RMB. Significantly, 205,000 donations (87.5%) of the total sample were 100 RMB or more. This part of the data approximately followed Zipf’s law, while the distribution of donations of less than 100 RMB was basically uniform.

So far, the analysis is yet another phenomenon of human behavior that follows the regularity of Zipf’s law. But the researchers also developed a model to explain this pattern, taking into account the previous finding that wealth distribution has also been known to follow Zipf’s law. Their model shows that only a portion of the individuals in a society have a desire to donate, and of these, each individual donates a portion of his or her overall wealth that is random but uniformly distributed. Even though only a small sample of the donators in the case of the Sichuan earthquake was collected and analyzed, the researchers’ model could generate the distribution of personal wealth throughout China, which is consistent with what has been obtained from the data of the richest 500 individuals in China.

As the researchers explain, donation and wealth, like other power law phenomena, seem to coexist in systems. By showing that power law phenomena can be related to one another, the researchers’ work could be valuable for exploring the correlations among natural patterns in systems.

“Based on the results of our study, the distribution of wealth could be derived from that of donation,” said Qinghua Chen. “Once the link between two variables involved in a complex system is just like the relation between donation and wealth in our case, we can infer the distribution of one variable from the other.”

More information: Q. Chen, C. Wang, and Y. Wang. “Deformed Zipf’s law in personal donation.” Europhysics Letters, 88 (2009) 38001. doi: 10.1209/0295-5075/88/38001

............................................................................................................................................................................
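
Coming back to my point about shared power laws: here is a small synthetic illustration of my own. Two completely unrelated data sets, below just two independent Pareto samples, give essentially the same straight line on a Zipf (rank-size) plot, so a common power law by itself establishes no innate relation.

# Two statistically independent synthetic data sets yield the same Zipf exponent.
# Sharing a power law says nothing about one phenomenon being derivable from the other.
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.0                                    # Pareto tail exponent (arbitrary)
set_a = rng.pareto(alpha, 50_000) + 1.0        # stand-in for "wealth"
set_b = rng.pareto(alpha, 50_000) + 1.0        # stand-in for "word frequencies"

def zipf_exponent(x, top=1000):
    """Slope of log(size) vs log(rank) for the `top` largest entries."""
    s = np.sort(x)[::-1][:top]
    ranks = np.arange(1, top + 1)
    return np.polyfit(np.log(ranks), np.log(s), 1)[0]

print("Zipf exponent, set A:", round(zipf_exponent(set_a), 2))
print("Zipf exponent, set B:", round(zipf_exponent(set_b), 2))
# Both come out close to -1/alpha = -1, yet the two samples have nothing to do
# with each other: same curve, no causal link.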

Tuesday, November 17, 2009

Models and explanation

As I have said many times, the following, said this time by Sergei Maslov, is my credo in doing science:

Even more important than tools, theoretical physics has taught me about the power of simple models in revealing the essence of complex phenomena. Simple models are indispensable if one wants not to just reproduce the complexity of a system (e.g. by detailed computer simulations) but to truly understand it.
In my opinion, no comprehension can be achieved without simple models, no matter how accurate a simulation may be, because understanding is not simply about accuracy. Rather, it is about establishing a clear connection between one's already existing experience (including one's knowledge) and the phenomena under consideration. Only when this connection can be built in the light of his experience can a person be satisfied that he understands. This process of building a connection is really a process of constructing models based on what one knows and reasoning with those models.

P.S., Maslov is doing research in systems biology and complex networks.

An opaque fishing net ?

Light, because of its wave-corpuscle duality, shows many surprising behaviors. A recent PRL paper [1] adds one more. Imagine you fabricate a gold film on a glass substrate and punch a regular array of sub-wavelength holes in it, then shine light upon it. The film is so thin--about 20 nm--that it is semi-transparent. Now you look at the transmission, which unexpectedly turns out to be smaller than without the holes. That is, the perforated film, contrary to expectations, gives a more obscure view. Nevertheless, if the film is much thicker, say 100 nm, the scenario is the opposite: the transmission is greatly enhanced [2].

The authors think that a Fano analysis may lend an explanation. According to them, two interfering wavelets contribute to the transmitted wave: a resonantly scattered one and a non-resonantly scattered one. The former involves resonant excitation of surface plasmons, whereas the latter enters directly through the holes. It turns out that the Fano resonance hinges on a single parameter: the ratio of the resonant to the directly transmitted amplitude, divided by the line width (which measures the coherence of the resonance). They argue that this parameter is large for thick films but rather small for extremely thin films, which may then give rise to the observations.

It is worth noting that surface plasmons may play a central role. The interaction between light and plasmons is obviously an interesting subject: it can carry light through very small holes, the light first coupling to the plasmons, which then carry it to the other side [2].
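
For orientation, here is the standard Fano line shape in its textbook form (my own schematic, not the quantitative fit of Ref. [1]): sigma(eps) = (q + eps)^2 / (1 + eps^2), where eps is the reduced detuning and q plays the role of the single parameter mentioned above. Large q gives a strong resonant enhancement; small q leaves mainly a dip, i.e., suppressed transmission near the resonance.

# Fano line shape: large q -> pronounced peak, small q -> mostly a dip below the
# background. Schematic only; the analysis of Ref. [1] is more involved.
import numpy as np

def fano(eps, q):
    """Fano profile (q + eps)^2 / (1 + eps^2); background level far from resonance is 1."""
    return (q + eps) ** 2 / (1 + eps ** 2)

eps = np.linspace(-10, 10, 2001)
for q in (5.0, 0.3):                 # caricature of "thick film" vs "ultrathin film"
    profile = fano(eps, q)
    print(f"q = {q}:  min = {profile.min():.3f}   max = {profile.max():.3f}")
# Both profiles touch zero at eps = -q, but for q = 5 a large peak (1 + q^2 = 26)
# dominates, while for q = 0.3 the peak barely exceeds the background of 1 and the
# dip wins: net suppression of transmission around the resonance.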

[1]Phys. Rev. Lett. 103, 203901 (2009)

[2]Nature (London) 391, 667 (1998)

Monday, November 16, 2009

Weird light

Here is a fascinating work on counter-intuitive facets of light [1].

If a metal film, thick enough to be totally opaque, is perforated by tiny subwavelength holes in an orderly fashion, the transmission will be enhanced extraordinarily [2]. Here, we investigate the transmission through an ultrathin semitransparent Au film with a square array of subwavelength holes and observe the opposite behavior: less light is transmitted through the pierced metal compared to the closed film.

The authors blame surface plasmons for the blocking of light, although not everyone agrees. Here is a comment.

The way light moves, with its fixed speed and its ability to act like either a wave or a particle, often leads to some of the most curious paradoxes of physics. A new one has just been found: Make holes in a film of gold so thin that it's already semitransparent, and less light gets through.

Because of its wave nature, light generally can't squeeze through a hole whose width is smaller than the wavelength of the light. In 1998, however, researchers discovered that light could zip through certain patterns of such holes punched into thin metal plates. Physicists figured out that the light created waves in the metal's electrons--called plasmons--that move across the material's surface in much the same way that ripples move through water. The plasmons, which have wavelengths much shorter than light, couple with each other across the tiny holes and pull the light along for the ride. One possible application is to use plasmons to build better light-based integrated circuits that would be as fast as fiber optics but less bulky.

Toward this end, researchers from the University of Stuttgart in Germany laid very thin films of gold onto pieces of glass and then used ion beams to etch the film with holes arranged in a regular, square array. These holes were smaller than the wavelength of light and, despite being so tiny, are just the kind of openings that have been shown to let light through the thicker, opaque film used in the 1998 experiment. But in the new experiment, the gold film was so thin--only 20 nanometers--that light could already shine through it. And surprisingly, less light went through the holey gold than through the original semitransparent film.

Why? The researchers blame the semitransparent nature of the gold film, which allows 40% of the light to flow directly through it, preventing it from stopping at the surface to help form plasmons. Plasmons are formed by the kick of energy they get from the incoming light, combined with how the electron waves of the plasmons skitter around the hole geometry, so the light needs to be tuned to the specific hole geometry to maximize plasmons. In this case, that leftover 60% of the light simply doesn't combine with the geometry to create plasmons that can cross through the gold holes, the team reports this week in Physical Review Letters.

Physicist Martin P. van Exter of Leiden University in the Netherlands says that interference between hole geometry and light transmission is expected, so the results shouldn't come as too much of a surprise. However, he also points out that gold always absorbs light in peculiar ways--indeed, this is what leads to its golden color instead of most metals' more typical silver--and it's possible that this contributed to the results.

Team member Bruno Gompf says that the next step is to see whether other hole patterns--hexagonal, rectangular, aperiodic--show the same effect. Perhaps a particular pattern could serve as a filter to block certain wavelengths of light in those future plasmonic integrated chips, he says.

The origin of this still seems elusive.

[1]Phys. Rev. Lett. 103, 203901 (2009)
[2]Nature (London) 391, 667 (1998)

Saturday, November 14, 2009

An interesting correspondence between magnetism and superconductivity

Magnetism and superconductivity are apt to be found together in a number of strongly correlated electron systems. Celebrated examples are the cuprate superconductors and their recent iron-based analogs. As doping is varied, these materials move between magnetism and superconductivity. It is thus no wonder that many surmise an inherent connection between these two phases. It is quite possible that the same electronic interaction controls them both.

This Viewpoint discusses another aspect of the connection between magnetism and superconductivity. The discussion strongly suggests that a mathematical description generic to both may be established. If this were the case, two things would follow: (1) competition between magnetism and superconductivity must exist as a rule, and (2) understanding one could help comprehend the other.

The unknown aspects of friction



Perhaps everyone has some knowledge of friction, gained through experience and/or science education. As a part of daily life, friction is as common as gravity. Many people think that a thorough understanding of it must already have been attained; the reality is nonetheless not so satisfactory. Here [1] is a piece of work attempting to resolve an issue concerning the onset of sliding under a force applied at the trailing edge of a slider that rubs against a track.



The authors set up a phenomenological model and solved it numerically. The interaction between the slider and the track is modeled by spring contacts whose rupture and formation are governed by a simple law set by a threshold force. Their study shows that sliding is always preceded by crack-like fronts, which signify the propagation of broken contacts.

[1]PRL, 103:194301(2009)
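
Here is a crude one-dimensional toy in the same spirit (entirely my own, not the model of Ref. [1]): a chain of blocks coupled by springs, each pinned to the track by a contact that ruptures once the local force exceeds a threshold. Pushing the trailing end makes the contacts fail one after another, so a rupture front sweeps along the interface before the whole chain slides.

# Toy 1D onset of sliding: blocks coupled by springs, each pinned by a contact
# that breaks when the local force exceeds a threshold. Illustrative only; this
# is not the specific model of Ref. [1].
import numpy as np

N = 50                        # blocks along the slider
k = 1.0                       # inter-block spring constant
f_c = 0.5                     # threshold force a pinning contact can sustain
push = 0.02                   # displacement added at the trailing edge per step

x = np.zeros(N)               # block displacements
broken = np.zeros(N, bool)    # which contacts have ruptured
broken[0] = True              # the driven trailing block is not pinned

history = []
for step in range(20000):
    x[0] += push                                     # drive the trailing edge
    for i in range(1, N):
        force = k * (x[i - 1] - x[i])
        if i < N - 1:
            force += k * (x[i + 1] - x[i])
        if not broken[i] and abs(force) > f_c:
            broken[i] = True                         # contact ruptures
        if broken[i]:
            x[i] += 0.5 * force                      # a freed block relaxes
    history.append(int(broken[1:].sum()))
    if broken.all():
        print(f"whole interface detached after {step + 1} steps")
        break

print("ruptured contacts every 2000 steps:", history[::2000])
# The count grows steadily from the pushed edge: a crack-like front of broken
# contacts propagates along the interface before global sliding sets in.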

Wednesday, November 11, 2009

The following is a quotation from a guy I highly respect. I agree with him very much on the matter discussed below:

Creationist Refutes Darwin’s Evolutionary Theory - A Rebuttal

I mentioned this talk about a week ago, in which a creationist attempted to falsify Darwin's theory of evolution. This morning, I found this response, written by a junior physics major, that easily cast a lot of doubt on the garbage that was spewed at that talk.

Carter used a wonderful scientific vocabulary and showed some facts that were true.

However, blinded by science jargon, he put up facts and figures with little truth to them, no way to verify them (or if he did, they were not accurate and considered fraudulent in the scientific community), nor accuracy to the science actually used.

This man performed a wonderful show, and is an outstanding example of how the public will believe almost anything that has numbers and graphs in it with no scientific proof.


The writer listed several examples where Carter simply can't produce valid sources for his numbers.

I'm left to wonder how many people in the audience bought into what they were told. We often talk about the public needing to be scientifically literate. What we mean by that is NOT that the public knows all these "facts", but rather that it has the skill to analyze how one goes from A to B to C to D. How, for example, do you draw the conclusion that, say, "gay marriage" leads to "undermining traditional marriage"? People throw out those two phrases all the time, but no one seems to explain the mechanism by which "gay marriage" CAUSES the "undermining of traditional marriage". Not only that: if such a mechanism exists, one needs to publish it and have it scrutinized by experts in the field of study to ensure that the mechanism is valid, and that it leads to that unique conclusion.

The same thing is occurring here. One simply can't throw out all of these numbers and conclusions (something that is commonly done in politics and economics) without any basis to show that they are valid. But the public that isn't familiar with the scientific process is unaware of that. This is why I'm very proud of this young writer who already has the skill (hopefully something he gained from his education) to analyze and question how such conclusions are made. So well done, Jim Eakins!

Making the public scientifically literate should mean making them able to make a rational analysis of how one draws a conclusion. That is why, when I proposed a revamping of the undergraduate intro physics labs, I tried to steer away from "textbook tests" of physics principles. Rather, I focused on how one can draw the conclusion that A depends on B, and on what the exact relationship between the two is. Our world has always been focused on how we can relate things, how they are interconnected, etc. These types of lab exercises present precisely such tests.
It is a nice statement.

Wednesday, November 4, 2009

A grand unification

In science, one thing always fills me with curiosity and awe. Roughly, I'd like to call it 'grand unification': a simple concept that connects various seemingly unrelated and independent phenomena occurring in distant disciplines. Such unification corroborates the conviction that the universe has a common underlying mathematical structure. Here I discuss one more example of this.

This example is about fast symmetry-breaking phase transitions (FSPT), in contrast with the usual case where one discusses phase transitions near equilibrium, which may be described in hydrodynamic language. To my knowledge, no general effective theory has so far been established for FSPT. Despite this, it is possible to find very generic constraints on some quantities, constraints that derive from the basic structure (such as symmetry and causality) of the first-principles theory. Suppose a physical system is in equilibrium and one changes an experimental knob (e.g., pressure). The system then evolves away from its initial equilibrium state according to the corresponding dynamical theory, and eventually reaches a new equilibrium state. What this final state might be, however, should depend on the evolution and on how the knobs are changed. An interesting point is that during this evolution some topological defects (textures that break the overall symmetry) form. Generally it is expected that, as constrained by causality, meaning that no physical signal can travel faster than a velocity characteristic of the system, two textures separated by a typical distance, say x, cannot develop significant correlations during the evolution. Assume that a texture is characterized by a 'direction vector'. Causality then indicates that two distant textures choose their directions independently, in the statistical sense. Therefore, because of the symmetry respected by the first-principles model, every direction vector will be found in some texture. Now an interesting question is: how does x scale with the rate of the phase transition ?

In 1976, Kibble made an estimate [1] based on a cosmological model. It deals with cosmic strings, inherently stable topological objects that are expected to survive to the present day. Experimental verification of his idea is, however, quite difficult, since it concerns the whole universe, which is so vast. A breakthrough came with Zurek [2], who applied Kibble's idea to helium, which undergoes a superfluid transition at very low temperature. In such a system, topological defects, known as vortices and characterized by their winding numbers, can form as the transition is crossed. The helium system therefore provides a remarkable tabletop realization of the big universe in this respect. To obtain the relation between x and the transition rate, Zurek employed the dynamical Landau theory, which was supposed to govern the temporal evolution. It turns out that the relation follows a simple power law. Verification of this law came afterward; the latest test was done on a superconductor [3]. For a good review see Ref. [4].
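
For concreteness, the standard freeze-out estimate can be written compactly (a textbook-style sketch, not the specific calculation of Refs. [2,3]). Here \tau_0 and \xi_0 are microscopic time and length scales, \nu and z the usual equilibrium critical exponents, and \tau_Q the quench time. With equilibrium scalings and a linear quench,

\tau(\epsilon) = \tau_0 |\epsilon|^{-z\nu}, \qquad \xi(\epsilon) = \xi_0 |\epsilon|^{-\nu}, \qquad \epsilon(t) = t/\tau_Q,

the system falls out of equilibrium when the relaxation time equals the time remaining before the transition, \tau(\epsilon(\hat{t})) = \hat{t}, which gives

\hat{t} = (\tau_0\, \tau_Q^{z\nu})^{1/(1+z\nu)}, \qquad \hat{\xi} = \xi_0\, (\tau_Q/\tau_0)^{\nu/(1+z\nu)},

so the typical defect spacing x ~ \hat{\xi} scales as a simple power of the quench time \tau_Q: slower quenches leave more widely separated defects.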

It is exciting to see that something speculated about the cosmos has its image on earth. I'd like to pose another question concerning FSPT: how does x scale with the dimensionality of the system ?

It may be worth pointing out, as Anderson emphasized in his 'more is different' address, that for dealing with emergent phenomena it is not enough to know just the first-principles model (e.g., the BCS model), which is usually of little use in building intuition and understanding. It is more useful to come up with an effective model (e.g., Landau's model), which concerns the quantities of immediate interest instead of those appearing in the first-principles model.

[1]J.Phys.A, 9:1387(1976)
[2]Nature, 317:505(1985)
[3]PRB, 80:180501(R), 2009
[4]Phys.Today, 60:47(2007)

Tuesday, November 3, 2009

First order phase transition rounded by disorder

This is an interesting work, which concludes that an arbitrarily weak random perturbation erases the discontinuity that would otherwise exist in the order parameter (namely, the polarization) as a function of the conjugate field in a quantum spin system. In other words, the delta peak in the susceptibility is rounded in the presence of random perturbations. It turns out that the conditions for this are the same as for classical systems.

[1]PRL, 103:197201(2009)

Monday, November 2, 2009

How does decoherence take place ?

This is a very fundamental and profound question. I remember that, in a letter to Pauli, Einstein questioned the superposition principle of quantum mechanics, asking why a bullet is always somewhere definite instead of everywhere. This is an old instance of the question posed in the title. Although it is now accepted by many authors that the answer should be closely related to the concept of decoherence, it is not clear how this happens, nor whether it is the case for every system.

I want to mention some other examples:
  • In statistical mechanics, it is assumed that the average of any observable is taken over all thermodynamically accessible energy eigenstates with the corresponding Boltzmann weights. This implies that all the interference that might occur during unitary evolution has been set aside. Usually the physical system of interest is bulk and immersed in a heat bath, so this assumption works well. Nonetheless, violations may arise as soon as the interference time becomes discernible. The situation is comparable to light interference: for natural light, polarization has no significant effect in interference experiments, which is not so with a laser. It is quite evident that such decoherence should be ascribed to the interaction with the heat bath, which acts as a stochastic source. A general assertion relating system size, temperature and coherence time is lacking. (A minimal numerical sketch of this Boltzmann-weighted average appears at the end of this post.)
  • Measurement theory has been debated since the discovery of quantum mechanics. How does a measurement lead to the collapse of the wave function ? Does a measurement necessarily involve a classical object ? Or does a measurement actually involve decoherence ?
  • The third is usually called 'Hund's paradox', which
    concerns how to explain from first principles why molecules often appear as enantiomers, i.e., either in a left-handed configuration or in the right-handed mirror image.
    That is, the superposition of these two configurations disappears, contrary to one's expectation based on parity symmetry.
The last question was recently addressed carefully in Ref. [1], where the authors used molecular scattering theory and a master equation to investigate the simplest relevant molecule, D2S2. They concluded that the dominant collisional decoherence is due to a parity-sensitive higher-order dispersive interaction term that is usually dropped when dealing with thermodynamic properties. They also made predictions about the conditions under which enantiomers could be stabilized experimentally.

[1]PRL, 103:023202(2009)
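
Coming back to the first bullet above, here is a minimal numerical sketch of my own (a toy two-level system with an arbitrary observable): the Boltzmann-weighted average keeps only the diagonal matrix elements, whereas a coherent superposition with the same populations also picks up the interference (off-diagonal) terms that decoherence is supposed to kill.

# Thermal (Boltzmann-weighted) average vs coherent-superposition average for a
# toy two-level system. The energies and observable are arbitrary choices.
import numpy as np

E = np.array([0.0, 1.0])                 # energy eigenvalues
O = np.array([[0.3, 0.5],                # a Hermitian observable in the energy
              [0.5, -0.2]])              # eigenbasis; off-diagonals = interference

beta = 2.0                               # inverse temperature
w = np.exp(-beta * E)
w /= w.sum()                             # Boltzmann weights

thermal_avg = np.sum(w * np.diag(O))     # only diagonal elements enter

psi = np.sqrt(w)                         # a pure state with the same populations
coherent_avg = psi @ O @ psi             # interference terms now contribute

print("thermal average :", round(float(thermal_avg), 4))
print("coherent average:", round(float(coherent_avg), 4))
# The difference is entirely due to the off-diagonal terms that the statistical
# mixture discards; decoherence is what justifies discarding them.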

Cherenkov radiation (CR) in negative refractive materials

Nature provides an endless variety of substances with amazing properties. That is why scientists go back from time to time to learn from nature. The design of solar cells may benefit a great deal from inspecting how a plant makes food via photo-induced chemical reactions, and what underlies colorful butterfly wings is a marvelous texture now known as a photonic crystal. Nevertheless, nature sometimes seems wanting in its diversity. An example is the so-called negative refraction materials (NRM), which are so far not available naturally. All known NRM are made artificially, going under the name of left-handed composite metamaterials.

NRM are special in that they have a negative refractive index, which means the refracted ray lies on the same side of the interface normal as the incident ray. This property opens novel ways of manipulating light, and many applications are bound to follow. Years ago [1], it was proposed as the basis of a perfect lens, whose resolution would be finer than the light's wavelength, something impossible with conventional positive-refraction materials. More innovations will surely come out soon.

Again because of the negative refraction, Cherenkov radiation is also quite different: what happens is reversed Cherenkov radiation. Namely, as a charged particle passes through an NRM dielectric at a speed greater than the speed of light in that dielectric, the radiation is emitted backward. A direct experimental observation of this is still rare; in Ref. [2], however, a vivid simulation was conducted. Even so, it remains highly desirable to observe what happens when a real beam is passed through an NRM.
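
The geometry follows from the textbook Cherenkov cone condition (a standard relation, not taken from Ref. [2]): with \beta = v/c and refractive index n,

\cos\theta_c = \frac{1}{n\beta}, \qquad n\beta > 1,

so in an ordinary medium (n > 0) the radiation forms a forward cone, whereas for n < 0 one gets \cos\theta_c < 0, i.e., \theta_c > 90°, and the cone points backward, which is the reversed Cherenkov radiation mentioned above.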

A question: can we someday find a naturally available NRM ? This would be very interesting and exciting ... ...

[1]PRL, 85:3966(2000)
[2]PRL, 103:194801(2009)

The Philosophy of Science

I do science because I want to see how things operate causally. I believe they should operate causally.

Science is about comprehending natural phenomena in a logical way. The phenomena, at first glance, seem scattered and unrelated, and their unification is the goal of science. Science attempts to achieve unification with a conceptual model, on which logical deductions are then based. In science, one tries to relate various phenomena using the smallest possible number of concepts and axioms, much the way everything about flat-space geometry is built on the Pythagorean theorem. Doing science is like play. The player is all the time looking for new ways of playing with the toy in hand, and frequently for new toys. He examines a toy from various perspectives, thinks about what will happen if certain conditions are imposed, and then tries to do what he speculates. He entertains himself in doing this. The toy for scientists is any piece of Nature.

We take a piece of Nature and think about what we can do with it. We may place it in a heat bath and measure its heat capacity. We may apply an electric field to it and watch its responses. We may consider how a beam of light can be influenced by it. Or we may bombard it with a beam of electrons or other kinds of particles and observe what happens to the beam and the target. We want to know more and more of what will happen if this and that ... On the other hand, we may also examine it theoretically: we put forth a model and employ math and ideas to make predictions about what may occur given this and that ... We then contrast the predictions with observations to see how well the model fits, and how a better understanding might be accomplished with another model.

It is not simply about experiments and models. It is fundamentally about how to know more of and how to better understand nature, rationally. It is about exploring nature. It is a pursuit. It is an Odyssey. Science is a life style. Like arts, science is an endeavor to capture the world.

Incidentally, it is essential to realize that science is not a part of Nature itself, but rather of human culture. Thus, though it proves of great value to mankind's development, science does not bear any objective meaning in itself. It is shaped by humans, as clothes are. As Albert Einstein once remarked, 'one knows little of life. Anyway, how much does a fish know of the water it lives in ?'

Sunday, November 1, 2009

shielding earthquakes

Earthquakes tend to cause disasters for humans, so it is desirable to screen them. Seismic waves are generally composed of two components: the transverse one (the S waves, in which the ground moves perpendicular to the direction of propagation) and the longitudinal one (the P waves, in which the ground compresses and dilates along the direction of propagation). The latter travel faster and reach more distant places, while the former are more destructive. Here is a piece of work proposing a design, based on a concentric structure, to shield P waves. Their numerical simulations show that the design is efficient over a broad frequency band.
(1)Ultrabroadband Elastic Cloaking in Thin Plates
(2)brief introduction to seismic waves

red blood cells in flow conditions

An interesting question in cell dynamics is how the shape of a cell changes under flow. Suppose a cell is moving through a blood vessel: it is pressed by the ambient fluid and, as a result, may change its shape under this pressure. A satisfactory treatment, however, is considerably difficult. The following Letter focuses on this issue by means of numerical simulations.

Why Do Red Blood Cells Have Asymmetric Shapes Even in a Symmetric Flow?

(1)Phys. Rev. Lett. 103, 188101 (2009) – Published October 26, 2009

single CuO2 superconducting sheet demonstrated !

The question of how thin cuprate layers can be while still retaining superconductivity has been challenging to address, in part because experimental studies require the synthesis of near-perfect ultrathin HTS layers and ways to profile the superconducting properties with atomic resolution. In this work, the authors address this issue.

The idea may be described as follows. As is known from the phase diagram of p-type cuprate superconductors, superconductivity (SC) exists between two limiting doping levels, say x1 and x2. Below x1 the compound is insulating (I); beyond x2 it is a bad metal (M). By epitaxy, it is easy to grow a few I layers on top of a few M layers. Since the M layers have too many holes while the I layers lack them, holes flow from the M to the I layers and eventually all layers become superconducting. At this point, one marks a particular layer by partial substitution of Zn for Cu; as we know, such substitution dramatically suppresses SC within that layer. Using STM, it is then possible to obtain a complete profile of all the layers in terms of carrier concentration and critical temperature. The result shows that the Zn substitution affects the superconducting properties of the marked layer only, which means SC occurs within a single layer.

This finding therefore supports the assumption that high-Tc superconductivity is not essentially a 3D phenomenon, which is crucial to many current high-Tc models.

(1)doi: 10.1126/science.1178863