The supreme task of the physicist is to arrive at those universal elementary laws from which the cosmos can be built up by pure deduction. There is no logical path to these laws; only intuition, resting on sympathetic understanding of experience, can reach them.
Monday, January 30, 2012
A planet orbiting a pair of suns!!
Monday, October 24, 2011
How many neutrinos should exist?
Neutrino oscillations, observed through the transmutation of neutrinos of one type into neutrinos of another type, occur if there is mixing between neutrino types and if individual neutrino types consist of a linear combination of different neutrino masses. (At present, the masses and mixings of the fundamental quarks and leptons can be measured but are not fully understood.) In the case of two-neutrino mixing — for example, mixing between the muon neutrino and the electron neutrino — the probability (P) that a muon neutrino (νμ) will oscillate into an electron neutrino (νe) is given by P(νμ → νe) = sin²(2θ) sin²(1.27Δm²L/E). Here, θ, in radians, describes the mixing between the muon neutrino and electron neutrino; Δm² is the difference of the squares of the masses of the two neutrinos in square electronvolts; L is the distance travelled by the muon neutrino in kilometres; and E is the muon-neutrino energy in gigaelectronvolts.
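As a quick numerical check, the sketch below simply evaluates this two-flavour formula; the mixing angle, Δm², baseline and energy are illustrative choices, not values taken from the article.

```python
# Minimal sketch evaluating the two-flavour oscillation probability quoted above:
#   P(nu_mu -> nu_e) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with dm2 in eV^2, L in km and E in GeV.  The numbers below are illustrative only.
import math

def p_mu_to_e(theta_rad, dm2_eV2, L_km, E_GeV):
    """Two-neutrino appearance probability P(nu_mu -> nu_e)."""
    return math.sin(2.0 * theta_rad) ** 2 * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Example: atmospheric-scale dm2 ~ 2e-3 eV^2, a 735 km baseline, 2 GeV neutrinos.
print(p_mu_to_e(theta_rad=0.15, dm2_eV2=2e-3, L_km=735.0, E_GeV=2.0))
```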
In general, the number of different neutrino masses equals the number of neutrino types, so that three-neutrino mixing involves three neutrino masses and two independent Δm² values, whereas five-neutrino mixing involves five neutrino masses and four independent Δm² values. Neutrino oscillations have been observed at a Δm² of about 7 × 10⁻⁵ eV² by detectors that measure the flow of neutrinos from the Sun and experiments that detect neutrinos at a long distance from nuclear reactors. The oscillations have been detected at a Δm² of around 2 × 10⁻³ eV² by detectors that measure the flow of neutrinos from the atmosphere and by experiments in which neutrinos are measured at a long distance from particle accelerators. In addition to these confirmed observations of neutrino oscillations, there is also evidence for oscillations at a Δm² of about 1 eV² from short-distance accelerator and reactor neutrino experiments2,3,4. However, it is not possible to explain this third Δm² value with only three neutrino masses. Therefore, additional neutrino masses are required.
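The obstruction is a simple sum rule: with only three masses the three pairwise differences are not independent, since Δm²₃₁ = Δm²₂₁ + Δm²₃₂. A minimal arithmetic check with the quoted values (my own illustration, not the paper's):

```python
# Sum-rule check: with only three neutrino masses, the third mass-squared
# difference is fixed by the other two (values in eV^2, as quoted above).
dm2_21 = 7e-5      # solar / long-baseline reactor scale
dm2_32 = 2e-3      # atmospheric / long-baseline accelerator scale
dm2_31 = dm2_21 + dm2_32
print(dm2_31)      # ~2.07e-3 eV^2, nowhere near the ~1 eV^2 hinted at by
                   # short-baseline experiments, hence the need for extra (sterile) masses
```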
In their study, Kopp et al.1 tried fitting the world neutrino-oscillation data to theoretical models involving four different neutrino masses (three active neutrinos plus one sterile neutrino) and then five different neutrino masses (three active plus two sterile neutrinos; Fig. 1). They found that one sterile neutrino was insufficient to explain the world data, but two gave a satisfactory global fit. (Similar fits are discussed elsewhere5, 6, 7, 8, 9, 10.) One other feature of the authors' two-sterile-neutrino fit is that it allows for violation in leptons of the charge–parity (CP) symmetry — according to which particles and antiparticles behave like mirror images of each other — or for a difference between neutrino oscillations and antineutrino oscillations. Such CP violation might help to explain the r-process, in which heavy elements are produced through nuclear reactions involving rapid neutron capture (hence the 'r'), and the production of heavy elements in neutrino bursts from stellar explosions known as supernovae. It might also help to explain why the Universe is dominated by matter and not by an equal amount of matter and antimatter.
Tuesday, October 18, 2011
Neutrinos not that fast!!
Sunday, July 3, 2011
Space is very smooth at the Planck scale
Space is just so smooth! [http://www.physorg.com/news/2011-06-physics-einstein.html]
Einstein’s General Theory of Relativity describes the properties of gravity and assumes that space is a smooth, continuous fabric. Yet quantum theory suggests that space should be grainy at the smallest scales, like sand on a beach.
One of the great concerns of modern physics is to marry these two concepts into a single theory of quantum gravity.
Now, Integral has placed stringent new limits on the size of these quantum ‘grains’ in space, showing them to be much smaller than some quantum gravity ideas would suggest.
According to calculations, the tiny grains would affect the way that gamma rays travel through space. The grains should ‘twist’ the light rays, changing the direction in which they oscillate, a property called polarisation.
High-energy gamma rays should be twisted more than the lower energy ones, and the difference in the polarisation can be used to estimate the size of the grains.
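The article does not give the formula, but a commonly used parametrization of such Planck-suppressed birefringence rotates the polarization plane by roughly Δθ ≈ ξ E² d / (2 E_Planck) in natural units, with order-one factors dropped and ξ a dimensionless strength. The sketch below, with an illustrative distance and energies, shows why even a tiny ξ would produce an enormous energy-dependent rotation over cosmological distances.

```python
# Hedged sketch, not from the article: in one common parametrization of
# linear (in E/E_Planck) Lorentz-violating birefringence, the polarization
# plane of a photon of energy E rotates by roughly
#     dtheta(E) ~ xi * E^2 * d / (2 * E_Planck)   (natural units, O(1) factors dropped)
# over a propagation distance d, with xi a dimensionless parameter.
# The distance and energies below are illustrative, roughly matching GRB 041219A.

HBARC_GEV_M = 1.9733e-16          # hbar*c in GeV*m
E_PLANCK_GEV = 1.221e19           # Planck energy in GeV
LIGHT_YEAR_M = 9.4607e15          # metres per light year

def rotation_over_xi(E_GeV, distance_m):
    """Polarization rotation angle divided by xi, in radians."""
    d_inv_gev = distance_m / HBARC_GEV_M   # distance converted to GeV^-1
    return E_GeV**2 * d_inv_gev / (2.0 * E_PLANCK_GEV)

d = 3.0e8 * LIGHT_YEAR_M                   # ~300 million light years
for E in (1.0e-4, 3.5e-4):                 # 100 keV and 350 keV, in GeV
    print(E, rotation_over_xi(E, d))       # huge numbers => xi must be tiny
# Because no energy-dependent rotation was seen, xi is forced to be extremely small.
```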
Philippe Laurent of CEA Saclay and his collaborators used data from Integral’s IBIS instrument to search for the difference in polarisation between high- and low-energy gamma rays emitted during one of the most powerful gamma-ray bursts (GRBs) ever seen.
GRBs come from some of the most energetic explosions known in the Universe. Most are thought to occur when very massive stars collapse into neutron stars or black holes during a supernova, leading to a huge pulse of gamma rays lasting just seconds or minutes, but briefly outshining entire galaxies.
GRB 041219A took place on 19 December 2004 and was immediately recognised as being in the top 1% of GRBs for brightness. It was so bright that Integral was able to measure the polarisation of its gamma rays accurately.
Dr Laurent and colleagues searched for differences in the polarisation at different energies, but found none to the accuracy limits of the data.
Some theories suggest that the quantum nature of space should manifest itself at the ‘Planck scale’: the minuscule 10⁻³⁵ of a metre, where a millimetre is 10⁻³ m.
However, Integral’s observations are about 10 000 times more accurate than any previous and show that any quantum graininess must be at a level of 10⁻⁴⁸ m or smaller.
“This is a very important result in fundamental physics and will rule out some string theories and quantum loop gravity theories,” says Dr Laurent.
Integral made a similar observation in 2006, when it detected polarised emission from the Crab Nebula, the remnant of a supernova explosion just 6500 light years from Earth in our own galaxy.
This new observation is much more stringent, however, because GRB 041219A was at a distance estimated to be at least 300 million light years.
In principle, the tiny twisting effect due to the quantum grains should have accumulated over the very large distance into a detectable signal. Because nothing was seen, the grains must be even smaller than previously suspected.
“Fundamental physics is a less obvious application for the gamma-ray observatory, Integral,” notes Christoph Winkler, ESA’s Integral Project Scientist. “Nevertheless, it has allowed us to take a big step forward in investigating the nature of space itself.”
Now it’s over to the theoreticians, who must re-examine their theories in the light of this new result.
Thursday, June 16, 2011
The Universe seems less smooth than theory predicts
Thomas et al. use publicly released catalogs from the Sloan Digital Sky Survey to select more than 700,000 galaxies whose observed colors indicate a significant redshift and are therefore presumed to be at large cosmological distances. They use the redshift of the galaxies, combined with their observed positions on the sky, to create a rough three-dimensional map of the galaxies in space and to assess the homogeneity on scales of a couple of billion light years. One complication is that Thomas et al. measure the density of galaxies, not the density of all matter, but we expect the fluctuations of these two densities about their means to be proportional; the constant of proportionality can be calibrated by observations on smaller scales. Indeed, on small scales the galaxy data are in good agreement with the standard model. On the largest scales, the fluctuations in galaxy density are expected to be of order a percent of the mean density, but Thomas et al. find fluctuations double this prediction. This result then suggests that the universe is less homogeneous than expected. [http://physics.aps.org/articles/v4/47]
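As an illustration of the kind of statistic involved (not the authors' actual pipeline), a crude count-in-cells test on a uniform mock catalogue looks like the sketch below; the mock data, cell size and shot-noise comparison are all assumptions of mine.

```python
# Hedged sketch (illustrative only, not the authors' analysis): a crude
# count-in-cells homogeneity test.  Galaxies are replaced by uniformly
# random mock points; a real analysis would use the SDSS positions/redshifts.
import numpy as np

rng = np.random.default_rng(0)
n_gal, n_cells_per_side = 700_000, 5
positions = rng.random((n_gal, 3))                  # mock positions in a unit box

# Histogram the points into equal-volume cells and look at the scatter of counts.
counts, _ = np.histogramdd(positions, bins=n_cells_per_side)
fractional_rms = counts.std() / counts.mean()

print(f"fractional rms of counts per cell: {fractional_rms:.4f}")
# For a truly uniform mock this is just shot noise (~1/sqrt(counts per cell));
# the claim in the text is that the real catalogue shows cell-to-cell
# fluctuations about twice the ~1% level predicted by the standard model.
```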
Thursday, April 14, 2011
High-energy neutrinos undetected in IceCube
GRBs result from a giant star explosion or the collision of star remnants. These cosmic cataclysms produce—in addition to gamma rays—high-speed protons that are thought to account for the highest energy cosmic rays observed on Earth. Near the source, these protons may run into photons and end up generating neutrinos with energies far above 1 TeV. In the past, neutrino detectors on Earth have not been large enough to capture one of these high-energy neutrinos with any likelihood.
IceCube, which was completed in December 2010, is a kilometer-cubed array of photodetectors that have been drilled down into the Antarctic ice cap. Neutrinos typically fly through the array without leaving a trace, but occasionally one will collide with a nucleus and create a charged particle that emits light as it moves through the ice. The IceCube team compared 13 months of their data (collected when the array was half finished) to observations of 117 GRBs measured independently over the same time period. Contrary to expectations, no high-energy neutrinos were detected within a half-hour of each GRB. Theorists may need to rethink their models of GRBs, as well as look for other possible sources for the highest energy cosmic rays. – Michael Schirber
Friday, April 8, 2011
A TED talk by Janna Levin
That is marvelous !
Wednesday, March 23, 2011
LambdaCDM or MOND?
Monday, March 7, 2011
Dark Matter Particles Remain Dark
Tuesday, March 1, 2011
The orbit of photons around black holes
A photon emitted near a rotating black hole feels the ground beneath it swirl around. Try to run over a rotating surface, such as the platform of a merry-go-round, and you will not only find yourself fighting the Coriolis force; your body follows the rotation and you stagger and stumble. A photon does not stumble, but rotating spacetime can impart to it an intrinsic form of orbital angular momentum (OAM) distinct from its spin. Like other forms of orbital angular momentum, the photon's OAM is quantized by integer multiples of ħ, not just ±ħ. One can visualize OAM by the wavefronts of this twisted light7, which are not planar but rather resemble a cylindrical spiral staircase, centred around the light beam (Fig. 1). The intensity pattern of twisted light transverse to the beam shows a dark spot in the middle — where no one would walk on the staircase — surrounded by concentric circles. The twisting of a pure OAM mode can be seen in interference patterns, which show a fork-like structure of partially broken mirror symmetry.
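The dark central spot and spiral wavefront can be seen in a minimal sketch of a twisted-light mode; the Laguerre-Gauss-like profile assumed here is a standard textbook form, not something specified in the article.

```python
# Minimal sketch of a twisted-light mode carrying orbital angular momentum l*hbar.
# Assumes a Laguerre-Gauss-like profile u ~ (r/w)^|l| * exp(-r^2/w^2) * exp(i*l*phi);
# this is a standard textbook form, not taken from the article itself.
import numpy as np

l, w = 2, 1.0                                    # OAM quantum number and beam waist
x = np.linspace(-3, 3, 201)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)

u = (r / w) ** abs(l) * np.exp(-(r / w) ** 2) * np.exp(1j * l * phi)
intensity = np.abs(u) ** 2

print("intensity on the beam axis:", intensity[100, 100])   # ~0: the dark central spot
print("phase windings per wavelength:", l)                   # wavefront = l-armed spiral staircase
```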
Monday, February 28, 2011
Superfluid Exists in the core of a neutron star
Thursday, January 20, 2011
How do black holes settle in galaxies?
Bulges and their black holes seem to be a natural consequence of structure formation in the hot Big Bang theory of the expanding Universe. According to this theory, galaxies grew by gravitational assembly of matter into clumps that gathered into larger clumps, and so on to galaxies. In galaxies with bulges, including ellipticals, which have bulges and no disks, the mass of the central black hole correlates not only with the mass of the bulge, but also, as Kormendy, Bender and Cornell1 note (page 374), with the average spread of velocities of the bulge stars (see Fig. 2a on page 375). The plausible explanation is that part of the gas out of which bulge stars formed settled instead near to the black hole, in part increasing its mass and in part fuelling explosions that blew the gas away and suppressed bulge-star formation. That is, the growth of bulge and black hole may have controlled each other. The timing looks right. Bulge stars are old: they formed when the expanding Universe was roughly a third of its present size (redshift about 2). This is when the rate of star formation per unit of matter was near its maximum (more than 10 times the present rate3). It is also when quasars — explosions powered by the central black holes — were most abundant (100 times more common than now4), probably an explosive result of overfeeding of the black holes as the early generations of stars were forming.
........
In theory, galaxies both with and without bulges were growing by the gravitational collection of clumps of matter when the star-formation rate was near its peak. That would suggest that the clumps contained stars; a recent discussion puts roughly comparable masses in stars and gas6. So where are these early generations of stars? Not in disks, because there is nothing that would slow the motion of a star to allow it to settle onto a disk. Bulges contain old stars, and it has been suggested that this is where the early stars ended up. But we now see that this is not plausible: why would these old stars have avoided our bulgeless Galaxy and settled instead in the bulge of our neighbour M31? Maybe the old stars are in diffuse stellar haloes. If so, it seems curious that the stellar halo of our Galaxy is much less prominent than that of M31. But more studies of other nearby galaxies will be required to check for inventories of stars that are old enough and abundant enough to account for stars that formed before disks.
Tuesday, December 14, 2010
No evidence for time before the Big Bang
Wednesday, December 8, 2010
Tuesday, November 9, 2010
The cosmic history
This is a review article published in Nature:
http://www.nature.com/nature/journal/v468/n7320/pdf/nature09527.pdf
Star-forming galaxies trace cosmic history. Recent observational progress with the NASA Hubble Space Telescope has led to the discovery and study of the earliest known galaxies, which correspond to a period when the Universe was only 800 million years old. Intense ultraviolet radiation from these early galaxies probably induced a major event in cosmic history: the reionization of intergalactic hydrogen.
Sunday, November 7, 2010
Dark matter is still quiet
The XENON100 experiment, in operation at the Laboratori Nazionali del Gran Sasso in Italy, is designed to search for dark matter weakly interacting massive particles (WIMPs) scattering off 62 kg of liquid xenon in an ultralow background dual-phase time projection chamber. In this Letter, we present first dark matter results from the analysis of 11.17 live days of nonblind data, acquired in October and November 2009. In the selected fiducial target of 40 kg, and within the predefined signal region, we observe no events and hence exclude spin-independent WIMP-nucleon elastic scattering cross sections above 3.4 × 10⁻⁴⁴ cm² for 55 GeV/c² WIMPs at 90% confidence level. Below 20 GeV/c², this result constrains the interpretation of the CoGeNT and DAMA signals as being due to spin-independent, elastic, light mass WIMP interactions.
Friday, September 24, 2010
Cosmic censorship violation in 5D
Thursday, September 23, 2010
NO dark matter detected, yet
Although the above dark matter idea is popular, it is quite dubious to some physicists, who don't like extra assumptions. In 2004, a German group published a study revealing a running gravitational constant that grows larger at astronomical scales [Physical Review D 70, 124028 (2004)]. Such a result might remove the need for dark matter.
Wednesday, September 15, 2010
Special relativity comes to the rescue
We demonstrate that a purely ideal mechanism, originating in the space-time distortion caused by the demands of special relativity, can break the topological constraint (leading to helicity conservation) that would forbid the emergence of a magnetic field (a generalized vorticity) in an ideal nonrelativistic dynamics. The new mechanism, arising from the interaction between the inhomogeneous flow fields and inhomogeneous entropy, is universal and can provide a finite seed even for mildly relativistic flows.
Tuesday, September 7, 2010
2D on the Planck scale?
(1) Definition of dimension by random walk: "In particular, the return probability K(x, x; s) is K(x, x; s) ∼ (4πs)^(−d_S/2)." Here d_S is the spectral dimension, s is the diffusion time, and K(x, x; s) is the probability that a random walker starting at x is back at x after diffusion time s. (A toy illustration follows after this list.)
(2) Some claimed evidence:
- Causal Dynamical Triangulations;
- Renormalization Group Analysis;
- Loop quantum gravity;
- High temperature strings;
- Anisotropic scaling models
(4) Strong-coupling limit.
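As promised under (1), here is a toy illustration of the random-walk definition: on an ordinary d-dimensional lattice the return probability falls off as s^(−d_S/2), so the slope of log K versus log s recovers d_S ≈ d. The quantum-gravity approaches listed under (2) apply the same diagnostic to quantum geometries; the flat-lattice setup here is purely illustrative.

```python
# Hedged sketch: estimate the spectral dimension d_S of an ordinary d-dimensional
# lattice from the random-walk return probability K(x,x;s) ~ s^(-d_S/2).
import numpy as np

def return_probability(d, s, n_grid=181):
    """P(simple random walker on Z^d is back at the origin after s steps),
    computed from the characteristic function phi(k) = (1/d) * sum_j cos(k_j)."""
    k = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    axes = np.meshgrid(*([k] * d), indexing="ij", sparse=True)
    phi = sum(np.cos(kj) for kj in axes) / d
    return float(np.mean(phi ** s))      # average of phi^s over the Brillouin zone

s1, s2 = 40, 160                          # two (even) diffusion times
for d in (2, 3):
    K1, K2 = return_probability(d, s1), return_probability(d, s2)
    d_S = -2.0 * np.log(K2 / K1) / np.log(s2 / s1)   # slope of log K vs log s
    print(f"lattice dimension {d}: estimated spectral dimension d_S ~ {d_S:.2f}")
```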
At much smaller scales, on the other hand, the proper description is far less obvious. While clever experimentalists have managed to probe some features down to distances close to the Planck scale [2], for the most part we have neither direct observations nor a generally accepted theoretical framework for describing the very small-scale structure of spacetime. Indeed, it is not completely clear that “space” and “time” are even the appropriate categories for such a description. But while a complete quantum theory of gravity remains elusive, we do have fragments: approximations, simple models, and pieces of what may eventually prove to be the correct theory. None of these fragments is reliable by itself, but when they agree with each other about some fundamental property of spacetime, we should consider the possibility that they are showing us something real. The thermodynamic properties of black holes, for example, appear so consistently that it is reasonable to suppose that they reflect an underlying statistical mechanics of quantum states.
Over the past several years, evidence for another basic feature of small-scale spacetime has been accumulating: it is becoming increasingly plausible that spacetime near the Planck scale is effectively two-dimensional. No single piece of evidence for this behavior is in itself very convincing, and most of the results are fairly new and tentative. But we now have hints from a number of independent calculations, based on different approaches to quantum gravity, that all point in the same direction. Here, I will summarize these clues, provide a further piece of evidence in the form of a strong-coupling approximation to the Wheeler-DeWitt equation, and discuss some possible implications.