That has obvious application for governments and the military, but it is increasingly of interest to banks and other commercial operations that need to secure everything from contracts to financial transactions. What’s more, this kind of security is increasingly needed because quantum computers will be able to break the codes currently used to keep many messages private.

And that raises an interesting question: How should scientists and engineers go about the task of building a quantum internet that spans the globe?

Today we get an answer thanks to the work of Sumeet Khatri and colleagues at Louisiana State University in Baton Rouge. This team has studied the various ways a quantum internet could be built and say the most cost-effective approach is to create a constellation of quantum-enabled satellites capable of continuously broadcasting entangled photons to the ground. In other words, the quantum internet should be space-based.

First some background. At the heart of any quantum network is the strange property of entanglement. This is the phenomenon in which two quantum particles share the same existence, even if they are separated by vast distances. It ensures that a measurement on one of these particles immediately influences the other, a marvel that Einstein called “spooky action at a distance.”

Physicists usually distribute entanglement using pairs of photons created at the same point and instant in time. When the photons are sent to different locations, the entanglement linking them can be exploited to send secure messages.

The problem is that entanglement is fragile and hard to preserve. Any small interaction between one of the photons and its environment breaks the link. Indeed, this is exactly what happens when physicists transmit entangled photons directly through the atmosphere or through optical fibers. The photons interact with other atoms in the atmosphere or the glass, and the entanglement is destroyed. It turns out the maximum distance over which entanglement can be shared in this way is just a few hundred kilometers.
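The few-hundred-kilometer limit follows from the exponential nature of photon loss. Here is a minimal sketch of that scaling, assuming the textbook attenuation figure of roughly 0.2 dB per kilometer for telecom fiber (a standard value, not one quoted in the paper):

```python
def fiber_transmission(length_km, attenuation_db_per_km=0.2):
    """Probability that a photon survives a fiber link of the given length.

    Loss in optical fiber is exponential in distance; 0.2 dB/km is a
    typical figure for telecom fiber at 1550 nm wavelengths.
    """
    loss_db = attenuation_db_per_km * length_km
    return 10 ** (-loss_db / 10)

# A few hundred kilometers is already hopeless for direct transmission:
print(f"{fiber_transmission(100):.1e}")  # ~1e-2: one photon in 100 survives
print(f"{fiber_transmission(500):.1e}")  # ~1e-10: essentially none arrive
```

Because both photons of an entangled pair must arrive, the odds of distributing entanglement fall even faster than the odds for a single photon.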

How then to build a quantum internet that shares entanglement across the globe? One option is to use “quantum repeaters”—devices that measure the quantum properties of photons as they arrive and then transfer these properties to new photons that are sent on their way. This preserves entanglement, allowing it to hop from one repeater to the next. However, this technology is highly experimental and several years from commercial exploitation.

So another option is to create the entangled pairs of photons in space and broadcast them to two different base stations on the ground. These base stations then become entangled, allowing them to swap messages with perfect secrecy.

In 2017, a Chinese satellite called Micius showed for the first time that entanglement can indeed be shared in this way. It turns out that photons can travel much further in this scenario because only the last 20 kilometers or so of the journey is through the atmosphere, provided the satellite is high in the sky and not too close to the horizon.

Khatri and co say that a constellation of similar satellites is a much better way to create a global quantum internet. The key is that to communicate securely, two ground stations must be able to see the same satellite at the same time so that both can receive entangled photons from it.

At what altitude should the satellites fly to provide coverage as broad as possible? And how many will be needed? “Since satellites are currently an expensive resource, we would like to have as few satellites as possible in the network while still maintaining complete and continuous coverage,” say Khatri and co.

To find out, the team modeled such a constellation. It turns out there are a number of important trade-offs to take into account. For example, fewer satellites can provide global coverage when they orbit at a high altitude. But higher altitudes lead to greater photon losses.

Also, satellites at lower altitudes can span only shorter distances between base stations, because both must be able to see the same satellite at the same time.

Given these limitations, Khatri and co suggest that the best compromise is a constellation of at least 400 satellites flying at an altitude of around 3,000 kilometers. By contrast, GPS operates with 24 satellites.

Even then, the maximum distance between base stations will be limited to about 7,500 kilometers. This means that such a system could support secure messaging between London and Mumbai, which are 7,200 km apart, but not between London and Houston, 7,800 km apart—or indeed between any cities that are farther apart. That’s a significant drawback.

Nevertheless, a space-based quantum internet significantly outperforms ground-based systems of quantum repeaters, say Khatri and co. Repeaters would have to be spaced at intervals of less than 200 kilometers, so covering long distances would require large numbers of them. This introduces its own set of limitations for a quantum internet. “We thus find that satellites offer a significant advantage over ground-based entanglement distribution,” say Khatri and co.

Of course, such a system would require significant investment. China has an obvious advantage, having already tested an orbiting satellite with this kind of technology. And it has plans to go further.

By contrast, Europe and the US appear to have less ambition in this respect. That could change quickly if this technology can prove its worth. If so, the quantum space race may be just about to heat up.

Ref: arxiv.org/abs/1912.06678 : Spooky Action at a Global Distance – Resource-Rate Analysis of a Space-Based Entanglement-Distribution Network for the Quantum Internet

Back in 2017, an oil painting called *Salvator Mundi* (Savior of the World) sold for $450.3 million at Christie’s auction house in New York. That made it the world’s most expensive by some margin. The painting is one of fewer than 20 thought to be by Leonardo da Vinci, although there is still some dispute over this attribution.

There is also another puzzle. The picture depicts Christ holding a glass orb representing the celestial sphere of the heavens. Such a sphere ought to act like a convex lens, magnifying and inverting the robes behind it. However, Christ’s robes are not inverted or magnified but appear with minimal distortion.
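The convex-lens claim is easy to check with the standard ball-lens formula. This is a back-of-the-envelope sketch, not the paper's ray-traced method: it assumes a glass refractive index of 1.5, takes the study's 6.8 cm figure as the orb's diameter, and uses the roughly 25 cm orb-to-robe distance the researchers estimated.

```python
# Focal length of a solid glass ball (standard ball-lens formula),
# measured from the center of the sphere: f = n*R / (2*(n - 1)).
n = 1.5          # assumed refractive index of Renaissance glass
R = 6.8 / 2      # radius in cm, taking 6.8 cm as the diameter
f = n * R / (2 * (n - 1))
print(f"focal length: {f:.1f} cm")

# Robes ~25 cm behind the orb sit far beyond the focal point, so the
# thin-lens equation yields a real, *inverted* image (negative
# magnification) -- exactly the distortion the painting lacks.
d_obj = 25.0
d_img = 1 / (1 / f - 1 / d_obj)
magnification = -d_img / d_obj
print(f"magnification: {magnification:.2f}")  # negative => inverted
```

Whatever the exact size and distance, the conclusion is robust: any solid glass ball of plausible hand-held size focuses within centimeters, so robes tens of centimeters behind it would appear upside down.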

Leonardo was well aware of the way glass refracts light. Indeed, his notebooks are filled with depictions of the way light bounces off and refracts from various objects. And this raises the question of why he drew the orb in this way.

Today, we get an answer thanks to the work of Marco Liang and colleagues at the University of California, Irvine. This group has used computer graphics software to reproduce the scene in three dimensions and then studied how light would be refracted through orbs of different kinds.

After comparing their renderings with the original, they have concluded that the orb is not solid at all. Instead, they show that the painting is a realistic physical representation of a hollow sphere with a radius of 6.8 centimeters but a thickness of just 1.3 millimeters.

First some background. Inverse rendering is a computer graphics technique originally developed to produce physically realistic renderings of virtual scenes by simulating the physics of light flow. One goal of this technique is to better simulate the appearance of transparent and semi-transparent objects made of glass or water.

The technique begins by creating a 3D representation of the scene, incorporating the texture and structure of all the objects that light interacts with. The scene must also include a source of light and a viewpoint. Then a ray-tracing algorithm maps out the way light illuminates the scene, as seen from the viewpoint.

Liang and co begin by re-creating a virtual version of the painting. “We depict the scene geometry using a rough approximation for the subject’s body along with more detailed representations for the orb and the hand holding it,” they say.

By comparison with the hand, they estimated the diameter of the orb to be 6.8 cm and its distance from the body to be 25 cm. They also refined the geometry of the orb-holding hand to make it touch the orb softly, using Maya, a type of 3D modeling and animation software.

By studying the shadows in the painting, the team concluded that the subject was lit by a strong directional light source from above as well as by a general diffuse illumination. At the same time, they estimated that the viewpoint in the picture is about 90 cm away from the subject.

“With the virtual scene ready, we tested whether the orb was solid by comparing renderings of a solid and a hollow orb,” say Liang and co.

The results make for interesting reading. The only way the team is able to reproduce the original painting is with a hollow orb. Furthermore, a hollow orb distorts the background in a specific way. For example, a straight line that passes through the center of the orb is not distorted. By contrast, straight lines that do not pass through the center of the orb are distorted in a way that creates a discontinuity at its edge.

In the painting, Christ’s robes are folded so that five lines appear to pass behind the orb. However, four of the lines have a fan-like arrangement that converges at the orb’s center. Consequently, there is no discontinuity visible in the reconstructed image or in the original.

However, the fifth fold does not follow this pattern, and the reconstructed image shows a clear discontinuity. The artist blurred this part of the painting where the fold enters the orb. This strongly suggests that he was aware of the way a hollow sphere distorts straight lines that pass behind it.

The team also experimented with varying the hollow orb’s thickness, the results suggesting that it cannot have been thicker than 1.3 mm.
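The lensmaker's equation hints at why such a thin shell barely distorts the background. The sketch below is a rough thin-lens approximation, not the paper's rendering pipeline: it assumes an index of 1.5, takes 6.8 cm as the diameter and 1.3 mm as the thickness, and treats the front and back of the hollow sphere as two thin meniscus lenses.

```python
n = 1.5            # assumed refractive index of the glass
R_out = 6.8 / 2    # outer radius in cm (taking 6.8 cm as the diameter)
t = 0.13           # shell thickness in cm (the paper's 1.3 mm)
R_in = R_out - t

# Lensmaker's equation for one thin shell: P = (n - 1)(1/R1 - 1/R2).
# With R1 and R2 nearly equal, each shell's power almost cancels out.
# Light crosses two shells (front and back), contributing equal power.
P_shell = (n - 1) * (1 / R_out - 1 / R_in)
f_hollow = 1 / (2 * P_shell)          # combined focal length, in cm

f_solid = n * R_out / (2 * (n - 1))   # solid ball-lens, for comparison

print(f"solid ball focal length:   {f_solid:.1f} cm")
print(f"hollow shell focal length: {f_hollow:.0f} cm (weak and diverging)")
```

A focal length of many tens of centimeters, versus about 5 cm for the solid ball, means the shell bends light far too weakly to invert or magnify the robes, consistent with the minimal distortion in the painting.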

An interesting question is whether Leonardo would have had access to the materials, light sources, and knowledge of optics that the new work suggests he must have had. On the point of optics, Liang and co have studied Leonardo’s notes and think this knowledge must have been well within his grasp. Hollow glass balls were well known at the time and appear in many paintings of the era. And Renaissance artists were experts in re-creating certain lighting conditions.

So Liang and co are sure of their conclusion: “Our experiments show that an optically accurate rendering qualitatively matching that of the painting is indeed possible using materials, light sources, and scientific knowledge available to Leonardo da Vinci circa 1500,” they say.

Of course, the team are not the first to suggest that the orb is hollow—Leonardo’s 2017 biographer Walter Isaacson makes a similar suggestion, and others have discussed it too. However, Liang and co are the first to show that the painting is a physically realistic rendering of a hollow orb and not a solid one.

That will help settle at least some of the controversy over the picture and its whopping price tag.

Ref: arxiv.org/abs/1912.03416 : On the Optical Accuracy of the *Salvator Mundi*

But materials scientists desperately need better batteries for the Internet of Things, for the next generation of personal devices, and much more. Better batteries will also be called on to play a major role in storing the energy from renewable, but inconstant, sources such as the wind and the sun.

Battery performance is the result of numerous different factors. Energy density is crucial; so is the ability to hold charge without it leaking away. Then there is rechargeability—not just once but thousands or tens of thousands of times—and, of course, safety.

Electrochemists know only too well how delicate this balancing act is. Consequently, battery makers are cautious about trying new approaches, lest one aspect of performance drop. So enhancements are usually incremental and tiny. Where are the big improvements we need likely to come from?

Today we get an answer of sorts: batteries of the future will be made via 3D printing, say Vladimir Egorov at the University of Cork in Ireland and a few colleagues. These folks have surveyed the various new printing techniques for batteries and suggest that this will make possible a new generation of smaller, more capable devices.

First some background. 3D printing is the general term for a variety of techniques that allow three-dimensional objects to be constructed by adding material layer by layer. It can be a way to make prototype designs for testing—not to mention exotic food items, replacement body parts, and even entire buildings. Using many printing machines in parallel allows the mass production of items such as car and aircraft parts and shoes. And when a new design is available, it can be printed quickly, with minimal reconfiguration of a factory space.

Materials scientists have also begun to experiment with ways to print electronic circuits using polymer inks and a silver polymer for traces, so soldering is no longer needed. In this way, circuit boards can take on more or less any shape and even form part of a device’s structure.

However, a significant limitation is the need to incorporate conventional batteries, which come in specific sizes and shapes.

The ability to print 3D batteries will change that. “If they can be printed to seamlessly integrate into the product design, for aesthetic as well as comfort or functional reasons, the bulkier and fixed form factor standard battery need not be accommodated at product design stage,” say Egorov and co.

This is easier said than done. The electroactive materials used in batteries are inherently reactive, and structures such as anodes and cathodes are physically complex. They must often be ordered like crystals, and sometimes porous like molecular sponges. Always, they must be chemically well characterized.

It is challenging to create versions of these materials that are suitable for 3D printing, whether by the extrusion of a solid or a liquid or by the polymerization of liquid. Once printed, these materials must maintain their electrical interconnections, tightly control any chemical reactions that take place between components, and ensure that the batteries can charge and discharge over many cycles.

Most important of all, batteries must be safe. All batteries have to pass strict safety standards before they can be used in homes, in vehicles, on airplanes, and so on. Batteries that leak can cause expensive damage. But the most serious risk is fire. It may be that the testing criteria will have to change to accommodate new designs that are constantly evolving.

And even if all these challenges can be overcome, another question looms. Will 3D batteries be any more capable than existing designs?

Egorov and co provide a comprehensive overview of the materials, methods, and challenges facing the battery industry in printing the power packs of the future. But the reviewers miss an important element of future battery design where 3D printing could have an important role to play.

One of the biggest and most important challenges for the battery industry is in making their products recyclable. Today’s batteries are specifically designed so that they cannot easily be taken apart, so reusing the valuable materials they contain is almost impossible.

That does not sit well for a technology that will have to play a central role in society’s transition from fossil fuels to renewable energy.

So change is much needed. The current thinking is that batteries must be designed from the start with recycling in mind, and that this will require an entirely new mind-set from battery designers. The flexibility that 3D printing allows has the potential to kick-start and accelerate this much-needed revolution.

While Egorov and co ignore this issue (the term “recycling” does not appear in their paper), the rest of the battery industry cannot afford to.

Ref: arxiv.org/abs/1912.04400 : Evolution of 3D Printing Methods and Materials for Electrochemical Energy Storage

Both sent back the first close-up pictures of Jupiter, with Pioneer 11 continuing to Saturn. Voyager 1 and 2 later took even more detailed measurements, and extended the exploration of the solar system to Uranus and Neptune.

All four of these spacecraft are now on their way out of the solar system, heading into interstellar space at a rate of about 10 kilometers per second. They will travel about a parsec (3.26 light-years) every 100,000 years, and that raises an important question: What stars will they encounter next?
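That parsec-per-100,000-years figure is a one-line calculation. A quick sketch using standard constants (not values quoted in the paper):

```python
PARSEC_KM = 3.0857e13       # one parsec in kilometers
SECONDS_PER_YEAR = 3.156e7  # about 31.6 million seconds per year
speed_km_s = 10.0           # rough exit speed of the four spacecraft

years_per_parsec = PARSEC_KM / speed_km_s / SECONDS_PER_YEAR
print(f"{years_per_parsec:,.0f} years")  # about 98,000, i.e. roughly 100,000
```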

This is harder to answer than it seems. Stars are not stationary but moving rapidly through interstellar space. Without knowing their precise velocity, it’s impossible to say which ones our interstellar travelers are on course to meet.

Enter Coryn Bailer-Jones at the Max Planck Institute for Astronomy in Germany and Davide Farnocchia at the Jet Propulsion Laboratory in Pasadena, California. These guys have performed this calculation using a new 3D map of star positions and velocities throughout the Milky Way.

This has allowed them to work out for the first time which stars the spacecraft will rendezvous with in the coming millennia. “The closest encounters for all spacecraft take place at separations between 0.2 and 0.5 parsecs within the next million years,” they say.

Their results were made possible by the observations of a space telescope called Gaia. Since 2014, Gaia has sat some 1.5 million kilometers from Earth, recording the positions of 1 billion stars, planets, comets, asteroids, quasars, and so on. At the same time, it has been measuring the velocities of the brightest 150 million of these objects.

The result is a three-dimensional map of the Milky Way and the way astronomical objects within it are moving. It is the latest incarnation of this map, Gaia Data Release 2 or GDR2, that Bailer-Jones and Farnocchia have used for their calculations.

The map makes it possible to project the future positions of stars in our neighborhood and to compare them with the future positions of the Pioneer and Voyager spacecraft, calculated using their last known positions and velocities.
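If both star and spacecraft are approximated as moving in straight lines, the closest approach reduces to minimizing a quadratic. The sketch below illustrates that geometry with made-up numbers; the vectors are purely hypothetical, not data from the paper or from Gaia.

```python
def closest_approach(rel_pos_pc, rel_vel_pc_per_kyr):
    """Closest approach for straight-line relative motion.

    rel_pos_pc: star position minus spacecraft position, in parsecs.
    rel_vel_pc_per_kyr: relative velocity, in parsecs per 1,000 years.
    Minimizing |r0 + v*t|^2 gives t* = -(r0 . v) / (v . v).
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    t_star = -dot(rel_pos_pc, rel_vel_pc_per_kyr) / dot(
        rel_vel_pc_per_kyr, rel_vel_pc_per_kyr)
    r = [p + v * t_star for p, v in zip(rel_pos_pc, rel_vel_pc_per_kyr)]
    return t_star, dot(r, r) ** 0.5

# Illustrative only: a star 1 pc ahead, drifting slightly off-axis.
t, d = closest_approach([1.0, 0.2, 0.0], [-0.01, 0.0, 0.0])
print(f"closest approach {d:.2f} pc in {t:.0f} thousand years")
```

The real calculation repeats this for millions of Gaia stars and handles the slow curvature of galactic orbits, but the core idea is the same minimization.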

This information yields a list of stars that the spacecraft will encounter in the coming millennia. Bailer-Jones and Farnocchia define a close encounter as flying within 0.2 or 0.3 parsecs.

The first spacecraft to encounter another star will be Pioneer 10 in 90,000 years. It will approach the orange-red star HIP 117795 in the constellation of Cassiopeia at a distance of 0.231 parsecs. Then, in 303,000 years, Voyager 1 will pass a star called TYC 3135-52-1 at a distance of 0.3 parsecs. And in 900,000 years, Pioneer 11 will pass a star called TYC 992-192-1 at a distance of 0.245 parsecs.

These fly-bys are all at a distance of less than one light-year and in some cases might even graze the orbits of the stars’ most distant comets.

Voyager 2 is destined for a lonelier future. According to the team’s calculations, it will never come within 0.3 parsecs of another star in the next 5 million years, although it is predicted to come within 0.6 parsecs of a star called Ross 248 in the constellation Andromeda in 42,000 years.

These interstellar explorers will eventually collide with or be captured by other stars. It’s not possible yet to say which ones these will be, but Bailer-Jones and Farnocchia have an idea of the time involved. “The timescale for the collision of a spacecraft with a star is of order 10^20 years, so the spacecraft have a long future ahead of them,” they conclude.

The Pioneer and Voyager spacecraft will soon be joined by another interstellar traveler. The New Horizons spacecraft that flew past Pluto in 2015 is heading out of the solar system but may yet execute a maneuver so that it intercepts a Kuiper Belt object on its way.

After that last course correction takes place, Bailer-Jones and Farnocchia will be able to work out its final destination.

Ref: arxiv.org/abs/1912.03503 : Future stellar flybys of the Voyager and Pioneer spacecraft


You have 30 seconds. Quick! No dallying.

The answer, of course, is:

If you were unable to find a solution, don’t feel too bad. This expression is so tricky that even various powerful mathematics software packages failed too, even after 30 seconds of number-crunching.

And yet today, Guillaume Lample and François Charton, at Facebook AI Research in Paris, say they have developed an algorithm that does the job with just a moment’s thought. These guys have trained a neural network to perform the necessary symbolic reasoning to differentiate and integrate mathematical expressions for the first time. The work is a significant step toward more powerful mathematical reasoning and a new way of applying neural networks beyond traditional pattern-recognition tasks.

First, some background. Neural networks have become hugely accomplished at pattern-recognition tasks such as face and object recognition, certain kinds of natural language processing, and even playing games like chess, Go, and Space Invaders.

But despite much effort, nobody has been able to train them to do symbolic reasoning tasks such as those involved in mathematics. The best that neural networks have achieved is the addition and multiplication of whole numbers.

For neural networks and humans alike, one of the difficulties with advanced mathematical expressions is the shorthand they rely on. For example, the expression *x*³ is a shorthand way of writing *x* multiplied by *x* multiplied by *x*. In this example, “multiplication” is shorthand for repeated addition, which is itself shorthand for the total value of two quantities combined.

It’s easy to see that even a simple mathematical expression is a highly condensed description of a sequence of much simpler mathematical operations.

So it’s no surprise that neural networks have struggled with this kind of logic. If they don’t know what the shorthand represents, there is little chance of their learning to use it. Indeed, humans have a similar problem, often instilled from an early age.

Nevertheless, at the fundamental level, processes like integration and differentiation still involve pattern recognition tasks, albeit hidden by mathematical shorthand.

Enter Lample and Charton, who have come up with an elegant way to unpack mathematical shorthand into its fundamental units. They then teach a neural network to recognize the patterns of mathematical manipulation that are equivalent to integration and differentiation. Finally, they let the neural network loose on expressions it has never seen and compare the results with the answers derived by conventional solvers like Mathematica and Matlab.

The first part of this process is to break down mathematical expressions into their component parts. Lample and Charton do this by representing expressions as tree-like structures. The leaves on these trees are numbers, constants, and variables like *x*; the internal nodes are operators like addition, multiplication, differentiate-with-respect-to, and so on.

For example, the expression 2 + 3 × (5 + 2) can be written as:

And the expression

is:

And so on.

Trees are equal when they are mathematically equivalent. For example,

2 + 3, 12 − 7, and 1 × 5 are all equivalent to 5; therefore their trees are equivalent too.

Many mathematical operations are easier to handle in this way. “For instance, expression simplification amounts to finding a shorter equivalent representation of a tree,” say Lample and Charton.

These trees can also be written as sequences, taking each node consecutively. In this form, they are ripe for processing by a neural network approach called seq2seq.
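The tree-to-sequence step can be sketched in a few lines. This is an illustration of the idea, not the authors' code: nodes are written out in prefix order, operator before operands, which makes the sequence unambiguous without any parentheses.

```python
class Node:
    """A node in an expression tree: a label plus zero or more children."""

    def __init__(self, label, *children):
        self.label, self.children = label, children

    def to_prefix(self):
        """Serialize the tree as a token sequence, operator first."""
        seq = [self.label]
        for child in self.children:
            seq += child.to_prefix()
        return seq

# The expression 2 + 3 * (5 + 2) as a tree:
expr = Node("+", Node("2"),
            Node("*", Node("3"),
                 Node("+", Node("5"), Node("2"))))

print(expr.to_prefix())  # ['+', '2', '*', '3', '+', '5', '2']
```

A seq2seq model then maps one such token sequence (say, a function) to another (its integral), much as it would map a French sentence to an English one.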

Interestingly, this approach is often also used for machine translation, where a sequence of words in one language has to be translated into a sequence of words in another language. Indeed, Lample and Charton say their approach essentially treats mathematics as a natural language.

The next stage is the training process, and this requires a huge database of examples to learn from. Lample and Charton create this database by randomly assembling mathematical expressions from a library of binary operators such as addition, multiplication, and so on; unary operators such as cos, sin, and exp; and a set of variables, integers, and constants, such as π and e. They also limit the number of internal nodes to keep the equations from becoming too big.

Even with relatively small numbers of nodes and mathematical components, the number of possible expressions is vast. Each random equation is then integrated and differentiated using a computer algebra system. Any expression that cannot be integrated is discarded.

In this way, the researchers generate a massive training data set consisting, for example, of 80 million examples of first- and second-order differential equations and 20 million examples of expressions integrated by parts.
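One trick along these lines for mass-producing integration examples is to run the easy direction: differentiating is mechanical, so differentiate a candidate expression f and store the pair (f′, f) as an integration problem with a known answer. A hand-rolled sketch of that idea, with the derivative checked numerically (this is an illustration, not the authors' pipeline):

```python
import math

# Candidate solution and its derivative, worked out by the product rule:
f = lambda x: x * math.sin(x)
f_prime = lambda x: math.sin(x) + x * math.cos(x)

# Sanity-check the training pair with a central finite difference:
x, h = 0.7, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)
assert abs(numeric - f_prime(x)) < 1e-6

print("pair verified: the integral of sin(x) + x*cos(x) is x*sin(x)")
```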

By crunching this data set, the neural network then learns how to compute the derivative or integral of a given mathematical expression.

Finally, Lample and Charton put their neural network through its paces by feeding it 5,000 expressions it has never seen before and comparing the results it produces in 500 cases with those from commercially available solvers, such as Maple, Matlab, and Mathematica.

These solvers use an algorithmic approach worked out in the 1960s by the American mathematician Robert Risch. However, Risch’s algorithm is huge, running to 100 pages for integration alone. So symbolic algebra software often uses cut-down versions to speed things up.

The comparisons between these and the neural-network approach are revealing. “On all tasks, we observe that our model significantly outperforms Mathematica,” say the researchers. “On function integration, our model obtains close to 100% accuracy, while Mathematica barely reaches 85%.” And the Maple and Matlab packages perform less well than Mathematica on average.

In many cases, the conventional solvers are unable to find a solution at all, given 30 seconds to try. By comparison, the neural net takes about a second to find its solutions. The example at the top of this page is one of those.

One interesting outcome is that the neural network often finds several equivalent solutions to the same problem. That’s because mathematical expressions can usually be written in many different ways.

This ability is something of a tantalizing mystery for the researchers. “The ability of the model to recover equivalent expressions, without having been trained to do so, is very intriguing,” say Lample and Charton.

That’s a significant breakthrough. “To the best of our knowledge, no study has investigated the ability of neural networks to detect patterns in mathematical expressions,” say the pair.

Now that they have, the result clearly has huge potential in the increasingly important and complex world of computational mathematics.

The researchers do not reveal Facebook’s plans for this approach. But it’s not hard to see how it could offer its own symbolic algebra service that outperforms the market leaders.

However, the competitors are unlikely to sit still. Expect a mighty battle in the world of computational mathematics.

Ref: arxiv.org/abs/1912.01412 : Deep Learning For Symbolic Mathematics

The discovery was a triumph for the physics community. They had long known that Einstein’s theory of general relativity suggested that ripples in spacetime were possible. These waves squeeze and stretch space by distances smaller than the width of a proton. To spot them, physicists built a network of hugely sensitive detectors that cost well over a billion dollars. So the discovery of the first waves in 2016 was both a relief and a significant success.

Now two physicists say that gravitational waves have been hiding in plain sight all along. Rituparno Goswami at the University of KwaZulu-Natal and George Ellis at the University of Cape Town, both in South Africa, today use some mathematical wizardry to show that tidal forces are gravitational waves. These are the same forces that cause sea levels to rise and fall as the moon moves around the Earth. “Tidal forces are actually a hidden form of gravitational waves,” they say.

First some background. Newton’s theory of gravity is based on the idea that all masses generate an attractive gravitational force, which explains a wide variety of phenomena: the trajectory of a falling apple, the motion of the planets around the sun, and so on.

Newton’s theory also explains the tides. These are the result of the way gravitational forces vary with distance: the side of the Earth facing the moon experiences a slightly stronger gravitational pull than the side facing away. The result is a kind of stretching that pulls the oceans to and fro as the Earth rotates.

Goswami and Ellis begin by pointing out that Newton’s theory does not account for an important law of physics—that nothing can travel faster than the speed of light, not even gravitational forces. So it takes time for the moon’s gravitational forces to reach Earth. “No influence can travel faster than the speed of light: the tidal influence cannot be instantaneous,” say the physicists.

Einstein first formulated this cosmic speed limit in his special theory of relativity and later incorporated it into his general theory, which famously describes gravity as a kind of distortion in the fabric of spacetime. This immediately led to the idea that this fabric could support wave-like ripples.

Goswami and Ellis say that tidal forces are a form of gravitational radiation. But to be waves, they must vary in time in a special way dictated by the general theory of relativity. The physicists go on to show mathematically that the tidal forces have exactly these properties, albeit on a much smaller scale than the waves generated by black hole collisions. The result is something of a technicality, but nevertheless an interesting one.

In essence, Goswami and Ellis say that tidal forces are low-frequency gravitational waves. This theory makes some predictions that are different from Newton’s flavor of gravity. For example, Goswami and Ellis point out that it should take 1.3 seconds for tidal forces to travel from the moon to the Earth. “And if the ocean was uniformly deep everywhere without continents, the tides would lag the position of the Moon in the sky by 0.66 seconds of arc,” they say. That’s about the width of a penny as seen from two kilometers away.
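Both numbers can be reproduced with a line or two of arithmetic. A sketch assuming the mean Earth-Moon distance of 384,400 km and the Moon's 27.3-day sidereal period (standard textbook values, not taken from the paper):

```python
MOON_DISTANCE_KM = 384_400
C_KM_S = 299_792.458            # speed of light in km/s

delay_s = MOON_DISTANCE_KM / C_KM_S
print(f"light travel time: {delay_s:.2f} s")   # ~1.3 s, as quoted

# The Moon's orbital motion across the sky, in arcseconds per second,
# times the delay, gives the angular lag of the tidal bulge:
SIDEREAL_MONTH_S = 27.3 * 86_400
moon_rate_arcsec = 360 * 3600 / SIDEREAL_MONTH_S
lag_arcsec = moon_rate_arcsec * delay_s
print(f"tidal lag: {lag_arcsec:.2f} arcsec")   # ~0.7, close to the quoted 0.66
```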

Such an effect may be measurable, although Goswami and Ellis do not extend their analysis to suggest how. But it does mean that the effects of gravitational waves are much easier to spot than anyone imagined. A day at the seaside, anyone?

Ref: arxiv.org/abs/1912.00591 : Tidal Forces are Gravitational Waves

There are good theoretical reasons to think this should work. The tapping should release any bubbles that are stuck to the inside walls of the can. These should then float to the surface and dissipate, making the beer less likely to foam when it is opened. But is this true?

Today, we get an answer thanks to the selfless work of Elizaveta Sopina at the University of Southern Denmark and a few colleagues. This group has tested the theory for the first time using randomized controlled trials involving 1,000 cans of lager. And luckily for the research team, the result raises at least as many questions as it answers, ensuring a strong future for beer-related research.

First some background. Beer is a water-based fermented liquid containing alcohol and proteins from ingredients such as barley and hops. It is often carbonated with high-pressure carbon dioxide gas and then stored under pressure.

Releasing this pressure dramatically reduces the amount of carbon dioxide the liquid can hold, causing bubbles to form. When the bubbles rise to the surface of the liquid, proteins stabilize the resulting foam, leading to the formation of a creamy head that is characteristic of many beers. The head helps to trap flavor molecules that give beers their unique tastes and smells.

The problem with foaming arises when beer is shaken before opening. Shaking increases the surface area of the beer inside the can and allows dissolved carbon dioxide to come out of solution. The gas forms tiny bubbles centered on small particles in the liquid, known as nucleation centers.

When the can is opened, these bubbles grow rapidly in size and rise to the surface, creating foam. When this foam occupies a greater volume than there is space at the top of the can, the beer overflows. “This is inefficient, as fizzing reduces the amount of beer available for consumption and results in waste,” say Sopina and co. “Beer spray can also stain clothes or surrounding objects, and therefore is also an unpleasant and socially undesirable side-effect.”

With more than 170 billion liters of beer consumed every year (much of it by researchers in Denmark, presumably), the scale of the problem is easy to see. “Preventing, or, at least, minimizing beer fizzing is both socially and economically desirable,” say Sopina and co.

That’s where the tapping theory comes in. There is no shortage of anecdotal evidence that this technique either works wonders or is entirely ineffective. “Given the strong Danish tradition in beer brewing and consumption, we set out to settle this matter with high-quality evidence,” say the team.

They began with the impressive achievement of persuading a local brewery to donate 1,031 cans of Pilsner-style beer for “research purposes.” After “losses” of various kinds, they were able to gather data from 1,000 cans on which to base their results.

The experiment was straightforward. The team cooled the cans in a fridge to drinking temperature and randomly divided them into two groups—those to be shaken and those not to be shaken. They further subdivided each group into cans that would be tapped and those that would be left untapped. They labeled the base of each can appropriately so no researcher involved in the shaking and tapping could easily tell them apart, even subconsciously.

The cans were then shaken using a “Unimax 2010 shaker” for two minutes at 440 rpm. “Pilot testing revealed that this shaking method successfully mimicked carrying beer on a bicycle for 10 minutes—a common way of transporting beer in Denmark,” say Sopina and co. Unwanted foaming must be at epidemic levels there.

The researchers then weighed each can, tapped it by flicking it three times on its side with a finger, and then opened it. Finally, they weighed the can again to determine the amount of beer that had been lost.

The results are palate tickling. Sopina and co compared the amount of beer lost for tapped and untapped cans that had been shaken and found no statistical difference—both lost about 3.5 grams of liquid to foaming.

They also found no meaningful difference between the cans that had not been shaken—when opened, they lost about 0.5 grams on average.
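The kind of comparison the team ran can be sketched with a two-sample test on simulated data. The numbers below are randomly generated to mimic the reported ~3.5-gram losses; they are not the actual dataset:

```python
import random
import statistics

# Simulated stand-in for the experiment's data: grams of beer lost per
# shaken can, generated to mimic the reported ~3.5 g mean for both groups.
random.seed(0)
tapped = [random.gauss(3.5, 1.0) for _ in range(250)]
untapped = [random.gauss(3.5, 1.0) for _ in range(250)]

def welch_t(x, y):
    """Welch's two-sample t statistic (no equal-variance assumption)."""
    se = (statistics.variance(x) / len(x) + statistics.variance(y) / len(y)) ** 0.5
    return (statistics.mean(x) - statistics.mean(y)) / se

# A |t| well below ~2 means no detectable difference between the groups.
print(round(welch_t(tapped, untapped), 2))
```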

The obvious conclusion is that can tapping does not reduce foaming, a result that must be a considerable disappointment for bicycle-riding, beer-carrying Danes.

However, this negative result raises an interesting question of its own: Why doesn’t tapping work? And Sopina and co have some ideas. One is that flicking does not provide enough energy to dislodge bubbles, perhaps because the energy is absorbed by the aluminium can and the bulk of the liquid. Unfortunately, the team does not appear to have measured the energy imparted in this way, cleverly leaving the way open for more research.

Another possibility is that an assumption behind the tapping theory—that the bubbles associated with foaming must be attached to the wall of the can—is incorrect. “If most bubbles are located in the bulk liquid, the surfacing of the wall-adhered bubbles by flicking would be insignificant compared to the rapid surfacing of the bubbles in the bulk liquid,” say the team.

Finally, it may be that the microbubbles become trapped in the liquid by the same proteins that contribute to a beer’s creaminess. That would prevent them from rising at all. If that’s the case, the tapping method may still work for other fizzy drinks that do not contain these molecules. Indeed, some anecdotal evidence supports this.

If proteins are responsible, Sopina and co suggest that beer could be treated to prevent foaming by denaturing the relevant proteins, perhaps by heating the beer before it is cooled. However, the proteins play an important role in the flavor and mouthfeel of beer. “The potential negative impact on the sensory experience of the beer consumption and the risk of applying heat to a sealed pressurized metal container are important future research topics to be answered,” say the team.

And therein lies an entirely new research project for Sopina and her colleagues, or indeed any other specialists. Beer-related research is a glass that is truly bottomless.

Ref: arxiv.org/abs/1912.01999 : To Beer Or Not To Beer: Does Tapping Beer Cans Prevent Beer Loss? A Randomised Controlled Trial.

The goal is to channel the heat away from sensitive components so that it can dissipate into the environment. But as devices get smaller, the challenge becomes more acute—and modern transistors, for example, are measured in nanometers.

The most cost-efficient conductors are metals such as copper, but heat travels through them equally well in all directions. That means heat can spread to any other component that is also in thermal contact with the metal.

A more effective conductor would channel heat in one direction but not in the perpendicular one. In this case, heat would travel along such a material but not across it.

This kind of asymmetric conductor would make the lives of thermal engineers significantly easier. But creating such materials is hard.

Enter Shingi Yamaguchi at the University of Tokyo in Japan and a group of colleagues, who have created a material out of carefully aligned carbon nanotubes that conducts heat in just this way. The new substance has the potential to revolutionize the way thermal engineers design and build cooling systems for computers and other devices.

First some background. Materials scientists are well aware that carbon nanotubes are exceptional conductors. These tiny tubes have a thermal conductivity in excess of 1,000 W m^{-1} K^{-1}. By comparison, copper has a thermal conductivity of about 400 W m^{-1} K^{-1}.

The trouble arises when materials scientists try to make a bulk material out of nanotubes. They do this by allowing the tubes to settle on a plastic substrate, forming a layer. But the nanotubes tend to be poorly aligned or randomly arranged.

As a result, they are in poor thermal contact with each other, and this reduces the conductivity of the bulk material. “It is essential to eliminate these structural deficiencies to utilize the high thermal conductance of individual carbon nanotubes in aligned carbon nanotube assemblies,” say Yamaguchi and co.

Their solution is simple: they create a material in which the carbon nanotubes are precisely aligned and are therefore in good end-to-end thermal contact.

This is possible thanks to a technique known as controlled vacuum filtration. Back in 2012, physicists discovered that in certain circumstances, floating carbon nanotubes can form a self-organized structure in which they all become aligned as in a crystal.

The nanotubes are first mixed together in a liquid containing a surfactant that reduces its surface tension. Provided the concentration of nanotubes is below some critical level, they then begin to self-organize on the surface of the liquid and become densely aligned.

The liquid is then removed by carefully and slowly sucking it through a filter using a vacuum, leaving the nanotubes behind. The result is a thin sheet of highly aligned nanotubes with some extraordinary properties.

Yamaguchi and co say the new material conducts heat in the direction of nanotube alignment with a thermal conductivity of 43 W m^{-1} K^{-1}. By contrast, the conductivity in the perpendicular direction is far smaller, at just 0.085 W m^{-1} K^{-1}, about the same as fiberglass.

In other words, the material is 500 times better at conducting in one direction than in the other—the greatest asymmetry ever observed for these kinds of materials.

The reason is simple. When the nanotubes are in thermal contact from end to end, heat travels easily from one to another. But the tubes are not in good thermal contact along their length, since the contact footprint is tiny for tubes next to each other.

Yamaguchi and co are quick to point out the limitations of their new material. Although it has hugely asymmetric properties, its highest thermal conductivity is just 43 W m^{-1} K^{-1}, about the same as tin/lead solder.

However, they think they know why it is so low compared with that of single carbon nanotubes. They say that although the nanotubes are in thermal contact from end to end, this contact is not perfect. Each jump that the heat has to make from one nanotube to the next reduces the thermal conductivity. And the shorter the tubes, the more jumps are required.

Yamaguchi and co use nanotubes that are just 200 nanometers long. “This suggests that the [thermal conductivity in the direction of nanotube alignment] can be even greater with longer constituent carbon nanotubes,” they say.
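The argument can be captured in a toy series-resistance model: heat crossing a chain of end-to-end tubes sees each tube’s intrinsic resistance plus one junction resistance per contact. The junction value below is an illustrative guess, tuned to reproduce roughly 43 W m^{-1} K^{-1} at 200 nanometers; it is not a measured number from the paper:

```python
# Toy series-resistance model for heat flowing along end-to-end nanotubes.
# Each segment contributes (length / kappa) of thermal resistance per unit
# area, plus one junction resistance per contact.
KAPPA_TUBE = 1000.0    # W m^-1 K^-1, single-nanotube conductivity
R_JUNCTION = 4.5e-9    # K m^2 W^-1, assumed tube-to-tube contact resistance

def effective_conductivity(tube_length_m):
    # Conductivity of the chain = segment length / total resistance per segment.
    resistance_per_segment = tube_length_m / KAPPA_TUBE + R_JUNCTION
    return tube_length_m / resistance_per_segment

for length_nm in (200, 1000, 5000):
    print(length_nm, "nm:", round(effective_conductivity(length_nm * 1e-9), 1), "W/(m K)")
```

Longer tubes mean fewer junctions per unit length, so the effective conductivity climbs toward the intrinsic nanotube value.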

Making a similar material out of longer nanotubes will not necessarily be straightforward. The self-organizing behavior that creates the aligned films is more difficult for longer nanotubes. But this kind of materials science challenge will surely interest Yamaguchi and co and others. No doubt the experiments are already under way, with thermal engineers keeping their fingers crossed.

Ref: arxiv.org/abs/1911.11340 : One-Directional Thermal Transport In Densely Aligned Single-Wall Carbon Nanotube Films


The particular problem for the ordinary working Babylonian was this: Given a tax bill that has to be paid in crops, by how much should I increase the size of my field to pay it?

This problem can be written down as a quadratic equation of the form Ax^{2}+Bx+C=0. And it is solved with this formula:

x = (-B ± √(B^{2} - 4AC)) / 2A

Today, over 4,000 years later, millions of people have the quadratic formula etched into their minds thanks to the way mathematics is taught across the planet.

But far fewer people can derive this expression. That’s also due to the way mathematics is taught—the usual derivation relies on a mathematical trick, called “completing the square,” that is far from intuitive. Indeed, after the Babylonians, it took mathematicians many centuries to stumble across this proof.

Before and since, mathematicians have found a wide range of other ways to derive the formula. But all of them are also tricky and non-intuitive.

So it’s easy to imagine that mathematicians must have exhausted the problem. There just can’t be a better way to derive the quadratic formula.

Enter Po-Shen Loh, a mathematician at Carnegie Mellon University in Pittsburgh, who has found a simpler way—one that appears to have gone unnoticed these 4,000 years.

Loh’s approach does not rely on completing the square or any other difficult mathematical tricks. Indeed, it is simple enough to work as a general method itself, meaning students need not remember the formula at all. “The derivation has the potential to demystify the quadratic formula for students worldwide,” he says.

The new approach is straightforward. It starts by observing that if a quadratic equation can be factorized in the following way:

x^{2} + Bx + C = (x - R)(x - S)

Then the right-hand side equals 0 when x = R or when x = S, so those values are the roots of the quadratic.

Multiplying out the right-hand side gives

x^{2} + Bx + C = x^{2} - (R + S)x + RS

This is true when -B=R+S and when C=RS.

Now here comes the clever bit. Loh points out that the numbers, R and S, add up to -B when their average is -B/2.

“So we seek two numbers of the form -B/2±z, where z is a single unknown quantity,” he says. We can then multiply these numbers together to get an expression for C. So

(-B/2 + z)(-B/2 - z) = B^{2}/4 - z^{2} = C

Then some simple rearranging gives

z = √(B^{2}/4 - C)

Which means that the solutions of the quadratic equation are:

x = -B/2 ± √(B^{2}/4 - C)

Voilà! That’s the quadratic formula.

[The more general version can be derived by dividing the equation Ax^{2}+Bx+C=0 by A to give x^{2} + (B/A)x + C/A = 0 and then repeating the above process.]
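Loh’s recipe translates almost line for line into code. Here is a minimal sketch; the function name and the use of complex square roots for negative discriminants are my own choices, not from the paper:

```python
import cmath

def solve_quadratic(a, b, c):
    """Solve a*x^2 + b*x + c = 0 using Loh's averaging argument."""
    # Divide through by a to get the monic form x^2 + Bx + C = 0.
    B, C = b / a, c / a
    # The two roots average to -B/2, so write them as -B/2 + z and -B/2 - z.
    mid = -B / 2
    # Their product must equal C: mid^2 - z^2 = C, hence z = sqrt(mid^2 - C).
    z = cmath.sqrt(mid * mid - C)
    return mid + z, mid - z

# The article's worked example: x^2 - 2x + 4 = 0 has roots 1 ± i√3.
print(solve_quadratic(1, -2, 4))
```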

That’s a very significant improvement on the previous method, and Loh shows why with a simple example.

Find the roots of the following quadratic: x^{2} - 2x + 4 = 0

The traditional method would be to work out values for A, B, and C and plug them into the quadratic formula. But Loh’s approach solves the problem intuitively. The first step is to note that the two roots of the equation must be equal to -B/2 ± z = 1 ± z

And because their product must be C=4, we can write:

(1 + z)(1 - z) = 1 - z^{2} = 4, so z^{2} = -3 and z = √(-3)

So the roots are

x = 1 ± √(-3) = 1 ± i√3

Attempting the same problem using the traditional method is much trickier. Go on, give it a go! The new approach is much easier and more intuitive, not least because it doesn’t require the formula to be memorized at all.

An interesting question is why nobody has stumbled across and widely shared this method before.

Loh says he “would actually be very surprised if this approach has entirely eluded human discovery until the present day, given the 4,000 years of history on this topic, and the billions of people who have encountered the formula and its proof. Yet this technique is certainly not widely taught or known.”

Loh has searched the history of mathematics for an approach that resembles his, without success. He has looked at methods developed by the ancient Babylonians, Chinese, Greeks, Indians, and Arabs as well as modern mathematicians from the Renaissance until today. None of them appear to have made this step, even though the algebra is simple and has been known for centuries.

So why now? Loh thinks it is related to the way the conventional approach proves that quadratic equations have two roots. “Perhaps the reason is because it is actually mathematically nontrivial to make the reverse implication: that [a quadratic] always has two roots, and that those roots have sum −B and product C,” he says.

Loh, who is a mathematics educator and popularizer of some note, discovered his approach while analyzing mathematics curricula for schoolchildren, with the goal of developing new explanations. The derivation emerged from this process.

The question now is how widely it will spread and how quickly. To speed adoption, Loh has produced a video about the method. Either way, Babylonian tax calculators would surely have been impressed.

Ref: arxiv.org/abs/1910.06709 : A Simple Proof of the Quadratic Formula

*Correction: We amended a sentence to say that the method has never been widely shared before and included a quote from Loh.*

The first step, then, is to divide the cloud down the middle so that the particles on the left can be controlled separately from those on the right. The next step is to inject the message into the left-hand part of the cloud, where the chaotic behavior of the particles quickly scrambles it.

Can such a message ever be unscrambled?

In a new paper, Adam Brown at Google in California and a number of colleagues, including Leonard Susskind at Stanford University, one of the fathers of string theory, discuss exactly how such a message can, surprisingly, be made to reappear.

“The surprise is what happens next,” they say. After a period in which the message seems thoroughly scrambled, it abruptly unscrambles and recoheres at a point far away from where it was originally inserted. “The signal has unexpectedly refocused, without it being at all obvious what it was that acted as the lens,” they say.

But the really extraordinary thing they point out is that such an experiment throws light on one of the deepest mysteries of the universe: the quantum nature of gravity and spacetime.

First some background explanation. The key to understanding this thought experiment lies in the nature of emergent phenomena. Brown and co say that quantum systems can display emergent phenomena in just the same way as ordinary systems do.

For example, when two people talk to each other, the phenomenon is hard to understand from the point of view of modeling each individual molecule in the air. The room in which they talk might contain a billion billion billion molecules, each one colliding with another every tenth of a nanosecond.

The conversation continues anyway. “Communication is possible despite the chaos because the system nevertheless possesses emergent collective modes—sound waves—which behave in an orderly fashion,” Brown and his colleagues write.

A similar phenomenon operates on the quantum level. And it is this emergent phenomenon, Brown and his colleagues argue, that refocuses the quantum message in the earlier example.

“When quantum effects are important, complex patterns of entanglement can give rise to qualitatively new kinds of emergent collective phenomena,” they write. “One extreme example of this kind of emergence is precisely the holographic generation of spacetime and gravity from entanglement, complexity, and chaos.”

That’s why this thought experiment is the subject of so much interest. It allows physicists to think about a simple example of an emergent quantum phenomenon and how they might create and test one in the lab.

So how might they go about such an experiment? Brown and co say there are several ways to approach it. The first step is to create a set of entangled quantum states that can then be separated into two sets to be handled separately.

One way to do this is to create a collection of entangled pairs known as Bell pairs. Brown and co note that these pairs have already been created using rubidium atoms and with trapped ions.
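As a toy illustration of the starting resource, a Bell pair such as |Φ+⟩ = (|00⟩ + |11⟩)/√2 can be written out with plain state-vector arithmetic. This sketch models only the state itself, not either experiment:

```python
import numpy as np

# The Bell pair |Phi+> = (|00> + |11>)/sqrt(2), written as a 4-component
# state vector over the basis |00>, |01>, |10>, |11>.
ket00 = np.kron([1.0, 0.0], [1.0, 0.0])
ket11 = np.kron([0.0, 1.0], [0.0, 1.0])
bell = (ket00 + ket11) / np.sqrt(2)

# Measurement probabilities: the two qubits are perfectly correlated,
# landing on 00 or 11 with equal probability and never on 01 or 10.
probs = np.abs(bell) ** 2
print(probs)
```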

The next step is to insert quantum information into one half of these quantum states. The final step is to control the quantum evolution of the other half of the quantum states in a way that allows the message to reemerge.

However, experiments have already been performed that accomplish such “quantum scrambling,” in which information is spread throughout a quantum system and subsequently recovered. Notably, a group at the University of Maryland, College Park, together with collaborators at the University of California, Berkeley, and the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, published a paper in Nature in March 2019 describing their successful effort to do just that.

They used a quantum computer comprising a chain of nine ytterbium ions that are cooled by lasers while being held in a radio-frequency trap. The UMD researchers implemented a seven-qubit circuit in the middle seven of the nine ions. The first qubit was “scrambled” into three pairs of qubits, spreading the information it contained into six qubits in total (one of which was the original qubit). They then measured the seventh qubit, which had been paired with the sixth qubit. With a fidelity of about 80%, the seventh qubit was found to be in a quantum state indistinguishable from the original first qubit.

Interpreting this result is not straightforward. However, the group performed several control experiments that, for technical reasons too subtle to explain here, confirmed their claim that the information initially encoded only in the first qubit was truly delocalized across the entire system.

“The scrambling-induced teleportation observed in our experiment can be reinterpreted as simulating the propagation of information through a traversable wormhole that connects a pair of black holes,” the Nature paper notes.

Such experiments suggest a number of exciting possibilities. The ability to play with analogues of an emergent form of spacetime makes it possible to test certain ideas about quantum gravity.

Brown and co are clearly excited. They write: “The technology for the control of complex quantum many-body systems is advancing rapidly, and we appear to be at the dawn of a new era in physics—the study of quantum gravity in the lab.”

Ref: arxiv.org/abs/1911.06314 : Quantum Gravity in the Lab: Teleportation by Size and Traversable Wormholes.

*Correction: January 14, 2020*

*This story originally said: “The bottom line is that this kind of experiment is beyond the state of the current quantum art. But it could be possible in the next few years, given the rate at which physicists are developing their quantum skills.” This statement was incorrect. The text has been edited to reflect a trapped ion experiment reported in the March 6, 2019, issue of Nature that accomplishes exactly the sort of scrambling, teleportation, and decoding that was being discussed.*

*This story has been further edited from the original version to reflect the fact that though the paper by Brown et al. published on November 14, 2019, is certainly thought-provoking, it is not the first paper to suggest that tabletop quantum computing experiments can be a useful and interesting way to gain insights into quantum gravity.*

*This story has also been edited throughout for clarity.*

*MIT Technology Review regrets the errors.*