Re: Futuristic developments
Posted: Fri Nov 02, 2018 11:46 am
Contributors, my compliments on supplying these informative posts.
Thursday, 01 November 2018 | IANS | Chennai
Researchers at Indian Institute of Technology-Madras (IITM) have designed and booted up India's first microprocessor, Shakti, which could be used in mobile computing and other devices.
According to IITM, the Shakti microprocessor can be used in low-power wireless systems and networking systems besides reducing reliance on imported microprocessors in communication and defence sectors.
The microprocessor can be used by others as it is on par with international standards, researchers said.
The Shakti family of processors was fabricated at the Semi-Conductor Laboratory (SCL) of the Indian Space Research Organisation (ISRO) in Chandigarh, making it the first 'RISC V Microprocessor' to be completely designed and made in India, IITM said.
The other crucial aspect of such an indigenous design, development and fabricating approach is reducing the risk of deploying systems that may be infected with back-doors and hardware Trojans.
This development will assume huge significance when systems based on Shakti processors are adopted by strategic sectors such as defence, nuclear power installations, government agencies and departments.
"With the advent of Digital India, there are several applications that require customisable processor cores. The 180nm fabrication facility at SCL Chandigarh is crucial in getting these cores manufacturers within our Country," said Prof. Kamakoti Veezhinathan, Lead Researcher, Reconfigurable Intelligent Systems Engineering (RISE) Laboratory, Department of Computer Science and Engineering at IITM.
According to IITM, the Shakti processor family targets clock speeds suited to various end-user devices, such as consumer electronics, mobile computing devices, embedded low-power wireless systems and networking systems, among others.
The project is funded by the Union Ministry of Electronics and Information Technology.
The impact of this completely indigenous fabrication is that India has now attained independence in designing, developing and fabricating end-to-end systems within the country, leading to self-sufficiency, IITM claimed.
With a large percentage of applications requiring sub-200 MHz processors, the current success paves the way to producing many hand-held and control application devices.
In July 2018, an initial batch of 300 chips named RISECREEK was produced under Project Shakti and fabricated at Intel's facility in Oregon, USA; those chips successfully booted the Linux operating system. Now the fabrication has been done in India.
Thursday 1 November 2018 | 10:51 CET | News
The Chinese smartphone maker Royole claims to have made the world's first commercial foldable smartphone, a combination of mobile phone and tablet with a flexible screen. Huawei, Samsung and LG, among others, also plan to release a smartphone and/or tablet with a foldable screen in the coming year.
The Royole FlexPai can be used folded or unfolded, which would combine the portability of a smartphone with the screen size of an HD tablet. The foldable FlexPai smartphone is based on Royole's Flexible+ platform, which can be integrated into products and applications across many sectors, according to the vendor. A developer model of the new FlexPai smartphone has been available since 31 October 2018 via the company's website.
Unfolded versus folded
Unfolded, the FlexPai offers, among other things:
- Support for split-screen mode and multitasking.
- Support for drag-and-drop between applications, letting it share functionality like a computer.
- Automatic adjustment of screen dimensions.
Folded, the FlexPai supports two screens with separate, simultaneous operation. The interfaces on the primary and secondary screens can communicate with each other or work independently. Users can also receive notifications on the side bar of the edge screen, which can be used to manage calls, messages and other notifications that could interrupt use of the primary and secondary screens.
Folding at various angles
According to the manufacturer, the FlexPai screen is a fully flexible display. The FlexPai can also be folded at various angles. The FlexPai uses Qualcomm's latest Snapdragon 8-series SoC. The camera setup consists of a 20-megapixel telephoto lens and a 16-megapixel wide-angle lens. Other features include expandable storage via microSD, fingerprint ID, USB-C charging and stereo speakers.
The consumer model of the smartphone will be available to Chinese consumers at the same time. Order fulfilment begins at the end of December 2018.
Cami Rosso, October 27, 2018
Recent advancement in artificial intelligence, namely in deep learning, has borrowed concepts from the human brain. The architecture of most deep learning models is based on layers of processing: an artificial neural network inspired by the neurons of the biological brain. Yet neuroscientists do not agree on exactly what intelligence is and how it is formed in the human brain; it is a phenomenon that remains unexplained. Jeff Hawkins, technologist, scientist, and co-founder of Numenta, presented an innovative framework for understanding how the human neocortex operates, called "The Thousand Brains Theory of Intelligence," at the Human Brain Project Summit in Maastricht, the Netherlands, in October 2018.
The neocortex is the part of the human brain that is involved in higher-order functions such as conscious thought, spatial reasoning, language, generation of motor commands, and sensory perception. The researchers at Numenta posit that every part of the human neocortex learns complete models of objects and concepts. The team hypothesizes that grid cell-like neurons exist in every column of the human neocortex. The research team also proposes a new type of neuron called the displacement cell, which acts as a complement to grid cells, and is also located throughout the neocortex. Grid cells are place-modulated neurons that enable an understanding of position. The researchers believe that every cortical column learns models of complete objects by combining input with a grid cell-derived location, then integrating over movements.
To illustrate this concept, the researchers use a coffee cup as an example. When we see and touch a coffee cup, many columns in the visual and somatosensory hierarchies simultaneously observe different parts of the cup. Every column in every region learns a complete model of the cup by combining the sensory input (in this example, vision and touch) with an object-centric location of that input, and then integrating over movements of the sensor. The models of the cup are not identical, because each is learned from a different subset of the sensory arrays. In contrast to the commonly held view, in which sensory input is processed in a hierarchy of cortical regions, this theory states that the connections are not hierarchical in nature. Instead, the non-hierarchical connections may link the brain hemispheres, and cross modalities and hierarchical levels. Due to the non-hierarchical connections, inference may occur with movement of the sensors.
According to the researchers, the neocortex has hundreds, if not thousands, of models of each object in the world, and the integration of the observed features occurs in every column, at all levels of the hierarchy, not just at the top of the hierarchy — hence the name, “The Thousand Brains Theory of Intelligence.” The framework redefines how the human neocortex functions. According to the researchers, the neocortex contains thousands of models functioning not only in hierarchy, but also in parallel. It’s an innovative theory that challenges conventional views and may impact both artificial intelligence and neuroscience in the future.
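The column-and-voting idea above can be sketched in a few lines of Python. This is only an illustrative toy, not Numenta's actual model: each "column" stores an object as a set of made-up (location, feature) pairs, and recognition is the intersection of the candidate objects across columns.

```python
# Toy model of "many columns, each with a complete model" and recognition by
# voting. All object names and (location, feature) pairs are hypothetical.

def learn(column, obj, pairs):
    """Store an object in one column as a set of (location, feature) pairs."""
    column.setdefault(obj, set()).update(pairs)

def candidates(column, observed):
    """Objects in this column whose model contains every observed pair."""
    return {obj for obj, model in column.items() if observed <= model}

def vote(columns, observations):
    """Intersect candidate sets across columns until they agree."""
    result = None
    for column, observed in zip(columns, observations):
        c = candidates(column, observed)
        result = c if result is None else result & c
    return result

# Two columns learn their own, non-identical models of a cup and a can.
col1, col2 = {}, {}
learn(col1, "cup", {("rim", "curved"), ("handle", "ring")})
learn(col1, "can", {("rim", "curved"), ("side", "smooth")})
learn(col2, "cup", {("base", "flat"), ("handle", "ring")})
learn(col2, "can", {("base", "flat"), ("side", "ribbed")})

# Column 1's input is ambiguous (both objects have a curved rim), but the
# intersection with column 2's candidates settles on the cup.
print(vote([col1, col2], [{("rim", "curved")}, {("handle", "ring")}]))  # {'cup'}
```

In the actual theory the "location" would be a grid-cell-derived signal updated by movement; here it is just a label.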
By Rob Mitchum, Oct 16, 2018
Behind most of today’s artificial intelligence technologies, from self-driving cars to facial recognition and virtual assistants, lie artificial neural networks. Though based loosely on the way neurons communicate in the brain, these “deep learning” systems remain incapable of many basic functions that would be essential for primates and other organisms.
However, a new study from University of Chicago neuroscientists found that adapting a well-known brain mechanism can dramatically improve the ability of artificial neural networks to learn multiple tasks and avoid the persistent AI challenge of “catastrophic forgetting.” The study, published in Proceedings of the National Academy of Sciences, provides a unique example of how neuroscience research can inform new computer science strategies, and, conversely, how AI technology can help scientists better understand the human brain.
When combined with previously reported methods for stabilizing synaptic connections in artificial neural networks, the new algorithm allowed single artificial neural networks to learn and perform hundreds of tasks with only minimal loss of accuracy, potentially enabling more powerful and efficient AI technologies.
“Intuitively, you might think the more tasks you want a network to know, the bigger the network might have to be,” said David Freedman, professor of neurobiology at UChicago. “But the brain suggests there's probably some efficient way of packing in lots of knowledge into a fairly small network. When you look at parts of the brain involved in higher cognitive functions, you tend to find that the same areas, even the same cells, participate in many different functions. The idea was to draw inspiration from what the brain does in order to solve challenges with neural networks.”
In artificial neural networks, “catastrophic forgetting” refers to the difficulty in teaching the system to perform new skills without losing previously learned functions. For example, if a network initially trained to distinguish between photos of dogs and cats is then re-trained to distinguish between dogs and horses, it will lose its earlier ability.
“If you show a trained neural network a new task, it will forget about its previous task completely,” said Gregory Grant, who is now a researcher in the Freedman lab. “It says, ‘I don't need that information,’ and overwrites it. That's catastrophic forgetting. It happens very quickly; within just a couple of iterations, your previous task could be utterly obliterated.”
By contrast, the brain is capable of “continual learning,” acquiring new knowledge without eliminating old memories, even when the same neurons are used for multiple tasks. One strategy the brain uses for this learning challenge is the selective activation of cells or cellular components for different tasks—essentially turning on smaller, overlapping sub-networks for each individual skill, or under different contexts.
The UChicago researchers adapted this neuroscientific mechanism to artificial neural networks through an algorithm they call "context-dependent gating." For each new task learned, only a random 20 percent of the neural network is activated. After the network is trained on hundreds of different tasks, a single node might be involved in dozens of operations, but with a unique set of peers for each individual skill.
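A minimal sketch of what such gating could look like, under the assumption that a "gate" is just a fixed random binary mask per task; the published algorithm applies such masks to hidden units during training and testing, and everything else here is simplified:

```python
# Context-dependent gating, illustrative sketch: for each task, a fixed random
# 20% of units stays active; all other units are gated off for that task.
import random

def make_gates(n_units, n_tasks, frac=0.2, seed=0):
    """One binary gate per task, each keeping ~frac of the units."""
    rng = random.Random(seed)
    k = int(n_units * frac)
    return [set(rng.sample(range(n_units), k)) for _ in range(n_tasks)]

def gated(activations, gate):
    """Zero out every unit not in this task's gate."""
    return [a if i in gate else 0.0 for i, a in enumerate(activations)]

gates = make_gates(n_units=10, n_tasks=3)
acts = [1.0] * 10
out = gated(acts, gates[0])
print(sum(1 for a in out if a != 0.0))  # prints 2: 2 of 10 units stay active
```

Because each task draws its own random 20 percent, any one unit ends up shared by some tasks and silent for others, which matches the "overlapping sub-networks" picture above.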
When combined with methods previously developed by Google and Stanford researchers, context-dependent gating allowed networks to learn as many as 500 tasks with only a small decrease in accuracy.
“It was a little bit surprising that something this simple worked so well,” said Nicolas Masse, a postdoctoral researcher in the Freedman lab. “But with this method, a fairly medium-sized network can be carved up a whole bunch of ways to be able to learn many different tasks if done properly.”
As such, the approach likely has great potential in the growing AI industry, where companies developing autonomous vehicles, robotics and other smart technologies need to pack complex learning capabilities into consumer-level computers. The UChicago team is currently working with the Polsky Center for Entrepreneurship and Innovation to explore commercialization options for the algorithm.
The computational research also benefits the laboratory’s original focus on better understanding the primate brain by recording its activity as animals learn and behave. Modeling and testing strategies that enable learning, attention, sensory processing and other functions in a computer can motivate and suggest new biological experiments that probe the mechanisms of intelligence both natural and artificial, the researchers said.
“Adding in this component of research to the lab has really opened a lot of doors in terms of allowing us to think about new kinds of problems, new kinds of neuroscience topics and problems that we normally can't really address using the experimental techniques currently available to us in the lab,” Freedman said. “We hope this is the starting point for more work in the lab to both identify those principles and to help create artificial networks that continue learning and building on prior knowledge.”
Hans v d Mortel sr wrote: xplosive wrote:
By Rob Mitchum, Oct 16, 2018
The study, published in Proceedings of the National Academy of Sciences, provides a unique example of how neuroscience research can inform new computer science strategies, and, conversely, how AI technology can help scientists better understand the human brain.
Perhaps Rob should have chosen different words; in any case, we will never understand exactly how the brain works.
Hans v d Mortel sr wrote: Which also indirectly proves that human beings have no free will.
IBM’s Jeopardy winning computer Watson is a serious threat, not just to the livelihood of medical diagnosticians, but to other professionals who may find themselves going the way of welders. Besides its economic threat, the advance of AI seems to pose a cultural threat: if physical systems can do what we do without thought to give meaning to their achievements, the conscious human mind will be displaced from its unique role in the universe as a creative, responsible, rational agent.
But this worry has a more powerful basis in the Nobel Prize-winning discoveries of a quartet of neuroscientists: Eric Kandel, John O'Keefe, and Edvard and May-Britt Moser. For between them they have shown that the human brain doesn't work the way conscious experience suggests at all. Instead it operates to deliver human achievements in the way IBM's Watson does. Thoughts with meaning have no more role in the human brain than in artificial intelligence.
O’Keefe and the Mosers learned how they could read off the rat’s location, direction, speed and its environment from the firing of particular neuronal circuits in the entorhinal cortex. The Mosers correlated specific locations of the rat and landmarks in its cage with specific neuronal circuits distributed around the entorhinal cortex. Then they could interpret the firings as a correct representation, a map for them, of where the rat is, where it’s going and what’s in the cage. They could read off the rat’s location without watching the rat at all!
Of course you could argue that what Nobel Prize winning research shows about rats is irrelevant to humans. But you’d be flying in the face of clinical evidence about human deficits and disorders, anatomical and physiological identities between the structure of rat and human brains, and the detailed molecular biology of learning and information transmission in the neuronal circuitry of both us and Rattus rattus, the very reasons neuroscientists interested in human brains have invested so much time and effort in learning how rat brains work. And won Nobel Prizes for doing it.
Neuroscience shows that in our brains the neural circuits neither have nor need content to do their jobs.
What does all this mean? Watson may beat us at Jeopardy, but we are convinced we have something AI will always lack: We are agents in the world, whose decisions, choices, and actions are made meaningful by the content of the belief/desire pairings that bring them about. But what if the theory of mind that underwrites our distinctiveness is built on sand, is just another useful illusion foisted upon us by the Darwinian processes that got us here? Then it will turn out that neuroscience is a far greater threat to human distinctiveness than AI will ever be.
xplosive wrote: What is meant is: threatening to the idea that a human being possesses something unique that could not be equalled or surpassed by anything else.
November 5, 2018, American Physical Society
Scientists are working to dramatically speed up the development of fusion energy in an effort to deliver power to the electric grid soon enough to help mitigate impacts of climate change. The arrival of a breakthrough technology—high-temperature superconductors, which can be used to build magnets that produce stronger magnetic fields than previously possible—could help them achieve this goal. Researchers plan to use this technology to build magnets at the scale required for fusion, followed by construction of what would be the world's first fusion experiment to yield a net energy gain.
The effort is a collaboration between Massachusetts Institute of Technology's Plasma Science & Fusion Center and Commonwealth Fusion Systems, and they will present their work at the American Physical Society Division of Plasma Physics meeting in Portland, Ore.
Fusion power is generated when nuclei of small atoms combine into larger ones in a process that releases enormous amounts of energy. These nuclei, typically heavier cousins of hydrogen called deuterium and tritium, are positively charged and so feel strong repulsion that can only be overcome at temperatures of hundreds of millions of degrees. While these temperatures, and thus fusion reactions, can be produced in modern fusion experiments, the conditions required for a net energy gain have not yet been achieved.
One potential solution to this could be increasing the strength of the magnets. Magnetic fields in fusion devices serve to keep these hot ionized gases, called plasmas, isolated and insulated from ordinary matter. The quality of this insulation gets more effective as the field gets stronger, meaning that one needs less space to keep the plasma hot. Doubling the magnetic field in a fusion device allows one to reduce its volume—a good indicator of how much the device costs—by a factor of eight, while achieving the same performance. Thus, stronger magnetic fields make fusion smaller, faster and cheaper.
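The volume argument above can be checked with one line of arithmetic: if, as the article states, the required plasma volume falls as the inverse cube of the magnetic field strength at fixed performance, then doubling the field leaves one eighth of the volume.

```python
# Worked example of the field-strength scaling quoted in the article:
# at fixed fusion performance, required volume ~ 1 / B^3.

def volume_scale(b_old, b_new):
    """Relative device volume needed when the field goes from b_old to b_new."""
    return (b_old / b_new) ** 3

print(volume_scale(1.0, 2.0))  # 0.125: doubling the field -> 1/8 the volume
```

Since volume is described as a good cost proxy, the same factor of eight applies roughly to cost, which is the article's point about smaller, faster, cheaper machines.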
A breakthrough in superconductor technology could allow fusion power plants to come to fruition. Superconductors are materials that allow currents to pass through them without losing energy, but to do so they must be very cold. New superconducting compounds, however, can operate at much higher temperatures than conventional superconductors. Critical for fusion, these superconductors function even when placed in very strong magnetic fields.
While originally in a form not useful for building magnets, researchers have now found ways to manufacture high-temperature superconductors in the form of "tapes" or "ribbons" that make magnets with unprecedented performance. The magnets built from them so far, however, are much too small for fusion machines. Before the new fusion device, called SPARC, can be built, the new superconductors must be incorporated into the kind of large, strong magnets needed for fusion.
Once the magnet development is successful, the next step will be to construct and operate the SPARC fusion experiment. SPARC will be a tokamak fusion device, a type of magnetic confinement configuration similar to many machines already in operation.
Demonstrating a net energy gain, the aim of fusion research for more than 60 years and an accomplishment analogous to the Wright brothers' first flight at Kitty Hawk, could be enough to put fusion firmly into national energy plans and launch commercial development. The goal is to have SPARC operational by 2025.
By Rob Thubron on November 2, 2018, 7:23 AM
We’re used to hearing about supercomputers such as IBM’s 200-petaflop Summit machine, but the million-processor-core ‘Spiking Neural Network Architecture’ machine, or ‘SpiNNaker,’ is a bit different. It’s the world’s largest neuromorphic supercomputer, designed to copy the workings of the human brain and unlock its secrets, and it’s been switched on for the first time.
Built by the University of Manchester’s School of Computer Science, it took 12 years and $19.5 million to complete the project, which had its one-millionth processor core fitted last week. SpiNNaker is capable of performing 200 million million actions per second, with each chip containing 100 million moving parts.
The machine can model more biological neurons in real time than any other computer in the world. These neurons are basic brain cells present in the nervous system that communicate primarily by emitting ‘spikes’ of pure electro-chemical energy.
SpiNNaker works by sending billions of small amounts of information simultaneously to thousands of different destinations, thereby mimicking the parallel communication architecture of the brain. This makes it different from traditional computers, which send large amounts of data from one point to another via a standard network.
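The contrast described above can be sketched as a toy multicast router: each spike is a tiny packet (here just a source id) fanned out to many destinations at once, rather than one large transfer between two points. This is purely illustrative; real SpiNNaker routing uses hardware multicast keys and on-chip routing tables.

```python
# Toy multicast "spike" routing: each spike is fanned out simultaneously to
# every destination listed for its source neuron, mimicking the brain-like
# many-small-messages pattern described in the article.

def route(spikes, routing_table):
    """Deliver each spike (a source id) to every destination listed for it."""
    inboxes = {}
    for src in spikes:
        for dst in routing_table.get(src, ()):
            inboxes.setdefault(dst, []).append(src)
    return inboxes

# Hypothetical connectivity: n1 fans out to three neurons, n2 to one.
table = {"n1": ["n2", "n3", "n4"], "n2": ["n4"]}
print(route(["n1", "n2"], table))
```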
“SpiNNaker completely re-thinks the way conventional computers work. We’ve essentially created a machine that works more like a brain than a traditional computer, which is extremely exciting,” said Steve Furber, Professor of Computer Engineering.
“The ultimate objective for the project has always been a million cores in a single computer for real-time brain modeling applications, and we have now achieved it, which is fantastic.”
SpiNNaker brings the creators’ ultimate aim of building a model of a billion biological neurons in real time a step closer. The neuromorphic supercomputer will run large-scale real-time simulations of different regions of the brain, including the Basal Ganglia, which is an area affected by Parkinson’s disease. It’s hoped this will lead to neurological breakthroughs in areas such as pharmaceutical testing.
SpiNNaker has also been used to power a robot called the SpOmnibot, which can navigate the real world by identifying objects and moving toward or ignoring them.
“Neuroscientists can now use SpiNNaker to help unlock some of the secrets of how the human brain works by running unprecedentedly large-scale simulations. It also works as a real-time neural simulator that allows roboticists to design large-scale neural networks into mobile robots so they can walk, talk and move with flexibility and low power,” added Professor Furber.
SpiNNaker (a contraction of Spiking Neural Network Architecture) is a million-core computing engine whose flagship goal is to be able to simulate the behaviour of aggregates of up to a billion neurons in real time.
"We found that on average the human brain has 86bn neurons. And not one [of the brains] that we looked at so far has the 100bn. Even though it may sound like a small difference the 14bn neurons amount to pretty much the number of neurons that a baboon brain has or almost half the number of neurons in the gorilla brain. So that's a pretty large difference actually."
In light of the developments already mentioned in this thread, which could give new computers more than a thousandfold increase in speed and capacity, we may be only a few years away from a computer on which the workings of a human brain can be simulated in their entirety.
Quantum 'compass' could allow navigation without relying on satellites
November 9, 2018 by Hayley Dunning, Imperial College London
Close-up of the accelerometer. Credit: Imperial College London
The UK's first quantum accelerometer for navigation has been demonstrated by a team from Imperial College London and M Squared.
Most navigation today relies on a global navigation satellite system (GNSS), such as GPS, which sends and receives signals from satellites orbiting the Earth. The quantum accelerometer is a self-contained system that does not rely on any external signals.
This is particularly important because satellite signals can become unavailable due to blockages such as tall buildings, or can be jammed, imitated or denied – preventing accurate navigation. One day of denial of the satellite service would cost the UK £1 billion.
Now, for the first time, a UK team has demonstrated a transportable, standalone quantum accelerometer at the National Quantum Technologies Showcase, an event demonstrating the technological progress arising from the UK National Quantum Technologies Programme – a £270m UK Government investment over five years.
The device, built by Imperial College London and M Squared, was funded through the Defence Science and Technology Laboratory's Future Sensing and Situational Awareness Programme, the Engineering and Physical Sciences Research Council, and Innovate UK. It represents the UK's first commercially viable quantum accelerometer, which could be used for navigation.
Accelerometers measure how an object's velocity changes over time. With this, and the starting point of the object, the new position can be calculated.
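That principle is ordinary dead reckoning and can be sketched numerically: integrate measured acceleration once to get velocity and again to get position, starting from a known initial state. This illustrates only the arithmetic, not the quantum device's measurement method.

```python
# Dead reckoning from acceleration samples using simple Euler integration:
# velocity is the running sum of a*dt, position the running sum of v*dt.

def dead_reckon(accels, dt, x0=0.0, v0=0.0):
    """Return (position, velocity) after integrating acceleration samples."""
    x, v = x0, v0
    for a in accels:
        v += a * dt
        x += v * dt
    return x, v

# Constant 1 m/s^2 for 10 samples of 0.1 s: final velocity is 1.0 m/s.
x, v = dead_reckon([1.0] * 10, dt=0.1)
print(x, v)
```

Because every sample's error is accumulated, any bias in the accelerometer grows into position drift over time, which is exactly why the extreme precision of an atom interferometer matters for long-duration navigation.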
Using the precision of ultra-cold atoms
Accelerometers have existed for some time, and are present today in technologies like mobile phones and laptops. However, these devices cannot maintain their accuracy over longer periods without an external reference.
The quantum accelerometer relies on the precision and accuracy made possible by measuring properties of ultra-cold atoms. At extremely low temperatures, the atoms behave in a 'quantum' way, acting like both matter and waves.
Dr. Joseph Cotter, from the Centre for Cold Matter at Imperial, said: "When the atoms are ultra-cold we have to use quantum mechanics to describe how they move, and this allows us to make what we call an atom interferometer."
As the atoms fall, their wave properties are affected by the acceleration of the vehicle. Using an 'optical ruler', the accelerometer is able to measure these minute changes very accurately.
To make the atoms cold enough, and to probe their properties as they respond to acceleration, very powerful lasers that can be precisely controlled are needed.
Putting the UK at the heart of the coming quantum age
Dr. Joseph Thom, Quantum Technology Scientist at M Squared, said: "As part of our work in commercialising cold atom quantum sensors, we developed a universal laser system for cold atom-based sensors that we have already implemented in our quantum gravimeter. This laser is now also used in the quantum accelerometer we have built in collaboration with Imperial. Combining high power, exceptionally low noise and frequency tunability, the laser system cools the atoms and provides the optical ruler for the acceleration measurements."
The current system is designed for navigation of large vehicles, such as ships and even trains. However, the principle can also be used for fundamental science research, such as in the search for dark energy and gravitational waves, which the Imperial team are also working on.
Professor Ed Hinds, from the Centre for Cold Matter at Imperial, said: "I think it's tremendously exciting that this quantum technology is now moving out of the basic science lab and being applied to problems in the wider world, all from the fantastic sensitivity and reliability that you can only get from these quantum systems."
Dr. Graeme Malcolm, founder and CEO of M Squared, said: "This commercially viable quantum device, the accelerometer, will put the UK at the heart of the coming quantum age. The collaborative efforts to realise the potential of quantum navigation illustrate Britain's unique strength in bringing together industry and academia – building on advancements at the frontier of science, out of the laboratory to create real-world applications for the betterment of society."
https://phys.org/news/2018-11-quantum-c ... UQOwHfLj-c
Apple considers video calling with the Apple Watch
13 November 2018
Apple has patented a camera system for its Apple Watch. By merging images, the company aims to avoid the drawbacks of smartwatch cameras.
Apple was granted the patent on Tuesday, AppleInsider discovered. Apple describes a camera system on a smartwatch consisting of one or two wide-angle lenses, both on the watch itself and in the watch band.
The cameras record images continuously, which the system merges. The result should be a single image that is automatically focused on the subject.
That could be useful for video calling, for example. Apple states that traditional cameras on a watch produce an 'unflattering' image that films up the nose, because users do not hold the watch straight in front of their face.
Apple's system would use facial recognition to capture the user's movements and convert them into a partly artificially composed rendering of the user's face.
Video calling without lifting your wrist
In a video call, it would appear to the recipient as if the caller is holding the camera straight in front of their face, even though the camera is actually much lower. The caller's face would also remain steady in the frame, even when the watch is moved.
Recent iPhones and new iPad Pro models already use a face scanner that lets users have 3D emoji imitate their facial movements. Apple may want to apply a modified version of that technique.
Users could also film what is in front of them with the system. In that case too, images from multiple cameras are merged, so precise aiming is not needed.
Although the patent has been granted to Apple, it is not certain that the company will actually use the technique. Several other companies previously did add cameras to smartwatches, but these did not catch on. Recent smartwatches generally have no camera.
https://www.nu.nl/gadgets/5570033/apple ... watch.html
New Research Shows That Time Travel Is Mathematically Possible
Ateeq ahmed, May 01, 2018
Even before Einstein theorized that time is relative and flexible, humanity had already been imagining the possibility of time travel. In fact, science fiction is filled with time travelers. Some use metahuman abilities to do so, but most rely on a device generally known as a time machine.
Now, two physicists think that it’s time to bring the time machine into the real world — sort of.
“People think of time travel as something as fiction. And we tend to think it’s not possible because we don’t actually do it,” Ben Tippett, a theoretical physicist and mathematician from the University of British Columbia, said in a UBC news release. “But, mathematically, it is possible.”
Essentially, what Tippett and University of Maryland astrophysicist David Tsang developed is a mathematical formula that uses Einstein’s theory of General Relativity to prove that time travel is possible, in theory. That is, time travel fitting a layperson’s understanding of the concept as moving “backwards and forwards through time and space, as interpreted by an external observer,” according to the abstract of their paper, which is published in the journal Classical and Quantum Gravity.
Oh, and they’re calling it a TARDIS — yes, “Doctor Who” fans, hurray! — which stands for a Traversable Acausal Retrograde Domain in Space-time.
Feasible but Not Possible. Yet
“My model of a time machine uses the curved space-time to bend time into a circle for the passengers, not in a straight line,” Tippett explained. “That circle takes us back in time.”
Simply put, their model assumes that time could curve around high-mass objects in the same way that physical space does in the universe. For Tippett and Tsang, a TARDIS is a space-time geometry “bubble” that travels faster than the speed of light.
“It is a box which travels ‘forwards’ and then ‘backwards’ in time along a circular path through spacetime,” they wrote in their paper.
Unfortunately, it’s still not possible to construct such a time machine.
“While it is mathematically feasible, it is not yet possible to build a space-time machine because we need materials — which we call exotic matter — to bend space-time in these impossible ways, but they have yet to be discovered,” Tippett explained.
Indeed, their work isn’t the first to suggest that time traveling can be done. Various other experiments, including those that rely on photon stimulation, suggest that time travel is feasible. Another theory explores the potential particles of time. However, some think that a time machine wouldn’t be feasible because time traveling itself isn’t possible. One points to the intimate connection between time and energy as the reason time traveling is improbable. Another suggests that time travel isn’t going to work because there’s no future to travel to yet.
Whatever the case may be, there’s one thing that these researchers all agree on. As Tippet put it, “Studying space-time is both fascinating and problematic.”
References: ScienceAlert, IOP Science, Phys.Org
https://www.physics-astronomy.org/2018/ ... qhHeqvb6GA
Will De Wallen become obsolete? The self-driving car of the future will be a mobile red-light room, tourism researchers predict
By Graham Rapier
Google's self-driving car may be a bit too cramped for sex. Perhaps Elon Musk's Tesla has a solution for that.
Roving red-light rooms may well replace the world's red-light districts once the self-driving car makes its entrance. After all, if we no longer have to steer, our hands are free!
So write tourism researchers Scott Cohen and Debbie Hopkins in the scientific journal Annals of Tourism Research. They first point to love hotels, where you pay by the hour rather than by the night. "Autonomous vehicles are likely to replace love hotels. This will have considerable consequences for urban tourism, because sex plays a central role in the experience of many tourists." Once we no longer sit behind the wheel, time is freed up for other things. Moreover, there will be far more room inside the cars, since a steering wheel, pedals and all the other controls are not really needed either.
That said, much is still unclear about the interior of self-driving cars, which is why the implications for tourism and travel are far from easy to sketch. The researchers behind this study want precisely to lend a helping hand in that direction, by suggesting possibilities now that a regular car designer might not readily come up with.
At the same time, they write, these uncertainties also create opportunities for researchers and philosophers to think deeply about how city life actually works and what visitors can add to the city of the future.
Commercial self-driving taxi service to launch in December
13 November 2018
Google sister company Waymo will launch a self-driving taxi service in December, taking on competitors such as Uber.
The taxi service will be introduced under a new name, an insider told Bloomberg.
Waymo reportedly does not want a big launch announcement, and no app will appear in the app stores right away. The service is said to start small, with dozens to hundreds of authorized test passengers in and around the American city of Phoenix.
Waymo has been testing its self-driving cars without a human driver on public roads since late 2017. Phoenix has special regulations that allow such tests. In recent months, Waymo has already been testing those cars with a group of about four hundred volunteer families.
Until now, test users were not allowed to say anything about their participation. Once Waymo launches its new service, users would be allowed to take photos and videos and to bring friends or the media along on a ride.
Several companies are working on self-driving taxis
Google has been working on self-driving cars for almost ten years. In the early years this happened largely in secret; by now, Google's vehicles have been driving on public roads in several American cities for months to years.
Initially, the cars still had to be tested with human drivers behind the wheel, ready to intervene whenever the car's systems failed. By now, Google's system is advanced enough to take to the road safely without such a driver.
Waymo would reportedly still put human drivers behind the wheel in some of its taxi-service cars, to reassure new passengers.
Besides Waymo, several other American companies are working on self-driving cars. Uber, too, eventually wants to deploy cars without a driver. Tesla's cars are also meant to become fully self-driving in time, after which, according to Tesla CEO Elon Musk, customers will be able to earn extra money by deploying their car as a self-driving taxi.
By: NU.nl | Image: Waymo
https://www.nu.nl/gadgets/5570543/comme ... ember.html
Jeff Bezos, The World's Richest Man, Says He Will Use His $131 Billion Fortune To Fund Space Travel And Create Human Settlements Beyond Earth
November 18, 2018
The world's richest person, Jeff Bezos, already liquidates about $1 billion (£0.7 billion) of his fortune every year to fund space travel. That figure is only set to increase, as he now says he thinks the best use of his astonishing $131 billion (£96 billion) wealth is getting humanity into space.
He says our solar system could support a trillion humans, meaning we would have 'a thousand Einsteins and a thousand Mozarts', giving humanity incredible powers.
Bezos has previously called his Blue Origin space company his 'most important project' and says he one day aims to have human settlements beyond Earth. Bezos made the comments during an interview with CEO of Axel Springer Mathias Döpfner and Business Insider's US editor-in-chief Alyson Shontell in Berlin.
'The only way that I can see to deploy this much financial resource is by converting my Amazon winnings into space travel,' Bezos said, writes Business Insider. 'I am going to use my financial lottery winnings from Amazon to fund that', he said. The Amazon CEO says he has been interested in space since he was five years old and believes humanity needs to explore it in order to cultivate a more advanced civilisation.
'I am very lucky that I feel like I have a mission-driven purpose with Blue Origin that is, I think, incredibly important for civilization long term', he said. He claimed that humanity risks reaching a state of stasis and getting to new frontiers would mean our species stays productive and dynamic.
'The solar system can easily support a trillion humans', he said. 'And if we had a trillion humans, we would have a thousand Einsteins and a thousand Mozarts and unlimited, for all practical purposes, resources and solar power unlimited for all practical purposes', he said.
After the interview the Amazon CEO shared an image from late-night TV host Conan O'Brien on his Instagram. O'Brien had made a joke about how Bezos planned to spend his fortune during the Tuesday night episode of 'Conan'.
'Amazon CEO Jeff Bezos says he is planning to spend the majority of his fortune getting himself into space', O'Brien said. 'He said, "I've seen what you people buy, and I don't want to be near you"', O'Brien joked.
Jeff Bezos' space tourism project with Blue Origin is competing with a similar programme in development by SpaceX, the rocket firm founded and run by Tesla CEO Elon Musk, and Virgin Galactic, backed by Richard Branson. Bezos is pursuing Blue Origin with vigour as he tries to launch his 'New Glenn' rocket into low-Earth orbit by 2020.
As part of this mission he successfully launched his eighth rocket test flight from West Texas on Sunday. The system consists of a pressurised crew capsule atop a reusable 'New Shepard' booster rocket.
While the New Shepard capsule is designed to carry up to six 'space tourists,' it was unmanned for this flight. The event marked the seventh successful booster flight in a row for Blue Origin.
Blue Origin will presumably start test flights to move toward that goal later this year, but no target dates have been made publicly known at this time.
https://www.thescinewsreporter.com/2018 ... 2sN5Av_Qg8
Pilgrim wrote: 'The solar system can easily support a trillion humans', he said.
Scientists turn nuclear waste into diamond batteries
Philip Perry - 06 December, 2016
They'll reportedly last for thousands of years. This technology may someday power spacecraft, satellites, high-flying drones, and pacemakers.
Nuclear energy is carbon free, which makes it an attractive and practical alternative to fossil fuels, as it doesn't contribute to global warming. We also have the infrastructure for it already in place. It's nuclear waste that makes fission bad for the environment, and that waste lasts a long time: some isotopes remain radioactive for thousands of years. Nuclear fuel is composed of ceramic pellets of uranium-235 placed within metal rods. After fission takes place, two radioactive isotopes are left over: cesium-137 and strontium-90.
These each have half-lives of 30 years, meaning the radiation will be half gone by that time. Transuranic wastes, such as Plutonium-239, are also created in the process. This has a half-life of 24,000 years. These materials are highly radioactive, making them extremely dangerous to handle, even with short-term exposure.
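The half-life figures above follow directly from the exponential decay law, N(t) = N0 · (1/2)^(t / t_half). A minimal sketch of that relationship (the function name is illustrative, not from the article):

```python
def remaining_fraction(t_years, half_life_years):
    """Fraction of a radioactive isotope remaining after t_years."""
    return 0.5 ** (t_years / half_life_years)

# Cesium-137 and strontium-90 both have ~30-year half-lives:
print(remaining_fraction(30, 30))   # 0.5  (half gone after one half-life)
print(remaining_fraction(60, 30))   # 0.25 (a quarter left after two)

# Plutonium-239, half-life ~24,000 years: after a century, almost nothing
# has decayed, which is why transuranic waste is such a long-term problem.
print(round(remaining_fraction(100, 24000), 4))  # 0.9971
```

After two half-lives a quarter remains, not zero, which is why even the "short-lived" fission products stay dangerous for generations.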
U.S. nuclear power plants collectively create about 2,300 tons of waste annually, across the 99 reactors currently in service. That adds up: the US is currently stockpiling some 75,000 tons of nuclear waste. It is carefully stored and maintained, but just like anything else it is vulnerable to natural disasters, human error, even terrorism. Storage is also costly; American taxpayers are on the hook for tens of millions of dollars.
So what can be done? Researchers at the University of Bristol in the UK have a solution. Geochemist Tom Scott and colleagues have invented a method to encapsulate nuclear waste within diamonds, which as a battery, can provide a clean energy supply lasting in some cases, thousands of years.
Scott said there were no emissions, no moving parts, no maintenance, and zero concerns about safety. The radiation is locked safely away inside the gemstone. All the while, it generates a small, steady stream of electricity. Nickel–63, an unstable isotope, was used in this first experiment. It created a battery with a half-life of a century.
There are other substances which would last over ten times longer, while helping to reduce our nuclear waste stockpile. Older nuclear reactors, in service between the 1950s and 1970s, used graphite blocks to moderate the uranium fuel. But after years of service these blocks become covered in a layer of carbon-14, a radioactive isotope with a half-life of around 5,730 years. Once a power plant is decommissioned, those blocks must be stored as well.
By heating the blocks, scientists can turn the carbon-14 into a gas, which can be gathered and compressed into a diamond—since diamond is just another form of carbon anyway. Each gemstone emits short-range radiation, which is easily contained by just about any solid material. Since diamond is the hardest substance on Earth, the radioactive material can be safely locked inside. Researchers presented their work in a lecture at the university entitled, “Ideas to change the world."
The diamond batteries only put out a small amount of current. They can't replace contemporary ones quite yet. Scott told Digital Trends, "An alkaline AA battery weighs about 20 grams, has an energy density storage rating of 700 Joules/gram, and [uses] up this energy if operated continuously for about 24 hours." Meanwhile, "A diamond beta-battery containing 1 gram of C14 will deliver 15 Joules per day, and will continue to produce this level of output for 5,730 years — so its total energy storage rating is 2.7 TeraJ." Another stumbling block is cost, as anyone who has ever saved up for an engagement ring can attest.
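Scott's quoted figures can be played with using the same decay law. A rough sketch under the quoted assumptions only (15 J/day from 1 gram of C-14, a 5,730-year half-life, and a 20 g alkaline AA cell storing 700 J/g); the names are illustrative:

```python
HALF_LIFE_YEARS = 5730    # carbon-14 half-life, as quoted
INITIAL_J_PER_DAY = 15    # quoted daily output for 1 g of C-14

def output_j_per_day(t_years):
    """Daily energy output after t_years, declining with carbon-14 decay."""
    return INITIAL_J_PER_DAY * 0.5 ** (t_years / HALF_LIFE_YEARS)

print(output_j_per_day(0))                # 15.0
print(output_j_per_day(HALF_LIFE_YEARS))  # 7.5 after one half-life

# Days until the diamond battery has delivered as much total energy as
# the quoted AA cell (20 g at 700 J/g). Decay is negligible on this
# timescale, so a constant 15 J/day is a fair approximation:
aa_total_j = 20 * 700
print(round(aa_total_j / INITIAL_J_PER_DAY))  # 933 days, about 2.5 years
```

So the diamond battery is a trickle, not a torrent: it takes roughly two and a half years to match one AA cell, but it keeps trickling for millennia.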
Once these hurdles are overcome, possible applications include powering spacecraft, satellites, high-flying drones, and medical devices such as pacemakers—anything really where batteries are difficult or impossible to charge, or change. One tantalizing speculation: powered by such crystals, interstellar probes could operate even in the darkest reaches of space, where solar power is no longer feasible.
Applications abound. So much so, that Dr. Scott and colleagues are asking the public for other possible uses. Weigh in with yours at: #diamondbattery
To learn more about this project click here:
https://bigthink.com/philip-perry/scien ... 1542484696
Kristin Houser | November 13th 2018
It’s A Hot One
Things are heating up in China. On Tuesday, a team from China’s Hefei Institutes of Physical Science announced that its Experimental Advanced Superconducting Tokamak (EAST) reactor — an “artificial sun” designed to replicate the process our natural Sun uses to generate energy — just hit a new temperature milestone: 100 million degrees Celsius (180 million degrees Fahrenheit).
For comparison, the core of our real Sun only reaches about 27 million degrees Fahrenheit — meaning the EAST reactor was, briefly, more than six times hotter than the closest star.
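The "more than six times hotter" comparison checks out with nothing more than the standard Celsius-to-Fahrenheit conversion:

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

east_plasma_f = c_to_f(100e6)  # EAST milestone: 100 million deg C
sun_core_f = 27e6              # Sun's core, as quoted, in deg F

print(east_plasma_f)                          # ~180 million deg F
print(round(east_plasma_f / sun_core_f, 1))   # ~6.7, "more than six times"
```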
When two hydrogen nuclei combine, they produce an enormous amount of energy. That process, known as nuclear fusion, is how our Sun generates light and heat, and it’s the great white whale of the energy world — if we could find a way to harness it, we’d have a near-limitless source of clean energy.
Tokamaks like EAST could help us do just that. They’re devices that use magnetic fields to control plasma in a way that could support stable nuclear fusion, and it’s this plasma that EAST heated to such an incredible temperature.
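The article doesn't quantify "enormous." As a ballpark, not from the article: the deuterium-tritium reaction that tokamak programmes typically target releases about 17.6 MeV per fusion event, which works out to hundreds of terajoules per kilogram of fuel:

```python
EV_TO_J = 1.602176634e-19   # joules per electronvolt (SI definition)
AMU_KG = 1.66053906660e-27  # kilograms per atomic mass unit

# Energy released by one D + T -> He-4 + n fusion event (~17.6 MeV):
e_per_reaction_j = 17.6e6 * EV_TO_J
print(e_per_reaction_j)     # ~2.8e-12 J per reaction

# Per kilogram of deuterium-tritium fuel (~5 amu consumed per reaction):
reactions_per_kg = 1 / (5 * AMU_KG)
print(e_per_reaction_j * reactions_per_kg)  # ~3.4e14 J/kg
```

Around 3 × 10^14 J/kg is roughly ten million times the energy density of chemical fuels, which is why fusion is worth the trouble of 100-million-degree plasma.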
Not only is EAST’s new plasma temperature milestone remarkable because, wow, it’s really hot, it’s also the minimum temperature scientists believe is needed to produce a self-sustaining nuclear fusion reaction on Earth.
Now that China’s “artificial sun” is capable of heating plasma to the necessary temperature, researchers can focus on the next steps along the path to stable nuclear fusion.
This is your unique chance to meet the world's best-known humanoid robot! Bright Day is bringing robot Sophia to the Netherlands for the first time.