The first tunnel of Musk's underground high-speed line is a fact
Vivian Lammerse, 19 December 2018
And as a demonstration, a Tesla took the tunnel for a test drive.
"Traffic is driving me insane. I am going to build a tunnel boring machine and just start digging," Elon Musk tweeted two years ago. And yes, when Musk says something like that, you can assume he is not joking. His Boring Company got to work right away. The idea is that large cities will eventually have small 'stations' where you can get in and out, giving access to an underground network that takes you very close to your final destination at high speed.
Test tunnel
And after two years of digging, the first test tunnel is now a reality. It runs underneath Hawthorne, a city in the American state of California that is home to SpaceX's headquarters and a Tesla design studio. Although the test tunnel is not very long, just under two kilometres, considerable speeds can still be reached: according to Musk, vehicles can race through it at 250 kilometres per hour.
Demonstration
To demonstrate the tunnel, a Tesla was, surprise, conjured up. Musk notes, however, that the tunnel is also suitable for other vehicles. The idea is that cars are lowered by lift into the underground transport network, where they glide through the tunnel on 'electric skates'. For the demonstration this proved a bit too complex, so Musk fitted the Tesla with retractable guide wheels that steer the car along a track like a train. The Tesla also kept to a moderate 80 kilometres per hour.
Door to door
The idea is that in the future there will be around 1,000 small stations that drop you off right at your doorstep, or even inside: Musk wants to build a lift in the basement of virtually every office, so that you are quite literally carried from door to door.
Exploring faster and cheaper ways of boring is also useful for the other high-speed transport system Musk is currently working on: the hyperloop. The plan is to create a near-vacuum inside tunnels through which Hyperloop pods can travel at speeds of 1,200 kilometres per hour, making it easy to cover large distances.
Chinese schools track pupils with smart uniforms
29 December 2018
In China, schools are using uniforms with embedded chips so that they can monitor whether pupils actually show up for their lessons.
The uniforms are intended to encourage better attendance among pupils, writes the Chinese newspaper The Global Times.
Two chips are embedded in the shoulders of each uniform, allowing schools to track when pupils enter and leave the building. The information is sent to parents and teachers.
If pupils try to play truant without permission, an alarm goes off. According to The Verge, the schools also use facial recognition at the entrance to check that pupils are wearing their own jackets.
The chips can additionally detect whether a pupil has fallen asleep during class, and can reportedly be used to make payments in the school canteen.
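The attendance logic described above can be sketched in a few lines: gate events from the uniform's chip are checked against school hours and a list of approved leaves. The event format, timetable and rule below are hypothetical illustrations; the real system's internals are not public.

```python
# Toy sketch of the truancy alarm: a chip reports gate events, and an alarm
# fires when a student exits during school hours without an approved leave.
# SCHOOL_HOURS and the event fields are assumptions for illustration.

SCHOOL_HOURS = range(8, 16)  # 08:00-15:59, an assumed timetable

def check_gate_event(student, direction, hour, approved_leaves):
    """Return 'alarm' for an unauthorized exit during school hours, else 'ok'."""
    if direction == "exit" and hour in SCHOOL_HOURS and student not in approved_leaves:
        return "alarm"
    return "ok"

print(check_gate_event("student_42", "enter", 8, set()))           # arriving: ok
print(check_gate_event("student_42", "exit", 11, set()))           # truancy: alarm
print(check_gate_event("student_42", "exit", 11, {"student_42"}))  # approved: ok
```

Leaving after hours (say at 17:00) would also pass without an alarm, which matches the schools' claim that pupils are not tracked in their free time.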
The uniforms are in use at ten Chinese schools. According to the schools, pupils are not tracked in their free time.
Beijing equips 120,000 homes with facial recognition
31 December 2018
In the Chinese capital Beijing, 120,000 homes are being fitted with smart locks featuring facial recognition, so that only residents and their relatives can enter.
The project is due to be completed by the end of June 2019, writes the South China Morning Post, citing The Beijing News. The locks are also meant to curb illegal subletting.
The system has already been installed in 47 homes in Beijing. Facial scans have been collected from around one hundred thousand residents and their family members.
By comparing visitors' facial data with the stored information on residents, the system can open the door or keep it shut: the home is accessible to family and friends, but remains closed to strangers.
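The comparison step can be illustrated with a minimal sketch. The assumption (hypothetical, not Beijing's actual system) is that faces have already been reduced to fixed-length embedding vectors by some upstream model; the lock then compares a visitor's embedding against the enrolled residents and opens only above a similarity threshold.

```python
import math

# Minimal face-matching decision: cosine similarity between a visitor's
# embedding and each enrolled resident's embedding. Names, vectors and the
# 0.9 threshold are invented for illustration.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def should_open(visitor_embedding, enrolled, threshold=0.9):
    """Return the matched resident's name, or None to keep the door shut."""
    best_name, best_score = None, threshold
    for name, emb in enrolled.items():
        score = cosine_similarity(visitor_embedding, emb)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

enrolled = {"resident_a": [0.9, 0.1, 0.2], "resident_b": [0.1, 0.9, 0.3]}
print(should_open([0.88, 0.12, 0.21], enrolled))  # very close to resident_a
print(should_open([0.5, 0.5, 0.5], enrolled))     # matches nobody well enough
```

The threshold is the crucial design choice: too low and strangers get in, too high and residents are locked out of their own homes.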
The system also keeps an eye on elderly residents who live alone. If they have not been seen for a long time, building managers are notified.
A weapons system for the 21st century: Putin presents hypersonic glider with 20 times the speed of sound
Posted on 1 January 2019
Russian President Putin spoke of an "excellent New Year's gift to the nation": a new type of weapon whose test he had just attended. It concerns a hypersonic missile system that is to be handed over to the Russian armed forces in 2019. Putin praised the system as a "great success".
According to Putin, the hypersonic system, named "Avangard", is impossible to intercept and is supposed to guarantee Russia's security for decades to come. The Kremlin chief had followed the weapon's launch from the Dombarovsky missile base in the southern Urals from the control room of the Ministry of Defence. The weapon's target is said to have been the Kura test range on the Kamchatka peninsula, 6,000 kilometres away.
The "Avangard" system reportedly has intercontinental range and can reach a speed of twenty times the speed of sound within the atmosphere. No current or future missile defence would be able to intercept it, Putin told journalists, and no other country possesses a comparable weapons system. In reality, Russia, the United States and China have all been testing hypersonic missile systems for quite some time, and successful tests have also been reported from China. Experts were nonetheless surprised that Russia claims to have a hypersonic weapon ready for deployment already with the "Avangard" system.
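To put "twenty times the speed of sound" in perspective, a rough calculation gives the implied flight time over the article's 6,000 km test distance. The numbers are simplifying assumptions: the speed of sound is taken at its 343 m/s sea-level value (it is lower at glide altitude), and the path is treated as straight and flown at constant speed, which a real trajectory is not.

```python
# Back-of-the-envelope flight time for a Mach-20 glide vehicle.
# Assumptions (illustrative only): sea-level speed of sound, constant
# speed, straight-line path.

MACH = 20
SPEED_OF_SOUND_MS = 343   # m/s, sea-level reference value
RANGE_KM = 6000           # Dombarovsky to the Kura range, per the article

speed_kmh = MACH * SPEED_OF_SOUND_MS * 3.6    # roughly 24,700 km/h
flight_time_min = RANGE_KM / speed_kmh * 60   # roughly a quarter of an hour

print(f"speed = {speed_kmh:,.0f} km/h, flight time = {flight_time_min:.1f} min")
```

A crossing of the Urals-to-Kamchatka distance in about 15 minutes is what makes interception so hard to engineer.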
Several years ago the huge manufacturing capacity push by the Chinese government slashed the cost of solar panels based on silicon cell technology. The huge influx of dirt-cheap China-manufactured solar panels was condemned as 'dumping' by Western competitors and many of their governments. In reality it was: heavily subsidised by the Chinese government, many of these manufacturers were ultimately unable to sustain commercial viability and went out of business.
However, there was also a positive outcome. The strongest survived, Western manufacturers also had to become more price competitive, and the end cost of solar panels came down so much that under the right conditions, producing electricity from solar power achieved grid parity with fossil fuels.
The actual technology of solar panels, which use silicon cells to capture light and convert it into electrical power, hasn't changed much over the past several years. Already efficient and price-competitive enough to be a viable alternative and complement to fossil fuels, it faced no pressing need to change. But a new kind of solar cell technology, first experimented with by Japanese researchers in 2009, has now reached the point where it is ready to be commercialised. And it is even cheaper and more efficient than existing solar cell technology.
In the UK, Oxford PV, an Oxford University spin-out, has received a $3 million government grant to commercialise the new solar cell technology. And on the other side of the Atlantic, Swift Solar, a U.S. start-up, has raised $7 million to bring the same technology to market.
The latest technology in the world of solar power is called 'perovskite cell'. It is thought that solar panels that use perovskite cell technology could cost less than half that of silicon alternatives while offering greater efficiency in the electricity they are able to produce from the same sunlight. The 'light harvesting' active layer of these new panels uses a material based on a hybrid organic-inorganic lead or tin-halide compound. The new solar cell chemistry was recently explained during a Ted Talk given by Sam Stranks, one of Swift Solar's co-founders and the start-up's lead scientific advisor:
"These thin crystalline films are made by mixing two inexpensive readily abundant salts to make an ink that can be deposited in many different ways… "
Oxford PV's perovskite cell panels have also achieved conversion efficiencies of 37%, much higher than the 25% efficiency achieved by most of the silicon cell solar panels used domestically on rooftops. Panels at less than half the price and significantly more efficient at producing electricity could be expected to lead to a new wave of installations globally and significantly boost the share of our energy mix that comes from renewables. The new panels are expected to be commercially available in the new year. Oxford PV CTO Chris Case recently commented:
"Today, commercial-sized perovskite-on-silicon tandem solar cells are in production at our pilot line and we are optimizing equipment and processes in preparation for commercial deployment."
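The headline claims (less than half the price, 37% versus 25% efficiency) can be sanity-checked with a cost-per-watt comparison. The panel price and area below are invented round numbers, so only the ratio between the two technologies is meaningful, not the absolute figures.

```python
# Back-of-the-envelope cost-per-watt comparison implied by the article's
# numbers. Panel price ($250) and area (1.6 m^2) are assumed for
# illustration; irradiance is the standard test condition.

STC_IRRADIANCE_W_M2 = 1000   # W/m^2 under standard test conditions
PANEL_AREA_M2 = 1.6          # typical rooftop panel, assumed

def cost_per_watt(panel_price, efficiency):
    watts = STC_IRRADIANCE_W_M2 * PANEL_AREA_M2 * efficiency
    return panel_price / watts

silicon = cost_per_watt(panel_price=250.0, efficiency=0.25)     # assumed baseline
perovskite = cost_per_watt(panel_price=125.0, efficiency=0.37)  # "half the price"

print(f"silicon = ${silicon:.3f}/W, perovskite = ${perovskite:.3f}/W")
print(f"cost-per-watt ratio = {perovskite / silicon:.2f}")
```

Under these assumptions the perovskite panel delivers a watt of capacity for about a third of the silicon cost, which is the kind of gap that drives new installation waves.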
Re: Futuristic developments
Posted: Sat Jan 12, 2019 12:14 am
by Hans v d Mortel sr
Mahalingam wrote: Would there be many car thefts in China? I don't think so. So such a fingerprint scanner would be quite usable there. In other parts of the world with far more crime (South Africa and Brazil come to mind) this is downright dangerous: they will chop off your finger to make off with your car.
Indeed! Stupid of me; I hadn't thought of that possibility. But . . . how do you know which car it belongs to if there is no chip in it?
By cutting off the tongue as well.
In the US there is no longer room in any prison for convicts. Only logical, since the death penalty has been abolished in nearly every American state.
Chinese company Royole has presented the world's first foldable smartphone, the FlexPai, at CES in Las Vegas.
The FlexPai is a tablet that has small hinges allowing it to be folded up to the size of a smartphone. Once its AMOLED display is folded back out, the device effectively has two screens that can operate completely independently.
Royole CEO Bill Liu says the display can be folded up to 200,000 times without wear. The device's flexibility also protects it from breaking when dropped, the company says.
The FlexPai uses the latest Qualcomm Snapdragon 8-series processor and has a 3,800 mAh battery. The smartphone's camera has a 20-megapixel telephoto lens and a 16-megapixel wide-angle lens, which can also be used for taking selfies.
The phone presented at CES is still only a prototype for software developers interested in creating applications for the device, and it's not yet clear when the device will go on sale. Developers can buy it for €1,300 (RM6,129). The company did not name a retail price.
Liu also said that the display technology could be useful in other fields as well, such as for large displays in cars or planes, as well as for billboards or even art installations.
There are rumours that other manufacturers like Huawei are looking into foldable devices as well.
Energy From Fusion In 'A Couple Years,' CEO Says, Commercialization In Five
By Jeff McMahon, Jan 14, 2019, 12:01am
TAE Technologies will bring a fusion-reactor technology to commercialization in the next five years, its CEO announced recently at the University of California, Irvine.
"The notion that you hear fusion is another 20 years away, 30 years away, 50 years away—it's not true," said Michl Binderbauer, CEO of the company formerly known as Tri Alpha Energy. "We're talking commercialization coming in the next five years for this technology."
That trajectory is considerably sooner than Binderbauer described when he took over as CEO in 2017. It would put TAE ahead of several formidable competitors. The 35-nation ITER project expects to complete its demonstration reactor in France in 2025. Vancouver-based General Fusion Inc. is devoting the next five years, with support from the Canadian government, to developing a prototype of its fusion reactor. And the Massachusetts Institute of Technology announced last March that it expects to bring its fusion reactor to market in ten years.
For more than 20 years TAE has been pursuing a reactor that would fuse hydrogen and boron at extremely high temperatures, releasing excess energy much as the sun does when it fuses hydrogen atoms. Lately the California company has been testing the heat capacity of its process in a machine it named Norman after the late UC Irvine physicist Norman Rostoker.
Its next device, dubbed Copernicus, is designed to demonstrate an energy gain. It will involve deuterium-tritium fusion, which is the end goal for most competitors but, for TAE, only a milestone on the path to a hotter but safer hydrogen-boron reaction.
Binderbauer expects to pass the D-T fusion milestone soon.
"What we're really going to see in the next couple years is actually the ability to actually make net energy, and that's going to happen in the machine we call Copernicus," he said in a "fireside chat" at UC Irvine.
Clarification: The headline on this story originally began, "Energy From Fusion In Two Years." Two days after the story was published, Binderbauer backed away from that statement through a spokesperson, who said, "While Michl said a 'couple years,' he meant a small number of years. Not literally two."
Binderbauer appeared at UC Irvine alongside actor Harry Hamlin (of "Clash of the Titans" and "LA Law"), who was an early supporter and a co-founder of the company.
Microsoft co-founder Paul Allen was another prominent supporter, and Alphabet/Google a shareholder.
TAE has been funded so far by more than $500 million in private equity.
"An endeavor as monumental as this requires an upfront commitment of very substantial proportions, which runs counter to the way most R&D money is parceled out," according to the company's website. "With TAE operating as a private company, we have been able to research, experiment and iterate more rapidly than our competition."
But TAE is ready now to talk to the government.
"We're in the process of funding and putting that project together right now," Binderbauer said of Copernicus. "We're working actually for the first time with the DOE, in some form of a relationship where they're gonna contribute some in-kind, and this will be a sort of public-private partnership to pull that off, and then it goes to commercialization."
Researchers from North Carolina State University, the University of North Carolina and Arizona State University have developed an intelligent system for “tuning” powered prosthetic knees, allowing patients to walk comfortably with the prosthetic device in minutes, rather than the hours necessary if the device is tuned by a trained clinical practitioner. The system is the first to rely solely on reinforcement learning to tune the robotic prosthesis.
When a patient receives a robotic prosthetic knee, the device needs to be tuned to accommodate that specific patient. The new tuning system tweaks 12 different control parameters, addressing prosthesis dynamics, such as joint stiffness, throughout the entire gait cycle.
Normally, a human practitioner works with the patient to modify a handful of parameters. This can take hours. The new system relies on a computer program that makes use of reinforcement learning to modify all 12 parameters. It allows patients to use a powered prosthetic knee to walk on a level surface in about 10 minutes.
“We begin by giving a patient a powered prosthetic knee with a randomly selected set of parameters,” says Helen Huang, co-author of a paper on the work and a professor in the Joint Department of Biomedical Engineering at NC State and UNC. “We then have the patient begin walking, under controlled circumstances.
“Data on the device and the patient’s gait are collected via a suite of sensors in the device,” Huang says. “A computer model adapts parameters on the device and compares the patient’s gait to the profile of a normal walking gait in real time. The model can tell which parameter settings improve performance and which settings impair performance. Using reinforcement learning, the computational model can quickly identify the set of parameters that allows the patient to walk normally. Existing approaches, relying on trained clinicians, can take half a day.”
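The tune-measure-update cycle Huang describes can be sketched in code. The sketch below is a heavily simplified stand-in: it uses random hill climbing toward a target gait profile, not the researchers' actual reinforcement-learning controller, and the 12 parameters, target profile and error function are all invented for illustration.

```python
import random

# Simplified stand-in for the prosthesis tuning loop: perturb one of 12
# control parameters, score the resulting "gait" against a normal-gait
# profile, and keep only changes that improve the score. The real system
# uses a reinforcement-learning controller and live sensor data.

N_PARAMS = 12                 # the 12 control parameters from the article
TARGET = [0.5] * N_PARAMS     # stand-in for the "normal walking gait" profile

def gait_error(params):
    """Pretend sensor pipeline: squared distance from the normal gait."""
    return sum((p - t) ** 2 for p, t in zip(params, TARGET))

def tune(steps=2000, seed=0):
    rng = random.Random(seed)
    params = [rng.random() for _ in range(N_PARAMS)]  # random initial setting
    best = gait_error(params)
    for _ in range(steps):
        i = rng.randrange(N_PARAMS)
        trial = list(params)
        trial[i] += rng.uniform(-0.1, 0.1)            # perturb one parameter
        err = gait_error(trial)
        if err < best:                                # keep only improvements
            params, best = trial, err
    return params, best

params, err = tune()
print(f"final gait error: {err:.4f}")
```

Even this crude search converges in a couple of thousand trials; the appeal of real reinforcement learning is that it learns which parameter changes help, rather than guessing blindly, which is how the researchers get to a usable gait in about 10 minutes of walking.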
While the work is currently done in a controlled, clinical setting, one goal would be to develop a wireless version of the system, which would allow users to continue fine-tuning the powered prosthesis parameters while the device is being used in real-world environments.
“This work was done for scenarios in which a patient is walking on a level surface, but in principle, we could also develop reinforcement learning controllers for situations such as ascending or descending stairs,” says Jennie Si, co-author of the paper and a professor of electrical, computer and energy engineering at ASU.
“I have worked on reinforcement learning from the dynamic system control perspective, which takes into account sensor noise, interference from the environment, and the demand of system safety and stability,” Si says. “I recognized the unprecedented challenge of learning to control, in real time, a prosthetic device that is simultaneously affected by the human user. This is a co-adaptation problem that does not have a readily available solution from either classical control designs or the current, state-of-the-art reinforcement learning controlled robots. We are thrilled to find out that our reinforcement learning control algorithm actually did learn to make the prosthetic device work as part of a human body in such an exciting application setting.”
Huang says researchers hope to make the process even more efficient. “For example, we think we may be able to improve the process by identifying combinations of parameters that are more or less likely to succeed, and training the model to focus first on the most promising parameter settings.”
The researchers note that, while this work is promising, many questions need to be addressed before it is available for widespread use.
“For example, the prosthesis tuning goal in this study is to meet normative knee motion in walking,” Huang says. “We did not consider other gait performance (such as gait symmetry) or the user’s preference. For another example, our tuning method can be used to fine-tune the device outside of the clinics and labs to make the system adaptive over time with the user’s need. However, we need to ensure the safety in real-world use since errors in control might lead to stumbling and falls. Additional testing is needed to show safety.”
The researchers also note that, if the system does prove to be effective and enter widespread use, it would likely reduce costs for patients by limiting the need for patients to make clinical visits to work with practitioners.
Quantexa uses context-aware AI to help uncover the vast criminal networks that underpin human trafficking, child exploitation and modern-day slavery.
Nextbigfuture interviewed Alexon Bell, head of Compliance Products at Quantexa. Alexon is a hands-on AML (Anti-Money Laundering) practitioner with over 16 years' experience helping financial institutions with AML strategies, architectures and implementations. He has a wealth of experience in helping customers deploy and, crucially, optimise AML, KYC (Know Your Customer) and sanctions screening solutions, having held leadership roles at Actimize (Fortent), at SAS as EMEA/AP head of compliance solutions, and at Oracle. As a thought leader, he works with banks to help them tackle the latest emerging threats, ensure coverage and prepare for new regulation in the fight against organised crime and terrorism.
Anti-human-trafficking work will be especially relevant as the Super Bowl will be played next week. Big sporting events like the Super Bowl, the World Cup and the Olympics regularly trigger an influx of sex-trade workers, with many being victims of human trafficking.
Arrests of pimps running underage sex rings are reported at the NFL Super Bowl almost every year. Girls are trafficked from as far away as Hawaii to hook up with clients via the Internet, hotels and strip clubs.
Some 1.5 million people in the United States are victims of trafficking, mostly for sexual exploitation. The majority are children, according to a U.S. Senate report published last year.
The Quantexa systems have an advantage over other systems in that they can identify distinct companies and individuals more precisely. Because they represent only relationships that are actually valid, they generate fewer false warnings, and they can handle networks with over 15 billion records.
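The core idea behind this kind of entity resolution can be sketched with a union-find structure: records from different sources that share a strong identifier are merged into one entity, so the same company seen three times produces one profile instead of three false ones. The records and the matching rule below are invented for illustration; Quantexa's actual matching logic is proprietary.

```python
# Minimal entity-resolution sketch: link records that share a registration
# number using union-find, then count distinct resolved entities.

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving keeps trees shallow
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

records = [
    ("r1", {"name": "ACME Ltd",     "reg_no": "123"}),
    ("r2", {"name": "Acme Limited", "reg_no": "123"}),  # same company, new source
    ("r3", {"name": "Globex BV",    "reg_no": "999"}),
]

# Matching rule (illustrative): any two records sharing a registration
# number refer to the same entity.
by_reg = {}
for rid, rec in records:
    if rec["reg_no"] in by_reg:
        union(rid, by_reg[rec["reg_no"]])
    else:
        by_reg[rec["reg_no"]] = rid

entities = {find(rid) for rid, _ in records}
print(f"{len(records)} records resolve to {len(entities)} entities")
```

Union-find scales to billions of records because each merge and lookup is nearly constant time; the hard part in practice is the matching rule itself, which is where the "context-aware" claim comes in.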
They are already helping many global organizations to solve financial crimes and identify previously unknown criminal networks.
Beating Crime by Following the Money
Common advice for investigators, whether they work in journalism or criminal investigation, is to follow the money. Money is the one thing organized crime has in common, putting banks at the center of the fight to stop it.
By combining an institution’s transaction knowledge with information from outside databases like Dun & Bradstreet or the Panama Papers, organizations can get a much more accurate view of entities and their relationships to each other, allowing them to uncover criminal enterprises and the channels being used to launder their dirty money.
Quantexa’s technology has already been helping banks to analyze their customers’ wider networks in their full context, using internal, publicly available and transactional data to flag suspicious activity. This combined with Deloitte’s market-leading financial crime prevention expertise will allow financial institutions to more accurately detect potentially illegal activity and provide a complete understanding of the overall risk.
Re: Futuristic developments
Posted: Wed Jan 30, 2019 3:00 am
Israeli Team May Have Discovered ‘Complete Cure for Cancer’
By United with Israel Staff - Jan 28, 2019
Using cutting-edge genetic coding to kill diseased cells, Israeli researchers may have discovered the first “complete cure for cancer.”
A treatment pioneered by a team of Israeli scientists may represent the first “complete cure for cancer,” according to Dan Aridor, chairman of the board of the company developing the treatment, Israel’s Accelerated Evolution Biotechnologies Ltd. (AEBi).
“Our cancer cure will be effective from day one, will last a duration of a few weeks and will have no or minimal side-effects at a much lower cost than most other treatments on the market,” Aridor told the Jerusalem Post.
The treatment is called MuTaTo (multi-target toxin) and was analogized to a “cancer antibiotic” in a report in the Post. The team developed the cancer therapy after assessing a variety of cancer drugs and treatments that failed in the past.
AEBi CEO Dr. Ilan Morad explained to the Post, “We made sure that the treatment will not be affected by mutations; cancer cells can mutate in such a way that targeted receptors are dropped by the cancer.”
New technology projects voices or music straight into your head
Imagine such a powerful transmitter ever being installed in a satellite... (Image: Getty Images (3)).
It sounds like science fiction, or the paranoid fantasy of a schizophrenic, but it really is true: a scientific team at the prestigious MIT has developed technology that can project a voice or music into your head from a distance. Any unwitting 'recipient' of such a sudden voice in the head would immediately think they had gone mad, but according to MIT the technology will only be used to help humanity. Sure it will.
In the journal Optics Letters, the MIT team described two different methods for transmitting sounds, music and recorded speech into people's heads with a laser. Both techniques make use of the so-called photoacoustic effect: the formation of sound waves when light is absorbed by a particular material or substance.
From a distance of about 2.5 metres, the scientists managed to project a sound of 60 decibels (comparable to background music or a conversation in a restaurant) into someone's head, without anyone else being able to hear the source of that sound. (2)
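For a sense of scale, "60 decibels" can be converted back into absolute sound pressure. Sound pressure level is a logarithmic scale relative to the standard 20 micropascal hearing threshold, so the conversion is a one-liner:

```python
# Convert dB SPL to absolute sound pressure, relative to the standard
# 20 uPa reference used for sound pressure level.

P_REF_PA = 20e-6   # 20 micropascal hearing threshold

def spl_to_pressure(db_spl):
    return P_REF_PA * 10 ** (db_spl / 20)

p_60 = spl_to_pressure(60)   # the conversational level from the experiment
print(f"60 dB SPL corresponds to about {p_60 * 1000:.0f} mPa of sound pressure")
```

About 20 millipascals: a tiny pressure in absolute terms, which is what makes producing it remotely with light, audible to one person only, such an unusual feat.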
The team is working on extending this distance, so that the invisible voice could one day be used in dangerous situations such as a mass shooting. The authorities could then instantly warn everyone at a location or in a particular area or building without having to rely on smartphones or alarm bells, and without others being able to hear the warning.
'Voice of God'
Credulous people could easily come to believe they are hearing the voice of God or Allah in their head. What a fantastic way for religious leaders to command total obedience from their followers and order them to do (or refrain from) all manner of things.
The net of total control tightens
On a broader scale, the number of cameras in the streets has grown exponentially over the past 10 years. Facial recognition is now routine, and social media such as Facebook have persuaded hundreds of millions of people to show everything about themselves to anyone who cares to look (we cannot imagine doing so ourselves, but realise we are a small minority). Through our smartphones (switched on or off), the authorities (or companies and advertisers) can accurately trace our daily movements and find out where we are at any given moment.
How convenient, because what could possibly go wrong? And now this technique, which lets certain parties make you hear voices, sounds and music, will be added to the mix. 'How convenient,' the sometimes frighteningly naive and all-too-easily manipulated younger generation will no doubt think. According to some, this technology has existed for much longer, so who knows where, and from whom, that 'how convenient' thought really originates...?
One shudders to think what will happen once the police and the army get hold of laser equipment that can take out an entire crowd in one go with a horrifically loud and painful tone, literally driving people screaming mad. With the Yellow Vests in mind, Emmanuel Macron will presumably be rubbing his hands in anticipation, and the entire EU elite in Brussels along with him.
Incidentally, weapons of this kind were already being developed in the US decades ago. In 1989 a patent was filed for a method of transmitting sound waves over great distances into the auditory centre of humans (and animals) using microwaves. (2)
Ayar Labs, a silicon photonics startup based in Emeryville, California, is getting set to tape out its electro-optical I/O chip, which will become the basis of its first commercial product. Known as TeraPHY, it’s designed to enable chip-to-chip communication at lightning speed. The company is promising bandwidth in excess of one terabit per second, while drawing just a tenth the power of conventional electrically-driven copper pins.
The problems with copper pins and electrical signaling encompass power, data reach, and chip real-estate limitations. As the performance of processors advances, faster data rates are required; more data is needed to feed the chips and more data is generated during processing. Although growth in processor performance has slowed over the past several years due to the erosion of Moore’s Law, electrically-driven chip communication has still been unable to keep pace. According to Ayar Labs Chief Strategy Officer and company cofounder Alex Wright-Gladstein, there is a consensus that the highest data rate that will be able to exit a chip package is 100 gigabits per second (Gbps). “The industry really didn’t even agree that we would reach a limit before 2018, but that debate has gone away entirely,” she told The Next Platform.
Looking just slightly into the future, a 10-teraflop processor that could be the basis for an exascale supercomputer would need something on the order of 10 Tbps of chip I/O to be usable. But that would require around 2,000 copper pins, which together would draw about 100 watts, and that is not for the chip, just the I/O pins. If that sounds problematic, that’s because it is.
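The arithmetic behind that copper-pin problem is worth spelling out. The figures are the article's round numbers, so the results should be read as orders of magnitude, not specifications:

```python
# Per-pin data rate and energy-per-bit implied by the article's figures:
# 10 Tbps of chip I/O over roughly 2,000 copper pins drawing about 100 W.

total_io_bps = 10e12   # 10 Tbps of chip I/O
n_pins = 2000
power_w = 100

bps_per_pin = total_io_bps / n_pins                  # data rate per pin
energy_per_bit_pj = power_w / total_io_bps * 1e12    # joules/bit -> picojoules

print(f"{bps_per_pin / 1e9:.0f} Gbps per pin, {energy_per_bit_pj:.0f} pJ/bit")
```

Ten picojoules per bit is the budget optical I/O is competing against; Ayar's claimed tenth-of-the-power figure would bring that down to roughly 1 pJ/bit.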
Silicon photonics, the melding of optical communications and semiconductors for chip-level I/O, is widely viewed as the solution. Intel, IBM, HPE, Fujitsu and others have been working on the technology for years, but cost-effective solutions have been elusive, mainly because these devices tend to be constructed with exotic compounds and rely on special semiconductor manufacturing techniques. That’s fine for long-haul optical fiber communications, but for short-haul electro-optics, costs need to align with server and chip pricing.
Here is where Ayar stands out from its competition. The startup has managed to build its devices using standard CMOS – the same manufacturing technology used to etch integrated circuits on commercial microprocessors. And it doesn’t need to be a leading-edge process node either. The first TeraPHY chip will use GlobalFoundries’ 45nm CMOS SOI process, the same one used by the Blue Gene/Q processor back when IBM had this manufacturing technology in-house. As a consequence, the company has apparently avoided the major cost roadblock that vexes other electro-optical solutions.
Ayar’s technology has its roots in a 10-year, $20 million project funded by DARPA that brought in researchers from MIT, UC Berkeley and the University of Colorado, Boulder. The project, known as Photonically Optimized Embedded Microprocessors (POEM), aimed to solve the I/O bottleneck at the level of the processor. The research team demonstrated a prototype electro-optical chip, which was subsequently described in an academic paper published in December 2015. Wright-Gladstein, an MIT MBA grad, convinced researchers Mark Wade, Chen Sun, Rajeev Ram, and Milos Popovic that the technology should be commercialized and together they cofounded Ayar Labs.
According to the company’s website, the initial TeraPHY device will be available as a 1.6 Tbps optical transceiver, comprised of four 400G transceivers per module. All the componentry except for the light source (which is supplied by a separate 256-channel laser module, called SuperNova) has been integrated into the device. That includes the electrical interfaces, the optical modulators, the photodetectors, and the dense wavelength division multiplexing (DWDM) wavelength multiplexer/demultiplexer, as well as all the driver and control circuitry.
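A quick check shows how the transceiver numbers fit together: 1.6 Tbps split across four 400G transceivers, fed by the 256-channel SuperNova laser module. The per-wavelength rate below is simple division, not a confirmed Ayar specification:

```python
# How the TeraPHY figures decompose: aggregate bandwidth per transceiver
# and per DWDM wavelength, using the article's numbers.

total_gbps = 1600       # 1.6 Tbps aggregate
n_transceivers = 4      # four 400G transceivers per module
n_wavelengths = 256     # channels from the SuperNova laser module

gbps_per_transceiver = total_gbps / n_transceivers   # the 400G figure
gbps_per_wavelength = total_gbps / n_wavelengths     # implied channel rate

print(f"{gbps_per_transceiver:.0f}G per transceiver, "
      f"{gbps_per_wavelength:.2f} Gbps per wavelength")
```

Spreading the load over many modest-rate wavelengths, rather than a few very fast electrical lanes, is what DWDM buys: each channel's electronics stay simple while the aggregate bandwidth climbs.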
Wright-Gladstein says the solution not only delivered a high performance electro-optical device, but was able to do so in an area 1/100 the size of a typical long-haul optical transceiver. “Because of that 100x size difference, you’re now crossing that threshold set by electrical SerDes, and you’re making an optical I/O that’s smaller than your electrical I/O,” she says.
Forgoing the more exotic designs of other silicon photonics solutions required some extra tinkering, according to Wright-Gladstein, who admitted the chip can be a little “finicky” to use. But if you integrate the device alongside high-performance transistors, you can use those transistors to operate the TeraPHY very reliably, she maintains.
Physically, TeraPHY takes the form of a “chiplet,” a chunk of silicon that is meant to be integrated into the kind of multi-chip modules that are becoming more commonplace in high-end processor packages. The TeraPHY tape-out is slated for the end of the current quarter (Q1 2019), with the first integrated products from Ayar’s silicon technology partners due to hit the street in 2020.
The identity of those partners is still a mystery, although in recent conversation we had with Wright-Gladstein and Ayar CEO Charlie Wuischpard, they indicated that their first commercial offering will initially show up in datacenter switches. That would make a lot of sense, inasmuch as TeraPHY is essentially an optical transceiver, something switch vendors would already be familiar with integrating into their boxes.
More generally though, TeraPHY is meant to be integrated into all kinds of multi-chip packages — CPUs, GPUs, FPGAs, and custom ASICs, as well as DIMMs — destined for datacenter duty. Not only will the optical communication greatly speed up chip-to-chip and chip-to-memory data transfers, it will also allow for the disaggregation of server processors, memory, and local storage, paving the way for more efficient and flexible designs. “What we [intend to] do is enable the Intels of the world, the AMDs, the Nvidias, the HPEs, and the Crays to build new architectures based on this technology,” explained Wuischpard.
Ayar’s main customer base, at least initially, will be the owners of high performance computers and hyperscale cloud datacenters. These are the customers with the most insatiable needs for bandwidth, an obsessive concern for keeping power use in check, and a healthy appreciation for optical communication technology. That makes them a good match for these first-generation silicon photonics products. Ayar pegs the total addressable market at $29 billion.
Further down the road, the company is eyeing other application areas, including autonomous vehicles, IoT, and various kinds of mobile devices. “We want to get every chip communicating using light,” says Wright-Gladstein.
Ideally, Ayar would be able to sell its wares to multiple chip and system vendors, but if the technology is even half as capable as Ayar portrays it, the company will be acquired before too much longer. Although practically any server, network, storage, or telecom vendor could benefit from owning their own silicon photonics IP, the most value would be derived from chipmakers like Intel and AMD. If the TeraPHY chiplets deliver as advertised, they could substantially improve the capabilities of their respective datacenter offerings: Xeon processors and Stratix and Arria FPGAs in Intel’s case, and EPYC processors and Radeon GPUs in AMD’s case. Heterogeneous packages (CPU-FPGA and CPU-GPU) would also be good targets.
Given AMD’s enthusiasm for the chiplet package concept in its EPYC line, and its need to come up with stronger differentiation for its datacenter silicon relative to its more dominant rivals (Intel and Nvidia), it might be the ideal buyer of an optical I/O chiplet company. Intel, meanwhile, has its own silicon photonics program in motion, so might be reticent to add a competing technology.
With the TeraPHY chiplets sampling later this year and 2020 commercial deployments just around the corner, the next couple of years will likely demonstrate whether Ayar has something worth acquiring – either the whole company or its products. And if it does, the buyers will line up accordingly.
Fusion power — the process that keeps stars like the Sun burning — holds the promise of nearly unlimited clean power. But scientists have struggled for decades to make it a practical energy source.
Now, laser scientists say a machine learning breakthrough has smashed the standing record for a fusion power yield. It doesn't mean fusion power is practical yet, but the prestigious journal Nature called the result "remarkable" and wrote that it has "major implications" — so, at the very least, it's another hint that the long-deferred technology is starting to come into focus.
60 Lasers
Scientists at the Massachusetts Institute of Technology and the University of Rochester blasted a deuterium-tritium pellet with 60 laser beams to turn it into plasma. The energy output of that type of experiment depends on delicate variations in how the lasers pulse, so the researchers fed fusion simulation data into a machine learning algorithm and used its suggestion in a real-life experiment.
"This was a very, very unusual pulse shape for us," said Michael Campbell, the director of the university's Laboratory for Laser Energetics, in a blog post.
Bridging The Gap
The result was promising: in a new paper, the researchers describe how it tripled the previous record for direct-drive laser fusion. The next step, the researchers say, is to try again with a bigger laser.
"We were inspired from advances in machine learning and data science over the last decade," said researcher Varchas Gopalaswamy in the blog post. His colleague Riccardo Betti added that the "approach bridges the gap between experiments and simulations to improve the predictive capability of the computer programs used in the design of experiments."
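The loop the researchers describe (run simulations, fit a statistical model to the results, let the model suggest a pulse shape, then try that suggestion in a real experiment) can be sketched abstractly. Everything below is invented for illustration: `toy_yield` is a stand-in for an expensive fusion simulation, and the three pulse-shape parameters are arbitrary; this is not the Rochester group's actual code or model.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_yield(ramp, peak, duration):
    # Invented stand-in for an expensive fusion simulation: yield peaks
    # at one particular (unknown to the optimizer) pulse shape.
    return np.exp(-((ramp - 0.3)**2 + (peak - 0.8)**2 + (duration - 0.5)**2) / 0.5)

# 1. Build a training set from "simulation runs".
X = rng.uniform(0, 1, size=(200, 3))
y = np.array([toy_yield(*x) for x in X])

# 2. Fit a simple surrogate model (quadratic features + least squares).
def features(X):
    return np.hstack([np.ones((len(X), 1)), X, X**2,
                      X[:, [0]] * X[:, [1]],
                      X[:, [0]] * X[:, [2]],
                      X[:, [1]] * X[:, [2]]])

w, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# 3. Ask the surrogate for a promising pulse shape among fresh candidates,
#    which would then be tried in the real experiment.
cands = rng.uniform(0, 1, size=(5000, 3))
best = cands[np.argmax(features(cands) @ w)]
print("suggested pulse (ramp, peak, duration):", best)
print("true yield at suggestion:", toy_yield(*best))
```

The real work of course lies in the simulator and in the choice of model; the point here is only the shape of the workflow, where the expensive experiment is consulted last.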
So developments in artificial intelligence are apparently also speeding up the road to viable fusion power. One process accelerates the other. I expect this web of mutually accelerating processes will keep growing. Once viable fusion power is a fact, the energy it releases will accelerate even more processes.
Re: Futuristische ontwikkelingen
Posted: Mon Feb 04, 2019 8:33 am
by Ali Yas
It doesn't mean fusion power is practical yet
They should have added: by this route. They act here as if they are unaware of the existence of tokamaks and stellarators.
By Patrick Nelson, Network World | FEB 7, 2019 3:26 AM PT
New light absorption techniques in semiconductors get clock rates 5,000 times faster than current PCs. It's one more step towards an all-light computing environment.
Electrical currents are best created using semiconductor crystals that absorb light, say researchers who have announced a significant potential computer-speed breakthrough. Using light, the team obtained ultrafast clock rates in the terahertz range, significantly higher than existing single-gigahertz computer clock rates.
The "bursts of light contain frequencies that are 5,000 times higher than the highest clock rate of modern computer technology," researchers at the Forschungsverbund research association in Germany announced in a press release last month. A chip's oscillating frequency, called its clock rate, is one measure of speed.
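The "5,000 times" figure is easy to put in concrete terms by comparing oscillation periods. The 5 GHz baseline below is an assumed round number for a fast modern CPU clock, not a figure from the article:

```python
# Compare a conventional CPU clock with oscillations 5,000x faster.
cpu_clock_hz = 5e9                  # assumed baseline: ~5 GHz CPU clock
optical_hz = 5000 * cpu_clock_hz    # "5,000 times higher", per the article

cpu_period_s = 1 / cpu_clock_hz
optical_period_s = 1 / optical_hz

print(f"CPU cycle:        {cpu_period_s * 1e12:.0f} ps")   # 200 ps
print(f"Optical cycle:    {optical_period_s * 1e15:.0f} fs")  # 40 fs
print(f"Optical frequency: {optical_hz / 1e12:.0f} THz")   # 25 THz
```

In other words, one cycle of the conventional clock lasts 200 picoseconds, while the light-driven oscillation completes a cycle in tens of femtoseconds.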
In the German experiments, conducted by the association's Max-Born Institute, extremely short, intense light pulses, ranging from the near-infrared to visible orange light, were used to generate oscillating currents in a semiconductor called gallium arsenide. The chip emitted terahertz radiation because of the oscillations. "Electric currents can be generated," the group says. The breakthrough offers "novel, interesting applications in high frequency electronics" that could conceivably mean much faster computers than are available now.
All light and photons
There are those who think all computers and other electronics will eventually run on light and photons, and that we will ultimately see a shift to all-light computing. Indeed, in terms of creating current, solar panels already convert light into electrical current.
Facebook's initial plans for its data-carrying space laser satellites were revealed in January, according to IEEE Spectrum. The publication says construction permits pulled at Los Angeles County's building department show a Facebook-linked company is building observatories on a mountaintop there, and they will be part of a laser data project in space. Again, the driving idea is efficiency.
Growing lasers on chip silicon is another angle in this photon and light movement. Lasers could reduce the major bottlenecks one sees at the copper wire part of a chip. Conveniently, silicon-germanium, a material used to make microprocessors, has some light-absorbing properties.
Aalto University in Finland, along with Université Paris-Sud, in fact claimed this week that it can propagate data through a microchip better using a new kind of nanoscale amplifier. It corrects a problem whereby very fast attenuation of light within the chip hinders the flow of information from one processor to another, the group explains in a press release. They're using an atomic layer to get the results.
Storage, too
Even storage, which has not been thought of as a suitable light-based medium because traditional lasers haven't been fast enough, may now be heading towards the light: a hybrid, datacenter-geared hard drive concept uses ultrashort light pulses to write to magnetic media very quickly and efficiently. It's up to a thousand times faster than today's hard drives, Eindhoven University of Technology (TU/e) in the Netherlands announced last month.
"Boosting performance through electronic methods is getting to be very difficult, which is why we're looking towards photonics for answers," says Aalto doctoral candidate John Rönn, in the school's announcement.
Re: Futuristische ontwikkelingen
Posted: Fri Feb 08, 2019 10:36 pm
Richard Branson wants to go to space himself this summer
08 February 2019
British billionaire Richard Branson wants to make a trip to space aboard his own Virgin Galactic spaceplane before the fiftieth anniversary of the Moon landing, he tells news agency AFP.
Branson was in the American capital Washington this week for a ceremony honouring his spaceflight company.
Virgin's SpaceShipTwo managed to reach the edge of Earth's atmosphere in December of last year, at an altitude of 80 kilometres above the surface.
The first crewed Moon landing took place on July 20, 1969. The Americans Neil Armstrong and Buzz Aldrin then became the first people to set foot on the lunar surface.
SpaceShipTwo expected to finish testing in July
The plan is for SpaceShipTwo eventually to be used for tourist trips into space. The craft is still being tested extensively, but Branson expects that process to be finished in July.
The Virgin Galactic project suffered years of delays after a fatal SpaceShipTwo accident in the Mojave Desert, in which a pilot was killed.
A cheap, wearable, high-resolution equivalent to MRI can be created using holograms that reform red light after it passes through the body.
Gigapixel Cameras Can Track All Human Physical Movement
High-resolution cameras have already been used to record all movement over sections of Iraq, Afghanistan and parts of the USA. They were successfully used to find terrorist bombers and murderers.
A high-resolution camera is placed in a Cessna or a long-duration drone. An area the size of a city is filmed so that one pixel is one person. Each pixel-person is digitally highlighted and tracked by video gamers. All events are recorded.
When a murdered body is found or a bombing occurs, the constant recording is rewound and everyone who passed through that area is traced.
Complete physical tracking of individuals, cars, planes and ships across entire countries, or even the world, has trivial cost.
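The "one pixel is one person" claim reduces to simple sensor arithmetic. The 10 km city side and 1 m ground resolution below are assumed round numbers for illustration, not figures from the text:

```python
# How many pixels does city-wide, person-per-pixel coverage require?
city_side_m = 10_000      # assumed: a 10 km x 10 km urban area
ground_sample_m = 1.0     # assumed: ~1 m per pixel, roughly one person

pixels_per_side = city_side_m / ground_sample_m
total_pixels = pixels_per_side ** 2

print(f"{total_pixels / 1e6:.0f} megapixels")   # 100 megapixels
print(f"{total_pixels / 1e9:.2f} gigapixels")   # 0.10 gigapixels
```

Under these assumptions a whole city needs only about a tenth of a gigapixel, so a single gigapixel sensor could in principle cover a larger area, or the same area at finer resolution.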
This information can be combined with all of the financial, retail, medical and social media data.
China is already deploying the cameras and monitoring of financial, retail, medical and social media data for state control. This capability could be exported or copied.
Re: Futuristische ontwikkelingen
Posted: Sat Feb 09, 2019 2:33 am
‘The breakthrough of the 21st century as far as I’m concerned’: How this product is changing lives
Nick Whigham - February 1, 2019
The MyEye camera will follow the user’s finger to the page. Source: Supplied
Sydney woman Lisa Hayes has had a debilitating illness since birth, but a small device she got for free has changed everything.
For most of us, receiving junk mail is an annoyance. For Sydney woman Lisa Hayes, it’s a thrill.
She was born completely blind and has never known what it’s like to scan through the items in unsolicited catalogues that get stuffed into her letter box. That was until last September when she received a small device that clips onto a pair of glasses and uses sophisticated artificial intelligence technology to recognise faces and read text for her.
“It’s one of the best things I’ve ever had,” she tells news.com.au. Ms Hayes, 50, says the device has transformed her life.
“It has got to be the breakthrough of the 21st century as far as I’m concerned.”
The product, called MyEye 2, is the second version of the assistive wearable technology made by Israeli company OrCam.
Designed for the blind and visually impaired, the device clips on to the side of a pair of glasses. On the front is a camera with real-time visual recognition technology, and on the back is a small speaker that discreetly relays the information into the ear of the user. The device works in 23 different languages.
Ms Hayes has been proficient in braille from an early age but she now relishes being able to read a book or magazine article recommended to her by friends.
“Being totally blind since birth, I’ve never been able to read a print book,” she said. “I can now actually read. I can read medication boxes, I can pick up junk mail.
“I feel like I’m part of the real world.”
The clip on device is small and light. Source: Supplied
The company showcased the device at the Consumer Electronics Show (CES) earlier this month, where news.com.au had a hands-on demonstration. It was easily one of the most exciting products on display during the massive trade show.
To prompt the device to read, users hold their finger, pointing upwards in front of their nose, and then slowly point to the top of the page where they wish to begin reading. The device will then start dictating the text to them.
The MyEye 2 can also recognise familiar faces and tell you when a particular friend is approaching, or otherwise say “a man is in front of you”.
The technology can read barcodes on products at the supermarket and tell you what it is, recognise colours and tell the denomination of paper money.
Performing certain gestures, such as holding your wrist in front of your face for example, will prompt the device to tell you the time and date.
“Everything can be done with a hand gesture,” says Elad Serfaty, VP of marketing at OrCam.
“The users can be from young to senior people, and you don’t want such a sophisticated device to have a sophisticated operating system.”
According to the company, its MyEye device is the only wearable artificial vision tech that is activated by an intuitive pointing gesture or by simply following the wearer’s gaze — allowing for hands-free use without the need of a smartphone or Wi-Fi.
The device does not require an internet connection, providing real time visual information and audio communication while allowing complete freedom of movement and ensuring data privacy because the activity is not stored anywhere.
For Ms Hayes, not only has the device helped change her daily habits but best of all, she got it for free thanks to the National Disability Insurance Scheme (NDIS).
“One of our missions is to get the right reimbursement or funding for the device,” Mr Serfaty said. “This type of community, the visually impaired, are usually not the strongest financially, so we try to find ways for them to get it for free.”
The company works with Vision Australia and distributor Quantum RLV to provide the device to Australians, but it costs nearly $7,000 to buy outright.
“There are many different ways to get the device funded,” says John Wolff, OrCam’s regional director for Australia and New Zealand.
One potential subsidy is through the government’s Job Access program aimed at driving disability employment, and another is through Veterans’ Affairs. But getting funding through the NDIS has been hit and miss for some patients.
“We’d like to see more approvals but unfortunately often times the people that are assessing the technology at the NDIS aren’t specialists in low vision or blind issues so they don’t really understand how these devices can specifically help people, they just look at the price,” Mr Wolff said.
“There are approvals that go through but one thing that’s been challenging for us, and maybe it’s because the NDIS is a new system.”
Despite the high price tag, OrCam says it has hundreds of users in Australia, including some who bought the device outright. The company likes to compare it to a hearing aid, with a mid-level hearing aid costing about the same amount.
The MyEye 2 device weighs just 22.5 grams and attaches to glasses via small magnets, providing a relatively discreet appearance.
“If you’ve got a disability, people are looking at you anyway, but if you’ve got something that’s big and bulky, you stand out more,” Ms Hayes said.
Her only complaint is the short battery life: “It could last a bit longer, two hours is the maximum you’ll get if you’re using it continuously.”
To help overcome this, OrCam has a small charging unit that can clip onto a user’s belt so they can charge and swap batteries with ease.
Ms Hayes counts herself lucky she was able to get the device for free — something she attributes to having a specialist lodge her report with the NDIS. But after just a handful of months with the assistive tech, it’s given her a new lease on life.
De mens kan niet eens zijn eigen dood recenseren. Dit soort artikelen dat met een dergelijke nutteloze vraag begint, sla ik maar al te graag over. Desalniettemin houd ik rekening met de ander. Uitsluitend omdat ik nu eenmaal een FFI-fan ben. Maar alle moslims mogen wat mij betreft liefst vandaag nog verdwenen zijn omdat zij aan hun eigen virus Islam zijn komen te overlijden. Moslims en futurisme. Dat is een paradoxale gedachte zonder toekomst.
If we just found a planet that is 1 light year away and has humans on it, how long would it take for us to meet them with our current technology?
James Swingland, MSci Physics, MRes Bioimaging, PhD Computational biology, 2 years Data Science
Updated Jan 22
With our current technology: forever. It just isn't possible yet. The furthest one of our probes has gone is about 0.2% of a light year, and it has taken almost four decades to manage that. In case you misread me, that's 1/500 of a light year in 40 years. That probe will likely be rendered non-functional by either time or the rare interstellar dust thousands of years before it would ever get to this hypothetical world; certainly its systems would all have errors by then. Using our current technology we could probably make a probe go a bit faster if we had somewhere in mind. But it's a probe; it cannot carry people.

Edit: In the comments it has been suggested that I undervalue improvements to rocketry since the 1970s. Looking at the currently fastest-moving man-made object (presumably relative to the Earth), we can see that it is approximately 2.7 times as fast as Voyager. It was not designed to leave the solar system, so we can probably push a little more out of a probe; let's say 5 times as fast as Voyager as a maximum limit. However, even at 5 times Voyager's speed, we can see it would take around 4,000 years.

As mentioned, this is just for a probe. Moving humans is much, much harder. Extra mass for life support, food, water, power (the probe hardly needs any) and all the living space and plumbing we'd need adds to the weight, which adds to the fuel needed. That fuel also weighs more, adding still more to the fuel requirements. (So clearly, we'd send a probe in preference to a manned mission. However, the question specifies a manned mission.)
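The answer's "4,000 years" estimate can be reproduced with basic kinematics. The ~17 km/s Voyager speed is a commonly cited approximation (an assumption here, not a figure given in the answer itself); the 5x multiplier is the answer's own assumed upper limit:

```python
LIGHT_YEAR_M = 9.4607e15         # metres in one light year
voyager_speed = 17_000.0         # m/s, roughly Voyager 1 leaving the solar system (assumed)
probe_speed = 5 * voyager_speed  # the answer's "5 times as fast as Voyager" limit

seconds = LIGHT_YEAR_M / probe_speed
years = seconds / (365.25 * 24 * 3600)
print(f"{years:,.0f} years")     # ~3,500 years: the "4,000 years" order of magnitude
```

At 85 km/s the one-light-year trip takes roughly 3,500 years, which rounds up to the answer's figure.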
To make a manned mission even worse though, it would obviously take longer than a human lifetime. This means (as we do not have the suspended animation or cryogenic technologies beloved by hard SciFi fans) we’d need a multigeneration ship. With all of the increases in mass, reliability and complexity that entails (and as pointed out in the comments: potential social issues).
Just creating an isolated habitat which can survive that long would be a massive engineering (& ecological) challenge. I doubt our current technology would make this possible.
Biosphere 2 was a project to build a habitat on a small scale on Earth (so with sunlight, an atmosphere for radiation shielding, and gravity). It suffered all sorts of problems, both ecological and social, and is generally considered a failure. Its longest run was 2 years, far shorter than would be needed for this mission. There are likely to be other, unpredictable problems in performing the same experiment in space.
Allowing a few years development time, there are realistic proposals to send a probe to Alpha/Proxima Centauri with a 20-year journey time. But that is again only a probe, and the beam propulsion used allows for no way to slow down at the other end (I guess we could get the aliens to build something, though). The method also relies on the low mass of the payload. As mentioned above, a living human requires a massive payload, so this approach would be unsuitable. Even with that technology, then, the manned mission would be impossible.
Another option (still with some development time) would be to copy the project Orion approach. Using nuclear bombs to push the rocket we’d probably make it in just a few decades. There would probably be a war when we come zooming into their solar system firing nuclear bombs out of our ship though!*
More speculatively, something like a Nuclear salt-water rocket is thought to provide many of the advantages of an Orion style drive, without the requirement of a massive “pusher plate” and hence greater scalability (downwards scalability, the Orion approach is not as good at smaller missions). We are still talking decades of travel though, and that means a lot of engineering difficulties. Verdict: still likely impossible (sad isn't it!).
Of course it is possible there would be a breakthrough which would allow us to travel much faster. Fusion power (which has been “just around the corner” for 60 years) would allow for much faster travel and potentially the energy density needed to keep people alive on the trip. Antimatter too could provide an engine capable of getting there in just a few years. Pion rockets, for example, involve colliding protons and antiprotons, and would likely get there in around 2 years. However, the main problem with antimatter is that storing it in even minute quantities is extremely difficult. Storing the amounts needed for this kind of mission, even for a few minutes (rather than the years needed), is absolutely impossible given current technology. Even making the quantities needed is effectively impossible currently.
Now, likely in a few decades (or even less) one of the speculative advanced technologies would bear fruits. But that is inherently unpredictable. It also goes a long way from “current technology”.
Much more sensible would be sending radio messages between us (obviously against the question details). Whether we could ever communicate at such a distance, even with creatures identical to ourselves, is debatable though.
The question is moot anyway, since the current nearest star to our own is around 4 light years away. I say currently because 70,000 years ago it seems there was a star under one light year from ours.** Such scenarios are estimated to occur approximately every 100,000 years on average. The star that was so close 70,000 years ago is now 20 light years away. It is a relatively small star and was only identified in the last 5 or so years. Travelling 4 light years would of course take roughly 4 times as long (not quite, due to acceleration time).
In the comments several people have wondered about time dilation. Unfortunately these journeys are too short for this to be an important factor. If we could get up to a high enough percentage of c (we can't), then we would be at that speed for so little time, that the difference would be irrelevant.
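The point about time dilation checks out numerically with the Lorentz factor. The 85 km/s value is the answer's own hypothetical "5 times Voyager" probe speed:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v):
    """Time-dilation factor gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# The hypothetical "5x Voyager" probe: ~85 km/s.
print(lorentz_gamma(85_000.0))   # ~1.00000004: no meaningful dilation
# Even at 10% of c, far beyond current technology:
print(lorentz_gamma(0.1 * C))    # ~1.005: a half-percent effect
```

At any speed our hardware can actually reach, gamma is indistinguishable from 1, which is why dilation is irrelevant to these journeys.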
* Technically the Medusa project variant is probably the better setup. For a start it is lighter, and it captures a higher percentage of the nuclear blast. It scales somewhat better because of these factors.
** Scholz's star: definitely worth reading about (Wikipedia is always a good first place too).
A new 3D printer uses light to transform gooey liquids into complex solid objects in only a matter of minutes.
Nicknamed the "replicator" by the inventors — after the Star Trek device that can materialize any object on demand — the 3D printer can create objects that are smoother, more flexible and more complex than what is possible with traditional 3D printers. It can also encase an already existing object with new materials — for instance, adding a handle to a metal screwdriver shaft — which current printers struggle to do.
The technology has the potential to transform how products from prosthetics to eyeglass lenses are designed and manufactured, the researchers say.
"I think this is a route to being able to mass-customize objects even more, whether they are prosthetics or running shoes," said Hayden Taylor, assistant professor of mechanical engineering at UC Berkeley and senior author of a paper describing the printer, which appears online today (Jan. 31) in the journal Science.
"The fact that you could take a metallic component or something from another manufacturing process and add on customizable geometry, I think that may change the way products are designed," Taylor said.
Most 3D printers, including other light-based techniques, build up 3D objects layer by layer. This leads to a "stair-step" effect along the edges. They also have difficulties creating flexible objects because bendable materials could deform during the printing process, and supports are required to print objects of certain shapes, like arches.
The new printer relies on a viscous liquid that reacts to form a solid when exposed to a certain threshold of light. Projecting carefully crafted patterns of light — essentially "movies" — onto a rotating cylinder of liquid solidifies the desired shape "all at once."
"Basically, you've got an off-the-shelf video projector, which I literally brought in from home, and then you plug it into a laptop and use it to project a series of computed images, while a motor turns a cylinder that has a 3D printing resin in it," Taylor said. "Obviously there are a lot of subtleties to it — how you formulate the resin, and, above all, how you compute the images that are going to be projected, but the barrier to creating a very simple version of this tool is not that high."
Taylor and the team used the printer to create a series of objects, from a tiny model of Rodin's "The Thinker" statue to a customized jawbone model. Currently, they can make objects up to four inches in diameter.
"This is the first case where we don't need to build up custom 3D parts layer by layer," said Brett Kelly, co-first author on the paper who completed the work while a graduate student working jointly at UC Berkeley and Lawrence Livermore National Laboratory. "It makes 3D printing truly three-dimensional."
A CT scan — in reverse
The new printer was inspired by the computed tomography (CT) scans that can help doctors locate tumors and fractures within the body.
CT scans project X-rays or other types of electromagnetic radiation into the body from all different angles. Analyzing the patterns of transmitted energy reveals the geometry of the object.
"Essentially we reversed that principle," Taylor said. "We are trying to create an object rather than measure an object, but actually a lot of the underlying theory that enables us to do this can be translated from the theory that underlies computed tomography."
Besides patterning the light, which requires complex calculations to get the exact shapes and intensities right, the other major challenge faced by the researchers was how to formulate a material that stays liquid when exposed to a little bit of light, but reacts to form a solid when exposed to a lot of light.
"The liquid that you don't want to cure is certainly having rays of light pass through it, so there needs to be a threshold of light exposure for this transition from liquid to solid," Taylor said.
The 3D printing resin is composed of liquid polymers mixed with photosensitive molecules and dissolved oxygen. Light activates the photosensitive compound which depletes the oxygen. Only in those 3D regions where all the oxygen has been used up do the polymers form the "cross-links" that transform the resin from a liquid to a solid. Unused resin can be recycled by heating it up in an oxygen atmosphere, Taylor said.
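The two ingredients just described, an accumulating light dose projected from many angles and a sharp liquid-to-solid threshold, can be mimicked in a toy 2D model. This is an illustrative back-projection sketch, not the researchers' actual image-computation algorithm; the 64x64 grid, the four orthogonal projection angles, and the 0.9 threshold are all invented for illustration:

```python
import numpy as np

# Toy 2D target: solidify a square in the middle of the resin "vat".
N = 64
target = np.zeros((N, N))
target[24:40, 24:40] = 1.0

# Accumulate light dose by projecting the target from four directions.
# (A real system uses many angles over a full rotation; np.rot90 keeps
# this sketch dependency-free.)
dose = np.zeros((N, N))
for k in range(4):
    view = np.rot90(target, k)
    projection = view.sum(axis=0)        # shadow of the target at this angle
    smear = np.tile(projection, (N, 1))  # shine that pattern through the vat
    dose += np.rot90(smear, -k)          # rotate back and accumulate

# The resin only solidifies where the accumulated dose crosses a threshold
# (the oxygen-depletion mechanism described above).
threshold = 0.9 * dose.max()
solid = dose >= threshold
print("solidified pixels:", int(solid.sum()))  # 256: exactly the 16x16 square
```

With these symmetric angles the high-dose region is precisely the intersection of the projected strips, so only the intended square crosses the threshold while the rest of the "resin" stays liquid.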
"Our technique generates almost no material waste and the uncured material is 100 percent reusable," said Hossein Heidari, a graduate student in Taylor's lab at UC Berkeley and co-first author of the work. "This is another advantage that comes with support-free 3D printing."
The objects also don't have to be transparent. The researchers printed objects that appear to be opaque using a dye that transmits light at the curing wavelength but absorbs most other wavelengths.
"This is particularly satisfying for me, because it creates a new framework of volumetric or 'all-at-once' 3D printing that we have begun to establish over the recent years," said Maxim Shusteff, a staff engineer at the Livermore lab. "We hope this will open the way for many other researchers to explore this exciting technology area."
One of the key building blocks of flexible photonic circuits and ultra-thin optics are metasurfaces. And EPFL engineers have now discovered a simple way of making these surfaces in just a few minutes – without needing a clean room – using a method already employed in manufacturing. Their findings have just been published in Nature Nanotechnology.
Optical circuits are set to revolutionize the performance of many devices. Not only are they 10–100 times faster than electronic circuits, but they also consume a lot less power. Within these circuits, light waves are controlled by extremely thin surfaces called metasurfaces that concentrate the waves and guide them as needed. The metasurfaces contain regularly spaced nanoparticles that can modulate electromagnetic waves over sub-micrometer wavelength scales.
Metasurfaces could enable engineers to make flexible photonic circuits and ultra-thin optics for a host of applications, ranging from flexible tablet computers to solar panels with enhanced light-absorption characteristics. They could also be used to create flexible sensors to be placed directly on a patient's skin, for example, in order to measure things like pulse and blood pressure or to detect specific chemical compounds.
The catch is that creating metasurfaces using the conventional method, lithography, is a painstaking, several-hour-long process that must be done in a clean room. But EPFL engineers from the Laboratory of Photonic Materials and Fiber Devices (FIMAP) have now developed a simple method for making them in just a few minutes at low temperatures, or sometimes even at room temperature, with no need for a clean room. The method, developed at EPFL's School of Engineering, produces dielectric glass metasurfaces that can be either rigid or flexible. The results of their research appear in Nature Nanotechnology.
Turning a weakness into a strength
The new method employs a natural process already used in fluid mechanics: dewetting. This occurs when a thin film of material is deposited on a substrate and then heated. The heat causes the film to retract and break apart into tiny nanoparticles. "Dewetting is seen as a problem in manufacturing – but we decided to use it to our advantage," says Fabien Sorin, the study's lead author and the head of FIMAP.
With their method, the engineers were able to create dielectric glass metasurfaces – rather than metallic metasurfaces – for the first time. The advantage of dielectric metasurfaces is that they absorb very little light and have a high refractive index, making it possible to effectively modulate the light that propagates through them.
To construct these metasurfaces, the engineers first created a substrate textured with the desired architecture. Then they deposited a material – in this case, chalcogenide glass – in thin films just tens of nanometers thick. The substrate was subsequently heated for a couple of minutes until the glass became more fluid and nanoparticles began to form in the sizes and positions dictated by the substrate's texture.
The engineers' method is so efficient that it can produce highly sophisticated metasurfaces with several levels of nanoparticles or with arrays of nanoparticles spaced 10 nm apart. That makes the metasurfaces highly sensitive to changes in ambient conditions – such as to detect the presence of even very low concentrations of bioparticles. "This is the first time dewetting has been used to create glass metasurfaces. The advantage is that our metasurfaces are smooth and regular, and can be easily produced on large surfaces and flexible substrates," says Sorin.