
Wednesday, November 30, 2022

US Rail Strike Implication Studies

Looking for studies on the current and near-future implications of the possible US rail labor strike, in particular how it could change supply chain costs by industry, and ultimately costs for consumers. Any pointers are appreciated and sources will be cited.

Laying the Groundwork for Digital Twins

Thoughtful, introductory. Would prefer a view grounded more in simulation engineering than the metaverse, but still useful.

Laying the groundwork for digital twins, in McKinsey

AWS REINVENT 2022

November 29, 2022

What if you had a simulation of yourself, an avatar whom you could send into unknown or risky situations to gauge an outcome? You could send your avatar to that new street food stall to see if you can stomach it, or test “yourself” with a new workout regimen. If things are going OK, add some extra hot sauce, or another set of reps. Digital twin technology doesn’t exist for people (yet?). But some organizations are putting digital twins in place for their products, manufacturing facilities, and supply chains. What are they? In a recent episode of the McKinsey Talks Operations podcast, McKinsey partners Kimberly Borden and Anna Herlt explain exactly what digital twins are, as well as how they can add business value—reduced time to market, more efficient product design, and tremendous improvements in product quality. For more on this trending new technology, check out the insights below. And stay tuned for more insights on topics that will headline this year’s AWS re:Invent 2022 (#reInvent).

Links are in the linked-to text for each:

Digital twins: What could they do for your business?

Digital twins: From one twin to the enterprise metaverse

Digital twins: The foundation of the enterprise metaverse

Digital twins: How to build the first twin

Digital twins: Flying high, flexing fast

Digital twins: The art of the possible in product development and beyond  ... 

IBM, Maersk Pull the Plug on Blockchain-based TradeLens Shipping Platform

Had followed this for some time; I thought it was a good example and used it in talks. The 'why' seems weak, unless the fundamentals were just not there.

IBM, Maersk pull the plug on blockchain-based TradeLens shipping platform, by Kyt Dotson in SiliconANGLE

Computing giant IBM Corp. and Danish shipping company A.P. Moller – Maersk are discontinuing their blockchain-enabled shipping platform, TradeLens, which was jointly developed by the two companies for tracking shipments and managing supply chains in the container industry.

Maersk announced late Tuesday that the platform had failed to meet its commercial goals necessary to sustain itself, and thus the two companies are now pulling the plug on the platform. It’s expected to go offline by the end of the first quarter of 2023.

“TradeLens was founded on the bold vision to make a leap in global supply chain digitization as an open and neutral industry platform,” said Rotem Hershko, head of business platforms at Maersk. “Unfortunately, while we successfully developed a viable platform, the need for full global industry collaboration has not been achieved.”

TradeLens was launched in 2018 as a collaborative project between the two companies using IBM’s Hyperledger Fabric blockchain technology. It’s used to connect shippers, shipping lines, freight forwarders, port and terminal operators, transportation and customs authorities in order to reduce costs by tracking shipping data and documents.

The objective was to revolutionize the way that documents were transferred between different entities in the supply chain in order to smooth out operations and thus streamline efficiency.   Speaking to SiliconANGLE today, Neeraj Srivastava, co-founder and chief technology officer of DLT Labs, a blockchain firm that develops solutions for fintech and supply chains, argued that TradeLens failed because it spent too much time on hyping itself and too little time on innovation.

“TradeLens’ failure was not that the blockchain wasn’t worthwhile,” said Srivastava. “It’s that it spent too much effort on marketing the platform and hyping up its benefits and not enough time developing the technology to deliver on what the company promised. Too much hype is not good.”

Maersk said it intends to continue efforts to digitize supply chains even after shutting down TradeLens in order to optimize shipping and trade speeds.

Blockchain technology has been tested widely to track and protect supply chains; examples include IBM’s Food Trust Network for food safety, GrainChain for grains, and Tradewind Markets Origins for minerals. ... '

Rethinking the Computer Chip in the Age of AI

New designs for computer chips.

Rethinking the Computer Chip in the Age of AI, via U of Penn

Posted on September 29, 2022, by Devorah Fischler

The transistor-free compute-in-memory architecture permits three computational tasks essential for AI applications: search, storage, and neural network operations.

Artificial intelligence presents a major challenge to conventional computing architecture. In standard models, memory storage and computing take place in different parts of the machine, and data must move from its area of storage to a CPU or GPU for processing.

The problem with this design is that movement takes time. Too much time. You can have the most powerful processing unit on the market, but its performance will be limited as it idles waiting for data, a problem known as the “memory wall” or “bottleneck.”

When computing outperforms memory transfer, latency is unavoidable. These delays become serious problems when dealing with the enormous amounts of data essential for machine learning and AI applications.
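As a rough illustration of that bottleneck, a back-of-envelope check (with assumed, round hardware numbers, not figures from the article) shows how quickly arithmetic outruns data movement:

# Illustrative "memory wall" check in Python. A processor is memory-bound
# when moving the data takes longer than computing on it.

PEAK_FLOPS = 10e12       # 10 TFLOP/s compute peak (assumed)
MEM_BANDWIDTH = 100e9    # 100 GB/s memory bandwidth (assumed)

def memory_bound(flops, bytes_moved):
    """True if data transfer, not arithmetic, limits the runtime."""
    t_compute = flops / PEAK_FLOPS
    t_memory = bytes_moved / MEM_BANDWIDTH
    return t_memory > t_compute

# A dot product of two billion-element float32 vectors does 2e9 flops
# on 8e9 bytes: the arithmetic finishes long before the data arrives.
print(memory_bound(flops=2e9, bytes_moved=8e9))  # True: memory-bound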

As AI software continues to develop in sophistication and the rise of the sensor-heavy Internet of Things produces larger and larger data sets, researchers have zeroed in on hardware redesign to deliver required improvements in speed, agility and energy usage.

A team of researchers from the University of Pennsylvania’s School of Engineering and Applied Science, in partnership with scientists from Sandia National Laboratories and Brookhaven National Laboratory, has introduced a computing architecture ideal for AI.

Deep Jariwala, Xiwen Liu and Troy Olsson

Co-led by Deep Jariwala, Assistant Professor in the Department of Electrical and Systems Engineering (ESE), Troy Olsson, Associate Professor in ESE, and Xiwen Liu, a Ph.D. candidate in Jariwala’s Device Research and Engineering Laboratory, the research group relied on an approach known as compute-in-memory (CIM).

In CIM architectures, processing and storage occur in the same place, eliminating transfer time as well as minimizing energy consumption. The team’s new CIM design, the subject of a recent study published in Nano Letters, is notable for being completely transistor-free. This design is uniquely attuned to the way that Big Data applications have transformed the nature of computing.

“Even when used in a compute-in-memory architecture, transistors compromise the access time of data,” says Jariwala. “They require a lot of wiring in the overall circuitry of a chip and thus use time, space and energy in excess of what we would want for AI applications. The beauty of our transistor-free design is that it is simple, small and quick and it requires very little energy.”

The advance is not only at the circuit-level design. This new computing architecture builds on the team’s earlier work in materials science focused on a semiconductor known as scandium-alloyed aluminum nitride (AlScN). AlScN allows for ferroelectric switching, the physics of which are faster and more energy efficient than alternative nonvolatile memory elements.

“One of this material’s key attributes is that it can be deposited at temperatures low enough to be compatible with silicon foundries,” says Olsson. “Most ferroelectric materials require much higher temperatures. AlScN’s special properties mean our demonstrated memory devices can go on top of the silicon layer in a vertical hetero-integrated stack. Think about the difference between a multistory parking lot with a hundred-car capacity and a hundred individual parking spaces spread out over a single lot. Which is more efficient in terms of space? The same is the case for information and devices in a highly miniaturized chip like ours. This efficiency is as important for applications that require resource constraints, such as mobile or wearable devices, as it is for applications that are extremely energy intensive, such as data centers.”  ... ' 

Drones on Strings as People Puppeteers?

Quite a new thought to me, though our look at group tasks for drones might have used this.

Drones on Strings Could Puppeteer People in VR

New Scientist, Matthew Sparkes, November 25, 2022

Researchers at Germany's Saarland University and Canada's University of Toronto have tested a system that uses a drone attached to a user’s finger via string to mimic the action of button-pushing in virtual reality. Saarland's Martin Feick said the challenges of maintaining the drone's stability while pulling the string include its tendency to oscillate or drift, while coordinating multiple drones so they do not crash or tangle up will be problematic. Feick acknowledged testing the drones safely with people is currently infeasible, so the researchers deployed nets to catch the drones. The drones also can produce distracting sounds and drafts, although Feick said soundless blade-free drones capable of ultrasonic levitation show potential.

Full article.

Securing Software Supply Chains

 From Bruce Schneier, with further commentary:

The NSA (together with CISA) has published a long report on supply-chain security, “Securing the Software Supply Chain: Recommended Practices Guide for Suppliers”:

Prevention is often seen as the responsibility of the software developer, as they are required to securely develop and deliver code, verify third party components, and harden the build environment. But the supplier also holds a critical responsibility in ensuring the security and integrity of our software. After all, the software vendor is responsible for liaising between the customer and software developer. It is through this relationship that additional security features can be applied via contractual agreements, software releases and updates, notifications and mitigations of vulnerabilities.

Software suppliers will find guidance from NSA and our partners on preparing organizations by defining software security checks, protecting software, producing well-secured software, and responding to vulnerabilities on a continuous basis. Until all stakeholders seek to mitigate concerns specific to their area of responsibility, the software supply chain cycle will be vulnerable and at risk for potential compromise.'
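As one concrete example of the kind of check the guidance implies, here is a minimal Python sketch of verifying a downloaded third-party component against its published SHA-256 digest; the file name and digest are hypothetical placeholders, and the report itself covers far more than this single step:

import hashlib

def verify_artifact(path, expected_sha256):
    # Hash the file in chunks so large artifacts don't exhaust memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()

# Both values below are placeholders for a real component and its
# vendor-published digest.
if not verify_artifact("vendor-lib-1.2.3.tar.gz", "replace-with-published-digest"):
    raise SystemExit("Checksum mismatch: do not install this component.")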

They previously published   “Securing the Software Supply Chain: Recommended Practices Guide for Developers.” And they plan on publishing one focused on customers.

Preserving the Past with Immersive Technologies

Beyond the museum.  Much more to follow.  


Preserving the Past with Immersive Technologies, by Esther Shein

Communications of the ACM, December 2022, Vol. 65 No. 12, Pages 15-17   10.1145/3565978

The Skin & Bones augmented reality app brings to life the skeletons in the Bone Hall of the Smithsonian's National Museum of Natural History.

Inside the U.S. Holocaust Memorial Museum in Washington, D.C., is an exhibit called the Tower of Faces (https://bit.ly/3cxI3Ik), which uses augmented reality to tell stories behind some of the 1,041 photos of people from the small town of Eishishok, in what is now Lithuania. The tower soars 50 feet high across 30 rows, displaying the faces of the town's inhabitants, nearly 4,000 of whom were massacred when the Germans invaded during World War II.

When visitors walk into the tower, they can pick up one of several iPads and hold it up to an image on the wall, which will then play a video that transports them into the town. The video first appears in color and then fades to black and white, while a narrator reads a brief script about the person. One tells the story of Szeina, an actress who was fluent in five languages and owned a hotel on the market square the Nazis took over to use as their local headquarters.

"It's a beautiful experience … there are many, many photos looking out at you—people riding bikes, outside in the snow, at a wedding banquet, and on the stairs of their houses," says Sarah Lumbard, director of museum experience and digital media at the Holocaust Museum. The photos depict people of all ages prior to the massacre in September 1941.

Many of the photos were supplied by Yaffa Sonenson Eliach, whose grandmother was a photographer in Eishishok and is herself a survivor. The immersive experience opened in April 2022 and the idea was to "create a spark of life" so the residents of Eishishok will be remembered, Lumbard says.

"Our question was: How do we bring them to life, just for a moment, and have it not just be a memorial but have victims of the Holocaust come to life and treat them with respect and engage visitors to really see them as people," Lumbard says.

Figure. The interactive Heroes and Legends attraction at NASA's Kennedy Space Center Visitor Complex in Florida, featuring the U.S. Astronaut Hall of Fame.

Digital technologies such as virtual reality (VR), augmented reality (AR), and three-dimensional (3D) graphics are making it possible for museums and other institutions to preserve historical events and tell the stories of those events in an engaging way. In the case of VR, the technology actually takes them to another time or place away from where they physically are.

As research firm Gartner says, "the future of digital experience is immersive" (see https://gtnr.it/3BqmqU2).

"In this way, we can experience historical events and places that are long gone in an immersive way—kind of like IMAX taken to the next level," explains Tuong Nguyen, a senior principal analyst at Gartner. "So instead of just seeing it on a flat screen, like TV or movies, or seeing it all around you like an IMAX movie, VR can potentially enable people to explore that space with 360 degrees of freedom.  

AR can change how someone experiences the world in front of them or around them, usually in a visual way, integrating information such as text, graphics, and audio with real-world objects. "The idea is to show users how things looked in the past with actual video and photos from these time periods," Nguyen says.

For the Smithsonian's National Museum of Natural History, also in Washington D.C., the impetus behind developing a mobile app called "Skin and Bones" (https://s.si.edu/3wGy0aT) using 3D augmented reality and 3D tracking was to attract more visitors to its Bone Hall by telling stories about some of the specimens on display.

Visits and dwell time in the hall and the experience people were having in the Bone Hall "fell far short of any measure of what a visitor experience should be in a modern-day exhibition," says Robert Costello, national outreach program manager, who developed the mobile experience. In fact, most visitors were using the Bone Hall as a passageway from one section of the museum to another, rather than a destination, Costello says. The average time spent in a modern exhibition at the museum is between 10 and 20 minutes, and most of those visitors were spending less than two minutes in the hall, which has a storied history, he says.  .... ' 

Data and Analytics in Soccer, Rise of Deeper Decision Making

The rise of deeper decision making: money and data driving next steps.

Data and Analytics in Soccer

As the 2022 FIFA World Cup gets underway in Qatar on November 20, some of the most important action will be taking place off the field. Most teams will be furiously crunching data on goalies’ tendencies to try to determine how to win a penalty shoot-out if there’s a draw at the game’s final whistle. But this type of single-instance analysis is only a small part of the revolution taking place in the boardrooms at some of soccer’s biggest clubs. Today, the most important hire is no longer the 30-goal-a-season striker or an imposing brick wall of a defender. Instead, there’s an arms race for the person who identifies that talent.

How the best soccer team in the world lost its luster, by Simon Kuper

Successful teams: Superstars need not apply, by Ben Lyttleton

Sports Industry Outlook 2022

The research department at Liverpool FC, the team that won England’s Premier League in 2020, for example, is now led by a Cambridge University–trained polymer physicist. Arsenal FC recently hired a former Facebook software engineer as a data scientist, and current Premier League champion Manchester City hired a leading AI scientist with a PhD in computational astrophysics to their research department. Chelsea FC’s new American owner, Todd Boehly, spent his summer trying and failing to hire a new sporting director with a data background. These are all examples from England, where the sport’s richest clubs are investing to gain an edge—and often recruiting from ahead-of-the-curve clubs with proven track records, like Monaco, in the French League, and the German club RB Leipzig.

Soccer has a rich history of this sort of analysis. Charles Reep, a military accountant, became soccer’s first data analyst in the 1950s, predating personal computers, Billy Beane, and the Moneyball moment in baseball, in 2003. That was followed up, in 2009, by the soccer equivalent, Soccernomics, by Simon Kuper and Stefan Szymanski, and data-driven sports analysis entered a new era. Among Kuper and Szymanski’s findings: goalkeepers are undervalued in the transfer market, and players from Brazil are overvalued.

I cofounded a football consultancy ten years ago with the authors of the book. One of our first clients was the Netherlands national team. We’ve been applying data to soccer for a while—but a lot of it is backward-looking, trying to mine past performance to account for what could happen on the field. We provided the Dutch team with a penalty-kick dossier before the 2010 World Cup final against Spain, in which Professor Ignacio Palacios-Huerta, an expert in game theory, showed penalty trends and patterns of Spain’s kickers. Spain scored four minutes before the end of the game to win, but the Dutch were confident they would have won on penalties.  .... ' 

Wireless Smart Bandages

Speeding healing. 

New Wireless Smart Bandage Accelerates Chronic Wound Healing  By Adrianna Nine on November 28, 2022 at 10:03 am   in ExtremeTech

Chronic wounds are an under-acknowledged medical concern. At any given time, more than 600,000 Americans are thought to experience physiologically-stunted wounds that won’t heal. Chronic wounds aren’t just inconvenient and painful; they also rack up individual healthcare costs and prevent people from engaging in certain activities, resulting in a decreased quality of life.

Thanks to new research, this might not always be the case. A team of scientists at Stanford University has developed a wireless “smart bandage” that simultaneously monitors wound repair and helps to speed up healing. The bandage could shorten the time people suffer from chronic wounds while mitigating the physical damage and discomfort caused by conventional healing methods.

In a study published last week in Nature Biotechnology, the scientists describe a flexible, closed-loop device that seals wounds while transmitting valuable biodata to an individual’s smartphone. Hydrogel makes up the bandage’s base: While conventional bandages tug and tear at the skin when they’re pulled away, hydrogel allows the smart bandage to attach securely without causing secondary damage during removal. On top of the hydrogel sits the electronic layer responsible for wound observation and healing. At just 100 microns thick, this layer contains a microcontroller unit (MCU), electrical stimulator, radio antenna, memory, and a series of biosensors.  .. ' 


Tuesday, November 29, 2022

Is Having AI Generate Text Cheating?

Does AI give an unfair advantage? It will come into general use.

Is Having AI Generate Text Cheating?   By Carlos Baquero

Communications of the ACM, December 2022, Vol. 65 No. 12, Pages 6-7    10.1145/3565976

Professor Carlos Baquero of Porto University

https://bit.ly/3ElW1J7   August 3, 2022

Humans were always fragile creatures; most of our success in the ecosystem was driven by the efficient use of new tools. When a new tool arrives that augments our capabilities, we often question the fairness of using it. The debate usually does not last long when the tool has clear benefits. Boats have an advantage over swimming, writing solves our memory problems, this paragraph was improved using a grammar checker, and so forth.

Text generated by AI tools, such as GPT-3 (https://bit.ly/3e3icZQ), has seen an impressive increase in quality, and the AI-generated text is now hard to distinguish from human-generated text. Some people argue that using AI-generated text is cheating, as it gives the user an unfair advantage. However, others argue that AI-generated text is simply another tool that can be used to improve writing. The text in italic type drives this point home, as it was fully AI-generated after giving GPT-3 the appropriate context with the preceding text (going forward in this article, all the AI-generated text is marked in italic). To make the process more confusing, the AI-generated text can be further improved with tools that improve the grammatical presentation and choice of terms. At some point, it becomes hard to distinguish who wrote what.
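For the curious, here is a minimal sketch of how such a continuation was typically produced with the OpenAI Python library as it worked around the time of this article (the current SDK has since changed); the API key, model choice, and prompt are illustrative placeholders:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The human-written context GPT-3 is asked to continue.
draft = ("Text generated by AI tools has seen an impressive increase in "
         "quality. Some people argue that using AI-generated text is cheating, ")

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model of that era
    prompt=draft,
    max_tokens=120,
    temperature=0.7,
)
print(response.choices[0].text)  # the machine-written continuation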

Blended Writing and Provenance

We can pose the question of whether blended writing with AIs will become an acceptable approach to a more efficient use of our capabilities and time. Tools for spelling and grammatical correction are now in everyday use and do not raise any ethical concerns. Nevertheless, AI-generated text, even if accepted from an ethical standpoint, raises questions on the provenance of the generated text. Luckily, there is already an abundance of tools for plagiarism detection (for the purpose of this article, all the AI-generated text has been checked for plagiarism using Quetext (https://bit.ly/3rrCy1U)). In the case of GPT-3, a closed-book system with no access to external content after the pre-training phase, the generation of "ipsis verbis" text seems statistically unlikely for any long output, so the plagiarism check is likely an abundance of care.

OpenAI, owner of GPT-3, does provide guidelines (https://bit.ly/3fvsnXd) for content co-authored with GPT-3. The gist is: Do no harm, refrain from using harmful content; clearly identify the use of AI-generated content; attribute it to your name, you are responsible for the published content. ... 

Excerpt  .. 

Andromeda Supercomputer

 New Advances

New Cerebras Wafer-Scale ‘Andromeda’ Supercomputer Has 13.5 Million Cores  By Jessica Hall on November 21, 2022 

Cerebras unveiled its new AI supercomputer Andromeda at SC22. With 13.5 million cores across 16 Cerebras CS-2 systems, Andromeda boasts an exaflop of AI compute and 120 petaflops of dense compute. Its computing workhorse is Cerebras’ wafer-scale, manycore processor, WSE-2.

Each WSE-2 wafer has three physical planes, which handle arithmetic, memory, and communications. By itself, the memory plane’s 40GB of onboard SRAM can hold an entire BERT-Large model. But the arithmetic plane also has some 850,000 independent cores and 3.4 million FPUs. Those cores have a collective 20 PB/s or so of internal bandwidth, across the communication plane’s Cartesian mesh.

Each of Andromeda’s wafer-scale processors is the size of a salad plate, 8.5″ on a side. Image: Cerebras

Cerebras is emphasizing what it’s calling “near-perfect linear scaling,” which means that for a given job, two CS-2s will do that job twice as fast as one, three will take a third of the time, and so on. How? Andromeda’s CS-2 systems rely on parallelization, Cerebras said, from the cores on each wafer to the SwarmX fabric coordinating them all. But the supercomputer’s talents extend beyond its already impressive 16 nodes. Using the same data parallelization, researchers can yoke together up to 192 CS-2 systems for a single job.... '
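The "near-perfect linear scaling" claim reduces to simple arithmetic: under pure data parallelism, each of n nodes processes 1/n of the batch, so the ideal runtime is T1/n. A toy sketch, where the single-node job time and zero coordination overhead are assumptions for the ideal case rather than Cerebras figures:

T1 = 16.0        # hours for one CS-2 to finish the job (assumed)
OVERHEAD = 0.0   # per-job coordination cost (zero in the ideal case)

def runtime(n_nodes):
    # Pure data parallelism: each node handles 1/n of the work.
    return T1 / n_nodes + OVERHEAD

for n in (1, 2, 4, 16, 192):
    print(f"{n:3d} nodes -> {runtime(n):7.3f} h, speedup {T1 / runtime(n):6.1f}x")

Any real system adds communication cost, which is why "near-perfect" rather than perfect is the operative phrase.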

Who is Responsible? Autopilot?

 More such events to follow and to be analyzed. 

Manslaughter Case Has a Strange Twist: Tesla That Killed Couple Was on Autopilot

By Futurism, November 2, 2022

A provocative manslaughter case is about to kick off in Los Angeles later this month, involving a fatal crash caused by a Tesla vehicle that had the company's controversial Autopilot feature turned on.

It's the first case of its kind, and one that could set a precedent for future crashes involving cars and driver-assistance software, Reuters reports.

We won't know the exact defense until the case gets under way, but the crux is that the man who was behind the wheel of the Tesla is facing manslaughter charges — but has pleaded not guilty, setting up potentially novel legal arguments about culpability in a deadly collision when, technically speaking, it wasn't a human driving the car.

From Futurism

View Full Article    

Monday, November 28, 2022

Leap-Second Gets a Considerable Pause

A surprising and oddly considerable pause in a natural measurement is agreed to. Rarely see this kind of event. Implications?

Network-Crashing Leap Seconds to Be Abandoned by 2035, for at Least a Century   in Ars Technica, Kevin Purdy, November 22, 2022

Parties to the International Bureau of Weights and Measures (BIPM) approved the cessation of the leap second for keeping Coordinated Universal Time starting in 2035, until at least 2135. Leap seconds have been used to bring Earth's rotation into alignment with atomic-precision timekeeping. In 2012 and 2017, they triggered multi-hour network blackouts at companies including Reddit, Qantas, and Cloudflare. Many companies implemented a version of "leap smearing" to smooth a leap-second addition into microseconds spread throughout a day. Engineers at Meta, a supporter of the change, said the 27 leap seconds that have been applied since their introduction in 1972 were "enough for the next millennium."  ... '
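A minimal sketch of the leap-smearing idea, assuming a linear smear across a 24-hour window (implementations differ; Google, for example, smears from noon to noon UTC, so this shows the general idea rather than any company's exact scheme):

SMEAR_WINDOW = 86_400.0  # seconds over which to absorb one leap second

def smeared_offset(seconds_into_window):
    # Fraction of the leap second applied so far, growing linearly 0 -> 1,
    # so no clock ever jumps by a whole second at midnight.
    return min(max(seconds_into_window / SMEAR_WINDOW, 0.0), 1.0)

# Halfway through the window, clocks run half a second behind true UTC.
print(smeared_offset(43_200))  # 0.5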

How Generative AI Could Create Assets for the Metaverse

Have my doubts about how effective this will be, but an interesting thought ...

How Generative AI could Create Assets for the Metaverse | Jensen Huang


The metaverse skyrocketed into our collective awareness during the height of the pandemic, when people longed for better ways to connect with each other than video calls. Gaming’s hot growth during the pandemic also pushed it forward. But the metaverse became so trendy that it now faces a backlash, and folks aren’t talking about it as much.

Yet technologies that will power the metaverse are speeding ahead. One of those technologies is generative AI, which uses deep learning neural networks to produce creative concept art and other ideas based on simple text prompts. ... 

Jensen Huang, CEO of AI and graphics chip maker Nvidia, believes that generative AI will be transformational and it’s just getting started. One of its biggest applications could be with the metaverse, which has huge demands for content as developers need to fill out virtual worlds with 3D assets. And numerous companies like Stable Diffusion, Promethean AI and Ludo AI are using these technologies to automatically generate artwork and other assets for gaming and metaverse applications. Nvidia has its own research going on this front.

Many metaverse companies are hoping that generative AI will help provide the resources to help them build out their worlds. Huang believes you will see progress when you enter more and more prompts — such as text to flesh out a concept — and the concept imagery gets better and better. And he also believes that when it becomes reusable across different Omniverse applications, then it will be clear that generative AI has reached a more mature stage.


I recently caught up with Huang for a short interview on the metaverse and gaming. Our GamesBeat Summit: Into the Metaverse 3 event is coming on February 1 to February 2.

Here’s an edited transcript of our interview: ...  

Future of Driverless Trucks

A likely future of supply chains.

The Long Road to Driverless Trucks

By The New York Times, November 28, 2022

Companies know the technology is a long way from the moment trucks can drive anywhere on their own, so they are looking for ways to deploy self-driving trucks solely on highways.

In March, a self-driving eighteen-wheeler spent more than five straight days hauling goods between Dallas and Atlanta. Running around the clock, it traveled more than 6,300 miles, making four round trips and delivering eight loads of freight.

The result of a partnership between Kodiak Robotics, a self-driving start-up, and U.S. Xpress, a traditional trucking company, this five-day drive demonstrated the enormous potential of autonomous trucks. A traditional truck, whose lone driver must stop and rest each day, would need more than 10 days to deliver the same freight.

But the drive also showed that the technology is not yet ready to realize its potential. Each day, Kodiak rotated a new team of specialists into the cab of its truck, so that someone could take control of the vehicle if anything went wrong. These "safety drivers" grabbed the wheel multiple times.

From The New York Times

View Full Article

Quantum Microscopy

More on the topic:

Quantum-enhanced nonlinear microscopy  in Nature

Catxere A. Casacio, Lars S. Madsen, Alex Terrasson, Muhammad Waleed, Kai Barnscheidt, Boris Hage, Michael A. Taylor & Warwick P. Bowen 

Nature, volume 594, pages 201–206 (2021)

An Author Correction to this article was published on 02 August 2021.

Abstract

The performance of light microscopes is limited by the stochastic nature of light, which exists in discrete packets of energy known as photons. Randomness in the times that photons are detected introduces shot noise, which fundamentally constrains sensitivity, resolution and speed [1]. Although the long-established solution to this problem is to increase the intensity of the illumination light, this is not always possible when investigating living systems, because bright lasers can severely disturb biological processes [2,3,4]. Theory predicts that biological imaging may be improved without increasing light intensity by using quantum photon correlations [1,5]. Here we experimentally show that quantum correlations allow a signal-to-noise ratio beyond the photodamage limit of conventional microscopy. Our microscope is a coherent Raman microscope that offers subwavelength resolution and incorporates bright quantum correlated illumination. The correlations allow imaging of molecular bonds within a cell with a 35 per cent improved signal-to-noise ratio compared with conventional microscopy, corresponding to a 14 per cent improvement in concentration sensitivity. This enables the observation of biological structures that would not otherwise be resolved. Coherent Raman microscopes allow highly selective biomolecular fingerprinting in unlabelled specimens [6,7], but photodamage is a major roadblock for many applications [8,9]. By showing that the photodamage limit can be overcome, our work will enable order-of-magnitude improvements in the signal-to-noise ratio and the imaging speed. ... More and related articles ...
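The shot-noise arithmetic behind those numbers is worth spelling out (a simplified sketch; the paper's quantum-correlation scheme is far subtler). For N detected photons the signal grows as N and the noise as sqrt(N), so SNR scales as sqrt(N); matching a 35 per cent SNR gain classically would therefore require roughly 1.35 squared, about 1.8 times, more light on a sample that is already at its photodamage limit:

import math

def shot_noise_snr(n_photons):
    # Signal N over noise sqrt(N) equals sqrt(N).
    return n_photons / math.sqrt(n_photons)

print(shot_noise_snr(1_000_000))   # 1000.0
print(round(1.35 ** 2, 2))         # ~1.82x the intensity for the same gain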

Does Consciousness Change the Rules of Quantum Mechanics?

 Hmm... 

Does consciousness change the rules of quantum mechanics?

Maybe our understanding of quantum entanglement is incomplete, or maybe there is something fundamentally unique about consciousness.

In the past few years, scientists have shown that macroscopic objects can be subjected to quantum entanglement. Pondering the limits of quantum entanglement allows us to consider how quantum mechanics can be unified with physics on a larger scale. There might be something unique about our role as conscious observers of the world around us.

Elizabeth Fernandez


This is the fourth article in a four-part series on quantum entanglement. In the first, we discussed the basics of quantum entanglement. We then discussed how quantum entanglement can be used practically in communications and sensing. In this article, we take a look at the limits of quantum entanglement, and how entanglement on the large scale might even challenge our very basis of reality.

We can all agree that quantum entanglement is weird. We don’t worry too much about it, though, beyond some of its more practical applications. After all, the phenomenon plays out on scales that are vastly smaller than our everyday experiences. But perhaps quantum mechanics and entanglement are not limited to the ultra-small. Scientists have shown that macroscopic (albeit small) objects can be placed in entanglement. It begs the question: Is there a size limit for quantum entanglement? Carrying the idea further, could a person become entangled, along with their consciousness? 

Asking these questions not only lets us probe the limits of quantum mechanics, but it could also lead us to a unified theory of physics — one that works equally well for anything from electrons to planets. ... ' 

Tiny, Private Houses

Noted change  ... 

Pallet is making $7,500 prefab tiny homes that can be set up in 1 hour to help solve the homelessness crisis — see inside a unit at a Washington village  ...

Brittany Chang Oct 29, 2022, 9:15 AM  in BusinessInsider

Washington-based Pallet is building prefab tiny homes to provide shelter for people who are unhoused.
Its smallest $7,500 64-square-foot unit "Pallet 64" is now being used in villages across the US.
See inside a Pallet 64 at Everett Gospel Mission's tiny home village near Pallet's headquarters.

Bigger isn't always better, according to the rising interest in tiny homes.

Tiny home sales skyrocketed during the peak of the COVID-19 pandemic.

Some consumers wanted to downsize their primary residences. Others wanted a separate office during the rise in remote work. A few people were even using tiny homes as a private backyard gym.
... '

Sunday, November 27, 2022

AI is Solving Classical Computing's Quantum Problem

 Intriguing mix of domains.

AI is Solving Classical Computing's Quantum Problem  By R. Colin Johnson, Commissioned by CACM Staff, November 22, 2022

Figure. By running clever AI neural networks that analyze the similarities among interactions, the number of equations that need to be solved to fully describe the many-body problem of quantum interactions can be reduced from 100,000 to just four.

Artificial intelligence (AI)—in particular, machine learning (ML)— recently began to solve problems for which quantum computers are targeted, according to researchers at the California Institute of Technology (CalTech), the Flatiron Institute (New York City), and IBM (Yorktown Heights, NY).

"ML cannot emulate every quantum algorithm," said Hsin-Yuan Huang, a quantum information theorist at CalTech, "but ML can emulate more quantum algorithms than classical algorithms that do not have learning abilities. For example, to solve the problem of finding quantum ground states [lowest energy levels], one typically wants to use adiabatic [thermodynamic] quantum algorithms. But we've proven that a classical ML model can learn from data to predict these ground states efficiently."

Quantum computers, once thought to be "superior" to classical computers, increasingly are being seen as yet another accelerator for specialized problems, according to IBM, which is developing what it calls neuro-symbolic AI—an ML method using classical computer hardware. IBM also is experimenting with hyperdimensional ML accelerators to work alongside classical computer hardware. These alternative accelerator architectures are being developed in parallel with its continued development of quantum computer accelerators for classical computers.

"Quantum computers will never reign 'supreme' over classical computers, but [like other accelerator architectures] will rather work in concert with them, since each have their unique strengths," according to IBM Research's Jay Gambetta, John Bunnels, Dmitri Maslov, and Edwin Penault, who wrote an IBM Research Blog post in 2019 arguing that the claim of quantum "supremacy" over classical computers is flawed.

The Classical Advantage

More recently, physicists at New York City's Flatiron Institute, in association with the University of Bologna, Italy, reported a 25,000-times speed-up in solving a daunting quantum problem using classical computers accelerated by ML. The Flatiron Institute research, led by visiting researcher Domenico Di Sante, demonstrated a solution to the quantum physics many-body problem that future quantum computers will aim to solve, but which classical computers struggle with today. By harnessing AI along with classical computer algorithms, the Flatiron Institute researchers reduced the problem of solving 100,000 coupled differential equations to just four.

Explained Di Sante, an assistant professor of the University of Bologna currently in residence at the Flatiron Institute's Center for Computational Quantum Physics, "Differential equations form the language used to model almost all physical phenomena in both the classical and the quantum world, from weather forecasts to the evolution of the universe to the dynamics of quantum electrons and subnuclear particles. All ambits of physical modeling benefit from tackling the problem of a large number of coupled differential equations. In this sense, our new data-driven approach to compress the complexity of many-body problems will be helpful to both classical and quantum fields."

Since classical computers using Di Sante's ML algorithms can simplify the solution of problems previously thought to require future quantum computers to solve efficiently, its accomplishment mitigates the need for full-blown universal quantum computers.

"Efficiently solving for the effective interaction among many-particles is a big deal in quantum physics, especially for interactions within quantum materials. It saves memory, computational power, and offers physical insight. Our work demonstrates how ML and quantum physics intersect constructively. It is difficult to quantify what will be our work's direct impact on quantum computers, but that field is facing the same problem—large, high-dimensional data sets that need compression in order to manipulate and study efficiently," said Di Sante. "I would love to discover that our more-efficient solution method can shed light onto the intricate nature of future quantum computer architectures." 

One caveat to Di Sante's approach is that the entire body of 100,000 equations must first be solved (which, in this example, took weeks of classical computer time). His ML algorithm then derived from that solution the smallest set of equations that could provide a specified level of accuracy. Hopefully, now that the ML algorithm has been constructed, future tweaking of it will enable the group to solve similar quantum problems without requiring weeks of preliminary computer time.  .... '
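In the same spirit, a toy sketch of data-driven model reduction (not the Flatiron algorithm): solve a large coupled linear ODE system once, then let an SVD of the trajectory reveal that a handful of modes carry essentially all of the dynamics.

import numpy as np

rng = np.random.default_rng(1)
N = 400  # "large" number of coupled ODEs

# Build a stable system whose dynamics secretly live in a 4-dim subspace.
Q, _ = np.linalg.qr(rng.normal(size=(N, 4)))
A = Q @ np.diag([-1.0, -2.0, -3.0, -4.0]) @ Q.T

x = Q @ rng.normal(size=4)  # initial condition inside the subspace
dt, steps = 0.01, 500
traj = np.empty((steps, N))
for t in range(steps):      # the expensive full solve, done once
    traj[t] = x
    x = x + dt * (A @ x)    # explicit Euler step

# The singular values show that 4 modes capture essentially everything,
# so future runs could evolve just 4 reduced coordinates instead of 400.
s = np.linalg.svd(traj, compute_uv=False)
print("energy in first 4 modes:", s[:4].sum() / s.sum())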

Protests at IPhone Factory

Leading to some iPhone production outages.

China Covid: Angry protests at giant iPhone factory in Zhengzhou  in the BBC


Protests have erupted at the world's biggest iPhone factory in the Chinese city of Zhengzhou, according to footage circulated widely online.

Videos show hundreds of workers marching, with some confronted by people in hazmat suits and riot police. Those livestreaming the protests said workers were beaten by police. Videos also showed clashes. Manufacturer Foxconn said it would work with staff and local government to prevent further violence.

In its statement, the firm said some workers had doubts about pay, but that the firm would fulfil pay based on contracts.

It also described as "patently untrue" rumours that new recruits were being asked to share dormitories with workers who were Covid-positive.

Dormitories were disinfected and checked by local officials before new people moved in, Foxconn said.

Last month, rising Covid cases saw the site locked down, prompting some workers to break out and go home. The company then recruited new workers with the promise of generous bonuses.  ... ' 

Lost Something? Search 91.7 Million Files from the '80s, '90s, 2000s

Seeing it all?

Lost Something? Search 91.7 Million Files from the '80s, '90s, 2000s By Ars Technica, October 21, 2022

The files on Discmaster come from the Internet Archive, uploaded by thousands of people over the years.

A new website allows users to sift through 91.7 million computer files from CD-ROM releases and floppy discs dating back to the 1980s.  Hosted by tech archivist Jason Scott, the Discmaster site is the work of a group of anonymous programmers and features images, text documents, music, games, shareware, videos, and more from the Internet Archive.

Discmaster allows users to search by file type, format, source, file size, file date, and other criteria.

Said Scott, "The value proposition is the value proposition of any freely accessible research database." Much of the file format conversion is performed on the back end, to make the vintage files more accessible.

From Ars Technica

View Full Article    


Saturday, November 26, 2022

TikTok Eating the Internet

Evolution. Does it deliver customers online?

How TikTok Ate the Internet

By The Washington Post, October 18, 2022

In five years, TikTok, once written off as a silly dance-video fad, has become one of the most prominent, discussed, distrusted, technically sophisticated and geopolitically complicated juggernauts on the Internet.

On the night Shelby Renae first went viral on TikTok, she felt so giddy she could barely sleep. She'd spent the evening painting her nails, refreshing her phone between each finger — 20,000 views; 40,000 — and by the next morning, after her video crossed 3 million views, she decided it had changed her life.

She didn't really understand why it had done so well. The 16-second clip of her playing the video game "Fortnite" was funny, she thought — but not, like, millions-of-views funny. She wasn't a celebrity: She grew up in Idaho; her last job was at a pizza shop. But this was just how the world's most popular app worked. TikTok's algorithm had made her a star.

Now 25, she spends her days making TikTok videos from her apartment in Los Angeles, negotiating advertising deals and always chasing the next big hit. Many days, she feels drained — by the endless scramble for new content; by the weird mysteries of TikTok's algorithm; by the stalkers, harassers and trolls. Yet still, in her off hours, she does what all her friends do: watches TikTok. "It will suck you in for hours," she said.

From The Washington Post

View Full Article    

How the First Transistor Worked

 How far we have come.

HOW THE FIRST TRANSISTOR WORKED

Even its inventors didn’t fully understand the point-contact transistor

By Glenn Zorpette in IEEE Spectrum, 20 Nov 2022

A 1955 AT&T publicity photo shows [in palm, from left] a phototransistor, a junction transistor, and a point-contact transistor. AT&T ARCHIVES AND HISTORY CENTER

THE VACUUM-TUBE TRIODE wasn’t quite 20 years old when physicists began trying to create its successor, and the stakes were huge. Not only had the triode made long-distance telephony and movie sound possible, it was driving the entire enterprise of commercial radio, an industry worth more than a billion dollars in 1929. But vacuum tubes were power-hungry and fragile. If a more rugged, reliable, and efficient alternative to the triode could be found, the rewards would be immense.

The goal was a three-terminal device made out of semiconductors that would accept a low-current signal into an input terminal and use it to control the flow of a larger current flowing between two other terminals, thereby amplifying the original signal. The underlying principle of such a device would be something called the field effect—the ability of electric fields to modulate the electrical conductivity of semiconductor materials. The field effect was already well known in those days, thanks to diodes and related research on semiconductors.

In the cutaway photo of a point-contact transistor, two thin conductors are visible; these connect to the points that make contact with a tiny slab of germanium. One of these points is the emitter and the other is the collector. A third contact, the base, is attached to the reverse side of the germanium. AT&T ARCHIVES AND HISTORY CENTER

But building such a device had proved an insurmountable challenge to some of the world’s top physicists for more than two decades. Patents for transistor-like devices had been filed starting in 1925, but the first recorded instance of a working transistor was the legendary point-contact device built at AT&T Bell Telephone Laboratories in the fall of 1947.

Though the point-contact transistor was the most important invention of the 20th century, there exists, surprisingly, no clear, complete, and authoritative account of how the thing actually worked. Modern, more robust junction and planar transistors rely on the physics in the bulk of a semiconductor, rather than the surface effects exploited in the first transistor. And relatively little attention has been paid to this gap in scholarship.

It was an ungainly looking assemblage of germanium, plastic, and gold foil, all topped by a squiggly spring. Its inventors were a soft-spoken Midwestern theoretician, John Bardeen, and a voluble and “somewhat volatile” experimentalist, Walter Brattain. Both were working under William Shockley, a relationship that would later prove contentious. In November 1947, Bardeen and Brattain were stymied by a simple problem. In the germanium semiconductor they were using, a surface layer of electrons seemed to be blocking an applied electric field, preventing it from penetrating the semiconductor and modulating the flow of current. No modulation, no signal amplification.

Sometime late in 1947 they hit on a solution. It featured two pieces of barely separated gold foil gently pushed by that squiggly spring into the surface of a small slab of germanium.

Textbooks and popular accounts alike tend to ignore the mechanism of the point-contact transistor in favor of explaining how its more recent descendants operate. Indeed, the current edition of that bible of undergraduate EEs, The Art of Electronics by Horowitz and Hill, makes no mention of the point-contact transistor at all, glossing over its existence by erroneously stating that the junction transistor was a “Nobel Prize-winning invention in 1947.” But the transistor that was invented in 1947 was the point-contact; the junction transistor was invented by Shockley in 1948.

So it seems appropriate somehow that the most comprehensive explanation of the point-contact transistor is contained within John Bardeen’s lecture for that Nobel Prize, in 1956. Even so, reading it gives you the sense that a few fine details probably eluded even the inventors themselves. “A lot of people were confused by the point-contact transistor,” says Thomas Misa, former director of the Charles Babbage Institute for the History of Science and Technology, at the University of Minnesota.


A year after Bardeen’s lecture, R. D. Middlebrook, a professor of electrical engineering at Caltech who would go on to do pioneering work in power electronics, wrote: “Because of the three-dimensional nature of the device, theoretical analysis is difficult and the internal operation is, in fact, not yet completely understood.”

Nevertheless, and with the benefit of 75 years of semiconductor theory, here we go. The point-contact transistor was built around a thumb-size slab of n-type germanium, which has an excess of negatively charged electrons. This slab was treated to produce a very thin surface layer that was p-type, meaning it had an excess of positive charges. These positive charges are known as holes. They are actually localized deficiencies of electrons that move among the atoms of the semiconductor very much as a real particle would. An electrically grounded electrode was attached to the bottom of this slab, creating the base of the transistor. The two strips of gold foil touching the surface formed two more electrodes, known as the emitter and the collector.  ...  '

Friday, November 25, 2022

Regarding an NFT Bubble

This surprised me,  below is an outline, and then beyond.  Legit?  Following up.  I will remove if I find this invalid. 

From DSHR Blog:   https://blog.dshr.org/ 

I'm David Rosenthal, and this is a place to discuss the work I'm doing in Digital Preservation.

Tuesday, October 25, 2022

Non-Fungible Token Bubble Lasted 10 Months

Although the first Non-Fungible Token was minted in 2014, it wasn't until Cryptokitties brought the Ethereum blockchain to its knees in December 2017 that NFTs attracted attention. But then they were swiftly hailed as the revolutionary technology that would usher in Web 3, the Holy Grail of VCs, speculators and the major content industries because it would be a completely financialized Web. Approaching 5 years later, it is time to ask "how's it going?"

Below the fold I look at the details, but the TL;DR is "not so great"; NFTs as the basis for a financialized Web have six main problems:

Technical:    the technology doesn't actually do what people think it does.

Legal:           there is no legal basis for the "rights" NFTs claim to represent.

Regulatory:  much of the business of creating and selling NFTs appears to violate securities law.

Marketing:   the ordinary consumers who would pay for a financialized Web absolutely hate the idea.

Financial:      like cryptocurrencies, the fundamental attraction of NFTs is "number go up". And much of the trading in NFTs was Making Sure "Number Go Up". But, alas, "number go down", at least partly because of problem #4.

Criminal: vulnerabilities in the NFT ecosystem provide a bonanza for thieves.   ....   

(more at DSHR) ...  

Chinese Drones in Restricted Spaces

Of course there are now many drones in that area, many Chinese built. But still a valid concern. 

 In American Military News: 

Hundreds of Chinese-made drones have flown into restricted airspaces over Washington, D.C., in recent months and officials are playing catch-up to stop the foreign-made tech from spying on those restricted areas. While China may not be controlling these drones directly, any Chinese-made devices could still be covertly sending data back to China.

Sources told POLITICO they don’t believe the Chinese government is directing the drones, which are made by China-based DJI. But officials are beginning to see risks in U.S. consumers buying up Chinese tech that can enter one of the world’s most secure airspaces and potentially become another government’s surveillance system, or worse.

“The reality is, people on the tech side always said, ‘Look, at any point in time the Chinese can take control of a DJI that’s flying in the air,’” an anonymous government contractor told POLITICO.

Commercial drones, including those by DJI, use “geofencing” to prohibit drones from entering into restricted airspace such as those over D.C. But a government contractor told POLITICO there are “YouTube videos that could walk your grandparents through” how to bypass those constraints and allow users to fly their drones wherever they want.  .... ' 
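For context, the geofencing mechanism itself is simple to sketch; the toy check below compares a drone's position against one circular no-fly zone before allowing takeoff (real firmware uses signed polygon databases, and the zone center and radius here are rough assumptions, not DJI's actual values):

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in km.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

DC_CENTER = (38.9072, -77.0369)  # center of the restricted area (illustrative)
NO_FLY_RADIUS_KM = 24.0          # rough radius of the D.C. zone (assumed)

def takeoff_allowed(lat, lon):
    return haversine_km(lat, lon, *DC_CENTER) > NO_FLY_RADIUS_KM

print(takeoff_allowed(38.91, -77.04))  # False: inside the zone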

Optimal Decision Making

Technical, but quite an interesting point being made. Optimality may be a good thing, but how do I embed it in useful real-time decisions? Notably too, the consideration of noise is often key. This is worth a look.

Algorithm for Optimal Decision-Making Under Heavy-Tailed Noisy Rewards

Chung-Ang University (South Korea), November 17, 2022

Researchers at South Korea's Chung-Ang University (CAU) and Ulsan Institute of Science and Technology created an algorithm that supports minimum loss under a maximum-loss scenario (minimax optimality) with minimal prior data. The algorithm addresses sub-optimal performance for heavy-tailed rewards by algorithms designed for stochastic multi-armed bandit (MAB) problems. CAU's Kyungjae Lee said the researchers proposed minimax optimal robust upper confidence bound (MR-UCB) and adaptively perturbed exploration (MR-APE) methods. The team obtained gap-dependent and independent upper bounds of the cumulative regret, then assessed their methods via simulations conducted under Pareto and Fréchet noises. The researchers found MR-UCB outperformed other exploration techniques with stronger robustness and a greater number of actions under heavy-tailed noise; MR-UCB and MR-APE also could solve heavy-tailed synthetic and real-world stochastic MAB problems.

Full Article
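To make the setting concrete, here is a minimal sketch of the robust-UCB idea for heavy-tailed rewards; this is a generic truncated-mean variant for illustration, not the paper's exact MR-UCB or MR-APE:

import math
import random

def truncated_mean(rewards, clip):
    # Clip extreme rewards before averaging so outliers cannot dominate.
    return sum(max(-clip, min(clip, r)) for r in rewards) / len(rewards)

def robust_ucb(n_arms, pull, horizon, clip=10.0):
    history = [[] for _ in range(n_arms)]
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # pull each arm once to initialize
        else:
            # Pick the arm with the highest optimistic (mean + bonus) index.
            arm = max(range(n_arms), key=lambda a:
                      truncated_mean(history[a], clip)
                      + math.sqrt(2 * math.log(t) / len(history[a])))
        history[arm].append(pull(arm))
    return [len(h) for h in history]

# Heavy-tailed toy rewards: centered Pareto noise around each arm's mean.
means = [0.2, 0.5, 0.8]
pull = lambda a: means[a] + random.paretovariate(2.5) - 2.5 / 1.5
print(robust_ucb(3, pull, horizon=2000))  # pulls concentrate on arm 2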

What authors want from AI ‘ghostwriters’

I was recently introduced to some examples of this, quite impressive.

What authors want from AI ‘ghostwriters’

Compute me a story

October 7, 2022 - 12:09 pm

In Sept. 2020, The Guardian published an opinion piece written by a program. The artificial intelligence, called GPT-3, is a large language model developed by OpenAI, and it posed a bold question in the headline of its machine-generated text: “A robot wrote this entire article. Are you scared yet, human?”

Indeed, it is a scary time to be a professional writer. Earlier in 2020, Microsoft laid off journalists to replace them with a writing AI. And as AI language models get increasingly better, researchers are claiming that soon, AI-generated text will be indistinguishable from that written by a person.

Our research team at the University of British Columbia investigated what the rise of AI means to human writers. Specifically, we tried to understand what human writers expect from AI, and where the boundaries lie when it comes to writing work.

We interviewed seven hobbyists and 13 professional writers, using a design fiction approach. We first showed the writers different speculative designs of futuristic AI writers. We then asked them to reflect on how co-writing with an AI would transform their practice and perception of writing.

We found that writers wanted AIs to respect the personal values they attribute to writing, namely emotional value and productivity.

Emotions and productivity

Hobbyists in our study said they find joy in the writing process, referring to the act of writing as a “labor of love.” When considering scenarios where using AI would make them more productive, hobbyists weren’t interested in using the advanced writing technology if it displaced what it means to be a writer.

The writers attributed three different kinds of emotional values to writing. Some writers wanted to claim ownership over the words they wrote and were concerned that co-writing with an AI meant that the text wouldn’t be considered entirely their own. Other writers attributed a sense of integrity to the act of writing, and said using AI would be “like cheating.” Others just enjoyed the process of turning their ideas into words.

By contrast, for professional writers, writing was a means of living. If it could make them more prolific, they were open to using AI and assigning parts of their job to the robot writers. The professional writers envisioned themselves using AI as a ghostwriter who could realize their ideas into written pieces. To some extent, professional writers were willing to compromise their emotional values in exchange for productivity. ... ' 

Quantum Microscope Soon?

Intro; see also Sabine Hossenfelder's YouTube video here: https://youtu.be/fkXSCNDfj14 , where she points to a paper, https://doi.org/10.1038/s41586-021-03528-w , suggesting early prototype designs.

The quantum microscope revolution is here    in CosmosMagazine   By Lauren Fuge 

New entanglement-based sensor surpasses light-based microscopes.

University of Queensland researchers have built a quantum microscope based on the strange phenomenon Albert Einstein once called “spooky action at a distance”.

This new device takes advantage of quantum entanglement to illuminate living samples safely – unlike conventional microscopes, which use potentially damaging high-intensity light. Warwick Bowen, a quantum physicist at the University of Queensland, says this is the first entanglement-based sensor that supersedes non-quantum technology.

“This is exciting – it’s the first proof of the paradigm-changing potential of entanglement for sensing,” says Bowen, who is lead author on the new paper published in Nature.

Since their invention in the seventeenth century, traditional light-based microscopes have revolutionised our understanding of life by revealing the microscopic structures and behaviours of living systems. The field of microscopy took a big leap when lasers were introduced to more brightly illuminate samples; some recent technologies have even been able to peer down to resolutions nearly at the scale of atoms.

But the best microscopes are limited by the “noisiness” of photons – the tiny packets of energy that make up light. The random times at which individual photons hit a detector introduces noise, which affects the sensitivity, resolution and speed of microscopes. The noise can be reduced by increasing the intensity of light – which fries cells.

“The best light microscopes use bright lasers that are billions of times brighter than the sun,” Bowen explains. “Fragile biological systems like a human cell can only survive a short time in them.

“We’re hitting the limits of what you can do just by increasing the intensity of your light.”

Bowen and team’s new microscope may just kickstart the next revolution in microscopy, because they’ve evaded these limitations by introducing quantum entanglement.

But how does this device actually work? Well, it’s down to quantum physics, so buckle in.

Quantum entanglement is a strange beast to get your head around. The idea is that two particles can become “entangled”, or linked, and will thereafter always mirror each other’s properties – what happens to one instantly happens to the other, even if they’re light-years apart. This instantaneous coordination seems to rebel against common sense; physicists don’t yet know exactly how this works, only that it does.

And this phenomenon can be harnessed in microscopy.

Physicists have known for a while that quantum correlations can be used to extract information from photons – in fact, these correlations are used to improve laser interferometric gravitational-wave detectors like LIGO, among many other things. They even suspected that quantum correlations could help improve microscopy, but until now they couldn’t build quantum-correlated light sources bright enough to be interfaced with a microscope.

“However, all previous experiments used optical intensities more than 12 orders of magnitude lower than those for which biophysical damage typically arises, and far below the intensities typically used in precision microscopes,” the authors explain in their paper.

This new set-up uses a coherent Raman scattering microscope – existing technology that probes the vibrational signals of living molecules, giving specific information about their chemical makeup.

But the team custom-designed the microscope so quantum correlations improved the light source illuminating the sample, making the light extremely “quiet”.

“What entanglement allows us to do is basically train the photons in that light so that they arrive at the detector in a nice uniform sort of way,” Bowen says.

This is achieved using a “non-linear crystal”, which changes the light passing through; instead of a normal laser beam they used “squeezed light”, where the photons are intrinsically correlated. This reduced the amplitude of the light and, in turn, reduced the noise.

UQ’s quantum microscope. Credit: the University of Queensland

For a fixed intensity of light, the set-up results in a higher signal-to-noise ratio and therefore higher contrast in the microscope. They were able to image a cell wall of yeast – around 10 nanometres thick.

“We could resolve a much larger region of that cell wall using quantum correlations than was possible using conventional microscopy, without destroying the cell,” Bowen explains.

The team was able to enhance the signal-to-noise ratio by 35%.

“This removes a fundamental barrier to advances in coherent Raman microscopy and high-performance microscopy more broadly,” they write in their paper.

Bowen comments: “We’re really excited about it because it shows, for the first time, that it is possible to use quantum light to get an absolute advantage in microscopy – to measure something you could not measure in any other way.”

Sergei Slussarenko, a quantum physicist at Griffith University who was not involved in the study, says this is a great achievement. .... '
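
A quick back-of-the-envelope check of what that 35% figure buys, using only the standard square-root shot-noise scaling (my arithmetic, not the paper's):

```python
import math

# For a shot-noise-limited measurement, SNR scales as sqrt(N) with photon
# number N. So a 35% SNR gain at fixed intensity matches what a classical
# microscope would need roughly 1.8x more light to achieve.
snr_gain = 1.35
print(f"Equivalent classical intensity increase: {snr_gain ** 2:.2f}x")  # ~1.82x

# Equivalently, squeezing reduces the noise power at fixed signal:
noise_rel = 1.0 / snr_gain ** 2                     # ~0.55 of shot noise
print(f"Noise power vs. shot noise: {noise_rel:.2f} "
      f"({10 * math.log10(noise_rel):.1f} dB)")     # ~ -2.6 dB
```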

MITRE Attack Framework

When I worked with the government in the past, I worked with MITRE; impressive overall. I noted this recent introduction to their threat-actor capability (which I did not use at the time). Of interest. 

Introduction to MITRE ATT&CK - Featuring Version 12 (2022)

Josh Darby MacLellan, on Nov 22, 2022

Have you ever wondered how to create a prioritized list of threat actors? Or identify what malicious tactics and techniques are most relevant? Or what security controls should be improved first? The MITRE ATT&CK Framework can help. Version 12 has just been released and this blog will help you understand what the Framework is and what’s new.

What is MITRE?

MITRE is a US-based not-for-profit organization that supports the US federal government in advancing national security by providing a range of technical, cyber, and engineering services to the government. In 2013, MITRE launched a research project to track cyber threat actors’ behavior, developing a framework named Adversarial Tactics, Techniques, and Common Knowledge, or in short form: ATT&CK.

What is the MITRE ATT&CK Framework?

The MITRE ATT&CK Framework contains a taxonomy of threat actor behavior during an attack lifecycle, broken down into 14 tactics that each contain a subset of more specific techniques and sub-techniques (covering the TT in TTPs). The Framework is split into three separate matrices: Enterprise (attacks against enterprise IT networks and cloud), Mobile (attacks targeting mobile devices), and Industrial Control Systems (attacks targeting ICS).

The Framework contains a wealth of knowledge based on real-world observations. To give you an indication of scope, the October 2022 iteration of ATT&CK for Enterprise contains 193 techniques, 401 sub-techniques, 135 threat actor groups, 14 campaigns, and 718 pieces of software/malware.

Screenshot of the MITRE ATT&CK Framework for Enterprise with some but not all techniques.

Each technique can be explored to reveal sub-techniques and there is an entire MITRE knowledge base that feeds the matrices. This database contains a colossal amount of information on threat actor groups, malware, campaigns, descriptions of techniques and sub-techniques, mitigations, detection strategies, references for external resources, an ID system for tracking, and more.   ... ' 
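
The whole knowledge base is published as machine-readable STIX JSON, so counts like those above can be reproduced in a few lines. A minimal sketch follows, assuming network access; the URL and STIX field names reflect how MITRE commonly publishes the dataset in its public cti repository, but verify them against the repo before relying on this.

```python
import json
from collections import Counter
from urllib.request import urlopen

# Pull the ATT&CK Enterprise matrix as a STIX bundle and tally objects.
URL = ("https://raw.githubusercontent.com/mitre/cti/"
       "master/enterprise-attack/enterprise-attack.json")

with urlopen(URL) as resp:
    bundle = json.load(resp)

objs = [o for o in bundle["objects"] if not o.get("revoked", False)]
kinds = Counter(o["type"] for o in objs)
techniques = [o for o in objs if o["type"] == "attack-pattern"]
subs = [t for t in techniques if t.get("x_mitre_is_subtechnique")]

print(f"techniques: {len(techniques) - len(subs)}, sub-techniques: {len(subs)}, "
      f"groups: {kinds['intrusion-set']}, "
      f"software: {kinds['malware'] + kinds['tool']}")

# Each technique carries its tactic(s) in its kill chain phases:
t0 = techniques[0]
print(t0["name"], [p["phase_name"] for p in t0.get("kill_chain_phases", [])])
```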

Thursday, November 24, 2022

Seoul Patrols Streets with Robots

Interesting approach but unlikely to be tolerated today in the West. 

Self-Driving Robot Patrols Seoul Streets

EuroNews, Roselyne Min, November 19, 2022

South Korea has launched its first autonomous urban patrol robot, which patrols the streets of Seoul in search of dangerous situations. HL Mando, which developed "Goalie," said the robot is like a "moving surveillance camera," able to go where fixed CCTV cameras cannot. Equipped with satellite navigation and remote sensing technology, Goalie can avoid pedestrians and obstacles. HL Mando's Young-ha Cho said, "When the robot sees a dangerous situation or hears a sound like 'help me', the control center operates the robot to move there and can check whether it is really a dangerous situation or not." HL Mando livestreams but does not store the footage from the robot, and encrypts its communications with the control center.  ... ' 

Wednesday, November 23, 2022

Consider the Chip Sandwich

Chip Sandwiches improve Data Transmission

Chip Sandwich Pushes Boundaries of Computing, Data Transmission Efficiency  By California Institute of Technology, November 22, 2022

An electronics chip (the smaller chip on the top) integrated with a photonics chip, sitting atop a penny for scale.

The Caltech/Southampton team designed both an electronics chip and a photonics chip from the ground up, then co-optimized them to work together.

Engineers at the California Institute of Technology (Caltech) and the U.K.'s University of Southampton have developed an ultrafast electronic/photonic chip sandwich that generates minimal heat.

Created over four years, "These two chips are literally made for each other, integrated into one another in three dimensions," said Caltech's Arian Hashemi Talkhooncheh.

The chips employ an optimized interface to transmit 100 gigabits of data per second while consuming just 2.4 picojoules per transmitted bit, boosting the transmission's electro-optical power efficiency 3.6 times over the current state of the art ... 
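
The headline numbers are easy to sanity-check with arithmetic alone (my calculation, not from the article):

```python
# At 100 Gb/s and 2.4 pJ per bit, the link consumes about a quarter watt,
# and a 3.6x efficiency gain implies the prior state of the art needed
# roughly 8.6 pJ per transmitted bit.
bit_rate = 100e9           # bits per second
energy_per_bit = 2.4e-12   # joules per bit

print(f"Link power: {bit_rate * energy_per_bit:.2f} W")                    # 0.24 W
print(f"Implied previous cost: {energy_per_bit * 3.6 * 1e12:.2f} pJ/bit")  # 8.64
```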

"As the world becomes more and more connected, and every device generates more data, it is exciting to show that we can achieve such high data rates while burning a fraction of power compared to the traditional techniques," said Caltech's Azita Emami.

From California Institute of Technology

View Full Article  

Energy Harvesting from 5G

Requirements to run IoT systems. 

Press release / November 09, 2022

Minimal vibrations, temperature differences and even light can be used to generate power for small electronic systems. At Booth B4/258 of the electronica 2022 trade show, Fraunhofer researchers will demonstrate how 5G radio modules with higher energy consumption can now be powered autonomously through energy harvesting — without the use of batteries and cables.

A piezoelectric vibration converter supplies energy to sensors used in a building condition monitoring system. © Kurt Fuchs / Fraunhofer IIS

Sensors are key elements of the Internet of Things (IoT). They collect information, for example, about the condition of a machine or an infrastructure, process it and pass it on. The numerous sensors can obtain the required energy from batteries or via a cable connection. Fraunhofer researchers have now found a way to harvest enough energy to operate these sensors using vibrations from machines, equipment or buildings, as well as from temperature differences between pipes, lines or valves, and the environment.

“Powering a sensor node through energy harvesting technology makes it independent from other energy supplies. This saves the cost arising from energy-storage devices, such as batteries, and eliminates the maintenance effort required for battery replacement. It also makes cable installations redundant,” says Dr. Peter Spies from the Fraunhofer Institute for Integrated Circuits IIS in Nuremberg, explaining the advantages. The autonomous sensors are used for data collection and transmission, e.g., for the condition monitoring of machines, buildings or bridges, as well as for smart metering systems.

Spies and his team have been researching for some time how and where energy harvesting technologies can be optimized and deployed. Due to the rise in energy prices, their field of research is rapidly gaining relevance, and inquiries from industry are piling up. Their latest development is a so-called NarrowBand IoT module that collects and transmits utility data in a 5G network. To ensure that the modules and sensors can be operated energy independently, they were specially measured and optimized for energy consumption. This opens up new possibilities for autonomously powering not only LPWANs (low-power wide-area networks) but also other radio systems with higher energy consumption and more advanced functionalities, such as bidirectional communication. The systems could then also be operated in a public network.

At the electronica 2022 trade show, held from November 15-18 in Munich, Fraunhofer IIS will be using the NarrowBand IoT module and a mioty® radio sensor to show how these sensors can be operated entirely without cables or batteries using a thermo-electric generator or a vibration converter. Also on display will be a smart screw connection whose preload force can be monitored remotely thanks to energy-autonomous sensor technology. This means that loose screws should no longer pose a safety risk to bridges, machines or buildings in the future.  ... ' 
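
For a sense of the power budgets involved, here is an order-of-magnitude estimate using the standard matched-load formula for a thermoelectric generator; all numbers are assumptions for illustration, not Fraunhofer's figures:

```python
# Max power a thermoelectric generator delivers into a matched load:
#   P = (S * dT)^2 / (4 * R)
# with S the module-level Seebeck coefficient and R its internal resistance.
S = 0.05    # V/K   (assumed module Seebeck coefficient)
R = 5.0     # ohm   (assumed internal resistance)
dT = 10.0   # K     (assumed pipe-vs-ambient temperature difference)

p_max = (S * dT) ** 2 / (4 * R)
print(f"Matched-load power: {p_max * 1e3:.1f} mW")        # 12.5 mW

# A duty-cycled radio budgets against this: if one NB-IoT report costs
# ~0.5 J (assumed), continuous harvesting supports one report per ~40 s.
tx_energy = 0.5
print(f"Sustainable rate: one report per {tx_energy / p_max:.0f} s")
```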

Morning Coffee Tech

An agriculture area I worked in; some details here. Will dig somewhat deeper.

Morning Coffee Tech  By Luana Ferreira in the BBC   Business reporter, Brazil

For an estimated one billion people around the world drinking coffee is a daily regime.

Yet what many coffee lovers might not know is that they are often drinking a brew made, at least in part, from Brazilian beans.

"Brazilian beans have popular characteristics, and are known for their body and sweetness," says Christiano Borges, boss of the country's largest grower, Ipanema Coffees.

"Therefore, many coffee blends in the world use our coffee as a base."

Brazil is far and away the world's largest grower of coffee beans. It accounts for more than one third of all global supplies, or 37% in 2020, to be exact. In second place is Vietnam with 17% of supplies.

Some 70% of Brazil's coffee plants are the highly prized arabica species, used in fresh coffee. The remaining 30% are robusta, which is used primarily for instant coffee.

Brazil's largest coffee plantations stretch for miles on end

The problem for Brazil, and world coffee supplies in general, is that last year the country's annual crop plummeted by almost a quarter due to a drought across its main coffee-growing region, which centres on the south-eastern states of Minas Gerais, São Paulo and Paraná. The knock-on effect has been a global reduction in coffee bean supplies, and a subsequent doubling in wholesale prices since this time last year.

To try to alleviate any future falls in production, Brazil's largest coffee producers are increasingly turning towards technology to help them successfully grow and process the best possible crop, both in terms of size and quality.

One such firm, Okuyama, says it is now investing at least 10% of its revenues in technology. Based in Minas Gerais, it has coffee plantations covering 1,100 hectares (2,718 acres).

Its staff use a computer app called Cropwise Protector, which is made by Swiss-Chinese agricultural tech firm Syngenta.

Linked to ground sensors and satellite imagery, the tool gives the farm workers a visual analysis of the farm, or plantation, on a tablet device or laptop.

They can then quickly apply such things as drip-irrigation, or pest-control, to a very specific area that might need it, rather than a whole field or the entire farm.   ... ' 
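
Syngenta has not published Cropwise Protector's internals, but the zone-targeting idea the article describes can be sketched in a few lines: grid the plantation, compute a vegetation index per cell from satellite bands, and schedule treatment only for stressed cells. The threshold and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
nir = rng.uniform(0.3, 0.8, size=(8, 8))   # near-infrared reflectance per cell
red = rng.uniform(0.05, 0.4, size=(8, 8))  # red-band reflectance per cell

ndvi = (nir - red) / (nir + red)           # standard NDVI definition
stressed = np.argwhere(ndvi < 0.35)        # assumed stress threshold

for row, col in stressed:
    print(f"cell ({row},{col}): NDVI={ndvi[row, col]:.2f} -> schedule drip irrigation")
print(f"{len(stressed)}/{ndvi.size} cells treated instead of the whole field")
```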

Collaborative Metaverse, Useful thoughts?

In O'Reilly Radar
The enterprise metaverse is about better collaboration, not virtual meetings.

By Mike Loukides
October 12, 2022

We want to congratulate Dylan Field on his startup Figma, which Adobe recently purchased for $20B. Dylan started his career with O’Reilly Media when he was in high school—not that long ago. With Figma, he’s made the big time.

It’s worth thinking about why Figma has been so successful, and why Adobe was willing to pay so much for it. Since the beginning, Figma has been about collaboration. Yes, it was a great design tool. Yes, it ran completely in the browser, no downloads and installation required. But more than anything else, Figma was a tool for collaboration. That was a goal from the beginning. Collaboration wasn’t an afterthought; it was baked in.

My thesis about the Metaverse is that it is, above all, about enabling collaboration. VR goggles and AR glasses? Fine, but the Metaverse will fail if it only works for those who want to wear a headset. Crypto? I strongly object to the idea that everything needs to be owned—and that every transaction needs to pay a tax to anonymous middlemen (whether they’re called miners or stakers). Finally, I think that Facebook/Meta, Microsoft, and others who say that the Metaverse is about “better meetings” are just plain headed in the wrong direction. I can tell you—anyone in this industry can tell you—that we don’t need better meetings, we need fewer meetings.

But we still need people working together, particularly as more and more of us are working remotely. So the real question facing us is: how do we minimize meetings, while enabling people to work together? Meetings are, after all, a tool for coordinating people, for transferring information in groups, for circulating ideas outside of one-to-one conversations. They’re a tool for collaboration. That’s precisely what tools like Figma are for: enabling designers to work together on a project conveniently, without conflicting with each other. They’re about demonstrating designs to managers and other stakeholders. They’re about brainstorming new ideas (with Figjam) with your team members. And they’re about doing all this without requiring people to get together in a conference room, in Zoom, or in any of the other conferencing services. The problem with those tools isn’t really the flat screen, the “Brady Bunch” design, or the absence of avatars; the problem is that you still have to interrupt some number of people and get them in the same (virtual) place at the same time, breaking whatever flow that they were in.

We don’t need better meetings; we need better tools for collaboration so that we don’t need as many meetings. That’s what the Metaverse means for businesses. Tools like GitHub and Google’s Colab are really about collaboration, as are Google Docs and Microsoft Office 365. The Metaverse is strongly associated with gaming, and if you look at games like Overwatch and Fortnite, you’ll see that those games are really about collaboration between online players. That’s what makes these games fun. I’ve got nothing against VR goggles, but what makes the experience special is the interaction with other players in real time. You don’t need goggles for that. ...' 

Tuesday, November 22, 2022

A Hacker’s Mind: First Review

I plan to read this; the topic is very important for anyone in the field. I continue to follow. 

First Review of A Hacker’s Mind

Kirkus reviews   A Hacker’s Mind:

A cybersecurity expert examines how the powerful game whatever system is put before them, leaving it to others to cover the cost.

Schneier, a professor at Harvard Kennedy School and author of such books as Data and Goliath and Click Here To Kill Everybody, regularly challenges his students to write down the first 100 digits of pi, a nearly impossible task—but not if they cheat, concerning which he admonishes, “Don’t get caught.” Not getting caught is the aim of the hackers who exploit the vulnerabilities of systems of all kinds. Consider right-wing venture capitalist Peter Thiel, who located a hack in the tax code: “Because he was one of the founders of PayPal, he was able to use a $2,000 investment to buy 1.7 million shares of the company at $0.001 per share, turning it into $5 billion—all forever tax free.” It was perfectly legal—and even if it weren’t, the wealthy usually go unpunished. The author, a fluid writer and tech communicator, reveals how the tax code lends itself to hacking, as when tech companies like Apple and Google avoid paying billions of dollars by transferring profits out of the U.S. to corporate-friendly nations such as Ireland, then offshoring the “disappeared” dollars to Bermuda, the Caymans, and other havens. Every system contains trap doors that can be breached to advantage. For example, Schneier cites “the Pudding Guy,” who hacked an airline miles program by buying low-cost pudding cups in a promotion that, for $3,150, netted him 1.2 million miles and “lifetime Gold frequent flier status.” Since it was all within the letter if not the spirit of the offer, “the company paid up.” The companies often do, because they’re gaming systems themselves. “Any rule can be hacked,” notes the author, be it a religious dietary restriction or a legislative procedure. With technology, “we can hack more, faster, better,” requiring diligent monitoring and a demand that everyone play by rules that have been hardened against tampering.

An eye-opening, maddening book that offers hope for leveling a badly tilted playing field.

I got a starred review. Libraries make decisions on what to buy based on starred reviews. Publications make decisions about what to review based on starred reviews. This is a big deal.

Book’s webpage    https://www.schneier.com/books/a-hackers-mind/ 

Edible Drones

A narrow and novel application here.

ACM NEWS

Stranded Without Food? Edible Drone Has Snackable Wings  By CNET, November 17, 2022

The wings of this drone are nutritious and, depending on what you think about rice cakes, delicious.

The drone's design uses some familiar-looking airplane-like components; the big difference is that its fixed wing is made from rice cakes and gelatin.

It's a nightmare scenario. You're on an ambitious mountain hike when you get lost, injured or stranded. The good news is help is finally on the way, but the bad news is it's going to take time to reach you and you're out of food. That's when a buzzing drone comes flying in for a landing. Not only do you get the snacks, medicine or water it's carrying, you can eat the wings to tide you over until the rescue team arrives.

This scenario could become real. A team with the Swiss Federal Institute of Technology, Lausanne (EPFL) has developed a prototype edible drone. The munchable machine is part of a broader project called RoboFood. RoboFood is about investigating edible robots for humans and animals as well as foods that behave like robots.

The team published its work online with the title "Towards edible drones for rescue missions: design and flight of nutritional wings." The study is tackling the problem of getting commercial drones to carry enough of a payload to help people in emergency situations.

Full Article

And More Skin, Now for Health Monitoring

Argonne at work. 

Skin-Like Electronics Could Monitor Health Continuously

Argonne National Laboratory

Joseph E. Harmon, November 16, 2022

Scientists at the U.S. Department of Energy's Argonne National Laboratory, the University of Chicago, China's Tongji University, and the University of Southern California are developing flexible, wearable electronics that can monitor the wearer's health. The researchers created a skin-like neuromorphic chip from a plastic semiconductor film integrated with stretchable gold nanowire electrodes. In one experiment, the researchers assembled and trained an artificial intelligence device to differentiate healthy electrocardiogram signals from signals indicating health problems, which it did with more than 95% accuracy. The researchers also analyzed the plastic semiconductor under X-rays, in order to better understand its structure.  ... ' 
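
As a software stand-in for the classification task described (the real device computes in a neuromorphic plastic semiconductor, not in Python), a toy version with synthetic ECG-like traces and scikit-learn might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth_ecg(abnormal, n=200):
    """Crude synthetic ECG trace: spiky periodic peaks, with an assumed
    faster, noisier rhythm standing in for the 'unhealthy' class."""
    t = np.linspace(0, 2, n)
    rate = 1.4 if abnormal else 1.0
    sig = np.sin(2 * np.pi * rate * t) ** 15
    return sig + rng.normal(0, 0.15 if abnormal else 0.05, n)

X = np.array([synth_ecg(a) for a in [False] * 300 + [True] * 300])
y = np.array([0] * 300 + [1] * 300)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(Xtr, ytr)
print(f"held-out accuracy: {clf.score(Xte, yte):.2f}")
```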

A Classic Difficult Problem for Robotic AI

If only. We saw an early demo of this. 

Robots that Can Feel Cloth Layers May One Day Help with Laundry

By Stacey Federoff, Carnegie Mellon

New research from Carnegie Mellon University's Robotics Institute (RI) can help robots feel layers of cloth rather than relying on computer vision tools to only see it. The work could allow robots to assist people with household tasks like folding laundry.

Humans use their senses of sight and touch to grab a glass or pick up a piece of cloth. It is so routine that little thought goes into it. For robots, however, these tasks are extremely difficult. The amount of data gathered through touch is hard to quantify and the sense has been hard to simulate in robotics — until recently. 

"Humans look at something, we reach for it, then we use touch to make sure that we're in the right position to grab it," said David Held, an assistant professor in the School of Computer Science and head of the Robots Perceiving and Doing (R-PAD) Lab. "A lot of the tactile sensing humans do is natural to us. We don't think that much about it, so we don't realize how valuable it is."

For example, to fold laundry, robots need a sensor to mimic the way a human's fingers can feel the top layer of a towel or shirt and grasp the layers beneath it. Researchers could teach a robot to feel the top layer of cloth and grasp it, but without the robot sensing the other layers of cloth, the robot would only ever grab the top layer and never successfully fold the cloth.

"How do we fix this?" Held asked. "Well, maybe what we need is tactile sensing."

ReSkin, developed by researchers at Carnegie Mellon and Meta AI, was the ideal solution. The open-source touch-sensing "skin" is made of a thin, elastic polymer embedded with magnetic particles to measure three-axis tactile signals. In a recent paper, researchers used ReSkin to help the robot feel layers of cloth rather than relying on its vision sensors to see them.

"By reading the changes in the magnetic fields from depressions or movement of the skin, we can achieve tactile sensing," said Thomas Weng, a Ph.D. student in the R-PAD Lab, who worked on the project with RI postdoctoral fellow Daniel Seita and graduate student Sashank Tirumala. "We can use this tactile sensing to determine how many layers of cloth we've picked up by pinching with the sensor."

Other research has used tactile sensing to grab rigid objects, but cloth is deformable, meaning it changes when touched — making the task even more difficult. Adjusting the robot's grasp on the cloth changes both its pose and the sensor readings.  .... '
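
The inference step described above, going from a three-axis magnetic reading to a layer count, is a small classification problem at heart. A toy sketch with synthetic readings (the CMU work trains on real ReSkin traces, not this):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

def pinch_reading(layers):
    # Assumption for illustration: skin deflection, and hence the magnetic
    # field change, grows roughly with the number of layers pinched.
    return np.array([0.0, 0.0, 0.2 * layers]) + rng.normal(0, 0.05, size=3)

X = np.array([pinch_reading(k) for k in range(3) for _ in range(200)])
y = np.repeat([0, 1, 2], 200)            # 0, 1 or 2 cloth layers grasped

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
probe = pinch_reading(2)
print(f"reading {np.round(probe, 3)} -> predicted layers: {clf.predict([probe])[0]}")
```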

Will AI Discover new Laws of Physics?

 A possibility.

Will AI Discover new Laws of Physics?  in NewScientist

Algorithms can pore over astrophysical data to identify underlying equations. Now, physicists are trying to figure out how to imbue these “machine theorists” with the ability to find deeper laws of nature

Physics, 21 November 2022, by Thomas Lewton

SPEAKING at the University of Cambridge in 1980, Stephen Hawking considered the possibility of a theory of everything that would unite general relativity and quantum mechanics – our two leading descriptions of reality – into one neat, all-encompassing equation. We would need some help, he reckoned, from computers. Then he made a provocative prediction about these machines’ growing abilities. “The end might not be in sight for theoretical physics,” said Hawking. “But it might be in sight for theoretical physicists.”

Artificial intelligence has achieved much since then, yet physicists have been slow to use it to search for new and deeper laws of nature. It isn’t that they fear for their jobs. Indeed, Hawking may have had his tongue firmly in his cheek. Rather, it is that the deep-learning algorithms behind AIs spit out answers that amount to a “what” rather than a “why”, which makes them about as useful for a theorist as saying the answer to the question of life, the universe and everything is 42.

Except that now we have found a way to make deep-learning algorithms speak physicists’ language. We can leverage AI’s ability to scour vast data sets in search of hidden patterns and extract meaningful results – namely, equations. “We’re moving into the discovery phase,” says Steve Brunton at the University of Washington in Seattle.  ... ' 
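
One concrete, already-working version of this "machine theorist" idea is the SINDy method (sparse identification of nonlinear dynamics) that Brunton co-developed. A minimal example, assuming the open-source pysindy package is installed: generate trajectory data from a known system, then let sparse regression recover the governing equations from the data alone.

```python
import numpy as np
import pysindy as ps

# Trajectory of a simple harmonic oscillator: x0 = cos(2t), x1 = x0'.
# True dynamics: x0' = x1, x1' = -4 x0.
dt = 0.01
t = np.arange(0, 10, dt)
x = np.stack([np.cos(2 * t), -2 * np.sin(2 * t)], axis=1)

model = ps.SINDy(feature_names=["x0", "x1"])
model.fit(x, t=dt)          # sparse regression over a polynomial library
model.print()               # expect: x0' = 1.0 x1, x1' = -4.0 x0
```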

Simulation Based Dynamic Virtual Prototyping

 Simulation aided by selective AI 

Simulation-based and highly dynamic: virtual prototyping

Press release / November 09, 2022

The things that amaze us in everyday life, such as when our cars take control of the parking process, are often the result of countless series of expensive and lengthy trials. At electronica, Booth B4/258, Fraunhofer researchers will demonstrate how virtual prototyping can be used for simulations to detect errors and problems in complex electronic control systems at an early stage as well as to shorten development times and significantly reduce costs.

The team at Fraunhofer IIS/EAS is building their own vehicle-in-the-loop laboratory. Manufacturers can test and certify their vehicles in a virtual environment. © Fraunhofer IIS/EAS

From navigation devices that report unexpected traffic jams before they can be seen, to robots moving in a dynamic environment — intelligent electronic components that identify, evaluate and autonomously adapt to changes in the environment and their own internal structure have long been part of our networked world and are rapidly becoming more widespread. However, the lengthy process to bring products like these to market is anything but simple.

In microelectronics, development cycles are significantly more complex than in conventional mechanical engineering. For application-specific integrated circuits (ASICs) and other embedded systems, it is not uncommon to see lead times of six months or more. Any delays can lead to missed market launch opportunities.

Dynamic tests in simulated environments

To shorten these processes, Dr. Christoph Sohrmann and his team at the Fraunhofer Institute for Integrated Circuits IIS, Division Engineering of Adaptive Systems EAS, are providing support for their customers through virtual prototyping: “We execute parts of the product development chain using simulations, which allows us to break up the development flow so that multiple teams can start in parallel,” explains the Virtual System Development group manager.

From purely virtual models to test stands for testing hardware and software in the context of the target product, the tests, conducted by scientists, allow for an agile development process. “Using virtual models allows us to start intensive software tests long before the hardware is available. Our customers can test their system piece by piece in the loop: This increases coverage significantly and produces more robust systems. The virtual development supplements the conventional development cycle. The software has far fewer errors when it reaches the actual prototype,” explains the EAS expert Sohrmann. This is a crucial point, because the costs of fixing an error increase exponentially from the concept to the mass production phase. If a product has to be recalled, that could spell the end of a business. This means that errors need to be identified and eliminated as early as possible.

Another good reason for virtual-based development methods is the number of tests that are necessary to ensure that a system can run without errors. The existence of countless borderline cases and exceptions poses a particular challenge. Sohrmann explains this using cars as an example: “Many of the latest models stay in lane automatically. When the sun reflects off the crash barrier, it can create an additional line on the road surface. The car can suddenly start to navigate using this third bright line.” In the simulation, experts can have a million cars driving in parallel and run through many more driving situations in the same development time. This leads to significant savings in time and cost, and in the automotive sector it replaces many millions of miles that would otherwise have to be driven with a real model.
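
The "million cars in parallel" argument is, at bottom, Monte Carlo testing: rare edge cases that would take millions of real road miles to encounter show up reliably in a large simulated batch. A toy illustration (not Fraunhofer's tooling; probabilities and tolerances are assumed):

```python
import random

random.seed(7)

def lane_keeper(lines):
    """Naive controller under test: estimates lane center from the two
    brightest line detections (each a (brightness, position) pair)."""
    brightest_two = sorted(lines, reverse=True)[:2]
    return sum(pos for _, pos in brightest_two) / 2

failures, runs = 0, 1_000_000
for _ in range(runs):
    lane = [(0.8, -1.0), (0.8, 1.0)]      # the two true lane markings
    if random.random() < 1e-4:            # assumed chance of sun glare
        lane.append((1.0, 2.5))           # phantom "third bright line"
    if abs(lane_keeper(lane)) > 0.5:      # assumed lateral tolerance
        failures += 1

print(f"{failures} failures in {runs:,} simulated runs")
```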

Validation of AI-based systems

Testing procedures for intelligent, self-learning systems are also a focus for the professionals at EAS. For example, what do TÜV testing processes for self-driving vehicles need to look like? A lot can change in three years — in a city and in a vehicle. AI algorithms also learn and will change accordingly during that time. TÜV nonetheless needs to be able to check that all the systems are functioning correctly. A general inspection cannot take up three weeks, so virtual assistance at the test stand is unavoidable. “That is a major challenge. These testing procedures need to be available in under ten years,” Sohrmann points out. Accordingly, the researchers intend to continue their collaboration with the bodies responsible, not only in the form of consulting but also in technical support. .... '