
Sunday, February 28, 2021

MFA as Security

Good overview piece about 2FA and beyond.

Is MFA Needed to Improve Security?

By Keith Kirkpatrick   Commissioned by CACM Staff    February 25, 2021

Many corporate and consumer-based systems and applications deploy short message service (SMS)-based two-factor authentication technology to help protect users from being hacked. This method of two-factor authentication is fairly simple; a user will log onto an app or system using their username and password, and then a unique security code is generated by an algorithm within the app, which is then sent to the user's phone via a text message. If that code is correctly entered into the system when prompted, it theoretically will authenticate the person trying to log into the system.

However, SMS-based authentication is rife with security holes. Alex Weinert, Microsoft's director of identity security, published a blog post in early November highlighting the immense risk of continuing to use SMS-based codes to authenticate users, given the ability of hackers to either intercept the codes while they're being sent (basic SMS messages are unencrypted), or to simply carry out a scheme known as subscriber identity module (SIM)-card swapping, or SIMjacking.

SIMjacking is a technique through which a criminal will call a user's wireless company and use information gathered about the user (including personal data garnered via phishing schemes, guessing answers to challenge questions, and exploiting the empathetic nature of humans) to have a phone's SIM card transferred to their account, giving them access to the user's SMS messages, including authentication texts.

"SIM-based multi-factor authentication is probably one of the most popular MFA methods on the Internet, if not the most popular, meaning that almost every company you deal with uses these SMS-based MFA solutions, and you really don't have a choice," says Roger Grimes, author of Hacking Multifactor Authentication, and a Data-Driven Defense Evangelist at KnowBe4, a security awareness education company. "Not only is [SMS] a poor authenticator, it is fairly easy to hack, but many times you can't opt out of it."

That's why security professionals suggest the use of multi-factor authentication applications, which are designed to reside on each physical device and do not require the use of SMS-based authentication codes. Authentication applications, which have been released by both large companies (Google Authenticator, Microsoft Authenticator) and independent software vendors (Twilio Authy, LogMeIn LastPass Authenticator, and Duo Mobile), generally only require a data connection during the initial set-up process, which involves installing the application on a smartphone, then configuring it to work with each account to be protected. Each account provides a secret key that is shared over a secure data channel to the authenticator app, and is used for all future logins.

To log into such a site, the user provides credentials (a username and password); the site then asks the user to enter a one-time password, which an algorithm generates from the current time on the device and the shared secret key. The user runs the Authenticator app, which independently computes and displays the same password; the user types it into the site, authenticating their identity.  ... " 
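The time-based one-time password scheme described above is compact enough to write down. Below is a minimal sketch in Python of the standard TOTP algorithm (RFC 6238) that authenticator apps implement; the base32 secret is a made-up example, and this is an illustration of the general technique, not any particular vendor's code.

```python
# Minimal TOTP sketch (RFC 6238): derive a 6-digit one-time code from a shared
# base32 secret and the current 30-second time step. Illustrative only.
import base64, hashlib, hmac, struct, time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval          # current 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both the site and the authenticator app compute the same code independently
# from the shared secret and the current time; the secret below is hypothetical.
print(totp("JBSWY3DPEHPK3PXP"))
```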

Universities Capturing More Value from Research

Sounds akin to our own work on gathering value from research, including our recent work with the DOD to understand what research it has paid for and what can be monetized or placed into public use.  Good discussion. 

Should Universities Try to Capture More Value from Their Research?  Knowledge@Wharton

Supports K@W's  Innovation Content

A pair of newly published research papers co-authored by Wharton management professor David H. Hsu benchmark and explore commercialization drivers of academic science. The papers find that university research has produced pathbreaking innovations across many disciplines, many of which have been commercialized successfully. Yet, on average, universities capture 16% of the value they help create through licensing revenues or equity stakes in the startups their research spawns. Furthermore, some researchers and universities are much better able to commercialize their discoveries compared to others, even holding constant the discovery itself.

The first research paper, which Hsu wrote with Po-Hsuan Hsu, Tong Zhou and Arvids A. Ziedonis, is titled “Benchmarking U.S. University Patent Value and Commercialization Efforts: A New Approach” and was published this month in Research Policy. The second paper, “Revisiting the Entrepreneurial Commercialization of Academic Science: Evidence from ‘Twin’ Discoveries,” co-authored with Matt Marx, is forthcoming in Management Science.

The results suggest that universities with policies and resources devoted to commercialization efforts, aided by academic staff with commercialization experience and which are more interdisciplinary, are much more successful at translating research for commercial outcomes. Consequently, Hsu and his co-authors make a case for universities to take a closer look at the value they are extracting by commercializing their patents and intellectual property.... ' 

BERT for Unsupervised Training

 A nice, fairly compact, somewhat technical view of how to use BERT for unsupervised training problems.    And some unexpected uses.  True, everyone should know how to use this method.  Click through for full detail. 

For unsupervised task solving ... 

BERT is a prize addition to the practitioner’s toolbox   By Ajit Rajasekharan  in TowardsDataScience

Figure 1. A few reasons why BERT is a valuable addition to a practitioner's toolbox apart from its well-known use of fine-tuning for downstream tasks. (1) BERT's learned vocabulary of vectors (in, say, 768-dimensional space) serves as the targets that masked output vectors predict, learning from prediction errors during training. After training, these moving targets settle into landmarks that can be clustered and annotated (a one-time step) and used for classifying model output vectors in a variety of tasks — NER, relation extraction, etc. (2) A model pre-trained enough to achieve a low next-sentence-prediction loss (in addition to the masked word prediction loss) yields quality CLS vectors representing any input term/phrase/sentence.  ... ' 
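As a concrete illustration of the CLS and per-token vectors the caption refers to, here is a minimal sketch using the Hugging Face transformers library (an assumed toolkit; the article does not prescribe one). The clustering and annotation of the learned vocabulary vectors described in the caption would be a separate step.

```python
# Extract a sentence-level CLS vector and per-token vectors from a pretrained
# BERT model; these are the raw ingredients for the clustering/tagging uses
# described above. Sketch only.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentence = "BERT is a prize addition to the practitioner's toolbox."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

cls_vector = outputs.last_hidden_state[:, 0, :]     # sentence/phrase representation
token_vectors = outputs.last_hidden_state[:, 1:-1]  # per-token vectors for tagging tasks

print(cls_vector.shape)  # torch.Size([1, 768])
```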

RPA Examined

See also aspects of business process modeling for extensions to the idea, and likely combinations with broader methods of ML and AI.

Some thoughts on RPA: By Maria Macaraig  October 8, 2019    in Datafloq

Robotic process automation or RPA is the technology that makes it possible to program software and empower machines to mimic human actions, replicate human motion, and perform human functions automatically and repetitively. 

RPA is powering waves of transformative change in manufacturing industries, defense, aerospace, business, and healthcare. The intelligent software and its visible application, the robot, don't rest, are error-free, and are more productive and profitable than a human.   

Robotic process automation is not only changing the way we work; the technology is quietly revolutionizing the future. It should be useful to review the developments that set the stage for RPA's dominance so that we gain a better appreciation of the role and significance of RPA.

Kansas software experts at Tricension explain the history and role of RPA in enabling companies to enhance quality and improve productivity by streamlining business processes. 

The 1950s and 60s: Emergence of Artificial Intelligence (AI), Machine Learning (ML), and Natural Language Processing (NLP)

Foundational research conducted by eminent American computer scientists led by John McCarthy and Arthur Samuel paved the way for imbuing computers with the capability to think and respond like humans. 

AI, ML, computer engineering, and linguistic sciences were combining resources to deliver a single toolkit that could boost robotic process automation. 

Artificial Intelligence was focusing on creating smart machines that could mimic human intelligence (using logic, and reasoning) to solve complex issues that were beyond the range of the human brain. 

Machine Learning, developing as an offshoot of AI, was coding software algorithms that could gather and analyze big data. We designed algorithms that could “think” and “learn” on their own without being expressly programmed to do so.

Advancements in Natural Language Processing enabled us to bridge the gap between computers and natural human language. Artificial Intelligence technology enabled computers to analyze large volumes of natural language and accurately comprehend the meaning and intent behind human-oriented spoken and written commands.   ... " 

Saturday, February 27, 2021

Decline of Computers as a General Purpose Technology

Contributed article, excerpt below in the March 2021 CACM.

My first reaction was No!  But some very good points made ... '

The Decline of Computers as a General Purpose Technology   By Neil C. Thompson, Svenja Spanuth

Communications of the ACM, March 2021, Vol. 64 No. 3, Pages 64-72  10.1145/3430936

Perhaps in no other technology has there been so many decades of large year-over-year improvements as in computing. It is estimated that a third of all productivity increases in the U.S. since 1974 have come from information technology, making it one of the largest contributors to national prosperity.

Key Insights  ...  

- Moore's Law was driven by technical achievements and a "general purpose technology" (GPT) economic cycle where market growth and investments in technical progress reinforced each other.  These created strong economic incentives for users to standardize to fast-improving CPUs, rather than designing their own specialized processors.

- Today, the GPT cycle is unwinding, resulting in less market growth and slower technical progress. 

- As CPU improvement slows, economic incentives will push users toward specialized processors, which threatens to fragment computing. In such a computing landscape, some users will be in the 'fast lane,' benefiting from customized hardware, and others will be left in the 'slow lane,' stuck on CPUs whose progress fades. ... 

The rise of computers is due to technical successes, but also to the economic forces that financed them. Bresnahan and Trajtenberg3 coined the term general purpose technology (GPT) for products, like computers, that have broad technical applicability and where product improvement and market growth could fuel each other for many decades. But, they also predicted that GPTs could run into challenges at the end of their life cycle: as progress slows, other technologies can displace the GPT in particular niches and undermine this economically reinforcing cycle. We are observing such a transition today as improvements in central processing units (CPUs) slow, and so applications move to specialized processors, for example, graphics processing units (GPUs), which can do fewer things than traditional universal processors, but perform those functions better. Many high-profile applications are already following this trend, including deep learning (a form of machine learning) and Bitcoin mining. ... 

With this background, we can now be more precise about our thesis: "The Decline of Computers as a General Purpose Technology." We do not mean that computers, taken together, will lose technical abilities and thus 'forget' how to do some calculations. We do mean that the economic cycle that has led to the usage of a common computing platform, underpinned by rapidly improving universal processors, is giving way to a fragmentary cycle, where economics push users toward divergent computing platforms driven by special purpose processors.

This fragmentation means that parts of computing will progress at different rates. This will be fine for applications that move in the 'fast lane,' where improvements continue to be rapid, but bad for applications that no longer get to benefit from field-leaders pushing computing forward, and are thus consigned to a 'slow lane' of computing improvements. This transition may also slow the overall pace of computer improvement, jeopardizing this important source of economic prosperity.    ...."

Data and its Useful Nature as an Asset

Recall this echoes some of our own examination of 'data as an asset'.  Thoughtful ideas here, but I think it is still useful to consider data as an asset that can be combined with methods and other data to drive further value.  That does not have to include exclusive 'ownership' as an element.

From Schneier:

Excellent Brookings paper: “Why data ownership is the wrong approach to protecting privacy.” 

From the introduction:

Treating data like it is property fails to recognize either the value that varieties of personal information serve or the abiding interest that individuals have in their personal information even if they choose to “sell” it. Data is not a commodity. It is information. Any system of information rights­ — whether patents, copyrights, and other intellectual property, or privacy rights — ­presents some tension with strong interest in the free flow of information that is reflected by the First Amendment. Our personal information is in demand precisely because it has value to others and to society across a myriad of uses  ... '

Talking Blockchain

Wladawsky talks Blockchain. Good thoughtful piece with lots of backup. 

How Can Blockchain Become a Truly Transformative Technology?

I first became interested in blockchain technologies when in 2016 the World Economic Forum (WEF) named The Blockchain in its annual list of Top Ten Emerging Technologies citing its potential to fundamentally change the way markets and governments work. The WEF noted that “Like the Internet, the blockchain is an open, global infrastructure upon which other technologies and applications can be built. And like the Internet, it allows people to bypass traditional intermediaries in their dealings with each other, thereby lowering or even eliminating transaction costs.”

The blockchain first came to light in 2008 as the architecture underpinning bitcoin, the best known and most widely held digital currency. The blockchain's original vision was limited to enabling bitcoin users to transact directly with each other with no need for a bank or government agency to certify the validity of the transactions. But, like the Internet, electricity and other transformative technologies, blockchain has transcended its original objectives. Over the years, blockchains, and the more encompassing distributed ledger technologies (DLT), have developed a following of their own as distributed database architectures with the ability to handle trust-less transactions where no parties need to know nor trust each other for transactions to complete.

Could blockchain/DLT become truly transformative technologies? And if so, what will it take? ... ' 
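For readers new to the underlying mechanics, here is a toy sketch of the hash-chaining idea that gives a blockchain ledger its tamper evidence. It is an illustration of the concept only, not how any production blockchain or DLT is built, and the transactions are invented.

```python
# Each block commits to the previous block's hash, so tampering with any earlier
# record invalidates every block after it. Toy example only.
import hashlib, json, time

def make_block(transactions, prev_hash):
    block = {"time": time.time(), "transactions": transactions, "prev_hash": prev_hash}
    # Hash the block contents (before the hash field itself is added).
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block(["genesis"], prev_hash="0" * 64)
block1 = make_block(["alice pays bob 5"], prev_hash=genesis["hash"])
block2 = make_block(["bob pays carol 2"], prev_hash=block1["hash"])

# Any change to block1's transactions changes its hash and breaks block2's link.
print(block2["prev_hash"] == block1["hash"])  # True
```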

Scramble for Post Quantum

Moving more rapidly than expected. 

The Scramble for Post-Quantum Cryptography

By Samuel Greengard,  Commissioned by CACM Staff

Researchers are working to counter the threat to current communications posed by the nascent quantum computing arena, which could undermine almost all of the encryption protocols used today.

History has demonstrated that where there are people, there are secrets. From elaborately coded messages on paper to today's sophisticated cryptographic algorithms, a desire to maintain privacy has persisted. Of course, as technology has advanced, the ability to cipher messages but also crack the codes has grown.

"Today's encryption methods are excellent, but we are reaching an inflection point," says Chris Peikert, an associate professor in the Department of Science and Engineering at the University of Michigan Ann Arbor. "The introduction of quantum computing changes the equation completely. In principle, these devices could break any reasonably-sized public key."

Such an event would wreak havoc. "It would affect nearly everything we do with computers," says Dustin Moody, a mathematician whose focus at the U.S. National Institute of Standards and Technology (NIST) includes computer security. Within this scenario, he says, computing subsystems, virtual private networks (VPNs), and digital signatures would no longer be secure. As a result, personal data, corporate records, intellectual property, and online transactions would all be at risk.

Consequently, cryptographers are developing new encryption standards that would be resistant to the brute force power of quantum computing. At the center of this effort is an initiative at NIST to identify both lattice-based and code-based algorithms that could protect classical computing systems but also introduce new and more advanced capabilities.  ... ' 

Friday, February 26, 2021

Ring Door Bell Goes Radar

Like some of the directions the doorbell tech is going; it's getting to be quite common. 

Ring Video Doorbell Pro 2 uses radar for bird’s-eye view of front door activity  By Patrick Hearn

 Ring’s latest addition to its lineup of smart home security devices demonstrates exactly where the company is headed. The Ring Video Doorbell Pro 2 is a new, premium video doorbell with a host of next-generation technology to help keep your home safer than ever. The camera features 3D motion detection, 1,536p video, a Bird’s Eye View feature, and customizable privacy.

Through the use of a radar sensor, the 3D motion detection technology provides more accurate and precise identification of when a motion event begins. The sensors measure an object’s distance from the camera, which makes it easier to exclude certain high-traffic areas. For example, if your doorbell faces a sidewalk, you can set it so that only motion closer to your home triggers an alert.

This feature works in conjunction with the Bird’s Eye View feature. By measuring the exact distance a person is from the camera, the Ring Video Doorbell Pro 2 gives users an aerial view of the exact path someone took on their approach. If you’re watching an event happen through Live View or even viewing it in your Event History, you will see a picture-in-picture display that shows the movement path in addition to the actual event....  ' 

You Need a Challenge Network

 Good thought, agreed, always worked best when challenged. 

Why You Need a ‘Challenge Network’  in K@W

In the following excerpt from his new book, Think Again, Wharton management professor Adam Grant explains why success often comes from surrounding ourselves with “disagreeable” people – skeptics who can point out blind spots, question assumptions and help us overcome our weaknesses.

In 2000 Pixar was on fire. Their teams had used computers to rethink animation in their first blockbuster, Toy Story, and they were fresh off of two more smash hits. Yet the company's founders weren't content to rest on their laurels. They recruited an outside director named Brad Bird to shake things up. Brad had just released his debut film, which was well-reviewed but flopped at the box office, so he was itching to do something big and bold. When he pitched his vision, the technical leadership at Pixar said it was impossible: They would need a decade and $500 million to make it.

Brad wasn’t ready to give up. He sought out the biggest misfits at Pixar for his project — people who were disagreeable, disgruntled, and dissatisfied. Some called them black sheep. Others called them pirates. When Brad rounded them up, he warned them that no one believed they could pull off the project. Just four years later, his team didn’t just succeed in releasing Pixar’s most complex film ever; they actually managed to lower the cost of production per minute. The Incredibles went on to gross upwards of $631 million worldwide and won the Oscar for best animated feature.  ... "

Real-Time Marker-Less Motion Capture for Animals

Real time feedback for animal movement and posture. 

Real-Time Studies of Animal Motion by Neural Activity   By EPFL (Switzerland)

Nik Papageorgiou, December 10, 2020

An updated deep learning software toolbox developed at the Swiss Federal Institute of Technology, Lausanne (EPFL) facilitates real-time feedback studies on animal movement and posture. DeepLabCut-Live! (DLC-Live!) is designed to enable computers to track and predict these factors free of motion-capture markers, by controlling or stimulating the animals' neural activity. DLC-Live!'s tailored networks predict posture from video frames, combined with low latency so researchers can supply real-time feedback and assess behavioral functions of specific neural circuits; the system also interfaces with hardware used in posture studies to deliver feedback to animals. EPFL's Mackenzie Mathis said, "It's economical, it's scalable, and we hope it's a technical advance that allows even more questions to be asked about how the brain controls behavior."   .. '
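For a sense of what the real-time loop looks like in practice, here is a minimal sketch against the documented dlclive Python package. The model path and camera source are placeholders, and the feedback/stimulation step is only indicated as a comment; this is not the researchers' experiment code.

```python
# Real-time pose estimation with DeepLabCut-Live!: load an exported model,
# feed it camera frames, and act on the predicted keypoints. Sketch only;
# the model path and camera index are hypothetical.
import cv2
from dlclive import DLCLive, Processor

dlc_live = DLCLive("/path/to/exported_model", processor=Processor())

cap = cv2.VideoCapture(0)            # webcam or experiment camera
ok, frame = cap.read()

dlc_live.init_inference(frame)       # warm up the network on the first frame
while ok:
    pose = dlc_live.get_pose(frame)  # array of (x, y, confidence) per keypoint
    # ...trigger feedback or stimulation hardware based on `pose` here...
    ok, frame = cap.read()
```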

Thursday, February 25, 2021

Digitization of Creativity and the Maker Movement.

A considerable piece.  In particular note the connection to the 'Maker Movement', an approach we were introduced to as early as the 80s.  Of course 3D printing has become a big component.  Software tools to produce models.  And also elements of AI able to work with us to add design creativity?   

What Can the Maker Movement Teach Us About the Digitization of Creativity?   By Sascha Friesike, Frédéric Thiesse, George Kuk

Communications of the ACM, March 2021, Vol. 64 No. 3, Pages 42-45   10.1145/3447524

In recent years, the 'maker movement' has emerged as a social phenomenon driven by novel technological possibilities.1 With the help of inexpensive, yet highly versatile means of production (for example, CNC milling machines, 3D printers) and easy-to-use software tools, makers free themselves from their traditional role as passive consumers and evolve into innovators and producers. Although the act of physical production seems to be at the center of the movement, a large part of the creative work takes place in the online sphere. These digital activities and their outcomes provide a rich source of information that can be used to gain a more nuanced understanding of how the digitization affects the creative process itself.

Of all the production methods available to makers, 3D printing is probably the most versatile and requires only a limited understanding of the production process. Several 3D design software packages allow even lay people to turn their ideas into printable designs. This combination of flexibility and usability has led to an abundance of 3D object models over the past years, which are shared and jointly refined with the community on digital maker platforms. As part of a multi-year research project on the use of 3D printing by the maker community, we found that the use of these platforms in the creative process blurs the boundaries between the digital and the physical and ultimately changes the way ideas are expressed, curated, and eventually translated into physical reality. In particular, we saw how makers with entirely different backgrounds (for example, HW/SW developers, designers, business and social entrepreneurs) traverse across the startup world, software development, and open online communities, to combine concepts through a novel digitized creative process. ... ' 

AI Recodes Software

Another intriguing application. 

AI Recodes Legacy Software to Operate on Modern Platforms

IBM's AI-based tools let engineers explore ways to extract value from legacy enterprise software

Last year, IBM demonstrated how AI can perform the tedious job of software maintenance through the updating of legacy code. Now Big Blue has introduced AI-based methods for re-coding old applications so that they can operate on today’s computing platforms.

The latest IBM initiatives, dubbed Mono2Micro and Application Modernization Accelerator (AMA), give app architects new tools for updating legacy applications and extracting new value from them. These initiatives represent a step towards a day when AI could automatically translate a program written in COBOL into Java, according to Nick Fuller, director of hybrid cloud services at IBM Research.  ... '

Knowledge Sharing Across Silos

From the APQC Blog: 

Why Knowledge Sharing Across Siloes Is More Important in 2021

Team-based collaboration got a huge boost in 2020

However, we don't see the same upswing when it comes to open, boundary-spanning collaboration. Less than a quarter of participants rate communities of practice, enterprise social networks, or expertise location tools as highly critical to their work, and these approaches received only small bumps in the wake of the pandemic. In the transition to virtual work, people simply haven't turned to core KM tools as much as they might have.

Knowledge Management Adoption Still Lags

The emphasis on team- and project-based collaboration is not surprising. People’s work lives have been turned upside down, and their most immediate need—and instinct—has been to faithfully replicate what they had lost. And admittedly, daily interaction with close coworkers is essential to keeping the lights on and getting things done. 

But when employees collaborate only in pre-established closed groups, they aren’t realizing the full benefits of the tools they’ve embraced. Communities, enterprise social networks, and expertise location tools allow people to connect with likeminded colleagues regardless of team affiliation, surface hidden expertise, and seek out global perspectives. All of this is critical to the kind of innovation and creative problem solving required to respond to breakneck change. If people stay within the walled gardens of department chat, they’re leaving a lot on the table.  

The good news is that participating in a virtual community or enterprise social network uses many of the same skills that employees have honed in team-based sites. And with the mechanics of participation less of a hurdle for users, KM can focus on the incentives and cultural cues that position open channels as safe and rewarding places to engage. These aren't easy challenges to overcome, but we have a golden opportunity to capitalize on digital trends and take open knowledge sharing to the next level. ... ' 

AI Everywhere: Implications of?

Thoughtful, but I don't consider just algorithms to be AI.  AI is adaptable and evolving abilities to do things that currently humans do best, like reading or writing or learning or reacting.  

A.I. Here, There, Everywhere,   In The New York Times,  February 25, 2021

Interacting with artificial intelligence.

Artificial intelligence requires data to learn patterns and make decisions, but researchers are developing methods to use our data without actually seeing it, or to encrypt it in ways that currently can't be hacked.

I wake up in the middle of the night. It's cold.

"Hey, Google, what's the temperature in Zone 2," I say into the darkness. A disembodied voice responds: "The temperature in Zone 2 is 52 degrees." "Set the heat to 68," I say, and then I ask the gods of artificial intelligence to turn on the light.

Many of us already live with A.I., an array of unseen algorithms that control our Internet-connected devices, from smartphones to security cameras and cars that heat the seats before you've even stepped out of the house on a frigid morning.

But, while we've seen the A.I. sun, we have yet to see it truly shine.

Researchers liken the current state of the technology to cellphones of the 1990s: useful, but crude and cumbersome. They are working on distilling the largest, most powerful machine-learning models into lightweight software that can run on "the edge," meaning small devices such as kitchen appliances or wearables. Our lives will gradually be interwoven with brilliant threads of A.I.   ..  " 

Example of Question Answering Application: Jarvis

Question Answering Applications 

Developing a Question Answering Application with NVIDIA Jarvis   By James Sohn | February 25, 2021  Tags: AI/Deep Learning, BERT, cloud computing, featured

There is a high chance that you have asked your smart speaker a question like, “How tall is Mount Everest?” If you did, it probably said, “Mount Everest is 29,032 feet above sea level.” Have you ever wondered how it found an answer for you?

Question answering (QA) is loosely defined as a system consisting of information retrieval (IR) and natural language processing (NLP), which is concerned with answering questions posed by humans in a natural language. If you are not familiar with information retrieval, it is a technique to obtain relevant information to a query, from a pool of resources, webpages, or documents in the database, for example. The easiest way to understand the concept is the search engine that you use daily. 

You then need an NLP system to find an answer within the IR system that is relevant to the query. Although I just listed what you need for building a QA system, it is not a trivial task to build IR and NLP from scratch. Here’s how NVIDIA Jarvis makes it easy to develop a QA system.

Jarvis overview

NVIDIA Jarvis is a fully accelerated application framework for building multimodal conversational AI services that use an end-to-end deep learning pipeline. The Jarvis framework includes optimized services for speech, vision, and natural language understanding (NLU) tasks. In addition to providing several pretrained models for the entire pipeline of your conversational AI service, Jarvis is also architected for deployment at scale. In this post, I look closely into the QA function of Jarvis and how you can create your own QA application with it.  ... " 
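The extractive-answering step, finding an answer span inside retrieved text, can be illustrated without Jarvis itself. The sketch below uses a generic Hugging Face question-answering pipeline as a stand-in; it is not the Jarvis API, and the passage stands in for whatever the IR stage would have retrieved.

```python
# Generic extractive QA: given a question and a retrieved passage, return the
# answer span and a confidence score. Sketch only; model choice is illustrative.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = "Mount Everest is 29,032 feet above sea level."
result = qa(question="How tall is Mount Everest?", context=context)

print(result["answer"], result["score"])  # e.g. "29,032 feet" plus a confidence score
```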

Search and Rescue Drone uses Phones

Makes sense, and a way to coordinate the search for multiple kinds of search conditions and requirements.

Search-and-Rescue Drone Locates Victims by Homing in on Their Phones  By IEEE Spectrum,  February 24, 2021

The Search-And-Rescue DrOne (SARDO) platform was developed to enable a single drone to act as a moving cellular base station, do large sweeps over disaster areas, and locate survivors of disasters using signals from their phones.

The Search-And-Rescue DrOne (SARDO) platform developed by researchers at Germany's NEC Laboratories Europe uses off-the-shelf components, integrating aerial drones, artificial intelligence, and smartphones to find survivors of disasters using signals from their phones.  SARDO utilizes a drone as a mobile cellular base station that sweeps disaster areas and conducts time-of-flight measurements, while a machine learning (ML) algorithm surveys the area and calculates the location of victims.

A second ML algorithm helps locate survivors on the move by estimating each person's trajectory.

In field experiments, the drone could localize missing people to within a few tens of meters in roughly three minutes per victim.

From IEEE Spectrum
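As a rough illustration of how distance estimates from time-of-flight measurements taken at several drone positions can be turned into a location, here is a generic least-squares multilateration sketch. It is not SARDO's actual algorithm, and the coordinates and noise level are invented.

```python
# Recover a phone's 2D position from noisy distance measurements made at several
# drone positions, via least-squares multilateration. Illustrative geometry only.
import numpy as np
from scipy.optimize import least_squares

drone_positions = np.array([[0.0, 0.0], [80.0, 0.0], [40.0, 60.0], [0.0, 70.0]])
true_victim = np.array([35.0, 25.0])
distances = np.linalg.norm(drone_positions - true_victim, axis=1)
distances += np.random.normal(0, 2.0, size=distances.shape)  # measurement noise

def residuals(p):
    return np.linalg.norm(drone_positions - p, axis=1) - distances

estimate = least_squares(residuals, x0=np.array([10.0, 10.0])).x
print(estimate)  # close to (35, 25), within a few meters given the noise
```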

Wednesday, February 24, 2021

On Knowledge Graphs

Great piece on a favorite topic from CACM.  Only mildly technical.  It's all about usefully and efficiently representing knowledge.  Essential for anyone considering the future of storing and using knowledge.  We experimented with it in the enterprise, for both historical and operational purposes. Below a quick intro, more at the link.

Key Insights:

Data was traditionally considered a material object, tied to bits, with no semantics per se. Knowledge was traditionally conceived as the immaterial object, living only in people's minds and language. The destinies of data and knowledge became bound together, becoming almost inseparable, by the emergence of digital computing in the mid-20th century.

Knowledge Graphs can be considered the coming of age of the integration of knowledge and data at large scale with heterogeneous formats.

The next generation of researchers should become aware of these developments. Both successful and not, these ideas are the basis of current technology and contain fruitful ideas to inspire future research.

Knowledge Graphs    By Claudio Gutierrez, Juan F. Sequeda

Communications of the ACM, March 2021, Vol. 64 No. 3, Pages 96-104   10.1145/3418294

The notion of Knowledge Graph stems from scientific advancements in diverse research areas such as Semantic Web, databases, knowledge representation and reasoning, NLP, and machine learning, among others. The integration of ideas and techniques from such disparate disciplines presents a challenge to practitioners and researchers to know how current advances develop from, and are rooted in, early techniques.

Understanding the historical context and background of one's research area is of utmost importance in order to understand the possible avenues of the future. Today, this is more important than ever due to the almost infinite sea of information one faces everyday. When it comes to the Knowledge Graph area, we have noticed that students and junior researchers are not completely aware of the source of the ideas, concepts, and techniques they command.

The essential elements involved in the notion of Knowledge Graphs can be traced to ancient history in the core idea of representing knowledge in a diagrammatic form. Examples include: Aristotle and visual forms of reasoning, around 350 BC; Lull and his tree of knowledge; Linnaeus and taxonomies of the natural world; and in the 19th century, the works on formal and diagrammatic reasoning of scientists like J.J. Sylvester, Charles Peirce and Gottlob Frege. These ideas also involve several disciplines like mathematics, philosophy, linguistics, library sciences, and psychology, among others.  ... " 
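At its core, a knowledge graph represents knowledge as subject-predicate-object triples that can be stored, linked, and queried. A minimal sketch using the rdflib Python library follows; the namespace and facts are illustrative only, not drawn from the article.

```python
# Build and query a tiny knowledge graph of triples with rdflib. Sketch only;
# the example.org namespace and the facts are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

g.add((EX.KnowledgeGraph, RDF.type, EX.ResearchTopic))
g.add((EX.KnowledgeGraph, EX.drawsOn, EX.SemanticWeb))
g.add((EX.KnowledgeGraph, EX.drawsOn, EX.KnowledgeRepresentation))
g.add((EX.SemanticWeb, RDFS.label, Literal("Semantic Web")))

# Query the graph: which areas does the knowledge-graph topic draw on?
for _, _, area in g.triples((EX.KnowledgeGraph, EX.drawsOn, None)):
    print(area)
```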

Spot Dogs at Work for NYPD

See that the New York City Police Department has again utilized one of the Boston Dynamics 'Spot' dogs, apparently for a situation with potential human danger involved.  The impressive look of 'dog-like' droids is coming to life.  Is this the future of policing?  Some of the inhabitants seem unsure.  

The NYPD deploys a robot dog again   

Boston Dynamics’ little robot makes another appearance in New York City   By Bijan Stephen in TheVerge

The cyberpunk dystopia is here! (If you weren’t aware: I’m sorry. You’re living in a cyberpunk dystopia.) The latest sign — aside from corporations controlling many aspects of everyday life, massive widespread wealth inequality, and the recent prominence of bisexual lighting — comes in the form of robot dogs deployed to do jobs human police used to. Yesterday, as the New York Post reports, the NYPD deployed Boston Dynamics’ robot “dog” Spot to a home invasion crime scene in the Bronx.    ... " 

Schank Academy

I had mentioned reading some comments by Roger Schank.  Also noted he now has an 'academy', with emphases in Cyber Security, Software Development, and Data Analytics.  Always liked his fresh approaches, even criticisms of our efforts.  Let me know of your experience with their offerings.

Schank Academy

Online, mentored courses

Our courses are entirely online, but they are not like any online courses you have ever seen. You will not be watching boring video lectures and taking tests; you will learn by doing with the help of knowledgeable mentors who are always available to provide meaningful advice and feedback on your work.  ... 

Contact about his offerings.

Metalens: Zoom-able Lenses without Moving Parts

Continued advances in lenses,  leading to more capabilities in computational sensors.  Continue to be amazed by abilities to take pictures on phones, and capture in real time information about the world and react to it.   Advances are not stopping. 

MIT Creates Zoomable Lens Without Any Moving Parts   By Ryan Whitwam on February 24, 2021

The science of optics has revealed the scale and detail of the universe for centuries. With the right piece of glass, you can look at a distant galaxy or the wiggling flagella on a single bacterium. But lenses need to focus — they need to move. Engineers at MIT have developed a new type of “metalens” that can shift focus without any moving parts. This could change the way we build devices such as cameras and telescopes. 

Currently, focusing a lens on objects requires the glass to move in some capacity, and that adds complication and bulk. That’s why, for example, high-zoom camera lenses have been so slow to come to smartphones — there’s just no room to add movable lens elements. It’s also why smartphones that do have optical zoom use multiple fixed lenses. For example, the new Samsung Galaxy S21 Ultra has 13, 26, 70, and 240mm lens equivalents in its giant camera array. 

The metalens developed at MIT can focus on objects at multiple distances thanks to its tunable “phase-changing” material. When heated, the atomic structure of the material rearranges, allowing the lens to change the way in which it interacts with light. The design currently operates in infrared, but this is just a first step.   ... '

Worldwide Web as We Know Ending?

Certainly changing.  What drives such changes? More legislation, which is likely to produce yet more, and likely less creative, participants.  It was a lack of government oversight that drove that creativity.  

The worldwide web as we know it may be ending   By Rishi Iyengar, CNN Business

(CNN Business) Over the last year, the worldwide web has started to look less worldwide.

Europe is floating regulation that could impose temporary bans on US tech companies that violate its laws. The United States was on the verge of banning TikTok and WeChat, though the new Biden administration is rethinking that move. India, which did ban those two apps as well as dozens of others, is now at loggerheads with Twitter.

And this month, Facebook (FB) clashed with the Australian government over a proposed law that would require it to pay publishers. The company briefly decided to prevent users from sharing news links in the country in response to the law, with the potential to drastically change how its platform functions from one country to the next. Then on Tuesday, it reached a deal with the government and agreed to restore news pages. The deal partially relaxed arbitration requirements that Facebook took issue with.

In its announcement of the deal, however, Facebook hinted at the possibility of similar clashes in the future. "We'll continue to invest in news globally and resist efforts by media conglomerates to advance regulatory frameworks that do not take account of the true value exchange between publishers and platforms like Facebook," Campbell Brown, VP of global news partnerships at Facebook, said in a statement Tuesday.  ... " 

Quantum Computer Solves Simulation

Recall our previous mentions of D-Wave Quantum Methods.

A quantum computer just solved a decades-old problem three million times faster than a classical computer   

Using a method called quantum annealing, D-Wave's researchers demonstrated that a quantum computational advantage could be achieved over classical means.

By Daphne Leprince-Ringuet | February 23, 2021 -- 15:26 GMT (07:26 PST) | Topic: Quantum Computing  ZDNet

Scientists from quantum computing company D-Wave have demonstrated that, using a method called quantum annealing, they could simulate some materials up to three million times faster than it would take with corresponding classical methods. 

Together with researchers from Google, the scientists set out to measure the speed of simulation in one of D-Wave's quantum annealing processors, and found that performance increased with both simulation size and problem difficulty, to reach a million-fold speedup over what could be achieved with a classical CPU. 

The calculation that D-Wave and Google's teams tackled is a real-world problem; in fact, it has already been studied by Vadim Berezinskii and by J. Michael Kosterlitz and David Thouless, two of the 2016 winners of the Nobel Prize in Physics, who examined the behavior of so-called "exotic magnetism", which occurs in quantum magnetic systems.    ... " 
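For readers curious what posing a problem to an annealer looks like, here is a minimal sketch of an Ising-model formulation using D-Wave's open source dimod package, solved with a classical reference sampler. It illustrates the problem formulation only, not the hardware speedup reported in the article, and the fields and couplings are made up.

```python
# Pose a tiny Ising problem (local fields h, pairwise couplings J) and find its
# lowest-energy spin assignment with a brute-force reference sampler. On real
# hardware the sampler would be a quantum annealer instead. Sketch only.
import dimod

h = {0: -1.0, 1: 0.5, 2: 0.0}          # local fields on three spins
J = {(0, 1): 1.0, (1, 2): -1.0}        # couplings between spin pairs

sampler = dimod.ExactSolver()          # classical reference solver
sampleset = sampler.sample_ising(h, J)

print(sampleset.first.sample, sampleset.first.energy)  # lowest-energy assignment
```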

Tuesday, February 23, 2021

Upcoming Voice AI with Clouds: Podcast

Of interest, the future of Voice and AI.   Will be following.

Voice Talks Rises to the Clouds for February 25 Episode by Eric Hal Schwartz in Voicebot.ai

Modev’s VOICE Talks Presented by Google Assistant will peer into the future for its latest episode on the evolution of voice technology. The new episode is titled “Starting a New Decade in Voice and AI” and will stream live on Feb. 25 at 2 p.m. Eastern, hosted as always by Google Assistant’s co-lead of global product partnerships Sofia Altuna.


Altuna will start off the show's discussions and presentations with a broad overview of the combination of voice tech, cloud computing, and artificial intelligence that are set to elevate the current state of technology before introducing the first of her guests, technical director of AI for Google Cloud Ashwin Ram. Ram will go over the way cloud computing supports voice technology and how it is being used to solve problems in the field. Following Ram, Cobalt CEO Jeff Adams and XAPP vice president Michael Meyers will zero in on those challenges. They will lay out how voice and AI are adapting to avoid issues before they even arise, with each of their companies developing conversational AI tech to accommodate those potential problems.

“There’s so much we don’t know that makes the future fundamentally impossible to predict,” Adams said in a statement. “But that’s the joy of it. The future is in our blindspot and that makes it an adventure. We are going to create it and see it when it gets here.”   ... ' 

Pin Screens to the Walls

Is it the future of display to have information pinned on the wall, or floating in space?

Qualcomm’s new AR ‘Smart Viewer’ lets you pin virtual screens to your walls

Lenovo adopted the design for its ThinkReality glasses

By Adi Robertson@thedextriarchy  in TheVerge

Chip maker Qualcomm has introduced a new reference design for augmented reality glasses: an AR “smart viewer” you can tether to a phone or PC via USB-C. Called the XR1 Smart Viewer, the system is meant to be lightweight and look (sort of) like sunglasses, while also enabling features like hand tracking and spatial awareness. The first glasses based on its design are set for release in mid-2021. .. "

IBM and AI. An Abandonment of Watson?

 I see that our interesting correspondent and critic from before the first AI winter, Roger Schank, has posted a note strongly criticizing IBM on AI and cognitive claims.  Below an excerpt, link through to more: 

They are not doing "cognitive computing" no matter how many times they say they are  Update: February 2021

Commenting on WSJ article: IBM's Retreat From Watson Highlights Broader AI Struggles in Health

I was chatting with an old friend yesterday and he reminded me of a conversation we had nearly 50 years ago. I tried to explain to him what I did for a living and he was trying to understand why getting computers to understand was more complicated than key word analysis. I explained about concepts underlying sentences and explained that sentences used words but that people really didn't use words in their minds except to get to the underlying ideas and that computers were having a hard time with that.

Fifty years later, key words are still dominating the thoughts of people who try to get computers to deal with language. But, this time, the key word people have deceived the general public by making claims that this is thinking, that AI is here, and that, by the way we should be very afraid, or very excited, I forget which.

We were making some good progress on getting computers to understand language but, in 1984, AI winter started. AI winter was a result of too many promises about things AI could do that it really could not do. (This was about promoting expert systems. Where are they now?). Funding dried up and real work on natural language processing died too.

But still people promote key words because Google and others use it to do "search". Search is all well and good when we are counting words, which is what data analytics and machine learning are really all about. Of course, once you count words you can do all kinds of correlations and users can learn about what words often connect to each other and make use of that information. But, users have learned to accommodate to Google not the other way around. We know what kinds of things we can type into Google and what we can’t and we keep our searches to things that Google is likely to help with. We know we are looking for texts and not answers to start a conversation with an entity that knows what we really need to talk about. People learn from conversation and Google can’t have one. It can pretend to have one using Siri but really those conversations tend to get tiresome when you are past asking about where to eat.

But, I am not worried about Google. It works well enough for our needs.

What I am concerned about are the exaggerated claims being made by IBM about their Watson program. Recently they ran an ad featuring Bob Dylan which made me laugh, or would have, if it had not made me so angry. I will say it clearly: Watson is a fraud. I am not saying that it can't crunch words, and there may well be value in that to some people. But the ads are fraudulent. ... ' 

Monday, February 22, 2021

Linux and Open Source Go to Mars

Big follower of astronomy and astronautics.   Connections to modern computing are also of continued interest.   The latest efforts to Mars are instructive.  What else can be done to support the capability for modern, long distance and increasingly autonomous computing?     Even furthering the use of advanced Drone and Robotics control.

Mars and Beyond: Linux, Open Source Go to Mars  By ZDNet   February 22, 2021

The U.S. National Aeronautics and Space Administration (NASA) Perseverance rover will explore Mars with the self-flying Ingenuity helicopter drone, using Linux and NASA-built software based on the Jet Propulsion Laboratory (JPL)'s open source F' framework.

F' facilitates rapid development and implementation of spaceflight and other embedded software applications.

It features an architecture that decomposes flight software into discrete elements with well-defined interfaces, a C++ framework that enables capabilities like message queues and threads, and modeling tools for specifying components and links and automatically generating code.

JPL's Timothy Canham said the F'-based software used in Ingenuity is "kind of an open source victory because we're flying an open source operating system and an open source flight software framework and flying commercial parts that you can buy off the shelf if you wanted to do this yourself someday."

From ZDNet  in ACM  ...' 

Microsoft Big Quantum Win was a Mistake

Interesting, recall the announcement then; mostly technical points are made here, but worth noting.  Not clear yet how this will affect Microsoft's work in the space, if at all. 

Microsoft’s Big Win in Quantum Computing Was an ‘Error’ After All   By Wired in ACM

In March 2018, Dutch physicist and Microsoft employee Leo Kouwenhoven published headline-grabbing new evidence that he had observed an elusive particle called a Majorana fermion.

Microsoft hoped to harness Majorana particles to build a quantum computer, which promises unprecedented power by tapping quirky physics. Rivals IBM and Google had already built impressive prototypes using more established technology. Kouwenhoven's discovery buoyed Microsoft's chance to catch up. The company's director of quantum computing business development, Julie Love, told the BBC that Microsoft would have a commercial quantum computer "within five years."

Three years later, Microsoft's 2018 physics fillip has fizzled. Late last month, Kouwenhoven and his 21 coauthors released a new paper including more data from their experiments. It concludes that they did not find the prized particle after all. An attached note from the authors said the original paper, in the prestigious journal Nature, would be retracted, citing "technical errors."  ... ' 

Modernizing Data Dashboards.

Good thoughts, somewhat obvious, but still useful tips about the ideas involved. 

Modernizing Data Dashboards. Posted by Technovert Solutions on February 21, 2021  in DSC. 

Modernizing is critical to apply the latest technologies and practices to address areas where users are less than satisfied and data trust needs to be higher. In this article, learn about modernizing data visualization with dashboards and reports.... '

Amazon Alexa Crowdsources New Products

Interesting approach; will examine further and post here.  Thoughts about usefulness?

Amazon is using crowdsourcing to create new products   by Tom Ryan  in Retailwire  with further expert comment at the link. 

Amazon launched a new program, “Build It,” that in a crowdsourcing scheme enables consumers to vote on which Alexa-enabled products will be developed.

Build It represents the extension of Day 1 Editions, an invitation-only program that offers a select group of consumers the chance to purchase an in-development product at a special price. In exchange for the discount, the buyers provide early feedback to Amazon so the team can review any flaws while gauging potential demand for a rollout. Day 1 Editions led to the 2019 launch of Echo Frames smart eyeglasses and Echo Loop smart ring.

Build It, which is open to anybody, is more geared toward assessing potential demand and apparently creating some buzz around launch.

Consumers participating in the program are offered a chance to pre-order in-development products at a special price as long as enough others pre-order, mimicking a Kickstarter campaign. If the concept reaches its pre-order goal in 30 days, Amazon will build it and those pre-ordering over the 30-day period receive the item at the discount price. The price increases when fully rolled out.  ... ' 

Technology Impacting Supply Chains

 Good thoughts, all players should examine these methods.

Blockchain and RPA Leading Supply Chain Trends in 2021  By Marisa Brown, APQC

It is worrisome to note that 1 in 5 supply chains barely survived the COVID-19 crisis. It's vital that these organizations use the lessons learned from 2020 to proactively prepare for the next disruption (since the question is when, not if, there will be some future supply disruption). Check out this article, What Supply Chain Leaders Need to Know for 2021, for some advice.

I would like to acknowledge the 13 percent of supply chains that completely saved the day in the face of COVID-19. It took many people in those organizations going above and beyond the “normal” to make that happen. In talking to these organizations, I have heard about greater cross-functional collaboration, more frequent re-planning, deeper relationships with suppliers, and faster decision-making. (See Supply Chain Planning: Blueprint for Success for additional insights.) Also, it speaks to the pervasive nature of this crisis that only 5 percent of 455 respondents said their supply chains were not significantly impacted by COVID-19 (not displayed in Figure 1).  

Now let’s look at top trends and obstacles facing supply chains.  

TOP 3 TRENDS ANTICIPATED TO IMPACT SUPPLY CHAINS  

In the next three years, many different trends and changes will impact supply chains. In addition to digitization of the supply chain, here are the top three trends identified (by percentage of respondents rating it as a major or moderate impact).  

»    34% Robotic process automation (RPA) which will help improve productivity and efficiency by enabling people to spend time on more value-added activities vs transactional ones.  

»    32% An automation shift will enable more time spent on the second trend: a greater focus on environmental, social, and corporate governance (ESG) factors and issues.  

»    32% Blockchain, the third anticipated trend, can enable greater traceability and visibility, also enabling sustainability efforts.   .. " 

Communicating with Dreaming

Recall our early dream experiments.   Could this re-open our sleep learning goals?

Scientists Communicated With People While They Were Lucid Dreaming  By Shelly Fan  in Singularity Hub

We’ve probed the depths of Earth’s deepest trench, sent rovers to Mars, and observed other worlds billions of light years away. Yet we’ve never been able to decipher the mysterious, bizarre, and disjointed world of our own dreams. It seems impossible: after all, people who dream are fast asleep and oblivious to the outside world.

Except now, we can.

In a mind-bending paper published last week in Current Biology, teams of scientists from four countries found that it’s possible to communicate with people who are actively dreaming. It’s not simple information, either. The volunteers, roughly two dozen spread across four labs, were able to listen to math problems and answer them using facial twitches and eye movements. One group of sleepers could even decipher Morse code, and reply to the outside world in real time.

“Our experimental goal is akin to finding a way to talk with an astronaut who is on another world, but in this case the world is entirely fabricated on the basis of memories stored in the brain,” the researchers said.

This is crazy. Research into dreams has long relied on the recall of people after waking up, which—I’m sure you agree—is riddled with errors, confusion, and missed details. The new study means that we now have a way to directly engage with people while they’re deep asleep, probe the contents of their dreams, and potentially alter them.  ... " 

More No and Low Code Emerges

More solutions, but have yet to see them widely used.   Would want to see them strongly tested for security issues.   Probably open source to allow that? 

Low-code software platform provider Creatio raises $68M  By Mike Wheatley  in SiliconAngle

Low-code software platform company Creatio is hoping to grow its business after announcing a $68 million capital raise today.

The round was led by the U.S. growth equity firm Volition Capital. Horizon Capital, a private equity firm, also participated in the round. Creatio said it will use its new funds to invest in research and development, global marketing and sales expansion as it looks to build on its current momentum.

Creatio, which was formerly known as bpm’online Ltd., sells tools that include a low-code business processes management platform that makes it possible for workers without coding skills to build enterprise apps with minimal effort, using a drag-and-drop user interface to guide them. The same UI can be used to create complex business processes that help to manage interactions between colleagues, clients and partners.  ... " 

Sunday, February 21, 2021

Deep Learning for Fleet Management

Interesting example,  worth examining.   Other operations examples?

New Deep Learning Systems Profoundly Disrupt Fleet Management Operations

Deep learning is having a profound impact on the future of fleet management through greater efficiency.

Posted by  Ryan Kh  in Smart data collective. 

Deep learning tech is influencing and enhancing many industries, promising to provide insights into key business operations which were not previously possible to unearth. Transportation and logistics is a prime example.

The transportation analytics industry is projected to be worth $27 billion by 2026. One of the biggest applications of this technology lies with using deep learning to streamline fleet management.

Fleet management is one area that is especially well positioned to benefit from the latest data-driven analytical tools, so here is a look at just how much positive disruption is being caused in this market at the moment.

Improvements to efficiency & sustainability

Businesses which operate fleets of vehicles, whether small or large, are under increased scrutiny with regards to the sustainability of their operations at the moment.

There are a number of ways to go about improving the eco-friendliness of business fleets, with the long term aim of many organizations being to migrate to fully electric vehicles, leaving fossil fuel powered incumbents in the past where they belong. .. ' 

IBM and Daimler using Quantum Computer

Continued advances in the use of quantum computing to model lithium molecules, getting closer to lithium-sulfur batteries that would be longer lasting and cheaper.

 IBM and Daimler use quantum computer to develop next-gen batteries

January 8, 2020 | Written by: Jeannette Garcia  in ACM

Categorized: Quantum Computing

Electric vehicles have an Achilles heel: the capacity and speed-of-charging of their batteries. A quantum computing breakthrough by researchers at IBM and Daimler AG, the parent company of Mercedes-Benz, could help tackle this challenge. We used a quantum computer to model the dipole moment of three lithium-containing molecules, which brings us one step closer to the next-generation lithium sulfur (Li-S) batteries that would be more powerful, longer lasting and cheaper than today's widely used lithium ion batteries.

Simulating molecules is extremely difficult but modeling them precisely is crucial to discover new drugs and materials. In the research paper “Quantum Chemistry Simulations of Dominant Products in Lithium-Sulfur Batteries,” we simulated the ground state energies and the dipole moments of the molecules that could form in lithium-sulfur batteries during operation: lithium hydride (LiH), hydrogen sulfide (H2S), lithium hydrogen sulfide (LiSH), and the desired product, lithium sulfide (Li2S). In addition, and for the first time ever on quantum hardware, we demonstrated that we can calculate the dipole moment for LiH using 4 qubits on IBM Q Valencia, a premium-access 5-qubit quantum computer. ... ' 

Kroger Data Breach

It had come to mind that retailers had been less susceptible here, but then came an example:

Kroger is latest victim of third-party software data breach   by Frank Bajak  in TechExplore


Kroger Co. says it was among the multiple victims of a data breach involving a third-party vendor's file-transfer service and is notifying potentially impacted customers, offering them free credit monitoring.

The Cincinnati-based grocery and pharmacy chain said in a statement Friday that it believes less than 1% of its customers were affected—specifically some using its Health and Money Services—as well as some current and former employees because a number of personnel records were apparently viewed.

Kroger said the breach did not affect Kroger stores' IT systems or grocery store systems or data and there was no indication that fraud involving accessed personal data had occurred.

The company, which has 2,750 grocery retail stores and 2,200 pharmacies nationwide, did not immediately respond to questions including how many customers might have been affected.

Kroger said it was among victims of the December hack of a file-transfer product called FTA developed by Accellion, a California-based company, and that it was notified of the incident on Jan. 23, when it discontinued use of Accellion's services. Companies use the file-transfer product to share large amounts of data and hefty email attachments.... "

San Diego Supercomputing

Long ago we worked with a UCSD group in the bio-modeling area.

San Diego Supercomputer Center Helps Advance Computational Chemistry  By University of California San Diego, February 18, 2021

MIT's Heather Kulik and colleagues used the Comet supercomputer at the University of California, San Diego's San Diego Supercomputer Center and the Bridges supercomputer at the Pittsburgh Supercomputing Center in this effort. The resulting artificial neural network models predict strong correlation in materials at significantly lower computational cost than conventional models, potentially accelerating the search for materials in diverse applications.

The MIT team's  workflow engaged with at least three electronic structure codes and utilized central processing units and graphics processing units on Comet and Bridges. "Using those supercomputers firsthand allowed me to think about ways I can teach students who may just be learning computational chemistry to complement their experimental research for ways that they can use not only now but in the future," Kulik says. ...

From University of California San Diego

Brain Background Noise Useful

Recall this being brought up in some work we did on understanding reactions to stimuli.  We theorized that it was always useful to calculate the background, and we later discovered some 'data' there that ended up being useful.  I still think it is useful to carry along the metadata of background noise.  This is not quite the same thing; it is more in the realm of neuro, and only some of our data was neuro.  Nice to see this; it makes me think.

The Brain’s ‘Background Noise’ May Be Meaningful After All

By digging out signals hidden within the brain’s electrical chatter, scientists are getting new insights into sleep, aging, and more.

AT A SLEEP research symposium in January 2020, Janna Lendner presented findings that hint at a way to look at people’s brain activity for signs of the boundary between wakefulness and unconsciousness. For patients who are comatose or under anesthesia, it can be all-important that physicians make that distinction correctly. Doing so is trickier than it might sound, however, because when someone is in the dreaming state of rapid-eye movement (REM) sleep, their brain produces the same familiar, smoothly oscillating brain waves as when they are awake.  ... " 

Saturday, February 20, 2021

Airlines to Ask Contact Information

Expected, likely useful.  A firm requirement?   Will it get the usual pushback on privacy?

Airlines plan to ask passengers for contact-tracing details


The U.S. airline industry is pledging to expand the practice of asking passengers on flights to the United States for information that public health officials could use for contact tracing during the pandemic.

An industry trade group said Friday that the carriers would turn over the information to the U.S. Centers for Disease Control and Prevention, which could use it to contact passengers who might be exposed to the virus that causes COVID-19.

Delta and United have been doing that since December. On Friday, an industry trade group said that American, Southwest, Alaska, JetBlue and Hawaiian will also ask passengers to make their names, phone numbers, email and physical addresses available to the CDC.  ... "

Restoring Notre-Dame with VR

Very impressive imagery of the current state and progress of the Notre-Dame damage and restoration.  Certainly a model for future work of this kind.  Includes video describing the work.

Cathédrale Notre-Dame Rescue Is Buttressed by Digital Wizardry  By Financial Times, February 17, 2021  in ACM

Art historians, architects, computer scientists, and digital designers from around the world are leveraging virtual reality (VR), three-dimensional modeling, and cloud computing technologies to create a "virtual twin" of the Notre-Dame cathedral in Paris, France, as part of its reconstruction following a 2019 fire.

The simulated cathedral will display the progress in real time using images streamed from inside the cathedral by robot-sentries outfitted with cameras. Engineers and experts on medieval architecture can move through the simulation while wearing VR headsets. Architectural drawings, post-fire scientific reports, and the provenance of specific building components can be accessed by clicking on any detail.

Artificial intelligence agents also are moving quickly through pre- and post-fire images to identify surviving sculpted and limestone elements that could be reintegrated into the site.

These techniques could be used to preserve ancient architecture virtually, reconstruct centuries-old sites, or create virtual museums for iconic cultural sites.

From Financial Times

Homomorphic Standards, Sample Efforts

Very useful piece with links to alternate standards and company efforts underway.  Much more at the link.

Homomorphic Encryption Standardization

An Open Industry / Government / Academic Consortium to Advance Secure Computation

Standards Meetings

Additional introductory material on homomorphic encryption can be found on the Homomorphic Encryption Wikipedia page.

STANDARDIZATION

There are several reasons why we think this is the right time to standardize homomorphic encryption.

There is already a dire need for easily available secure computation technology, and this need will only get stronger as more companies and individuals switch to cloud storage and computing. Homomorphic encryption is already ripe for mainstream use, but the current lack of standardization is making it difficult to start using it.

Specifically, the current implementations are not easy enough for non-experts to use. The standard will push to uniformize and simplify their APIs, and educate application developers about how to use them.

The security properties of RLWE-based homomorphic encryption schemes can be hard to understand. The standard will present the security properties of the standardized scheme(s) in a clear and understandable form.

BASICS OF HOMOMORPHIC ENCRYPTION

Fully homomorphic encryption, or simply homomorphic encryption, refers to a class of encryption methods envisioned by Rivest, Adleman, and Dertouzos as early as 1978, and first constructed by Craig Gentry in 2009. Homomorphic encryption differs from typical encryption methods in that it allows computation to be performed directly on encrypted data without requiring access to a secret key. The result of such a computation remains in encrypted form, and can at a later point be revealed by the owner of the secret key.... "
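
To make the idea concrete, here is a minimal, intentionally insecure toy sketch in Python.  It is not FHE and not one of the standardized RLWE schemes; it only illustrates the basic homomorphic property, using the fact that unpadded textbook RSA is multiplicatively homomorphic, so a party holding only ciphertexts and the public key can multiply the underlying plaintexts without ever decrypting them.  All numbers and key sizes below are illustrative assumptions.

# Toy demonstration of computing on encrypted data (Python 3.8+).
# Textbook (unpadded) RSA is multiplicatively homomorphic:
#   Enc(a) * Enc(b) mod n  ==  Enc(a * b mod n)
# This is NOT secure and NOT fully homomorphic; it only shows the idea that a
# party without the secret key can still compute on ciphertexts.

p, q = 61, 53              # tiny illustrative primes (real keys are ~2048 bits)
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (the secret key)

def encrypt(m):            # anyone with (e, n) can encrypt
    return pow(m, e, n)

def decrypt(c):            # only the key owner, holding d, can decrypt
    return pow(c, d, n)

a, b = 7, 12
ca, cb = encrypt(a), encrypt(b)

# The "server" multiplies ciphertexts without knowing a, b, or d.
c_product = (ca * cb) % n

# The key owner decrypts and recovers the product of the plaintexts.
assert decrypt(c_product) == (a * b) % n
print(decrypt(c_product))  # 84

Fully homomorphic schemes go much further: they support both additions and multiplications on ciphertexts, and therefore arbitrary computations, which is what the standardization effort described above is concerned with.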

Friday, February 19, 2021

Fine Tuning Vendor Risk Management

Crucial these days. 

How to Fine-Tune Vendor Risk Management in a Virtual World  in Dark Reading by Ryan Smyth & Spencer MacDonald Managing Director / Director, FTI Technology  

Without on-site audits, many organizations lack their usual visibility to assess risk factors and validate contracts and SLAs with providers.

Vendor risk management is nothing new to most security and privacy professionals. Programs for managing vendors are typically well-established and have run like clockwork for quite some time — with many firms requiring their critical vendors to allow access for periodic on-site assessments of privacy, security, and other controls. But as with so many things this year, the coronavirus pandemic has brought well-oiled vendor risk management processes to a screeching halt. Now, without the ability to conduct on-site audits, many organizations lack their usual visibility to assess risk factors and validate whether their providers are doing all they have agreed to in their contracts and service-level agreements (SLAs). 

This is particularly concerning given that vendors and third-party providers are a prime source of breaches in security, privacy and/or compliance. Risk Based Security reported that the incidence of breaches, "involving companies handling sensitive data for business partners and other clients," rose by 35% from 2017 to 2019 and exposed 4.8 billion records last year. ... '

Self-Organizing Textures from Cellular Automata

Always an interest here, rarely applied.

Cellular Automata

Self-Organising Textures    in Distill.

Neural Cellular Automata Model of Pattern Formation

Neural Cellular Automata (NCA) are capable of learning a diverse set of behaviours: from generating stable, regenerating, static images to segmenting images to learning to “self-classify” shapes.

The inductive bias imposed by using cellular automata is powerful. A system of individual agents running the same learned local rule can solve surprisingly complex tasks. Moreover, individual agents, or cells, can learn to coordinate their behavior even when separated by large distances. By construction, they solve these tasks in a massively parallel and inherently degenerate way. Each cell must be able to take on the role of any other cell; as a result, they tend to generalize well to unseen situations.

In this work, we apply NCA to the task of texture synthesis. This task involves reproducing the general appearance of a texture template, as opposed to making pixel-perfect copies. We are going to focus on texture losses that allow for a degree of ambiguity. After training NCA models to reproduce textures, we subsequently investigate their learned behaviors and observe a few surprising effects. Starting from these investigations, we make the case that the cells learn distributed, local, algorithms. ...."
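
For readers curious about the mechanics, here is a minimal NumPy sketch of a single neural cellular automaton update step in the spirit of the Distill article: each cell perceives its neighborhood with fixed Sobel filters, every cell runs the same small per-cell network, and only a random subset of cells fires each step.  The channel count, layer sizes, and untrained random weights are illustrative assumptions, not the authors' trained model.

import numpy as np

# One Neural Cellular Automata (NCA) update step, sketched in NumPy.
# Each cell carries CH channels (RGB + alpha + hidden state in the real work).

CH = 12                      # channels per cell (illustrative)
H = W = 32                   # grid size
rng = np.random.default_rng(0)

# Fixed perception filters: identity, Sobel-x, Sobel-y (applied per channel).
identity = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=np.float32)
sobel_x  = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32) / 8.0
sobel_y  = sobel_x.T

def conv2d_same(img, kernel):
    """3x3 convolution with zero padding over one 2D channel slice."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + H, dx:dx + W]
    return out

def perceive(state):
    """Stack identity + gradient features for every channel -> (H, W, 3*CH)."""
    feats = []
    for k in (identity, sobel_x, sobel_y):
        feats.append(np.stack([conv2d_same(state[..., c], k) for c in range(CH)], axis=-1))
    return np.concatenate(feats, axis=-1)

# A tiny per-cell update network: two dense layers shared by every cell.
# In the real work these weights are learned against a texture loss; here they
# are random placeholders, just to show the data flow.
W1 = rng.normal(0, 0.1, size=(3 * CH, 64)).astype(np.float32)
W2 = np.zeros((64, CH), dtype=np.float32)    # zero-initialized update head

def nca_step(state, fire_rate=0.5):
    x = perceive(state)                       # local features per cell
    hidden = np.maximum(x @ W1, 0.0)          # ReLU
    delta = hidden @ W2                       # per-cell state increment
    mask = rng.random((H, W, 1)) < fire_rate  # stochastic, asynchronous updates
    return state + delta * mask

state = rng.random((H, W, CH)).astype(np.float32)
state = nca_step(state)
print(state.shape)  # (32, 32, 12)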

The ENIAC Turns 75

I used to walk past the site of the ENIAC lab on the way to class at the U of Pa, and even got a tour of the space once.  Impressive.

ENIAC Turns 75  By Samuel Greengard,   Commissioned by CACM Staff

Programming ENIAC, the Electronic Numerical Integrator and Calculator (it was not called a computer, because "computers" at that time were people).

The history of computing is filled with mythical figures, but often lost in the shuffle is the accomplishment of John Mauchly and J. Presper Eckert, Jr. On February 14, 1946, the pair publicly unveiled the world's first true computer: ENIAC (Electronic Numerical Integrator and Computer). From their lab at the University of Pennsylvania in Philadelphia, they launched a revolution that truly changed the world.

On its 75th anniversary, ENIAC is once again in the spotlight. "It was the big bang of the information age. It set in motion a paradigm that has become the underpinning of daily life, as well as of deepest science," observes Bill Mauchly, an inventor and software architect who, as the son of John Mauchly, has also become a historian for the computer.

"ENIAC was the first digital programmable computer. It demonstrated what was possible," adds Thomas Haigh, professor of history at the University of Wisconsin, Milwaukee, and co-author of ENIAC in Action: Making and Remaking the Modern Computer (MIT Press).

A Calculated Approach

ENIAC, built at a then-astounding cost of $487,000, used 10 position ring counters to store digits. Each digit required 28 vacuum tubes that counted pulses on the ring counters to perform arithmetic. "Because it was electronic, it was thousands of times faster than anything that came before it," Haigh explains. "The machine would complete its work in a flash and spend most of its time waiting for human intervention."

The computer supported 200 decimal digits of writeable electronic memory spread across 20 "accumulators." It was a then-revolutionary development. ENIAC added by transmitting ten-digit numbers directly between accumulators, incrementing the contents of the destination counters. An "add time" was 200 microseconds. The accumulators worked in parallel, allowing up to 50,000 additions per second. A 10-digit by 10-digit multiplication required 14 add times, or a total of approximately 2,800 microseconds. A division or square root problem required 143 add times, or 28,600 microseconds.
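
As a quick back-of-the-envelope check, those figures are self-consistent if one assumes, purely for illustration, ten accumulator pairs adding in parallel:

# Sanity check of the ENIAC timing figures quoted above.
add_time_us = 200                                 # one ten-digit addition
adds_per_sec_per_path = 1_000_000 / add_time_us   # 5,000 per accumulator pair
parallel_paths = 10                               # assumption: 10 simultaneous pairs
print(adds_per_sec_per_path * parallel_paths)     # 50,000 additions per second
print(14 * add_time_us)                           # 2,800 us for a 10x10-digit multiply
print(143 * add_time_us)                          # 28,600 us for division or square root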

To be sure, ENIAC was notable for more than simply being the world's first fully electronic computer. It was incredibly consequential. From the day it was introduced to the public via a front-page story in The New York Times to its retirement nearly a decade later, it tackled an array of real-world tasks including ballistics trajectory research, Monte Carlo simulations, weather predictions, and early hydrogen bomb research conducted by John von Neumann and others.

"Although the architecture and programming style were very idiosyncratic and not copied by any later machine, the ENIAC project built a foundation for more advanced computing models," says Mark Priestley, a research fellow at the U.K.'s National Museum of Computing and co-author of ENIAC in Action. This included the EDVAC (Electronic Discrete Variable Automatic Computer) design that, among other advances, introduced binary rather than decimal computing, and modern programming techniques. ... " 

Thursday, February 18, 2021

Detecting Object Permanence and more

Reinforcement learning problem. Learning like children do.

AI Agents Learned Object Permanence by Playing Hide and Seek

By IEEE Spectrum  February 17, 2021

Researchers at the Allen Institute for AI (AI2) demonstrated that artificial intelligence agents learned the concept of object permanence — that objects hidden from view are still there — by playing hide and seek.

The agents, playing as both hiders and seekers, learned the game "Cache" via reinforcement learning. The agents began learning about the environment by taking random actions, like pulling on drawers, and dropping objects in random places. Their game play improved as they learned from outcomes, with the hider, for instance, learning that it had selected a good hiding place when the seeker failed to find the object.

Subsequent testing showed that the agents understood the principles of containment and object permanence and were able to rank images based on how much free space they contained.

The agents performed as well as or better than models trained on the gold-standard ImageNet.

From IEEE Spectrum
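
The article gives no implementation details, but the underlying idea, an agent that starts with random actions and improves from outcomes, can be illustrated with a minimal tabular Q-learning sketch on a toy hide-and-seek task.  The environment, rewards, and hyperparameters below are assumptions for illustration only, not the AI2 "Cache" setup.

import random

# Toy reinforcement-learning sketch: a "seeker" learns which of N hiding spots
# a fixed hider prefers, purely from trial-and-error reward signals.

N_SPOTS = 5
HIDER_SPOT = 3                 # assumption: the hider favors spot 3
ALPHA, EPSILON = 0.1, 0.2      # learning rate and exploration rate

q = [0.0] * N_SPOTS            # one state, N_SPOTS actions

def step(action):
    """Environment: reward 1 if the seeker checks the right spot, else 0."""
    return 1.0 if action == HIDER_SPOT else 0.0

random.seed(0)
for episode in range(2000):
    # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
    if random.random() < EPSILON:
        action = random.randrange(N_SPOTS)
    else:
        action = max(range(N_SPOTS), key=lambda a: q[a])
    reward = step(action)
    # Single-step episodes, so the update target is just the observed reward.
    q[action] += ALPHA * (reward - q[action])

print([round(v, 2) for v in q])   # the value of spot 3 approaches 1.0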

FHE for Using Encrypted Data

Very nicely done piece on the seeming paradox of FHE:

IBM Makes Encryption Paradox Practical  in IEEE Spectrum.

“Fully homomorphic” cryptography allows partial access to digital vaults without ever opening their locks  By Dan Garisto

How do you access the contents of a safe without ever opening its lock or otherwise getting inside? This riddle may seem confounding, but its digital equivalent is now so solvable that it’s becoming a business plan. 

IBM is the latest innovator to tackle the well-studied cryptographic technique called fully homomorphic encryption (FHE), which allows for the processing of encrypted files without ever needing to decrypt them first. Earlier this month, in fact, Big Blue introduced an online demo for companies to try out with their own confidential data. IBM’s FHE protocol is inefficient, but it’s workable enough still to give users a chance to take it for a spin. 

Today’s public cloud services, for all their popularity, nevertheless typically present a tacit tradeoff between security and utility. To secure data, it must stay encrypted; to process data, it must first be decrypted. Even something as simple as a search function has required data owners to relinquish security to providers whom they may not trust. .... "

See IBM's Homomorphic Encryption Services demonstration: "Unlock the value of sensitive data without decryption to preserve privacy ... "

Remote Access Cautions

A good, more balanced view of the recent SCADA threat in infrastructure.  Remote access is a classic potential security weakness.  I have myself been involved in the need to monitor and control industrial systems remotely; the failure was not including several elements in the remote connection.  Fix that now; early warnings are good.  Good points made here:

Increased uptime? Check. Better access to outside expertise? Check. Improved first-time-fix rate? Check. in Tripwire

These are just some of the benefits of industrial remote access. Yet many customers are reluctant to embrace remote access. Not only that, but incidents such as the breach at the Oldsmar water utility might increase organizations’ reluctance to use remote access.

Using Oldsmar as an Example

The benefits of remote access should not be in dispute. So rather than making remote access the scapegoat, let’s consider the incident at Oldsmar water utility briefly.

It has been established that the nefarious actor was able to access the SCADA system via TeamViewer. The details of how they were able to gain access via TeamViewer are still unknown.

So, based on this information, TeamViewer is the villain, correct?

The answer is not binary. TeamViewer serves a legitimate purpose if used correctly. In this instance, to understand if TeamViewer was the right tool, let’s consider the application more closely.

As a water authority, the Oldsmar plant’s main KPI is to keep the plant operational 24/7 because we all want safe and clean drinking water when we start the tap! This means minimal downtime, timely notifications of any alarms and the ability to diagnose faults promptly. Remote access is an essential tool to achieve this objective. The remote user does not need access to the utility’s IT network to keep the plant operational. And, this is the key – IT and OT’s remote access needs are different .... " 

SAS: How AI Changes the Rules

Looks to be a good paper from SAS; below is an overview, link through for more.  Always liked how SAS looked at the broader aspects of 'analytics'.  Reading.

How AI Changes the Rules

New Imperatives for the Intelligent Organization

About this paper

Many leaders are excited about AI’s potential to profoundly transform organizations by making them more innovative and productive. But implementing AI will also lead to significant changes in how organizations are managed, according to our recent survey of more than 2,200 business leaders, managers and key contributors. Those survey respondents, representing organizations across the globe, expect that reaping the benefits of AI will require changes in workplace structures, technology strategies and technology governance.

AI will drive organizational change and ask more of top leaders. The majority of survey respondents expect that implementing AI will require more significant organizational change than other emerging technologies including cloud. AI demands more collaboration among people skilled in data management, data analytics, IT infrastructure, and systems development, as well as business and operational experts. This means that organizational leaders need to ensure that traditional silos don’t hinder advanced analytics efforts, and they must support the training required to build skills across their workforces.

AI will place new demands on the CIO and CTO. AI implementation will influence the choices CIOs and CTOs make in setting their broad technology agendas. They will need to prioritize developing foundational technology capabilities, from infrastructure and cybersecurity to data management and development processes — areas in which those with more advanced AI implementations are taking the lead compared with other respondents. CIOs will also need to manage the significant changes to software development and deployment processes that most respondents expect from AI. The survey also indicated that many CIOs will be charged with overseeing or supporting formal data governance efforts: CIOs and CTOs are more likely than other executives to be tasked with this.

AI will require an increased focus on risk management and ethics. The Global survey shows a broad awareness of the risks inherent in using AI, but few practitioners have taken action to create policies and processes to manage risks, including ethical, legal, reputational, and financial risks. Managing ethical risk is a particular area of opportunity. Those with more advanced AI practices are establishing processes and policies for data governance and risk management, including providing ways to explain how their algorithms deliver results. They point out that understanding how AI systems reach their conclusions is both an emerging best practice and a necessity, in order to ensure that the human intelligence that feeds and nurtures AI systems keeps pace with the machines’ advancements.

The report that follows explores these findings in depth. Read on to learn more about the changes that leaders must prepare for to successfully implement trusted AI.  .. "

Addressing Sheet Metal Design Waste

Note the connection between design and fabrication in a workflow, minimizing waste.  If you have a model of the workflow, it's easier to look at things like design and waste, and AI methods can address patterns that minimize waste.

MIT CSAIL taps AI to reduce sheet metal waste  By Kyle Wiggers @Kyle_L_Wiggers  in Venturebeat

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) say they’ve created an AI-powered tool that provides feedback on how different parts of laser-cut designs should be placed onto metal sheets. By analyzing how much material is used in real time, they claim that their tool — called Fabricaide — allows users to better plan designs in the context of available materials.

Laser cutting is a core part of industries spanning from manufacturing to construction. However, the process isn’t always efficient. Cutting sheets of metal requires time and expertise, and even the most skillful users can produce leftovers that go to waste.

Fabricaide ostensibly solves this with a workflow that “significantly” shortens the feedback loop between design and fabrication. The tool keeps an archive of what a user has done, tracking how much of each material they have left and allowing the user to assign multiple materials to different parts of the design to be cut. This simplifies the process so that it’s less of a headache for multimaterial designs.
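
Fabricaide's own placement algorithms aren't described here, but the kind of feedback it provides can be illustrated with a simple sketch: a greedy "shelf" packing heuristic that places rectangular parts on a sheet and reports the leftover material.  The sheet size, part list, and heuristic below are illustrative assumptions; real nesting tools are far more sophisticated.

# Illustrative sketch only: greedy shelf packing of rectangular parts onto one
# sheet, reporting how much material would be wasted.

SHEET_W, SHEET_H = 100.0, 60.0          # sheet dimensions (assumed units)
parts = [(40, 20), (30, 25), (30, 20), (50, 10), (20, 20)]  # (width, height)

def shelf_pack(parts, sheet_w, sheet_h):
    """Place parts left-to-right on horizontal shelves; return placements."""
    placements, x, y, shelf_h = [], 0.0, 0.0, 0.0
    for w, h in sorted(parts, key=lambda p: -p[1]):   # tallest parts first
        if x + w > sheet_w:                 # no room on this shelf: start a new one
            x, y, shelf_h = 0.0, y + shelf_h, 0.0
        if y + h > sheet_h:
            raise ValueError("parts do not fit on one sheet")
        placements.append((x, y, w, h))
        x += w
        shelf_h = max(shelf_h, h)
    return placements

placed = shelf_pack(parts, SHEET_W, SHEET_H)
used = sum(w * h for (_, _, w, h) in placed)
waste = SHEET_W * SHEET_H - used
print(f"used {used:.0f} of {SHEET_W * SHEET_H:.0f} sq units, waste {waste:.0f}")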

Searchandising

OK, the term is completely new to me; this Algolia advertising piece links to a free ebook/pamphlet that describes it.

Searchandising: 9 best practices for better conversion rates,  Posted: February 18, 2021

Merchandising is a critical part of the search journey, and searchandising is becoming a discipline in its own right.

Search is a strong driver of sales and revenue. Amazon’s conversion rate shoots up 6x when visitors do a search (for Walmart 4x, and Etsy 3x).

Read this ebook to learn how to become an expert in searchandising, whether you’re a beginner or a more tenured online merchandiser. These 9 best practices for better conversion rates come from tried-and-tested strategies customers have implemented.  ... " 

Enhancing Data Security

It's data: getting it and securely putting it to use.  Always highlighted in emergency situations.

Microsoft enhances data security in Power BI business intelligence apps  By  Mike Wheatley in SiliconAngle

Microsoft Corp. is beefing up the security capabilities of its Power BI business intelligence tools with new network isolation and data protection features that help to ensure customers' information will never be compromised.

Data is, of course, at the core of Microsoft Power BI, which is the collective name for a suite of cloud-based apps and services that enterprises can use to collate, manage and analyze information from a wide range of sources to obtain more business insights from it.

Power BI is used by workers to transform raw data such as sales figures into graphical visualizations that present a clearer picture of what’s happening inside their business. The service can tap into data from numerous sources such as simple Excel spreadsheets, Oracle databases and various cloud-based and on-premises applications.

That data needs to be kept secure, and in today’s announcement, Arun Ulagaratchagan, corporate vice president of Business Intelligence Platform at Microsoft, detailed the newest capabilities in Power BI that are designed to ensure this.  ... " 

Connected Communications

Satellite connectivity for challenges like emergency communications. 

Connecting machines in remote regions  by Zach Winn, Massachusetts Institute of Technology

On Nov. 26, seven fishermen aboard a small fishing boat off the coast of Maharashtra in western India were struck with panic when their vessel was damaged and began to sink. The panic was warranted: The boat was too far from shore to radio for help.

Tens of thousands of fishermen find themselves in a similar situation around the world every year. Globally, the vast majority of small, deep-sea fishing vessels do their work totally disconnected, leaving them vulnerable to storms and other disasters.

At the root of the problem is the high cost of satellite connectivity in areas like oceans, forests, and mountains, which make up the majority of the Earth's landmass. Now the startup Skylo, co-founded by Parth Trivedi SM '14, is offering the ability to communicate with satellites from anywhere on the planet for less than 10 dollars a month.

Skylo's team has developed a new antenna and communication protocol that allows machines, sensors, and other devices to efficiently transmit data to the geostationary satellites already deployed in space. The company says its technology enables satellite communications at less than 5 percent of the cost of existing solutions and could bring an "internet of things" revolution in the world's most remote regions.

With the Skylo Hub, which resembles a modem and contains the company's proprietary antennae, deep-sea fishermen can go from being isolated and vulnerable to having the ability to send out emergency communications, receive storm alerts, and even sell their catch before they return to port. Farmers in remote regions can get real-time data on weather forecasts, soil content, and crop health. Truck drivers and fleet operators that were previously invisible for large stretches of their journeys can be precisely located and their cargo monitored. ... " 

Wednesday, February 17, 2021

Military Digitization

An approach to military digitization, also useful for peacetime efforts such as crisis management and disaster recovery.

German Armed Forces Order Deployable Mission-Critical Communication Networks from Motorola Solutions to Drive Greater Digitization  in BusinessWire

The framework agreement is worth 254 million euros.

CHICAGO--(BUSINESS WIRE)--Digitization is one of the highest priorities for the German Armed Forces. It will help to increase efficiency of operations and safety for soldiers and civilian personnel. Secure and reliable voice communications as well as access to data are central drivers for efficiency and responsiveness on any successful mission.

Cross-agency communication is also a prerequisite for successful cooperation between different teams, particularly in the event of a crisis or disaster. As part of the digitization of land-based operations and to ensure fast communication in various areas in the field, the German Armed Forces (Bundeswehr) are procuring deployable communication networks, which are available in two versions, mobile and stationary. The modernization marks an important milestone within the Bundeswehr's digitization strategy.... "

More Biometrics for Identification

Handprints are a much older biometric ID capability, and iris scanning is also well known.  We tested the methods in our innovation centers. 


AI Can Use the Veins on Your Hand Like Fingerprints to Identify You  By New Scientist

February 17, 2021

Researchers at the University of New South Wales have developed a technique that identifies individuals using the unique pattern of veins on the back of their hands.

The researchers used 500 photos of the hands of 35 people to train a neural network to connect the pattern of veins to a particular subject. The model identified the test subjects with an accuracy rate of 99.8%, then identified four new subjects not included in the original dataset with a 96% accuracy rate.

Researcher Syed Shah at the university said vein detection is reliable for people of all ethnicities and is less vulnerable to attacks than existing biometric tests using fingerprints or face recognition. The technique potentially could be adapted for use with smartphones and CCTV cameras, Shah said. 

From New Scientist
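
The UNSW paper's exact architecture isn't given here, but the general approach, training a small convolutional network to map images of the back of the hand to known identities, can be sketched as follows.  The input size, layers, and stand-in random data are illustrative assumptions, not the published model or dataset.

import torch
from torch import nn

# Illustrative sketch: a small CNN that maps grayscale hand images to one of
# N_SUBJECTS identities. Sizes and the training loop are assumptions for
# demonstration only; the published UNSW pipeline is not reproduced here.

N_SUBJECTS = 35
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, N_SUBJECTS),            # one logit per known subject
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in data: 64x64 grayscale crops of the back of the hand, with labels.
images = torch.randn(8, 1, 64, 64)         # a real pipeline would load photos
labels = torch.randint(0, N_SUBJECTS, (8,))

for epoch in range(3):                      # toy training loop
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    print(epoch, loss.item())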