
Saturday, March 28, 2020

Emergence of the AI Risk Manager

In our day this was integrated with other kinds of analysis and management. It may well make sense now to treat it more generally.

The emergence of the professional AI risk manager

By Kenn So  in Venturebeat

When the 1970s and 1980s were colored by banking crises, regulators from around the world banded together to set international standards on how to manage financial risk. Those standards, now known as the Basel standards, define a common framework and taxonomy on how risk should be measured and managed. This led to the rise of professional financial risk managers, which was my first job. The largest professional risk associations, GARP and PRMIA, now have over 250,000 certified members combined, and there are many more professional risk managers out there who haven’t gone through those particular certifications.

We are now beset by data breaches and data privacy scandals, and regulators around the world have responded with data regulations. GDPR is the current role model, but I expect a global group of regulators to expand the rules to cover AI more broadly and set the standard on how to manage it. The UK ICO just released a draft but detailed guide on auditing AI.  https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-consultation-on-the-draft-ai-auditing-framework-guidance-for-organisations/   The EU is developing one as well. Interestingly, their approach is very similar to that of the Basel standards: specific AI risks should be explicitly managed. This will lead to the emergence of professional AI risk managers.  ... "

Rand on Force Planning

A RAND piece on the topic, developed for military scenarios, but could it be used for systems that include coopetition as well? Costs, resources, strategies.

Force Planning in the New Era of Strategic Competition
The RAND Blog by Jacob L. Heim  
March 28, 2020

The U.S. Department of Defense announced (PDF) in 2018 that it was elevating the priority it placed on developing the capabilities necessary to deter Chinese and Russian aggression. That means that the Department needs new analytical frameworks to reassess what force development looks like during an era of peacetime military competition. In particular, analysts need techniques for estimating how much it costs each side to maintain a fixed military balance over time.  ... "

Low-Code Taking Over

 Ultimately this will take over for most coding.

Low Code And No Code: A Looming Trend In Mobile App Development   By Nitin Nimbalkar -

Today's businesses are enriching their operations with new capabilities little by little, across a variety of different products. But the trend is clear: before you know it, the distinction will blur between tools that are powerful enough for professional development teams and tools that are simple enough for citizen developers.

At this point, Low-Code and No-Code will merge into a single market segment serving both enterprise-class and citizen developers at the same time.

Before going further, let's identify the distinct roles of low code and no code in app development.

Difference between low code and no code development
• Low Code: Low code is a development approach that automates time-consuming manual processes without hand coding, using a visual IDE environment, automation that connects to backends, and an application management system.

• No code: Much like low-code platforms, a no-code platform uses a visual application builder that allows users to create applications without coding, usually with drag and drop functions. An example of this is Salesforce CRM, which lets people with coding skills write code, while those who don't have those skills can still build simple apps without coding.

Further, as the need for low code and no code surges with increasing requirements, these trends show how the picture of coding might change.   ... " 

The COVID-19 Virus and What the Recommendations Mean

A very nicely done animated video about the topic, just over eight minutes long, for the whole family to understand. By the German Kurzgesagt animation studio.



Kurzgesagt Animation Studio  (with other language translations)

Friday, March 27, 2020

Watermarking Control Data for Safety from Hackers

Thoughtful and apparently simple idea.

Approach Could Protect Control Systems From Hackers
IEEE Spectrum
Michelle Hampson
March 26, 2020

Researchers at Siemens and Croatia’s University of Zagreb have developed a technique to more easily identify attacks against industrial control systems (ICS), like those used in the electric power grid, or to control traffic. The researchers applied the concept of "watermarking" data during transmission to ICS, in a manner that is broadly applicable without requiring details about the specific ICS. In such a scenario, when data is transmitted in real time over an unencrypted channel, it is accompanied by a specialized algorithm in the form of a recursive watermark (RWM) signal; any disruption to the RWM signal indicates an attack is underway. Said Siemens' Zhen Song, “If attackers change or delay the real-time channel signal a little bit, the algorithm can detect the suspicious event and raise alarms immediately."
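
The article doesn't spell out the RWM construction, but the basic idea is easy to sketch. Below is a toy illustration of my own (made-up recursion, seed, and threshold, not Siemens' actual algorithm): the sender adds a keyed recursive watermark to the measurement stream, and the receiver regenerates the expected watermark and raises an alarm when correlation drops, as it would under replay or injection.

import numpy as np

def rwm_sequence(seed, length, alpha=0.7):
    # Toy recursive watermark: each sample depends on the previous one
    # plus keyed pseudo-random noise (hypothetical scheme, not the paper's).
    rng = np.random.default_rng(seed)
    w = np.zeros(length)
    for t in range(1, length):
        w[t] = alpha * w[t - 1] + rng.normal(0, 0.1)
    return w

def tampering_alarm(residual, seed, threshold=0.5):
    # Receiver regenerates the expected watermark from the shared seed and
    # checks correlation; low correlation suggests replayed or injected data.
    expected = rwm_sequence(seed, len(residual))
    return np.corrcoef(residual, expected)[0, 1] < threshold

seed, n = 42, 500
plant_signal = np.sin(np.linspace(0, 20, n))          # the real measurement
transmitted = plant_signal + rwm_sequence(seed, n)    # watermark added before sending

# Receiver subtracts its estimate of the plant signal to recover the watermark.
print("clean channel alarm:", tampering_alarm(transmitted - plant_signal, seed))   # False
replayed = np.roll(transmitted, 50)                   # attacker replays delayed data
print("replay attack alarm:", tampering_alarm(replayed - plant_signal, seed))      # True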

Linden Lab Gives Up on VR Spin-off

Note our examination of virtual world retail, mentioned here earlier. This piece gives us a look at the current status of virtual world tech and supporting tech like VR. Last I looked, SL was not heavily used.

Why 'Second Life' developer Linden Lab gave up on its VR spin-off
'We decided that Linden Lab wanted to become cash-positive.’
Nick Summers, @nisummers in Engadget
  
Second Life developer Linden Lab has sold Sansar, a platform for virtual 'scenes' that could be explored with a VR headset or traditional PC setup.

Back in 2016, I described the service as "WordPress for social VR." A foundation that allowed creators to import custom assets and quickly build their own shareable world. The company hoped that this mix would attract commercial clients — think museums, car manufacturers and record labels — that want their own VR experience but don't have the technical expertise to deal with game engines and digital distribution.

Similarly, Linden Lab hoped Sansar would attract users who crave diverse worlds — like those promised in movies such as Ready Player One — and, if they have a creative spark, possibly make their own assets that can be shared and sold to the rest of the community.

Sansar's VR compatibility was a big draw. At the time, there were many 3D chat room experiences — including Second Life — but few that allowed large groups to strap on a headset and freely converse. Linden Lab knew that the number of people with high-end VR headsets was small, though. And the team didn't want to dilute the experience so it could run on mobile-powered hardware like Google Cardboard and Samsung's Gear VR.    ... " 

Tesla Autopilot Detecting Traffic Lights

An example of systems including more environmental context for making decisions, ultimately essential.

A video shows a Tesla stopping autonomously at a red light.   ....

By Christine Fisher, @cfisherwrites in Engadget

On Novel Risks in the Enterprise

Something we studied in some detail; the solution was to have sufficient knowledge and resources, internal plus access to external, to be able to address the context of such problems, making them less 'novel'. Not sure how well that works in the current situation.

Novel Risks   by Robert S. Kaplan, Dutch Leonard, and Anette Mikes  in HBSWK

Companies can manage known risks by reducing their likelihood and impact. But such routine risk management often prevents them from recognizing and responding rapidly to novel risks, those not envisioned or seen before. Setting up teams, processes, and capabilities in advance for dealing with unexpected circumstances can protect against their severe consequences.

Author Abstract
All organizations now practice some form of risk management to identify and assess routine risks for compliance—in their operations, supply chains, and strategy, as well as from envisioned external events. These risk management policies, however, fail when employees do not recognize the potential for novel risks to occur during apparently routine operations. Novel risks—arising from circumstances that haven’t been thought of or seen before—make routine risk management ineffective, and, more seriously, delude management into thinking that risks have been mitigated when, in fact, novel risks can escalate to serious if not fatal consequences. The paper discusses why well-known behavioral and organizational biases cause novel risks to go unrecognized and unmitigated. Based on best practices in several organizations, the paper describes the processes that private and public entities can institute to identify and manage novel risks. These risks require organizations to launch adaptive and nimble responses to avoid being trapped in routines that are inadequate or even counterproductive when novel circumstances arise.  .... 

Paper:  http://www.hbs.edu/faculty/pages/download.aspx?name=20-094.pdf 

Attacks on Deep Reinforcement Learning

On the safety of reinforcement learning. Considerable, largely technical piece.

Physically Realistic Attacks on Deep Reinforcement Learning   BAIR Blog, Berkeley   By Adam Gleave

Deep reinforcement learning (RL) has achieved superhuman performance in problems ranging from data center cooling to video games. RL policies may soon be widely deployed, with research underway in autonomous driving, negotiation and automated trading. Many potential applications are safety-critical: automated trading failures caused Knight Capital to lose USD 460M, while faulty autonomous vehicles have resulted in loss of life.

Consequently, it is critical that RL policies are robust: both to naturally occurring distribution shift, and to malicious attacks by adversaries. Unfortunately, we find that RL policies which perform at a high-level in normal situations can harbor serious vulnerabilities which can be exploited by an adversary.... " 
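
Gleave's work is about adversarial policies, where the attacker controls another agent in the environment rather than perturbing pixels. A minimal sketch of that failure mode, entirely invented here (toy victim weights, a random-search attacker, a "posture" the victim merely observes), shows how a bounded, physically realisable change in what the victim sees can sharply degrade its behavior:

import numpy as np

rng = np.random.default_rng(0)

# Toy "victim" policy: a fixed linear softmax over a 4-D observation
# obs = [own_state (2), opponent_posture (2)]; weights invented for illustration.
W = np.array([[ 1.5, -0.2,  0.3, -0.4],    # logits for the "good" action
              [-1.0,  0.4, -0.5,  0.6]])   # logits for the "bad" action

def victim_probs(obs):
    logits = W @ obs
    e = np.exp(logits - logits.max())
    return e / e.sum()

own_state = np.array([1.0, 0.2])
normal_posture = np.array([0.5, -0.1])                 # a typical opponent
p_normal = victim_probs(np.concatenate([own_state, normal_posture]))[0]

# Adversary: random search over its *own* posture (something the victim only
# observes), within a bounded, physically plausible range, to make the victim fail.
worst_posture, worst_p = normal_posture, p_normal
for _ in range(2000):
    posture = normal_posture + rng.uniform(-1.5, 1.5, size=2)
    p_good = victim_probs(np.concatenate([own_state, posture]))[0]
    if p_good < worst_p:
        worst_posture, worst_p = posture, p_good

print(f"P(good action) vs normal opponent:     {p_normal:.2f}")
print(f"P(good action) vs adversarial posture: {worst_p:.2f}")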

Thursday, March 26, 2020

Wal-Mart Joins Hyperledger

Wal-Mart's early looks at this seem to be identification- and tracking-related supply chain applications. Is this an indication of further dives into the space?

Walmart Joins Hyperledger Alongside 7 Other Companies  By Samuel Haig

Walmart has become the latest major conglomerate to join open-source blockchain consortium Hyperledger. Walmart is among eight new members to join the platform. The new members were announced on March 3 at the Hyperledger Global Forum 2020 in Phoenix, Arizona.

Sanjay Radhakrishnan, the vice president of Walmart Global Tech, expressed excitement in joining the platform, stating:

“We've seen strong results through our various deployments of blockchain, and believe staying involved in open source communities will further transform the future of our business."  ... '

Positive Tools for Challenging Times

I see that long-time correspondent Sunnie Southern of ViableSynergy has a newsletter about 'Positive Tools for Challenging Times'.   Check it out:

Positive Tools for Challenging Times
This is the second in a series of emails with resources our team has personally compiled to make life a little easier during this difficult time.  We have added a couple of new categories based on your feedback and suggestions from last week's message. Please take our short survey and check-out new innovations.   ..... 

If you have a resource or an inspiring story that you would like to share, please email us at Hello@ViableSynergy.com and we'll share in a future message.     .... 

It has been a while since connecting with many of you. We'd love to reconnect and exchange updates.  Send us a message at Hello@ViableSynergy.com or via your favorite social media channel (click below) and let's get something on the calendar.   ... '

IBM Tracks Virus 'Weather'

A nicely done Covid-19 tracking feature is part of the IBM 'Weather Channel' app. Shows cases and trends, continually updated for your location. Nicely shown at the bottom of the app with a red button to click, along with other virus news and video. I see the IBM CEO talks about it below. I am following it on my smartphone.

What are the best ways to make this influence behavior?    Some sort of simple behavior-effect prediction?

Later I noted that the warning included: (Some locations do not currently provide all data).  So we have the classic problem of incomplete and even faulty data.

IBM CEO: Covid-19 tracking app can help modify behavior
Ginni Rometty, CEO of IBM, explains what the company is doing to help during the coronavirus crisis. It launched a tool on its Weather Channel app that tracks the outbreak. .... 

Read in CNN Business: https://apple.news/A_YGufs01RRiCOpb8GNl2eg

Neural Networks Search for New Materials

Mentioned previously here.   Novel use of 'creativity' to search among possible solutions.

Neural networks facilitate optimization in the search for new materials
by David L. Chandler, Massachusetts Institute of Technology

Image caption: An iterative, multi-step process for training a neural network, as depicted at top left, leads to an assessment of the tradeoffs between two competing qualities, as depicted in the graph at center. The blue line represents a so-called Pareto front, defining the cases beyond which the materials selection cannot be further improved. This makes it possible to identify specific categories of promising new materials, such as the one depicted by the molecular diagram at right.

When searching through theoretical lists of possible new materials for particular applications, such as batteries or other energy-related devices, there are often millions of potential materials that could be considered, and multiple criteria that need to be met and optimized at once. Now, researchers at MIT have found a way to dramatically streamline the discovery process, using a machine learning system.

As a demonstration, the team arrived at a set of the eight most promising materials, out of nearly 3 million candidates, for an energy storage system called a flow battery. This culling process would have taken 50 years by conventional analytical methods, they say, but they accomplished it in five weeks.

The findings are reported in the journal ACS Central Science, in a paper by MIT professor of chemical engineering Heather Kulik, Jon Paul Janet Ph.D. '19, Sahasrajit Ramesh, and graduate student Chenru Duan. ... " 
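
The core selection step here is multi-objective: keep only candidates on the Pareto front of competing predicted properties. As a rough sketch of that screening idea (random, made-up property scores, not the MIT team's actual models or data):

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical screen: each candidate molecule gets two predicted properties
# we want to maximise simultaneously (say, stability and solubility scores).
n_candidates = 100_000
scores = rng.normal(size=(n_candidates, 2))

def pareto_front(points):
    # Return indices of points not dominated by any other point
    # (larger is better in every column).
    order = np.argsort(-points[:, 0])          # sort by first objective, descending
    front, best_second = [], -np.inf
    for i in order:
        if points[i, 1] > best_second:         # strictly better on the 2nd objective
            front.append(i)
            best_second = points[i, 1]
    return np.array(front)

front = pareto_front(scores)
print(f"{len(front)} Pareto-optimal candidates out of {n_candidates}")
print(scores[front][:5])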

Augmented Analytics

For now not replacing but rather augmenting people.    Unlike the article I would say this has been around for a long time. 

Augmented Analytics Drives Next Wave of AI, Machine Learning, BI

Business intelligence will move beyond dashboards, and AI and machine learning will become easier for less skilled workers as augmented analytics are embedded into platforms.
Enterprises struggling to get their data management and machine learning practices up to speed in an era of more and more data may be in for a nice surprise. After years of bending under the weight of more data, more need for insights, and a shortage of data science talent, augmented analytics is coming to the rescue. What's more, it could also help with putting machine learning into production, something that has been an issue for many enterprises.

Identified as a major trend by Gartner at its Symposium event last year, augmented analytics has been around for several years already, according to Rita Sallam, distinguished research VP and Gartner fellow. But in recent years the concept has expanded to encompass automation of many of the processes that are required by the entire data pipeline. That includes tasks such as profiling, cataloging, storage, data management, generating insights, assisting with data science and machine learning models, and operationalization, according to Sallam, who was set to present a session about augmented analytics at the now postponed Gartner Data and Analytics Summit that has been rescheduled for September ... "

Wednesday, March 25, 2020

Timing is the Thing for Modeling the Risk

Forecasting once again is essential for knowing how to react.

Supply chain outlook: The timing of the slowdown
MIT Professor David Simchi-Levi forecast the mid-March manufacturing pause. Now he looks ahead.

Peter Dizikes | MIT News Office
March 25, 2020

With the Covid-19 virus disrupting the global economy, what is the effect on the international supply chain? In a pair of articles, MIT News examines core supply-chain issues, in terms of affected industries and the timing of unfolding interruptions.

The rapid spread of the Covid-19 virus is already having a huge impact on the global economy, which is rippling around the world via the long supply chains of major industries.

MIT supply chain expert David Simchi-Levi has been watching those ripples closely in 2020, as they have moved from China outward to the U.S. and Europe. His tracking of supply chain problems provides insight into what is happening in the global economy — and what could happen in a variety of scenarios.

“This is a significant challenge,” says Simchi-Levi, who is a professor of engineering systems in the School of Engineering and in the Institute for Data, Systems, and Society within the MIT Stephen A. Schwarzman College of Computing. The global public health crisis, he adds, “is not only affecting the supply chain. There is a significant impact on demand, and as a result, a significant impact on the financial performance on all these businesses.”   ... "

Crowdsource the Problem, Distribute Solutions.

Lots of possibilities here; find ways to find them.

Folding@Home Network More Powerful Than World's Top 7 Supercomputers Combined
Tom's Hardware
by Paul Alcorn

The Folding@Home distributed computing network is currently churning out 470 petaflops of raw computing power, which is more powerful than the collective computing muscle of world's top seven supercomputers, in a push to defeat the coronavirus pandemic. That compares to the 149 petaflops of sustained output generated by the world's fastest supercomputer, the Oak Ridge National Laboratory (ORNL)'s Summit system. ORNL announced two weeks ago that Summit had been enlisted in the fight against COVID-19. Folding@Home said the number of contributors in its fight against the pandemic has risen 1,200%.  ... "

Generating Videos

Video synthesis to supplement real-world data.

IBM’s AI generates new footage from video stills
Kyle Wiggers @KYLE_L_WIGGERS in VentureBeat

A paper coauthored by researchers at IBM describes an AI system — Navsynth — that generates videos seen during training as well as unseen videos. While this in and of itself isn’t novel — it’s an acute area of interest for Alphabet’s DeepMind and others — the researchers say the approach produces superior quality videos compared with existing methods. If the claim holds water, their system could be used to synthesize videos on which other AI systems train, supplementing real-world data sets that are incomplete or marred by corrupted samples.

As the researchers explain, the bulk of work in the video synthesis domain leverages GANs, or two-part neural networks consisting of generators that produce samples and discriminators that attempt to distinguish between the generated samples and real-world samples. They’re highly capable but suffer from a phenomenon called mode collapse, where the generator generates a limited diversity of samples (or even the same sample) regardless of the input. ... " 
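
For readers unfamiliar with the generator/discriminator game the article refers to, here is a deliberately tiny sketch, a toy GAN fitting a 1-D Gaussian in PyTorch. It illustrates only the adversarial setup, not IBM's Navsynth or video synthesis; mode collapse would show up here as the generator covering only part of a multi-modal target.

import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples from N(3, 1).
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0                # real samples ~ N(3, 1)
    fake = G(torch.randn(64, 8))

    # Discriminator: label real as 1, generated as 0
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator into outputting 1 on fakes
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())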

Personality Key for Open Source Contribution?

Depends on whether we can really detect useful personality measures of this type, and whether they will be stable under differing contexts and goals.

Personality Key in Whether Developers Can Contribute to Open Source Projects
Waterloo News

The results of a study by researchers at the University of Waterloo in Canada suggest a software developer's personality could affect their ability to contribute to open source projects. Although social factors are the primary determinant of acceptance or rejection of online contributors' work, Waterloo's Meiyappan Nagappan said personality also is important to consider because it governs how contributors' behaviors manifest in their interactions with others. The researchers assessed data from the GitHub open source platform to analyze the personality traits of 16,935 active developers from 1,860 projects, and extracted the five leading developer personalities—openness, conscientiousness, extraversion, agreeableness, and neuroticism—with the IBM Watson Personality Insights service. Waterloo's Alex Yun said the analysis suggested that biases may be involved in the acceptance or rejection of contributions to work on open source platforms. Said Yun, "Managers are more likely to accept a contribution from someone they know, or someone more agreeable than others, even though the technical contribution might be similar."

Defending Retail Against the Coronavirus

Useful approaches outlined.

Defending Retail against the Coronavirus
Companies can brace themselves for lasting changes to the sector even as they grapple with short-term disruption..... 

By Marc-André Kamel and Joëlle de Montgolfier in Bain ...

Tires Get Embedded Tech

As the article suggests, a bit unexpected, but it's where the system and its uses meet the real world, and it's useful to know what is being sensed, in real time and over time.

The Humble Tire Gets Kitted Out with Technology
The Wall Street Journal
Sara Castellanos
March 19, 2020

Tire manufacturers are designing intelligent tires to improve the braking of self-driving vehicles. Goodyear Tire & Rubber is developing tires equipped with a sensor and proprietary machine learning algorithms, in the hope they will help autonomous vehicles brake at a shorter distance and communicate with self-driving systems. Goodyear currently sells tires that measure temperature and pressure, but the new intelligent tire incorporates a sensor that monitors wear, inflation, and road surface conditions; data from the sensor is tracked continuously and analyzed in real time with machine learning algorithms. Said Goodyear CEO Rich Kramer, "With the onset of autonomous vehicles, the role of the tire in the performance and safety of the vehicle would increase if we can make that tire intelligent."

Tuesday, March 24, 2020

China Launches Blockchain Network

Decreasing costs and increasing the ease of blockchain application building.   Seems a considerable effort. 

China to Launch National Blockchain Network in 100 Cities
IEEE Spectrum
Nick Stockton

An alliance of Chinese government groups, banks, and technology firms plans to launch the Blockchain-based Service Network (BSN), one of the first blockchain networks constructed and maintained by a central government, in April. Advocates say the BSN will slash the costs of blockchain-based business by 80%, with nodes hopefully installed in 100 Chinese cities by launch time. The network will allow programmers to develop blockchain applications more easily, but apps running on the BSN will have closed or "permissioned" membership by default. North Carolina State University's Hong Wan suggests China's government aims to make the BSN the core component of a digital currency and payment system that competes with other services. The BSN Alliance hopes the platform will become the global standard for blockchain operations, but the Chinese government's retention of the BSN's root key means it can monitor all transactions made via the platform. ... "

Running Simulations to Train Analyses

We did versions of the same thing to get data that would create more detailed and thus more useful models of industrial scenarios. Especially useful if it is hard to get enough running, sensed examples. One value of all simulations is to create training examples that are too difficult or risky to create in the real world.

System trains driverless cars in simulation before they hit the road
Using a photorealistic simulation engine, vehicles learn to drive in the real world and recover from near-crash scenarios.

Rob Matheson | MIT News Office

A simulation system invented at MIT to train driverless cars creates a photorealistic world with infinite steering possibilities, helping the cars learn to navigate a host of worse-case scenarios before cruising down real streets.  

Control systems, or “controllers,” for autonomous vehicles largely rely on real-world datasets of driving trajectories from human drivers. From these data, they learn how to emulate safe steering controls in a variety of situations. But real-world data from hazardous “edge cases,” such as nearly crashing or being forced off the road or into other lanes, are — fortunately — rare.

Some computer programs, called “simulation engines,” aim to imitate these situations by rendering detailed virtual roads to help train the controllers to recover. But the learned control from simulation has never been shown to transfer to reality on a full-scale vehicle.

The MIT researchers tackle the problem with their photorealistic simulator, called Virtual Image Synthesis and Transformation for Autonomy (VISTA). It uses only a small dataset, captured by humans driving on a road, to synthesize a practically infinite number of new viewpoints from trajectories that the vehicle could take in the real world. The controller is rewarded for the distance it travels without crashing, so it must learn by itself how to reach a destination safely. In doing so, the vehicle learns to safely navigate any situation it encounters, including regaining control after swerving between lanes or recovering from near-crashes.  

In tests, a controller trained within the VISTA simulator was able to be safely deployed onto a full-scale driverless car and to navigate through previously unseen streets. In positioning the car at off-road orientations that mimicked various near-crash situations, the controller was also able to successfully recover the car back into a safe driving trajectory within a few seconds. A paper describing the system has been published in IEEE Robotics and Automation Letters and will be presented at the upcoming ICRA conference in May. ... "
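
The reward structure described (distance travelled before a crash, with recovery learned rather than hand-coded) is simple enough to mock up. Below is a toy lane-keeping environment and a crude random-search policy improvement loop I sketched to illustrate just that reward signal; it has nothing to do with VISTA's photorealistic rendering or the actual MIT controller.

import numpy as np

rng = np.random.default_rng(0)

def rollout(policy_w, noise_scale=0.05, max_steps=500):
    # Toy lane-keeping episode: state = (lateral offset, heading).
    # The "reward" is simply the number of steps before leaving the lane.
    offset, heading = 0.0, 0.0
    for t in range(max_steps):
        steer = float(np.clip(policy_w @ np.array([offset, heading]), -0.2, 0.2))
        heading += steer + rng.normal(0, noise_scale)   # disturbances push the car around
        offset += heading
        if abs(offset) > 2.0:                           # "crash": left the lane
            return t
    return max_steps

# Crude policy search: keep perturbations of a linear controller that survive longer.
w = np.zeros(2)
best = np.mean([rollout(w) for _ in range(10)])
for it in range(200):
    cand = w + rng.normal(0, 0.1, size=2)
    score = np.mean([rollout(cand) for _ in range(10)])
    if score > best:
        w, best = cand, score
print("learned gains:", w, "average steps before crash:", best)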

Scaling AI Training

Scaling training costs.  Technical.

Google open-sources framework that reduces AI training costs by up to 80%   By Kyle Wiggers

Google researchers recently published a paper describing a framework — SEED RL — that scales AI model training to thousands of machines. They say that it could facilitate training at millions of frames per second on a machine while reducing costs by up to 80%, potentially leveling the playing field for startups that couldn’t previously compete with large AI labs.

Training sophisticated machine learning models in the cloud remains prohibitively expensive. According to a recent Synced report, the University of Washington’s Grover, which is tailored for both the generation and detection of fake news, cost $25,000 to train over the course of two weeks. OpenAI racked up $256 per hour to train its GPT-2 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks. ... " 
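
SEED RL's real system involves TPU learners, gRPC streams, and V-trace off-policy correction, none of which is reproduced here. The core architectural idea it describes, moving inference off the actors so one learner runs a single batched forward pass for thousands of lightweight environment workers, can be sketched in a few lines (toy policy, made-up dimensions):

import numpy as np

rng = np.random.default_rng(0)

# Stand-in "policy network": one weight matrix, kept on the learner side.
W = rng.normal(size=(4, 2))                     # obs_dim=4 -> n_actions=2

def batched_inference(observations):
    # Central inference, SEED-style: one vectorised forward pass for all
    # actors, instead of each actor holding and running its own model copy.
    logits = observations @ W
    return logits.argmax(axis=1)

# Simulate 1024 lightweight actors that only step environments and ship
# observations to the learner; they receive actions back.
n_actors = 1024
observations = rng.normal(size=(n_actors, 4))   # one observation per actor per step
actions = batched_inference(observations)
print(actions[:10], actions.shape)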

Teaching AI to be Better at Second-Guessing

Intent is an element of context that humans frequently use in adapting to conversation.

How humans are teaching AI to become better at second-guessing
by Lachlan Gilbert, University of New South Wales

One of the holy grails in the development of artificial intelligence (AI) is giving machines the ability to predict intent when interacting with humans.

We humans do it all the time and without even being aware of it: we observe, we listen, we use our past experience to reason about what someone is doing and why they are doing it, to come up with a prediction about what they will do next.

At the moment, AI may do a plausible job at detecting the intent of another person (in other words, after the fact). Or it may even have a list of predefined, possible responses that a human will respond with in a given situation. But when an AI system or machine only has a few clues or partial observations to go on, its responses can sometimes be a little… robotic.  .... "


Monday, March 23, 2020

Crowdsourcing Creativity

Spent much time on the broad idea. Like the direction of the process.

Crowdsourcing Plot Lines to Help the Creative Process

Penn State News
Jessica Hallman
March 13, 2020

Researchers at the Pennsylvania State University (Penn State) College of Information Sciences and Technology have launched a crowdsourced system that provides writers with story ideas from the online crowd to facilitate the creative process. The Heteroglossia system lets authors share sections of their story drafts using a text editor, and online workers are tasked with brainstorming plot ideas from the perspective of fictional characters they are assigned. Penn State's Ting-Hao (Kenneth) Huang said human workers currently power the system, but artificial intelligence could be incorporated into the platform in the future. "I believe if we learn how to help creative writing or creative processes in general, we can learn more about how to build systems that can be creative." 

IBM Partners for Coronavirus Research

Interesting directions for collaborating on technology.

IBM Partners with White House to Direct Supercomputing Power for Coronavirus Research
CNN
Clare Duffy
March 22, 2020

IBM will help coordinate an initiative to supply more than 330 petaflops of computing power to scientists researching COVID-19, as part of the COVID-19 High-Performance Computing Consortium, in partnership with the White House Office of Science and Technology Policy and the U.S. Department of Energy. The initiative will harness 16 supercomputing systems from IBM, national laboratories, several universities, Amazon, Google, Microsoft, and others. Computing power will be provided via remote access to researchers whose projects are approved by the consortium's leadership board. The consortium also will connect researchers with top computational scientists. Said IBM Research director Dario Gil, "We're bringing together expertise ... even across competitors, to work on this. We think it's important to bring a sense of community and to bring science and capability against this goal."

AI and Big Data

A frequent question I have received is: what is the difference between Big Data and AI? My answer is that Big Data is a means of using much more of the available data to perform analytical methods, while AI uses a particular set of machine learning methods to find complex patterns in data. In general AI methods are still less transparent but more powerful in some domains. They can be used in conjunction, and we did use them that way. Sometimes there is little difference; both depend on large, sometimes very complex data.
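
A toy contrast, for what it's worth (synthetic data; pandas and scikit-learn assumed available): the "big data" style question is a descriptive aggregate over many records, while the "AI" style question fits a model that finds a pattern and predicts on records it hasn't seen.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 100_000
df = pd.DataFrame({
    "region": rng.choice(["east", "west"], n),
    "spend": rng.gamma(2.0, 50.0, n),
    "visits": rng.poisson(3, n),
})
# Synthetic churn label that depends on spend and visits
df["churned"] = (rng.random(n) < 1 / (1 + np.exp(0.02 * df["spend"] + 0.5 * df["visits"] - 3))).astype(int)

# "Big data" style: descriptive analytics over many records
print(df.groupby("region")[["spend", "churned"]].mean())

# "AI / ML" style: learn a predictive pattern from the same records
X, y = df[["spend", "visits"]], df["churned"]
model = RandomForestClassifier(n_estimators=50).fit(X[:80_000], y[:80_000])
print("holdout accuracy:", model.score(X[80_000:], y[80_000:]))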

Evolving Relationship Between Artificial Intelligence and Big Data  in ReadWrite   By Nitin Garg / 11 Jan 2020 / AI / Data and Security / Tech

Find the evolving relationship between big data and artificial intelligence. The growing popularity of these technologies offers engaging audience experience. It encourages newcomers to come up with an outstanding plan.

AI and Big Data help you transform your idea into substance. They help you make full use of visuals, graphs, and multimedia to give your targeted audience a great experience. According to Markets And Markets, the worldwide market for AI in accounting is expected to grow from $666 million in 2019 to $4,791 million by 2024.

The critical component of delivering an outstanding pitch is taking a step further with an incredible plan of assuring success. Big data and Artificial intelligence help you contribute to multiple industries bringing an effective plan. It can directly speak to investors and your targeted audience, covering essential aspects and representing your idea in a nutshell.

According to Techjury, The big data analytics market is set to reach $103 billion by 2023, and in 2019, the big data market is expected to grow by 20%.  .... "

Linking Gamification and AI

Steve Omohundro talks about some favorite topics of mine in a recent presentation; slides at the link. Integration of gamification is a favorite. We used gamification as a means of alternative 'expert crowdsourcing' to explore alternative solutions to wicked problems, in the supply chain space. Makes mention of Bytedance, which I had not heard of; will take a look.

Talk: The AI Platform Business Revolution, Matchmaking, Empathetic Technology and AI Gamification.

On October 15, Steve Omohundro spoke at FXPAL (FX Palo Alto Laboratory) about “The AI Platform Business Revolution, Matchmaking, Empathetic Technology, and AI Gamification”:

Abstract
Popular media is full of stories about self-driving cars, video deepfakes, and robot citizens. But this kind of popular artificial intelligence is having very little business impact. The actual impact of AI on business is in automating business processes and in creating the “AI Platform Business Revolution”. Platform companies create value by facilitating exchanges between two or more groups. AI is central to these businesses for matchmaking between producers and consumers, organizing massive data flows, eliminating malicious content, providing empathetic personalization, and generating engagement through gamification. The platform structure creates moats which generate outsized sustainable profits. This is why platform businesses are now dominating the world economy. The top five companies by market cap, half of the unicorn startups, and most of the biggest IPOs and acquisitions are platforms. For example, the platform startup Bytedance is now worth $75 billion based on three simple AI technologies.

In this talk we survey the current state of AI and show how it will generate massive business value in coming years. A recent McKinsey study estimates that AI will likely create over 70 trillion dollars of value by 2030. Every business must carefully choose its AI strategy now in order to thrive over coming decades. We discuss the limitations of today’s deep learning based systems and the “Software 2.0” infrastructure which has arisen to support it. We discuss the likely next steps in natural language, machine vision, machine learning, and robotic systems. We argue that the biggest impact will be created by systems which serve to engage, connect, and help individuals. There is an enormous opportunity to use this technology to create both social and business value.  .... '

Sunday, March 22, 2020

Technology and the Future of Marketing

An interesting Podcast re the future of marketing and its implications.

Why Omni-channel Personalization Is the Future of Marketing

Podcast: 
Netcore's Rajesh Jain talks about how technology is shaping and transforming the future of marketing.

All customers want a unique, personalized experience, irrespective of how they interact with a brand – be it in-store, on an app, via a website, or wherever. With the prevalence of mobile and connected devices which give marketers access to vast customer data, and technologies such as analytics and machine learning, it is increasingly possible for companies to offer omni-channel personalization. But marketers also need to focus on identifying their “best customers,” instead of spreading their resources thin, says Rajesh Jain, founder and managing director of Netcore, a global marketing technology firm.

 Jain defines “best customers” as those who “spend more, stay longer with you and spread your message more.” These customers, says Jain, have the greatest lifetime value for a company. In a conversation with Knowledge@Wharton, Jain talks about how technology is shaping and transforming the future of marketing.

 Below is an edited version of the interview. .... " 

AI at the Edge

Some useful thoughts about the use of AI in devices, enabled by faster ubiquitous connections like 5G.

AI at the Edge Enabling a New Generation of Apps, Smart Devices   By AI Trends Staff

Enabling an edge-computing architecture with AI is seen as a way forward for advances in strategic applications. And at the advent of 5G network speeds, AI is seen as essential to the endpoints.

A new network paradigm based on virtualization enabled by Software Defined Networking (SDN) and Network Function Virtualization (NFV), presents an opportunity to push AI processing out to the edge in a distributed architecture, suggests a recent report from Strategy Analytics.

Three types of edge computing are foreseen: device as the edge, in which an IoT device generates and consumes data and has embedded AI that can send and receive data to and from additional AI systems; enterprise premise network edge, that can support AI processing on a piece of hardware in a vehicle, drone or machinery, and can collect and process data from smart devices; and operator network edge, with an AI stack/platform to host applications and services, which may be located at a micro data center in a radio tower, edge router, base station or internet gateway. ... " 

Fractal Uncertainty and Quantum

Intriguing but technical view:

Finding solutions amidst fractal uncertainty and quantum chaos
Math professor Semyon Dyatlov explores the relationship between classical and quantum physics.

Jonathan Mingle | MIT News correspondent

Semyon Dyatlov calls himself a “mathematical physicist.”

He’s an associate editor of the journal Probability and Mathematical Physics. His PhD dissertation advanced understanding of wave decay in black hole spacetimes. And much of his research focuses on developing new ways to understand the correspondence between classical physics (which describes light as rays that travel in straight lines and bounce off surfaces) and quantum systems (wherein light has wave-particle duality).

So it may come as a surprise that, as a student growing up in Siberia, he didn’t study physics in depth.

“Much of my work is deeply related to physics, even though I didn’t receive that much physics education as a student,” he says. “It took until I started working as a mathematician to slowly start understanding things like general relativity and modern particle physics.”... '