
Saturday, October 19, 2019

Reflections and Projections about IBM Watson

A nicely done internal look at Watson.  As I have mentioned here before, Watson may not have lived up to some of the early expectations, but it remains a serious contender for serious, practical AI applications, in particular because IBM already has capabilities like BPM that can be used to model and link to real business processes.  Early on I talked to them about that idea; they seemed enthusiastic, but I have not seen any development since.  They also have external collaborations with RPA vendors, which is a good thing.

IBM Watson: Reflections and Projections  By Rob Thomas, IBM

AI has gone through many cycles since we first coined the term “machine learning” in 1959. Our latest resurgence began in 2011 when we put Watson on national television to play Jeopardy! against humans. This became a cornerstone event, demonstrating that we had something unique. And we saw early success, putting Watson to work on projects with clients. This created even more excitement. That excitement led to more opportunity. At this stage, we have a large product organization, separate dedicated research organization, and an entire health organization all leveraging and building on this technology.

So, what is Watson? This is the question I’ve been asked the most since IBM combined its Data and AI software units earlier this year. ......

By Rob Thomas
Author of ‘The End of Tech Companies’ & ‘Big Data Revolution’  amzn.to/2uVu84R. Leading Data and AI @IBM. Robdthomas@gmail

Handheld Printing

Fascinating device for some kinds of graphics applications, on Indiegogo.  With the usual warning that such offers are high risk, both for initial shipment and for longer-term availability and product quality.

PrinCube is the ultimate handheld, portable printer. It’s ultra-lightweight, fits in the palm of your hand and lets you create fast, vivid, color printing anywhere. It works over Wi-Fi with your phone to easily upload any text, image, or design and instantly print onto virtually any object or surface at the touch of a button. Printing has never been more mobile and more convenient. ... "

Matching for Identification of Antibiotics

Refreshing here is that there is no claim of AI.  We need all kinds of analytics to enhance our skills.

Computational 'Match Game' Identifies Potential Antibiotics
Carnegie Mellon News    By Byron Spice

Carnegie Mellon University (CMU) computational biologists collaborated with researchers at seven other institutions to develop a software tool that identifies bioactive molecules and the microbial genes that generate them, for assessment as potential antibiotics. The team demonstrated that MetaMiner can detect such molecules at least 100 times faster than was possible with previous techniques. MetaMiner applies genome mining methodology, analyzing gene clusters to deduce molecules the genes produce. CMU's Hosein Mohimani and Liu Cao bypassed genome mining's high susceptibility to error by building an error-tolerant search engine that finds matches between databases of microbial DNA and databases that classify molecular products according to mass spectra. With MetaMiner, the researchers identified 31 known and seven previously unknown ribosomally synthesized and post-translationally modified peptides in about 14 days; Mohimani said obtaining those results manually likely would have taken decades. ... "
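MetaMiner's actual search engine is far more sophisticated, but the core idea it describes, error-tolerant matching between masses predicted from gene clusters and observed spectrum peaks, can be illustrated with a toy tolerance search (all masses below are made up for illustration):

```python
# Toy illustration (not the MetaMiner algorithm): error-tolerant matching of
# predicted molecule masses against observed mass-spectrum peaks.

def match_with_tolerance(predicted_masses, observed_peaks, ppm_tol=10.0):
    """Return (predicted, observed) pairs whose masses agree within ppm_tol."""
    matches = []
    for pred in predicted_masses:
        # Allowed absolute error scales with the mass (parts per million),
        # which is how mass-spectrometry tolerances are usually stated.
        allowed = pred * ppm_tol / 1e6
        for obs in observed_peaks:
            if abs(pred - obs) <= allowed:
                matches.append((pred, obs))
    return matches

predicted = [1042.51, 873.22, 1560.07]   # hypothetical peptide masses from gene clusters
observed = [1042.52, 999.99, 1560.05]    # hypothetical spectrum peaks
print(match_with_tolerance(predicted, observed, ppm_tol=20.0))
```

A real pipeline would index the databases so each lookup is sublinear rather than a nested loop, which is part of where the claimed 100x speedup comes from.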

Jeff Bezos's Master Plan

Some non-technology views of how unique Amazon is, and where it is going.

Jeff Bezos’s Master Plan in The Atlantic By Franklin Foer
What the Amazon founder and CEO wants for his empire and himself, and what that means for the rest of us.

Where in the pantheon of American commercial titans does Jeffrey Bezos belong? Andrew Carnegie’s hearths forged the steel that became the skeleton of the railroad and the city. John D. Rockefeller refined 90 percent of American oil, which supplied the pre-electric nation with light. Bill Gates created a program that was considered a prerequisite for turning on a computer.

At 55, Bezos has never dominated a major market as thoroughly as any of these forebears, and while he is presently the richest man on the planet, he has less wealth than Gates did at his zenith. Yet Rockefeller largely contented himself with oil wells, pump stations, and railcars; Gates’s fortune depended on an operating system. The scope of the empire the founder and CEO of Amazon has built is wider. Indeed, it is without precedent in the long history of American capitalism.  ... " 

Echo Auto: Customizing Mobile Voice Experiences

My tests for client discussions have been ongoing.  In my last post on this I mentioned that Alexa currently behaves very differently at home than on Echo Auto in the car, which makes the experience bumpy.  Amazon should consider making it less annoyingly inconsistent. 

Customizing Voice Experiences for Echo Auto and Other In-Vehicle Devices  By Mark Tucker in Voicebot.ai

After a year-long, invite-only release of Amazon’s Echo Auto, the device is available for all to purchase. Amazon received over one million pre-orders for Echo Auto and Nationwide Insurance has announced it will distribute the device to another million drivers that are its customers. In addition, Voicebot data show there are as many as 50% more monthly active users of voice assistants while driving as through smart speakers. Now is the time for Alexa developers to start creating in-vehicle voice experiences. However, there are some new values you should be aware of that can be used to customize Alexa Skills for the car.

Alexa Devices for the Automobile Differ in Capabilities

While the Echo Auto is Amazon’s flagship product specifically made for automobiles, some users are bringing Alexa to the car using the Echo Input or Dot, their mobile phone with the Alexa App, or one of the third-party products from makers such as Garmin and Anker.

For this discussion, we are not considering the hardware aspects of these devices that make them special-built for automobiles. These aspects would include such things as the number of mics or whether the device connects to your car’s audio system via AUX, Bluetooth, USB, or FM Transmitter.

The following table shows what values are available on various devices for developers of custom Alexa Skills as they create in-vehicle experiences. Echo Auto today offers two values that can be particularly useful for automotive use cases but are not available through third-party devices.  .... " 
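As a sketch of what such customization looks like in practice, here is a hypothetical skill handler that branches on a device-reported value.  The `in_vehicle` field and the payload shape are invented for illustration; they are not the documented Alexa request schema, which you should consult for the real field names:

```python
# Hypothetical sketch: branch a skill's response on device-reported values.
# The "context"/"device"/"in_vehicle" keys are assumptions for illustration,
# not the actual Alexa Skills Kit request schema.

def greeting_for_request(request):
    device = request.get("context", {}).get("device", {})
    if device.get("in_vehicle"):
        # Driving context: keep responses short and glanceable.
        return "Keep your eyes on the road. Short answer coming up."
    return "Here is the full answer with details."

auto_request = {"context": {"device": {"in_vehicle": True}}}
home_request = {"context": {"device": {}}}
print(greeting_for_request(auto_request))
print(greeting_for_request(home_request))
```

The design point stands regardless of the exact schema: one skill, with the voice experience adapted to what the device reports about its environment.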

Attention for Advanced Forecasting and Classification

An interesting and quite technical view of forecasting and classification that is worth a look.  Accurate and timely forecasting is, of course, important for most businesses.  A considerable piece; below is an intro, with much more at the link.  I have never seen it done accurately enough with these kinds of methods.

Attention for time series forecasting and classification
Harnessing the most recent advances in NLP for time series forecasting and classification  By Isaac Godfried

Transformers (specifically self-attention) have powered significant recent progress in NLP. They have enabled models like BERT, GPT-2, and XLNet to form powerful language models that can be used to generate text, translate text, answer questions, classify documents, summarize text, and much more. With their recent success in NLP one would expect widespread adaptation to problems like time series forecasting and classification. After all, both involve processing sequential data. However, to this point research on their adaptation to time series problems has remained limited. Moreover, while some results are promising, others remain more mixed. In this article, I will review current literature on applying transformers as well as attention more broadly to time series problems, discuss the current barriers/limitations, and brainstorm possible solutions to (hopefully) enable these models to achieve the same level success as in NLP. This article will assume that you have a basic understanding of soft-attention, self-attention, and transformer architecture. If you don’t please read one of the linked articles. You can also watch my video from the PyData Orono presentation night.
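For readers who want the concrete refresher the author assumes, here is a minimal single-head scaled dot-product self-attention sketch in NumPy, applied to a toy multivariate series (the dimensions are chosen arbitrarily for illustration):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over a sequence x.

    x:          (seq_len, d_model) input series embeddings
    wq, wk, wv: (d_model, d_k) learned projection matrices
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (seq_len, seq_len)
    # Row-wise softmax: how much each time step attends to every other step.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                               # (seq_len, d_k)

rng = np.random.default_rng(0)
x = rng.normal(size=(12, 8))                  # e.g. 12 time steps, 8 features each
wq, wk, wv = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)   # (12, 4)
```

This is exactly the mechanism the article discusses porting from NLP: every time step can draw on every other step directly, rather than through a recurrent bottleneck.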

Attention for time series data: Review

The need to accurately forecast and classify time series data spans across just about every industry and long predates machine learning. For instance, in hospitals you may want to triage patients with the highest mortality early-on and forecast patient length of stay; in retail you may want to predict demand and forecast sales; utility companies want to forecast power usage, etc. .... " 

Friday, October 18, 2019

California Launches Earthquake Early Warning System

What is particularly good here is that the means to test this will be obvious.  I get USGS reports on earthquakes via an app, filtered to magnitude 4.5 and higher, minutes after they occur, so comparison to the warnings should be straightforward.  I look forward to the ongoing analysis.

California Launches Earthquake Early Warning System
The state calls it the best in the world.    In Reuters, by Dan Whitcomb 

California has launched the first statewide earthquake warning system in the U.S., aimed at detecting seismic waves and notifying residents via a mobile phone app up to 20 seconds before tremors hit. The California Earthquake Early Warning System taps hundreds of sensors to detect P-waves, which travel through the interior of the Earth, and arrive prior to surface waves and at a higher frequency during a temblor. University of California, Berkeley seismologists and engineers designed the MyShake phone app, which will initially warn users of local quakes with a magnitude of 4.5 or higher. Alerts are based on the ShakeAlert computer program operated by the U.S. Geological Survey, which analyzes data from seismic networks across the state, estimates the preliminary magnitudes of tremors, and calculates which areas will feel them.  ... "
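The physics behind the warning window is simple to sketch: the gap between P-wave and S-wave arrival grows with distance from the epicenter.  A back-of-envelope calculation, using rough crustal wave speeds (my assumption, not ShakeAlert's actual model):

```python
# Back-of-envelope sketch of earthquake early warning: P-waves arrive first
# and trigger detection; the slower, more damaging S waves arrive later.
# Wave speeds below are rough crustal averages, used only for illustration.

P_WAVE_KM_S = 6.0   # P-waves: fast, usually minor shaking
S_WAVE_KM_S = 3.5   # S-waves: slower, carry most of the damaging energy

def warning_seconds(epicenter_km):
    """Seconds between P-wave and S-wave arrival at a given distance."""
    return epicenter_km / S_WAVE_KM_S - epicenter_km / P_WAVE_KM_S

for km in (30, 60, 120):
    print(f"{km:>4} km from epicenter: ~{warning_seconds(km):.1f} s of warning")
```

This ignores detection and alert-delivery latency, which is why the article's "up to 20 seconds" applies mainly to quakes felt at some distance; near the epicenter the window shrinks toward zero.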

Here is more about the early warning mobile App. I have it installed.

AI Patent Portfolio Strategies of Amazon, Qualcomm, Tencent

Points to some marketing reports available for a fee at the link.

Artificial Intelligence (AI) Patent Portfolio Strategies of Amazon & Qualcomm, and AI Investments of Tencent | 2019 Overview & Examination Report

October 18, 2019 05:21 ET | Source: Research and Markets in GlobalNewsWire

Dublin, Oct. 18, 2019 (GLOBE NEWSWIRE) -- The "Artificial Intelligence Patent Portfolio Strategies of Amazon and Qualcomm, and AI Investments of Tencent" report has been added to ResearchAndMarkets.com's offering.

In the IoT (Internet of Things) age, AI (Artificial Intelligence) is considered a major accelerator to smarten up electronics products, leading to the emergence of relevant applications such as smart cars, smart elevators, smart home appliances, smart pets, smart robots, and smartphones.

Following the widespread adoption of AI technology in emerging applications, Amazon, which started as an online bookshop, has been dedicated to M&A activities and AI patent portfolio development over the years. Qualcomm has also come up with its own AI patent portfolio strategies following a noticeable decline in its revenue and patent royalties from the mobile phone market. Meanwhile, the Chinese AI industry is led by Tencent, which has invested over US$10 billion in AI in the past two years. Tencent's AI ambitions over the next few years are defined by a strategy called 'Make AI Everywhere'.

This report provides an overview of AI patent portfolios, examines the patent portfolio development strategies of Amazon and Qualcomm, and looks into the AI investment plans of Tencent.

Scope

AI technology overview, including trend analysis with breakdowns by country, sector, and field, key and correlative technology, and R&D readiness analysis
Patent portfolio analysis of several major AI assignees, including IBM, Microsoft, Google, Siemens, and Rockwell Automation
Detailed AI patent portfolio analysis of Amazon with patent mining techniques
Amazon's AI patent counts with a detailed breakdown by field, core technology, and intelligent application
Analysis of Amazon's investment projects and M&A activities
Detailed AI patent portfolio analysis of Qualcomm with patent mining techniques
Qualcomm's AI patent counts with a detailed breakdown by field, core technology, and intelligent application
Analysis of Qualcomm's investment projects and M&A activities  .... " 

Retailwire: Why Brands Are Moving Direct-To-Consumer

From yesterday's webinar:

If you would like to see a recording of the webinar or refer it to others, you'll find it in our Resources section:  Recording and PDF Slides (Requires registration)

For your copy of Oracle NetSuite's eBook, "Mind the Gap: What Different Generations Want From Retailers",     https://cc.readytalk.com/cc/download/schedule/9q34mh7c749a

If you have questions about any of the material, please contact:
For Oracle NetSuite:  Matthew Rhodus  matthew.rhodus@oracle.com 

For RetailWire:  Al McClain, CEO, Co-founder   almcclain@retailwire.com   www.retailwire.com    (561)  627-4974

Meeting Description:

Upstart digital native brands that market and sell direct-to-consumer (DTC) have turned the traditional consumer products distribution model inside out.

CPG brands are learning that they too can bypass traditional wholesale and retail distribution, thereby finding ways to forge tighter, more meaningful relationships with customers. 

Join us for this exclusive RetailWire webinar to learn how the increasing influence of social channels, mobile tech, rapid delivery logistics and (needless to say) Amazon are compelling brands to go direct-to-consumer. You’ll see examples of brands that use DTC to give customers a level of personalization that legacy retailers can’t, as well as the advantages brands gain in inventory control and marketplace agility. 

And stay tuned following the presentation for a lively discussion featuring two of our BrainTrust panelists, Ken Cassar and Carol Spieckerman, who will expand on the possibilities of direct-to-consumer marketing    … " 

November INFORMS Analytics E-Mag

The latest Analytics magazine from INFORMS.   Below are some quick summaries of key articles, plus lots more at the link.  Always informative.

Beyond Cross Industry Standard Process for Data Mining

Data people come in all shapes and sizes; the modern industry analytics professional may have had his or her foundational training as a research physicist, might have cut his or her teeth optimizing ad revenue and “click” count, or may have a traditional statistics or actuarial background [1]. Surprisingly, the recent explosion in availability of data tools and expertise has not yet been accompanied by the development of firmer, contemporary frameworks for “doing” data science. Should we wait for a random search to evolve an optimal algorithm for conducting the modern data science workstream? Or are older frameworks such as CRISP-DM (Cross Industry Standard Process for Data Mining) [2] optimal and sufficiently useful models? .... 

LATEST NEWS:

Aviation Safety: ‘Everything looks a little different’ …
In the wake of two Boeing 737 Max crashes that claimed 346 lives, the …

New research could significantly reduce …
Delays and disruptions in airline operations annually result in billions of …

Survey: Scaling technology innovation can double …
A vast new research survey on future systems from Accenture sheds light on …

CEOs say artificial intelligence tops disruptive technology
Artificial Intelligence (AI) has emerged as the most frequently mentioned theme …

Recounting 2019 Analytics Society Accomplishments
On the evening of Oct. 21 of the INFORMS Annual Meeting in Seattle, the Analytics …

2020 Syngenta Crop Challenge in Analytics
The submission period for the 2020 Syngenta Crop Challenge in Analytics is …

Robotic Grip Improved

Grip is a big thing in robotics.  This will be a big deal for classic smart home applications such as eldercare, and for factory manufacturing applications.

Giving robots a faster grasp
An algorithm speeds up the planning process robots use to adjust their grip on objects, for picking and sorting, or tool use.

Jennifer Chu | MIT News Office

If you’re at a desk with a pen or pencil handy, try this move: Grab the pen by one end with your thumb and index finger, and push the other end against the desk. Slide your fingers down the pen, then flip it upside down, without letting it drop. Not too hard, right?

But for a robot — say, one that’s sorting through a bin of objects and attempting to get a good grasp on one of them — this is a computationally taxing maneuver. Before even attempting the move it must calculate a litany of properties and probabilities, such as the friction and geometry of the table, the pen, and its two fingers, and how various combinations of these properties interact mechanically, based on fundamental laws of physics.

Now MIT engineers have found a way to significantly speed up the planning process required for a robot to adjust its grasp on an object by pushing that object against a stationary surface. Whereas traditional algorithms would require tens of minutes for planning out a sequence of motions, the new team’s approach shaves this preplanning process down to less than a second.

Alberto Rodriguez, associate professor of mechanical engineering at MIT, says the speedier planning process will enable robots, particularly in industrial settings, to quickly figure out how to push against, slide along, or otherwise use features in their environments to reposition objects in their grasp. Such nimble manipulation is useful for any tasks that involve picking and sorting, and even intricate tool use.

“This is a way to extend the dexterity of even simple robotic grippers, because at the end of the day, the environment is something every robot has around it,” Rodriguez says.

The team’s results are published today in The International Journal of Robotics Research. Rodriguez’ co-authors are lead author Nikhil Chavan-Dafle, a graduate student in mechanical engineering, and Rachel Holladay, a graduate student in electrical engineering and computer science.

40 Machine Learning Tutorials from DSC

This group of tutorials via DSC looks good.  To get full access you need to join as a member, which I always recommend; it's free.  If you like self-study, it's a great era we live in, with lots of free and detailed information.  DSC is a great resource.

40+ Modern Tutorials Covering All Aspects of Machine Learning
Posted by Capri Granville 

This list of lists contains books, notebooks, presentations, cheat sheets, and tutorials covering all aspects of data science, machine learning, deep learning, statistics, math, and more, with most documents featuring Python or R code and numerous illustrations or case studies. All this material is available for free, and consists of content mostly created in 2019 and 2018, by various top experts in their respective fields. A few of these documents are available on LinkedIn: see last section on how to download them   .... " 

Thursday, October 17, 2019

Social Credits Used in Silicon Valley?

An interesting claim, but is it a formal one?

Uh-oh: Silicon Valley is building a Chinese-style social credit system in Fastcompany
In China, scoring citizens’ behavior is official government policy. U.S. companies are increasingly doing something similar, outside the law.
By Mike Elgan

Have you heard about China’s social credit system? It’s a technology-enabled, surveillance-based nationwide program designed to nudge citizens toward better behavior. The ultimate goal is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step,” according to the Chinese government.

In place since 2014, the social credit system is a work in progress that could evolve by next year into a single, nationwide point system for all Chinese citizens, akin to a financial credit score. It aims to punish for transgressions that can include membership in or support for the Falun Gong or Tibetan Buddhism, failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, jaywalking, and other actions deemed illegal or unacceptable by the Chinese government.  .... "

P&G Hires People with Autism for RPA

I had not heard they were specifically hiring people with autism for RPA and process modeling jobs.  Is there specific data out there indicating this is a particularly good match, or is it a more general diversity effort?  Checking on that.

P&G boosts hiring of people with autism
By Barrett J. Brunsman  – Staff reporter, Cincinnati Business Courier   

Procter & Gamble’s effort to add people with autism to the workforce has resulted in the hiring of four managers with the disorder, who are based at the company’s global headquarters in Cincinnati.

The four, who started full-time jobs on Oct. 14, focus on digital solutions for business processes. They were hired following a five-week assessment at P&G headquarters downtown. 

P&G had hoped to fill at least five positions for robotic process automation developer, which is responsible for transforming direction and input from an RPA Solution Architect into solutions. .... "

State of Retail Supply Chains

Via Retailwire, the state of retail supply chains from eft.   At the link you can sign in and get the full report.  The key takeaways below are not new or unexpected; they have existed for some time, with details and solutions at the link.

Eft’s 2020 State of Retail Supply Chain Report
In eft’s 2020 State of Retail Supply Chain Report, produced in partnership with Dassault Systems, over 300 retailers, manufacturers, logistics and technology enablers provided their expert insights on the key challenges facing the industry.

Key Takeaways from the report:

• Forecasting is a massive concern in retail supply chains with over 30% of retailers and manufacturers classing it as their biggest challenge

• Negligible if any improvement across all supply chain players when it comes to improving visibility for themselves and their customers

• A technology adoption lag from retailers and manufacturers is causing efficiency to be impaired – 61% of respondents felt that they were efficient enough for day-to-day operations but nothing more ... "

SQRL for Website Login and Authentication

More on Steve Gibson's SQRL; I read through a bit of his site, and below is the Wikipedia description.  He has a very useful weekly podcast on security called 'Security Now', which I have followed for several years.  A good place to stay up to date on current IT security topics. 

Note the use of zero-knowledge proofs in their SQRL system described below:

In the Wikipedia:

SQRL (pronounced "squirrel")[3] or Secure, Quick, Reliable Login (formerly Secure QR Login) is a draft open standard for secure website login and authentication. The software typically uses a link of the scheme sqrl:// or optionally a QR code, where a user identifies via a pseudonymous zero-knowledge proof rather than providing a user ID and password. This method is thought to be impervious to a brute force password attack or data breach. It shifts the burden of security away from the party requesting the authentication and closer to the operating system implementation of what is possible on the hardware, as well as to the user. SQRL was proposed by Steve Gibson of Gibson Research Corporation in October 2013 as a way to simplify the process of authentication without the risk of revelation of information about the transaction to a third party ..... '

Extensive Video tutorial on SQRL:   https://youtu.be/Y6J1Yt8YYj0  
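The core idea, deriving a distinct identity key per site from one master key and answering a server challenge instead of sending a password, can be sketched as follows.  Real SQRL uses Ed25519 signatures over an HMAC-derived site key; plain HMAC stands in for the signature here to keep the sketch dependency-free, so treat this as an illustration of the structure, not the protocol:

```python
# Illustrative sketch of SQRL's structure: one master key, a per-site
# identity derived from it, and a challenge-response login.  HMAC stands
# in for the Ed25519 signatures real SQRL uses.
import hmac
import hashlib
import os

master_key = os.urandom(32)          # kept secret on the user's device

def site_key(master, domain):
    """Per-site key: same master plus same domain yields the same identity."""
    return hmac.new(master, domain.encode(), hashlib.sha256).digest()

def answer_challenge(master, domain, challenge):
    """'Sign' the server's nonce with the site-specific key."""
    return hmac.new(site_key(master, domain), challenge, hashlib.sha256).hexdigest()

challenge = os.urandom(16)           # nonce issued by the website at login
resp = answer_challenge(master_key, "example.com", challenge)
# A breach of example.com leaks nothing reusable elsewhere, since
# "bank.com" produces a completely different identity key.
print(resp != answer_challenge(master_key, "bank.com", challenge))
```

The shift of burden the Wikipedia text describes is visible here: the site never holds a password, only a per-site identity it can verify.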

Pew Research Data Decoded

I happened onto Pew Research of late.   Here is their blog/site on data topics; I plan to dive in from time to time and point to things of interest.

The "how" behind the numbers, facts and trends shaping your world.

Pew Research Center’s main website offers plenty in the way of polished reports, data visualizations and interactives. But there are also lots of methodological musings, puzzles and tangles that you would see if you could flip those picture-perfect research products over (metaphorically speaking).

We’re here to do that part for you.

Since we can’t loop you, our research friends, into our in-office chatter, we’ve created this channel as the next best thing. At Decoded, you’ll find blog posts about some paths taken and some narrowly avoided, including experiments that intrigued us but never saw the light of day. We’ll be talking about survey methods, of course, but also data science, data visualization and even news app development.
Feedback is majorly welcome — so leave your thoughts and questions in the comments or reach out to us via e-mail at info@pewresearch.org.

Thanks for joining us.
Claudia Deane is vice president of research at Pew Research Center.  ..... ' 

Alexa Course for In-Skill Purchasing

Amazon is crowdsourcing Alexa skills for its infrastructure.   I have worked my way through several training outlines like the one below, which show the direction they are moving; worth a look.    Still not what I would call very serious AI; I would like it to include skill capabilities that are more remarkable and smart.   Including the exchange of payment also complicates the design and its regulation.

New Alexa Skills Training Course: How to Design for In-Skill Purchasing  Justin Jeffress

We’re excited to introduce our new Alexa Skills course, How to Design for In-Skill Purchasing. This free course outlines the best practices for designing a great monetized Alexa skill experience.

Optimize Your Voice Experience for In-Skill Purchasing

In order to effectively monetize your Alexa skills, you need to design an experience that inspires your customers to continue using your skill over and over. While a portion of the experience depends on the technical implementation (code, information architecture, APIs, etc.) it can only go as far as your voice interaction design. So we created a design-focused course to help you design a skill with in-skill purchasing. You’ll learn what makes great premium content, when to make offers, how to write offers, how to handle transitions to and from the Amazon Purchase flow, and how to provide access to purchases.

By completing this course, you’ll be equipped with the knowledge to design and optimize your skill for in-skill purchasing.

Course Components:

Introducing Our Use Case
Offer the Right Premium Content
Make an Offer at the Right Time
Write Effective Upsells
Make a Smooth Handoff
Provide Access to Purchases
Wrapping Up & Resources  ... '

Wednesday, October 16, 2019

Americans' Knowledge of Computer Tech

Interesting results, if some odd questions; the survey could have had more and broader ones.  It indicates the challenges the general public faces in understanding topics like privacy and transparency.  Our own work with decision makers and execs found this changed radically with the adoption of smartphones, but that adoption also increased misunderstandings of some capabilities, such as analytics.

Americans and Digital Knowledge
A majority of U.S. adults can answer fewer than half the questions correctly on a digital knowledge quiz, and many struggle with certain cybersecurity and privacy questions      By Emily A Vogels and Monica Anderson

A new Pew Research Center survey finds that Americans’ understanding of technology-related issues varies greatly depending on the topic, term or concept. While a majority of U.S. adults can correctly answer questions about phishing scams or website cookies, other items are more challenging. For example, just 28% of adults can identify an example of two-factor authentication – one of the most important ways experts say people can protect their personal information on sensitive accounts. Additionally, about one-quarter of Americans (24%) know that private browsing only hides browser history from other users of that computer, while roughly half (49%) say they are unsure what private browsing does.

This survey consisted of 10 questions designed to test Americans’ knowledge of a range of digital topics, such as cybersecurity or the business side of social media companies. The median number of correct answers was four. Only 20% of adults answered seven or more questions correctly, and just 2% got all 10 questions correct.

As was true in a previous Center survey, Americans’ knowledge of digital topics varies substantially by educational attainment as well as by age. Adults with a bachelor’s or advanced degree and those under the age of 50 tend to score higher on these questions. These are some of the key findings from a Pew Research Center survey of 4,272 adults living in the United States conducted June 3-17, 2019.... "  ..... '

Logs are Live and Thriving

A fairly obvious point, via O'Reilly.    But it is precisely these logs, initially established for simple statistical tracking and for validating the operation of code and the resulting UX, that are often used today to provide data for deep learning.   I have seen exactly that lately.    How do we define 'too much' data?    When does the derived data violate your privacy?

Logs were our lifeblood. Now they're our liability. What happens when we collect too much data?
In the beginning, writes Jay Kreps, co-founder of Confluent and co-creator of Kafka, there was the log. 

A log is just a sequence of records ordered by time. It’s configured to allow more and more records to be appended at the end, like this:   .... 

Logs keep track of anything and everything. There are all kinds of logs in computing environments:
The ones that are important are server logs, which keep track of the computers that access the content and apps you have on the internet.

When your computer accesses a website, the server hosting that website gets - and keeps -  a bunch of details from your computer, including which resources (web pages) the computer accessed, the time the computer accessed those resources, and the IP address of the computer that accessed them.   ... " 
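The structure Kreps describes is easy to sketch: an append-only sequence where each record gets a fixed offset and consumers replay from any offset forward.  A toy in-memory version (not Kafka, just the shape of the idea):

```python
# Minimal sketch of an append-only log: records ordered by time, new entries
# only ever added at the end, each addressable by a fixed offset.
import time

class AppendOnlyLog:
    def __init__(self):
        self._records = []

    def append(self, payload):
        """Add a record at the end; returns its offset (position in the log)."""
        offset = len(self._records)
        self._records.append({"offset": offset, "ts": time.time(), "payload": payload})
        return offset

    def read_from(self, offset):
        """Consumers replay everything from a given offset forward."""
        return self._records[offset:]

log = AppendOnlyLog()
# The server-log details the article lists: resource, time, client IP.
log.append("GET /index.html 203.0.113.7")
log.append("GET /app.js 203.0.113.7")
print(len(log.read_from(0)))   # 2
```

The liability argument follows directly from the structure: because nothing is ever overwritten, every access detail accumulates indefinitely unless retention is deliberately limited.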

Open Data vs Public Data

A short piece in DSC, useful for my current work.  I am not sure the definition is always applied this way, so you have to be cautious in using the tag as others apply it.

Is There a Difference Between Open Data and Public Data?
Posted by Lewis Wynne-Jones 
Yep. And it’s a big one.

There is a general consensus that when we talk about open data we are referring to any piece of data or content that is free to access, use, reuse, and redistribute. Due to the way most governments have rolled out their open data portals, however, it would be easy to assume that the data available on these sites is the only data that’s available for public consumption. This isn’t true.

Although data sets that receive a governmental stamp of openness receive a lot more publicity, they actually only represent a fraction of the public data that exists on the web.  So what’s the difference between “public” data and “open” data?

What is open data?  ... " 

Learning, Reliability in Complex Systems

A useful piece in InfoQ.  Learning is hard because we need to continually put what we learn into the context of ourselves and our goals, plus grasp the ultimate implications for reliability after we have learned something new.

Jason Hand explores the challenges of learning in complex systems, the relationship between high- and low-stakes learning opportunities, and the costs associated with each.  In InfoQ.

Bio: Jason Hand is Senior Cloud Advocate at Microsoft. He writes, presents, and coaches on the principles & nuance of DevOps, Site Reliability Engineering, and modern incident management practices. Named “DevOps Evangelist of the Year” in 2016, he recently authored a book on the topic of Site Reliability Engineering. He is a co-host on the podcast “Community Pulse”- a show on building community in tech

Hand: One of the things I want to talk about today, or the main theme for today is that I hope that I challenge some of the thinking that you've maybe been resting in for a long time in how we build, and operate, and maintain our systems. A lot of the ideas I want to share today came from the travels that I've been doing over the past six or seven months. Back in September, I joined Microsoft, previously I was at a company called VictorOps, which, if you're not familiar with them, they do incident management, on-call management, similar to PagerDuty if you're familiar with that service. I joined Microsoft, and one of the very first things that I was pulled into was what they call Microsoft's Ignite The Tour, which is a global tour. All around the world, there are 17 different cities.

I was pulled into this project, which was actually very interesting, very eye-opening. Over the course of the past 6 months or so, I've traveled almost 92,000 miles, which ended up being a little bit over 3 and a half times around the world, which I never would have thought I would have done in my career. It blows my mind. I've spent a total of over seven days, just in the air alone during that time.

These stats aren't that important, but what is important is that everywhere I go and everybody I talk to, I find that I'm using language like complex systems, and there's not a common understanding of what that actually means. I think - I'm guilty of this - we've fallen into this trap of using words, using terminology that not everyone's on the same page of what it actually means. When I say complex systems, I think sometimes people just accept that as face value and don't actually dig into what does that actually mean when he says complex.

That's a big part of what I want to talk about today, and especially what I want to start with is what do we mean when we're talking about complex systems? Not only that but why - when we're trying to focus on learning so much, we've heard about how it's important to be a learning organization - why are we struggling to actually learn? If you were here for Ryan's [Kitchens] talk previously, he touched on a lot of things that I'm going to also try to amplify a little bit more. It's very difficult for us to really find good methods to learn about our systems, to learn new ways to improve them, and continuously build them and make them better for the world, for everybody that we're trying to serve and trying to make better.  .....


NELL and Machine Learning

In a re-examination of 'machine learning' (the broader definition that includes but is not restricted to deep learning), I recalled our look at the CMU effort called NELL, the 'Never-Ending Language Learner'. At the time it was insufficiently oriented to our needs; is there any information about the actual use of its learning? Here is the intro to it on the Carnegie Mellon site.

NELL: The Computer that Learns,   Professor Tom Mitchell

Tom Mitchell's two daughters are grown but watching his newest 'baby' learn to read is an unprecedented achievement.

Professor Mitchell leads the team that developed the Never-Ending Language Learner – NELL – a computer system that, over time, is teaching itself to read and understand the web.
"I've been interested for many, many years in how machines learn because I'm also interested in how humans learn," explained Mitchell, who heads Carnegie Mellon's Machine Learning department – the first and only department of its kind in the world. "NELL comes naturally out of that. The current machine-learning algorithms are very different in style than how you and I learn. They analyze a single data set, output an answer, and then you turn them off. That's not like us at all! The idea of NELL is to capture a style more like the on-going learning of humans."

Understanding language – the way humans do – depends on both context and background knowledge gained over time. So NELL scans the web – attempting to "read" hundreds of millions of web pages on a fact-finding mission.

For example, the repeated combination of a phrase like "New York City Marathon" in combination with other words has taught NELL to learn that it's a "race" and a "sports event." ...  "
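NELL's actual coupled learning is far more sophisticated, but the core co-occurrence idea can be sketched in a few lines: seed a category with known instances, learn the text patterns those instances appear in, then use the patterns to propose new instances. A toy illustration of mine (the tiny 'corpus' and the extraction heuristic are invented, not NELL's code):

```python
# Toy sketch of NELL-style pattern bootstrapping: known category members
# suggest extraction patterns, and patterns suggest new members.
corpus = [
    "races such as the New York City Marathon attract big crowds",
    "races such as the Boston Marathon attract big crowds",
    "events such as the New York City Marathon need sponsors",
]
seeds = {"Boston Marathon"}  # known instances of the category "race"

def find_patterns(corpus, seeds):
    """Patterns are the prefix contexts in which a seed instance appears."""
    patterns = set()
    for sent in corpus:
        for seed in seeds:
            if seed in sent:
                prefix = sent.split(seed)[0].strip()
                if prefix:
                    patterns.add(prefix)
    return patterns

def find_instances(corpus, patterns):
    """New candidate instances: the capitalized phrase after a learned pattern."""
    candidates = set()
    for sent in corpus:
        for pat in patterns:
            if sent.startswith(pat + " "):
                words = []
                for w in sent[len(pat):].strip().split():
                    if w[0].isupper():
                        words.append(w)
                    elif words:
                        break
                if words:
                    candidates.add(" ".join(words))
    return candidates

patterns = find_patterns(corpus, seeds)      # learns "races such as the"
print(find_instances(corpus, patterns))      # proposes the NYC Marathon too
```

One pass of this loop, run continuously over the web with many coupled constraints, is the flavor of what NELL does.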

See also:
Project website:  http://rtw.ml.cmu.edu/rtw/
On twitter:           https://twitter.com/cmunell

SAP Rolls Towards Cloud Data Intelligence

Our enterprise was and is an SAP user; we worked with them primarily on optimization analytics applications.

HANA in the Cloud by Christmas, SAP Says,   Alex Woodie in DataNami

SAP formally revealed the planned general availability of hotly anticipated new cloud data storage products during CTO Juergen Mueller’s keynote address at TechEd conference in Barcelona Monday. Two of those new offerings — SAP HANA Cloud and SAP Data Warehouse Cloud, which are bundled under the SAP HANA Cloud Services umbrella — were announced earlier this year and are expected to ship “before Christmas,” Mueller said.

SAP HANA Cloud is a new cloud-native version of the core HANA database that is delivered as a container and can be managed on the SAP Cloud platform using the Kubernetes resource manager. According to Mueller, SAP HANA Cloud gives customers a single place to store data from a variety of sources — for analytics and transaction processing — and manage it in a variety of ways.

“SAP HANA Cloud offers one data access layer for all your data sources,” he said. “It directly connects to your data from your on-premise HANA system, your third-party systems, and even Excel, without the need for data replication in order to work with that data.”

The new cloud offering is a fully managed cloud service, and will “dramatically lower” customers’ total cost of ownership (TCO) for storing and managing petabytes worth of data, Mueller said.

“We have choices for data storage and more options for data federation, so you don’t have to move your data around anymore, so the CFO will be happy,” Mueller said. “You only pay for the memory, disk, and data lake storage that you use — paid by the hour, if you wish.”

SAP HANA Cloud includes data tiering functions that let customers mix and match their data to different underlying storage mediums supported by HANA. It’s all about meeting the changing data storage and management needs of customers, Mueller said.  .... "

Tuesday, October 15, 2019

Wal-Mart Asks for Home Access for Delivery

Another case, like Amazon's, of allowing delivery personnel a means to enter the smart home for delivery: Amazon Key.  No idea how that is working, but I have not heard it mentioned recently.   We examined the use of refrigerated boxes placed outside the home, with a separate locking mechanism.  This is not an idea brought up frequently as an asset of the smart home, so I still think it's a very niche need today.  But perhaps as family structures continue to change?  Now Wal-Mart makes a try in the same space.

Will customers give Walmart the keys to their homes?   by George Anderson in retailwire with further expert opinion.

Walmart has announced that it is launching its InHome Delivery program in the Pittsburgh, Kansas City and Vero Beach markets. The service gives Walmart delivery people access to customers’ garages or homes where they will put refrigerated and frozen products away in refrigerators.

Customers participating in the program are required to purchase a smart door or garage lock kit for $49.95. Membership to the program is being billed at an introductory price of $19.95 a month for an unlimited number of deliveries. The new service is seen by many as Walmart’s answer to Amazon.com’s Key Service.

Delivery associates will use smart entry technology to enter the homes of customers. Their actions are tracked through wearable cameras. While Walmart typically uses third-party couriers to deliver groceries, only employees who have been with the company for a minimum of a year are eligible to participate in training to work in the InHome program.  ... " 

Google Daydream VR

Somewhat of a surprise, given how invested Google is in emerging tech.     But more a confirmation of a very niche market?   Not enough consumer uptake.   We saw an early demo and it was impressive. 

Google's Daydream VR experiment is over
Its latest phones don't support the platform and it's discontinuing Daydream View headsets.
By Kris Holt, @krisholt 

Google is bringing the Daydream VR experiment to a close. The Pixel 4 and Pixel 4 XL smartphones it announced today don't support the platform, and it won't sell Daydream View headsets anymore. The Daydream app and store will still work for people with other Pixel devices, however.

"[There hasn't] been the broad consumer or developer adoption we had hoped, and we've seen decreasing usage over time of the Daydream View headset," a Google spokesperson told Engadget in a statement. "So while we are no longer selling Daydream View or supporting Daydream on Pixel 4, the Daydream app and store will remain available for existing users."

Google cited "clear limitations constraining smartphone VR from being a viable long-term solution." Namely, when people dropped their phones into headsets, that prevented them from using other apps. The company plans to continue working on augmented reality with features such as AR walking directions in Maps, AR experiences in Search and Google Lens.   .... "

Facebook AI Releases Searchable Code Snippets

Seems a useful thing; what Facebook has put on the web to support AI efforts is impressive.  See more at the Facebook.ai site.  Also note the claimed validation against ground-truth answers from Stack Overflow. 

Facebook Releases AI Code Search Datasets   By Anthony Alford in InfoQ

Facebook AI released a dataset containing coding questions paired with code-snippet answers, intended for evaluating AI-based natural-language code search systems.  The release also includes benchmark results for several of Facebook's own code-search models and a training corpus of over 4 million Java methods parsed from over 24,000 GitHub repositories.

In a paper published on arXiv, researchers described their technique for collecting the data. The training data corpus was collected from the most popular GitHub repositories of Android code, ranked by number of stars. Every Java file in the repositories was parsed, identifying the individual methods. Facebook used the resulting corpus in their research on training code-search systems. To create the evaluation dataset, they started with a question-and-answer data dump from Stack Overflow, selecting only questions that had both "Java" and "Android" tags. Of these, they kept only questions that had an upvoted answer that also matched one of the methods identified in the training data corpus. The resulting 518 questions were manually filtered to a final set of 287. According to the researchers:

Our data set is not only the largest currently available for Java, it’s also the only one validated against ground truth answers from Stack Overflow  .... " 
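The filtering pipeline the researchers describe can be sketched roughly as below; the record fields and exact matching rule here are my assumptions for illustration, not Facebook's actual code:

```python
def build_eval_set(questions, corpus_methods):
    """Keep Java+Android questions whose upvoted answer code matches
    a method from the GitHub training corpus (rough sketch)."""
    kept = []
    for q in questions:
        # questions must carry both tags
        if not {"java", "android"} <= set(q["tags"]):
            continue
        # and have an upvoted answer whose code appears in the corpus
        upvoted = [a for a in q["answers"] if a["score"] > 0]
        if any(a["code"] in corpus_methods for a in upvoted):
            kept.append(q)
    return kept

questions = [
    {"tags": ["java", "android"], "answers": [{"score": 3, "code": "m1"}]},
    {"tags": ["java"],            "answers": [{"score": 5, "code": "m1"}]},
    {"tags": ["java", "android"], "answers": [{"score": 0, "code": "m2"}]},
]
print(len(build_eval_set(questions, corpus_methods={"m1", "m2"})))  # → 1
```

The paper's 518 survivors of this automatic filter were then manually reduced to the final 287.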

Ticketing with a Blockchain

Interesting example of a kind of smart contract embedded in the blockchain to enforce agreements regarding how tickets can be used.    This means that when a ticket is sold, the context of the sale can be examined, and when the ticket is redeemed it can be checked against that context.

From Hewlett Packard Enterprise:

Rock and roll and blockchain
Musicians get a lot of revenue from their live performances, but they lose income due to things like scalpers and ticket scams. So musician Jason Robert is creating a blockchain-based event ticketing platform to cut down on fraud, reduce price gouging, and help maintain the health of the live entertainment industry.

As a professional musician, Jason Robert has played in venues around the world, including Bonnaroo and the Newport Folk Festival. He opened for The Roots, and wrote songs for shows on NBC, ABC, Fox, Showtime, and Netflix. .... 

Q: What problems are there with live-event ticketing?
A: A lot of people are taking money, almost literally, out of the pockets of musicians. These days, artists get a lot of their revenue from live performances, but scalpers and ticket scams are getting money that should go to them. And that's a major, major problem.
The problems have to do with the primary and secondary ticket markets. If fans are lucky, they can buy tickets in the primary market. But that doesn't happen very often. Something like 50 percent of the primary market tickets are withheld from the general public. Brokers, scalpers, and bots buy up a large percentage of these tickets in the primary market. That creates the illusion of a sellout, which artificially pumps up the price on the secondary market. So a scalper may have purchased a ticket for $50, but it ends up on third-party websites for some crazy mark-up, like $150. Fans are forced to pay that because there's no more inventory left. And none of the markup for those tickets goes to the musicians or the music industry. It mostly goes to brokers and scalpers.
Not only that, but upwards of 20 percent of tickets sold on secondary market platforms are fraudulent. A broker, scalper, or nefarious reseller will buy a ticket, then create a fake PDF version of it. They’ll list that PDF on several secondary market platforms that don't talk to one another, whose databases don't know the inventory. Maybe one of those tickets is legit, but the others are frauds.
Q: How do you hope to solve the problem?

A: We’re working on a blockchain-based event ticketing platform which will enable event organizers and musicians to program the rules that govern the behavior of the ticket. For example, they can set a cap on how much the ticket can be sold for on the secondary market. In fact, they could restrict secondary market sales altogether. So, if an artist wants to sell a ticket for $10 and have zero secondary market, they can do that. Also, by setting the terms of engagement, artists could get additional revenue if they allow a percentage markup on the secondary market and require that they get some of that money.

On the platform, everyone can see the transactions that happen on the blockchain. So, as a fan, I can go online and could see how many hands it's passed through. I can see the original face value of the ticket. And I know the ticket is verified and authentic, because it can’t be digitally reproduced. That solves the problem I described earlier about fake PDFs. Also, because the ticket money circulates in the music industry instead of going to scalpers, the industry becomes more sustainable.

Another possible use for blockchain is rights attributions for creative works. If I'm an artist and I release a song and get it on some TV show, I can define the rules of how much the music supervisor pays for that song. I also can see in real time who is using that creative work, and I get paid instantly, as opposed to having to wait six months or sometimes a year.  .... "
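The resale rules described in the interview are easy to express as code. A sketch of my own (not the platform's actual contract logic) of a programmable markup cap with an artist share:

```python
# Toy sketch of a programmable ticket rule: the artist caps secondary-market
# markup and takes a share of any allowed markup, as described above.
def settle_resale(face_value, asking_price, max_markup_pct, artist_share_pct):
    """Reject sales over the artist-set cap; split allowed markup.
    Returns (seller_gets, artist_gets) or raises on a violation."""
    cap = face_value * (1 + max_markup_pct / 100)
    if asking_price > cap:
        raise ValueError("resale price exceeds the artist-set cap")
    markup = max(asking_price - face_value, 0)
    artist_cut = markup * artist_share_pct / 100
    return asking_price - artist_cut, artist_cut

# $50 ticket, 20% markup allowed, artist gets half of any markup
print(settle_resale(50, 60, 20, 50))   # → (55.0, 5.0)
```

The point of putting such a rule on a blockchain is that every party can verify it was applied to every transfer.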

Your Smart TV is (and has been) Watching You

Almost obvious; it's one reason that smart TVs are relatively cheap.  My TV seems to do an update every time I switch services.   A massive download?  After 1 AM every night it does a scheduled, major update.  Meanwhile, some fairly obvious bugs in their system have persisted for years.   I don't think they are fixing bugs.

It would be instructive to see what is being gathered; the Princeton paper, linked below, has some hints. Consider that they already have quite a bit of behavioral, demographic, and life-choice information from what you choose to watch, and then they add things you don't choose.   Link this to some of the 'big data' collections being established?   Talk about data as an asset; I am in the midst of looking at that.    Knowledge graphs that can be used to explore and simulate engagements?

Smart TVs are data-collecting machines, new study shows
Roku and Amazon Fire TVs are tracking you like everything else   By Zoe Schiffer@ZoeSchiffer in theVerge

Add smart TVs to the growing list of home appliances guilty of surveilling people’s movements. A new study from Princeton University shows internet-connected TVs, which allow people to stream Netflix and Hulu, are loaded with data-hungry trackers.   .... " 

Monday, October 14, 2019

IoT and Fintech

FinTech is probably not the first area where you would think of IoT, but this non-technical piece outlines some useful thoughts in the space.

How IoT Changes Banks and FinTech Companies  By Vova Shevchyk in ReadWrite

Can financial services benefit from the Internet of Things? Absolutely, and to the fullest extent. Not only for gathering more and better data about assets but IoT also changes the way we access banking and manage wealth, invest, and monitor assets.  Let’s see how IoT changes banks and FinTech companies.

To improve efficiency and customer service, banks eagerly invest in the Internet of Things. The technology has great potential and a wide range of uses.

How to ensure your financial company makes the best use of this technology? What are the benefits of implementing IoT? What does it take to build an IoT project? We’ll lay out and explain all of the above in this post.  ... " 

Proposed Model for Machine Learning

Intriguing.   Thinking about this, it's a kind of derived information transfer to some structured system, like a brain or a machine, and then a transfer of that knowledge to a decision.

Machine Learning: A Formal Learning Model  By Divyesh Dharaiya in ReadWrite

In day to day life, individuals are effectively confronting a few choices to make. For a machine to settle on these sorts of decisions, the automatic route is to show the issues faced in a numerical articulation. The mathematical articulation could legitimately be structured from the issue foundation. Machine learning is a formal learning model.

There are commonly three kinds of Machine Learning that are dependent on continuous issues. These are also based on an informational index.

For example, a candy machine could utilize the gauges and security enrichment of cash to distinguish false payment. When using a different issue, you might merely obtain a few estimations about the criteria or security of money. Maybe you will put your machine learning to comparing names with the enrichment of cash.

However, you will quickly find that you don’t have the foggiest idea about the particular relationship among many of the elements of machine learning you are trying to employ. But, the machine learning itself — will be a superior method to locate the primary associations between parts that you are trying to capitalize on for your business.  .... " 
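The vending-machine example amounts to learning a decision rule from labeled measurements. A minimal sketch of mine (all measurement values invented), using a nearest-centroid rule on a coin's weight and diameter:

```python
# Minimal learned classifier for the vending-machine example: decide
# "genuine" vs "counterfeit" from a coin's (weight, diameter) measurements
# by distance to each class's centroid. Data values are invented.
genuine = [(5.0, 24.2), (5.1, 24.3), (4.9, 24.1)]
fake    = [(4.2, 24.9), (4.3, 25.1), (4.1, 25.0)]

def centroid(points):
    """Average each coordinate over the labeled examples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(coin, c_genuine, c_fake):
    """Assign the coin to whichever class centroid is closer."""
    d = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return "genuine" if d(coin, c_genuine) < d(coin, c_fake) else "counterfeit"

cg, cf = centroid(genuine), centroid(fake)
print(classify((5.05, 24.2), cg, cf))   # → genuine
```

The machine never needs the "foggiest idea" of why the features matter; it just learns the association from the labeled data, which is the article's point.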

Game Theory and Autonomous Cars

How do we construct models of the elements of autonomy and how they interact with our own behavior?

Game Theory and AI Systems: Use Case For Autonomous Cars  By Dr. Lance Eliot, CEO, Techbrium Inc.   AI Trends Insider

When you get onto the freeway, you are essentially entering into the matrix. For those of you familiar with the movie of the same name, you’ll realize that I am suggesting that you are entering into a kind of simulated world as your car proceeds up the freeway onramp and into the flow of traffic. Whether you know it or not, you are indeed opting into playing a game, though one much more serious than an amusement park bumper cars arena.

On the freeway, you are playing a game of life-and-death.

It might seem like you are merely driving to work or trying to get to the ballgame, but the reality is that for every moment you are on the freeway you are at risk of your life. Your car can go awry, say it suddenly loses a tire, and you swerve across the lanes, possibly ramming into other cars or going off a freeway embankment. Or, you might be driving perfectly well, and all of a sudden, a truck ahead of you unexpectedly slams on its brakes and you crash into the truck.

 I hope this doesn’t seem morbid. Nor do I want to appear to be an alarmist. But, you have to admit, these scenarios are all possible and you are in fact at the risk of your life while on the freeway. For a novice driver, such as a teenager starting to drive, you can usually see on their face an expression that sums up the freeway driving circumstance – abject fear. They know that one wrong move can be fatal. They are usually somewhat surprised that anyone would trust a teenager to be in such a situation of great responsibility. Most teenagers are held in contempt by adults for a lack of taking responsibility seriously, and yet we let them get behind the wheel of a multi-ton car and drive amongst the rest of us. .... 

 For related article about how greed motivates drivers and its impacts on self-driving cars, see: 

 Leveraging Game Theory

Anyway, if you are willing to concede that we can think of freeway driving as a game, you then might be also willing to accept the idea that we can potentially use game theory to help understand and model driving behavior.

With game theory, we can consider the freeway driving and the traffic to be something that can be mathematically modeled. This mathematical model can take into account conflict. A car cuts off another car. One car is desperately trying to get ahead of another car. And so on. The mathematical model can also take into account cooperation. As you enter onto the freeway, perhaps other cars let you in by purposely slowing down and making an open space for you. Or, you are in the fast lane and want to get over to the slow lane, so you turn on your blinker and other cars let you make your way from one lane to the next. There is at times cooperative behavior on the freeway, and likewise at times there is behavior involving conflict. ..... "
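The conflict-and-cooperation framing maps directly onto a payoff matrix. Here is a toy two-car merge game of my own (payoff numbers are purely illustrative): each driver chooses Go or Yield; both going risks a crash, both yielding wastes time. The code finds the pure-strategy Nash equilibria:

```python
from itertools import product

# Toy merge game: payoffs[(row_choice, col_choice)] = (row_payoff, col_payoff).
# Numbers are illustrative: a crash is catastrophic, yielding costs a little time.
payoffs = {
    ("go",    "go"):    (-100, -100),  # both go: crash
    ("go",    "yield"): (2, 0),        # row driver merges first
    ("yield", "go"):    (0, 2),
    ("yield", "yield"): (-1, -1),      # both hesitate, time wasted
}

def is_nash(row, col):
    """Neither driver can gain by unilaterally changing their move."""
    r, c = payoffs[(row, col)]
    row_ok = all(payoffs[(alt, col)][0] <= r for alt in ("go", "yield"))
    col_ok = all(payoffs[(row, alt)][1] <= c for alt in ("go", "yield"))
    return row_ok and col_ok

equilibria = [cell for cell in product(("go", "yield"), repeat=2) if is_nash(*cell)]
print(equilibria)   # → [('go', 'yield'), ('yield', 'go')]
```

The stable outcomes are exactly the cooperative ones, where one car lets the other in; this is the kind of structure an autonomous car's planner can reason over.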

US Regulators Look at Data as an Asset

Several projects examined data as an asset; I had been looking at how data's value would be treated in regulation, and at the likely nature of future regulation.

US says digital assets are covered by money laundering and disclosure laws   Just because it's virtual doesn't mean it's exempt. .... '

Jon Fingas, @jonfingas In Engadget

Sunday, October 13, 2019

Information Entropy and Data

With a background in physics, I find this a great topic.  It links the universe to information technologies in interesting ways.  Even includes a hint at the nature of 'surprise'.  At the very minimum, impress your friends.

 Gentle Introduction to Information Entropy   by Jason Brownlee  in Probability

Information theory is a subfield of mathematics concerned with transmitting data across a noisy channel.

A cornerstone of information theory is the idea of quantifying how much information there is in a message. More generally, this can be used to quantify the information in an event and a random variable, called entropy, and is calculated using probability.

Calculating information and entropy is a useful tool in machine learning and is used as the basis for techniques such as feature selection, building decision trees, and, more generally, fitting classification models. As such, a machine learning practitioner requires a strong understanding and intuition for information and entropy.

In this post, you will discover a gentle introduction to information entropy.

After reading this post, you will know:

Information theory is concerned with data compression and transmission and builds upon probability and supports machine learning.

Information provides a way to quantify the amount of surprise for an event measured in bits.
Entropy provides a measure of the average amount of information needed to represent an event drawn from a probability distribution for a random variable.

Discover Bayes optimization, naive Bayes, maximum likelihood, distributions, cross entropy, and much more in my new book, with 28 step-by-step tutorials and full Python source code.

Let’s get started.    .... " 
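The two quantities Brownlee defines are only a few lines of code: the information (surprise) of an event is -log2 of its probability, and entropy is the probability-weighted average surprise over a distribution:

```python
from math import log2

def information(p):
    """Surprise of an event with probability p, in bits: I(x) = -log2 p(x)."""
    return -log2(p)

def entropy(dist):
    """Average information of a random variable, in bits:
    H = -sum_i p_i * log2(p_i), skipping zero-probability events."""
    return sum(p * information(p) for p in dist if p > 0)

print(information(0.5))     # a coin flip carries 1.0 bit
print(information(0.1))     # rarer events are more surprising: ~3.32 bits
print(entropy([0.5, 0.5]))  # fair coin: 1.0 bit
print(entropy([0.9, 0.1]))  # biased coin is less uncertain: ~0.469 bits
```

Note how the fair coin maximizes uncertainty at one bit, while the biased coin needs less information on average; that measure is what decision-tree splits and cross-entropy losses are built on.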

Designing Smart Home Products

A long-time thought of mine, from seeing World's Fair displays of the future to running an innovation center: which smart home products will actually be used?

Designing Smart-Home Products That People Will Actually Use
Ryan Shanks, Francis Hintermann in the HBR

In the late 1950s, RCA Whirlpool introduced the world to the “Miracle Kitchen,” a bold vision of the future in which every device in the home was automated, networked, and tasked with making life easier and safer. Part futuristic proof of concept, part propaganda, the Miracle Kitchen excited consumers and sparked the famous “Kitchen Debate” between Richard Nixon and Nikita Khrushchev. There was just one problem: The technology wasn’t ready.

More than 60 years later, the technology is finally ready, but many consumers still aren’t.

Despite the growing ecosystem of devices, software, and services for homes, we have yet to see explosive growth in the smart-home market. Aside from some technophiles, consumers still struggle to see how smart-home products or services will be relevant to them in their daily lives, let alone understand all the technology upgrade cycles and platform and integration options. Our research bears this out. According to an Accenture survey of more than 6,000 people across 13 geographic areas, 25% of consumers of smart-home products and services consider themselves “Explorers” (meaning lead adopters) while 63% say they are “Navigators” (followers). Because of this, the smart home is stuck in the chasm of the technology adoption curve — caught in the early-adopter phase and struggling to move to mass-market adoption.

We used this survey as a starting point to better understand what’s happening with smart-home technology. We also directly observed 40 individuals in their homes, allowing us to dig into their behaviors, routines, and communication and to explore how technology impacts their identities and motivations. We then tested our findings with more than 25 global clients during strategic innovation sessions at our R&D center, and used their feedback to refine our thinking.

Through the lens of this work, we see a rare opportunity for companies to rethink two areas of their business: product design and marketing. In our view, product design is still too removed from the end user, and marketing strategies are still too focused on selling products based on outmoded personas and traditional market segments.

We categorized smart-home customers into a new set of eight personas, and we suggest that companies consider their different characteristics before designing and marketing smart-home products.  .... "

Blogging Notes

Quotes in this blog are in italics or quote marks; non-italic text was written by me.   If you want something unquoted, let me know.   Links will sometimes stop working over time; I will selectively repair or remove them if I find them or you inform me, depending on their importance or age. 

I usually identify non-trivial updates, also depending on their importance.   Also note the dates of posts; they can become irrelevant over time.  Use the comments section;  I reserve all editorial rights.

Public Interest Technology

Of interest: this site and resource is new to me; I am now following it, and you can join in.  All tech is of public interest, but it is now evolving much faster and getting involved in every daily interaction, so it needs to be considered closely.    More at the link:

Public-Interest Technology Resources
Maintained by Bruce Schneier. Last updated September 30, 2019.

Introduction
As technology—especially computer, information, and Internet technology—permeates all aspects of our society, people who understand that technology need to be part of public-policy discussions. We need technologists who work in the public interest. We need public-interest technologists.

Defining this term is difficult. One Ford Foundation blog post described public-interest technologists as “technology practitioners who focus on social justice, the common good, and/or the public interest.” A group of academics in this field wrote that “public-interest technology refers to the study and application of technology expertise to advance the public interest/generate public benefits/promote the public good.”

I think of public-interest technologists as people who combine their technological expertise with a public-interest focus, either by working on tech policy, working on a tech project with a public benefit, or working as a more traditional technologist for an organization with a public-interest focus. Public-interest technology isn’t one thing; it’s many things. And not everyone likes the term. Maybe it’s not the most accurate term for what different people do, but it’s the best umbrella term that covers everyone.

Public-interest technologists are a diverse and interdisciplinary group of people. Their backgrounds are in technology, policy, or law. (This is important, you do not need a computer-science degree to be a public-interest technologist.) They work inside governments, at nongovernmental agencies, for independent research institutions, within for-profit corporations, and for the press. Some of them do this full time as a career. Others take short leaves of absence from their careers to pursue public-interest technology. Still others do this in their spare time, as an avocation.

This is a resources page for public-interest technologists with a public policy focus. As such, it excludes the many tech organizations that are building public-interest tools. (Yes, they’re important, but they’re not my focus here.) The lists on this page are not exhaustive, and I am not endorsing or recommending any particular program. This is meant to be a curated sample of the richness of this space, one which I intend to further develop over time. Please e-mail me with corrections, additions, and suggestions — especially if you are from one of the organizations I list and I mischaracterize you in some important way.  .... " 

Wendy's Markets with a Role-Playing Game

I had seen several test examples of trying to use non-trivial games in the enterprise, but none that I thought actually worked.  Here an example?

Wendy's New Role-Playing Game lets Players (and their dice) Defend 'Freshtovia' from Frozen Beef
'Feast of Legends' debuted at New York Comic Con with a livestream from 'Critical Role'  ... 

By I-Hsien Sherwood  in Adage
Wendy’s isn’t afraid to go where its customers are, as it did when it infiltrated the video game Fortnite—and earned a Cannes Lion Grand Prix in the process.  ... " 

Examining Smart City Backlash

A look at how people are reacting to 'smart' tech, to trust, and to the surveillance and privacy stretching that are part of smart city plans and implementations.  Is it best to see these ideas as primarily cost saving and life improving?  Or as addressing outliers, like solving and preventing crime in city spaces?     Both are happening today.

https://penniur.upenn.edu/  Penn Institute for Urban Research

What’s Fueling the Smart City Backlash?

A new phase of pause and double-check assumptions seems to have gripped the three-decades-old global movement of overstressed urban centers transitioning to so-called smart cities with innovative, technology-led promises. The latest phase is marked by scattered, local-level resistance by residents to smart-city programs in big cities like Toronto and New York to small towns such as Ross, California — near San Francisco — with less than 2,500 residents.

Other cities have banned specific technologies such as facial recognition software, amid doubts over its accuracy or concerns over cities stealthily collecting such data on their citizens through video surveillance. In some cases, they see technology companies forming opaque partnerships with city-level agencies to profit from projects at their expense, using public resources such as land and development rights.

Fears over privacy intrusions in today’s digital age and unbridled development compromising the public interest have been heightened by the erosion of trust between residents, city administrations and private companies leading “smart” projects. With increased transparency, and stronger citizen engagement, the smart-city movement could regain lost credibility and continue its growth, according to experts who spoke with Knowledge@Wharton.

For the most part, residents are wary about how city governments and big technology companies involved in the projects will track and collect data about their daily activities while not compromising their privacy and security by selling data without their consent. In several cases, legislators in many U.S. states have enacted or are considering laws to ban or limit the erection of 5G cell towers because of health concerns.

Data privacy and security issues are more sensitive in some settings than others. “Smart cities mean different things to different people, but big data is intrinsic to these initiatives and thus privacy concerns arise,” notes Susan Wachter, Wharton professor of real estate and finance. “However, some initiatives such as coordinated traffic lights are high on efficiency and low on privacy issues — and they are no brainers. Others, such as tracking people — much as is done in private places such as malls — provoke a backlash because they undermine the anonymity privilege of public spaces.” ... '

Space Weather Implications for Technology

Space weather, and solar activity more broadly, can have huge implications for technology on Earth, and the ability to monitor it will be important.  NASA's launch last week of the ICON spacecraft from an aircraft is an example of efforts underway.  The sun's coronal mass ejections (CMEs) have the potential to set our technology back many years.  Continuing to follow.

NASA uses a plane to launch a craft to the very edge of space  By Georgina Torbet in Digitaltrends

NASA has launched a new spacecraft, the Ionospheric Connection Explorer (ICON), for exploring the radiation-filled and inhospitable border between our planet’s atmosphere and space. On Thursday, the ICON spacecraft was carried aboard a Northrop Grumman Stargazer L-1011 aircraft that took it to an altitude of 39,000 feet before being deployed on a Northrop Grumman Pegasus XL rocket, whose automated systems launched it into space.  ... ' 

Further tech detail in The Verge.

Saturday, October 12, 2019

Comments on: Echo Auto Gets an Unexpected Ad Stream (Update): Free with a Buick

Have now been using the Echo Auto for several months; it lets you connect to the Echo infrastructure cheaply from most cars.  The capability was advertised as providing the same Alexa environment you get at home, including smart home connections.  It works fairly well via the Alexa smartphone app.

In just the last few days Amazon has started to insert Amazon advertising into the requested music stream, even for Music Unlimited subscribers.  Most requests for a genre, or even more specific music, are converted into a 'station' request.  So the channel then includes advertising, fairly often in my estimation.  This is quite annoying, especially if you are paying for and expecting advertising-free streaming music, as I am.

In some cases I can buy directly by voice, adding more interaction delay.  Even playlist requests convert into stations, sometimes stretching the 'radio' definition.  At home, requests for music that come from Prime or Unlimited music arrive advertising-free, as they always have.  Stations requested from TuneIn or iHeartRadio seem to be working as before, with no additional ads.

I assume this addition of ads pays for the new auto infrastructure, but the overall impression is negative.  I pay extra for advertising-free music, yet now I get ads.  It's obvious that playlists and genres are being reinterpreted to get more ads to me.  The experience with Echo Auto is good, but not completely seamless, and this additional intrusion into the channel will not help sell it.

I'd like to see Amazon's view of this.

Update:  I see that Echo Auto devices are being given out free with Buick automobiles in a promotion now.  I assume these are being used to promote the use of the Alexa infrastructure.  Again, the way they work, interspersing Amazon ads, would cheapen the gift if the promotional devices behave the same way.

Consumer Experience and Trust

Probably mostly because trust contains many components: both long- and short-term aspects, both experience- and influencer-based.

Why is CX Not Impacting Trust?
By Gautam Mahajan  

CustomerThink has conducted a study on CX and its relevance. They find that the CX index and the CSat index have not gone up in the last few years, but CX spend has.

The question is: if CX is so important, and so much work and money is going into it, why is the CX Index not improving?

Trust is declining in business…
Take the example of trust. One aspect of doing business is trust. And business trust is going down or is low, in spite of CX efforts. Is trust an experience? It certainly is impacted by experience.

What people are finding is that buyers distrust sellers’ information. The buyer’s experience ranks 7th out of the 15 items studied as being a reliable source of information. Buyers rely on product demos, user reviews, vendor websites, free trials, etc.

We can argue some of these are experience based, but they may well not be.  ....  " 

Language and AI: Steve Omohundro

Language interaction continues to evolve.

Linghacks Keynote: "Language and AI: Hacking Humanity's greatest invention"

On March 30-31 the wonderful “Linghack” organization supporting computational linguistics held their “Linghacks II” event in Silicon Valley:  https://linghacks.weebly.com/linghacks-ii.html

Steve Omohundro was invited to give the opening Keynote Address on “AI and Language: Hacking Humanity’s Greatest Invention”. His talk is available here starting at 14:20:
The slides are available here:    ... "

Race to Quantum

Useful short view from Europe on quantum efforts underway.   There are very big stakes: solving the most complex, wicked problems in new ways.

By  Katia Moskvitch in Wired UK

Inside the High Stakes Race to Make Quantum Computers Work

Traditional computers—be it an Apple Watch or the most powerful supercomputer—rely on tiny silicon transistors that work like on-off switches to encode bits of data. Each circuit can have one of two values—either one (on) or zero (off) in binary code; the computer turns the voltage in a circuit on or off to make it work.

A quantum computer is not limited to this “either/or” way of thinking. Its memory is made up of quantum bits, or qubits—tiny particles of matter like atoms or electrons. And qubits can do “both/and,” meaning that they can be in a superposition of all possible combinations of zeros and ones; they can be all of those states simultaneously.
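To make the superposition idea concrete, here is a minimal numpy sketch (my illustration, simulating a single qubit on a classical machine): applying a Hadamard gate to the |0⟩ state yields equal measurement probabilities for zero and one.

```python
import numpy as np

# A qubit state is a 2-component complex vector; |0> = [1, 0].
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate sends a basis state into an equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0          # now "both" |0> and |1> at once

# Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(state) ** 2
print(probs)              # → [0.5 0.5]
```

Of course, the quantum advantage comes from many qubits interacting: the simulated state vector doubles in size with each added qubit, which is exactly why classical simulation breaks down.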

For CERN, the quantum promise could, for instance, help its scientists find evidence of supersymmetry, or SUSY, which so far has proven elusive. At the moment, researchers spend weeks and months sifting through the debris from proton-proton collisions in the LHC, trying to find exotic, heavy sister-particles to all our known particles of matter. The quest has now lasted decades, and a number of physicists are questioning if the theory behind SUSY is really valid. A quantum computer would greatly speed up analysis of the collisions, hopefully finding evidence of supersymmetry much sooner—or at least allowing us to ditch the theory and move on.

A quantum device might also help scientists understand the evolution of the early universe, the first few minutes after the Big Bang. Physicists are pretty confident that back then, our universe was nothing but a strange soup of subatomic particles called quarks and gluons. To understand how this quark-gluon plasma has evolved into the universe we have today, researchers simulate the conditions of the infant universe and then test their models at the LHC, with multiple collisions. Performing a simulation on a quantum computer, governed by the same laws that govern the very particles that the LHC is smashing together, could lead to a much more accurate model to test.

Beyond pure science, banks, pharmaceutical companies, and governments are also waiting to get their hands on computing power that could be tens or even hundreds of times greater than that of any traditional computer.

And they’ve been waiting for decades. Google is in the race, as are IBM, Microsoft, Intel and a clutch of startups, academic groups, and the Chinese government. The stakes are incredibly high. .... "

Google Overviews Accessibility Efforts

Have always thought of accessibility as having a goal of augmentation, whether it's an assistant or a simple access adjustment.  Good examples here.

Making technology accessible for everyone in the Google Blog
Written by Keyword Team

The World Health Organization estimates that 15 percent of the world’s population—over 1 billion people—have a disability. October is National Disability Employment Awareness Month, an opportunity to look at how technology enables us to move toward a more accessible world. Google’s accessibility-first design mentality, coupled with advances in technology like AI, has led to the creation of new products for people with disabilities. And when we get it right, this technology will ultimately result in products that work better for all of us. It’s a perfect example of what we mean by building a more helpful Google for everyone. This month we will highlight some of those efforts and the people behind the work.    .... "

Friday, October 11, 2019

Review of Gary Marcus' Rebooting AI: Building Artificial Intelligence We Can Trust

Just reading this book.   Found the below non-technical overview/review quite useful.   Mostly agree with this, and the outline of chapters is useful.   Saving it here for my own reference.   Buy a copy.

Prof Kenneth Forbus review of Gary Marcus' book: Rebooting AI: Building Artificial Intelligence We Can Trust 

Franz

IBM Introduces Sterling Supply Chain Suite

Our earliest work in supply chain, from supplier to plant to consumer, was supported by IBM optimization methods.   Recently saw some of their proposals for integrating Watson with the supply chain, and also worked with some of IBM's clients validating their SC methods integrated with AI.  The suite includes direct integration of Hyperledger blockchain.   So this is of interest. Via Walter Riker

IBM launches blockchain-based supply chain service with AI, IoT integration
Tied to its existing Sterling Order Management System service, the new blockchain and AI-enabled cloud offering enables real-time views of order shipments, alerts and optimization recommendations.
By Lucas Mearian   Senior Reporter, Computerworld  

IBM this week launched a new supply chain service based on its blockchain platform and open-source software from recently-acquired Red Hat that allows developers and third-party apps to integrate legacy corporate data systems onto a distributed ledger.

Through the use of open APIs, the new Sterling Supply Chain Suite allows distributors, manufacturers and retailers to integrate their own data and networks – as well as those of their suppliers – onto a Hyperledger-based blockchain to track and trace products and parts. Among the data that can be integrated are IoT sensor systems for real-time shipment position location.

"This is the first move from IBM in what we anticipate to be a significant investment in the reinvention of supply chains by global organizations in the coming decades," an IBM spokesperson said via email.    .... " 

Tracking Your Data: Exist

Brought to my attention:

Exist: Track everything together. Understand your behaviour.

By combining data from services you already use, we can help you understand what makes you more happy, productive, and active.

Bring your activity from your phone or fitness tracker, and add other services like your calendar for greater context on what you're up to.

Start your free trial  ... 

Kroger Selling its Marketing Technology

Excerpt from ModernRetail: 

Why Kroger’s advertising program remains a nascent opportunity

By Cale Guthrie Weissman

 " ..... Kroger — which boasts over 443,000 employees — for the last few years has been trying to improve its digital game — as part of its overall Restock Kroger business revitalization program. A program called 84.51 offers the tech, along with its own platform, called Precision Marketing.

But buyers and brands say it’s still in the very early stages. While Kroger is continuing to invest in its program, many people in the brand marketing space still see it as a venture with which to experiment, and not yet allocate a new permanent budget.

With both 84.51 and Precision, Kroger has been trying to be a leader in the space, but hasn’t yet made a real dent. “I haven’t seen anything that makes me think they are a standout,” said Jason Goldberg, chief strategy officer at Publicis.  .... " 

The Future of Malicious Online Activity

Recent events have been troublesome here.     We like to trust our machines, but the more autonomous and 'intelligent' they are, the less we can trust them and the more complexity we face.   Malware can use autonomy and intelligence as well.   Data can be used to determine where malicious activity is coming from and the patterns of its use.

Decade of Cybersecurity Data Could Predict Future Malicious Online Activity
CSIRO (Australia)   By Chris Chelvan

Researchers from Australia’s Commonwealth Scientific and Industrial Research Organization (CSIRO) Data61 digital research network and Macquarie University, in collaboration with Nokia Bell Labs and the University of Sydney, have developed a comprehensive dataset of the global cybersecurity threat landscape from 2007 to 2017. The purpose of the FinalBlacklist dataset is to help cybersecurity specialists derive new insights on cybersecurity threats, and potentially predict future malicious online activity. The team collected 51.6 million reports of such malicious online activity involving 662,000 unique IP addresses worldwide. The data was categorized using machine learning techniques into six classes: malware, phishing, fraudulent services, potentially unwanted programs, exploits, and spamming. Said Macquarie University’s Dali Kaafar, "Our analysis revealed a consistent minority of repeat offenders that contributed a majority of the mal-activity reports. Detecting and quickly reacting to the emergence of these mal-activity contributors could significantly reduce the damage inflicted.”  ...' 
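The article does not describe the team's actual model, but the idea of machine-categorizing activity reports into the six named classes can be sketched with a toy nearest-centroid rule over made-up numeric features (everything below except the class names is a hypothetical illustration):

```python
import numpy as np

# The six mal-activity classes named in the article.
CLASSES = ["malware", "phishing", "fraudulent services",
           "potentially unwanted programs", "exploits", "spamming"]

rng = np.random.default_rng(0)
centroids = rng.normal(size=(6, 4))          # one made-up centroid per class
report_features = rng.normal(size=(10, 4))   # ten fake report feature vectors

# Assign each report to the class of its nearest centroid.
dists = np.linalg.norm(report_features[:, None, :] - centroids[None, :, :], axis=2)
labels = [CLASSES[i] for i in dists.argmin(axis=1)]
print(labels[:3])
```

A real pipeline would of course derive features from the reports themselves (URLs, IP reputation, payload signatures) and fit the classifier to labeled examples rather than random centroids.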

Thursday, October 10, 2019

Standardizing Deep Learning Model Deployment

Got this more detailed invite late, and the talk is over, but you can follow along with the slides and recording linked below:

Nick Pentreath, IBM : "Standardizing deep learning model deployment with the Model Asset Exchange"   Technical talk

Background :

Nick Pentreath is a principal engineer in IBM's Center for Open-source Data & AI Technology (CODAIT), where he works on machine learning. Previously, he cofounded Graphflow, a machine learning startup focused on recommendations. He has also worked at Goldman Sachs, Cognitive Match, and Mxit. He is a committer and PMC member of the Apache Spark project and author of "Machine Learning with Spark". Nick is passionate about combining commercial focus with machine learning and cutting-edge technology to build intelligent systems that learn from data to add business value.

Task Description : The popular version of applying deep learning is that you take an open-source or research model and simply deploy it. However, in reality developers and data scientists face many challenges ranging from custom requirements for data pre- and post-processing, to inconsistencies across frameworks, to lack of standardization in serving APIs. The goal of the IBM Developer Model Asset eXchange (MAX) is to remove these barriers to entry for developers to obtain, train and deploy open-source deep learning models for their enterprise applications. For model deployment, MAX provides container-based, fully self-contained model artefacts, encompassing the end-to-end deep learning predictive pipeline and exposing a standardized REST API. This talk explores the MAX deployment and serving framework, covering best practices for cross-framework, standardized deep learning model deployment  .... 
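As a rough client-side sketch of what such a standardized REST API looks like: MAX model containers conventionally expose a POST /model/predict endpoint. The host, port, and payload shape below are illustrative assumptions and vary by model; consult the individual model's documentation for the real schema.

```python
import json
from urllib import request

def build_payload(texts):
    # Hypothetical JSON body for a text-based MAX model; image models
    # typically take multipart file uploads instead.
    return json.dumps({"text": texts}).encode("utf-8")

def predict(texts, host="http://localhost:5000"):
    # MAX containers conventionally serve predictions at /model/predict.
    req = request.Request(
        host + "/model/predict",
        data=build_payload(texts),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# With a MAX model container running locally you would then call, e.g.:
#   predict(["Deployment was painless."])
```

The point of the standardization is that the client code above stays essentially the same regardless of which framework the model behind the container was trained in.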

Slides and recording of the talk now posted:   http://cognitive-science.info/community/weekly-update/ 

Please retweet: https://twitter.com/sumalaika/status/1181291829091864581?s=20

Join LinkedIn Group https://www.linkedin.com/groups/6729452

Rolling Your Own Neural Network

Have seen several of these tutorials now, interlaced with all the concepts, math and code, and this one seems nicely done.   When we first played with these in the 80s we wrote our own code from the bottom up; now you can roll your own with libraries quite easily.

Build an Artificial Neural Network(ANN) from scratch: Part-1
Towards Data Science by Nagesh Singh Chauhan 

This article focuses on building an Artificial Neural Network using the numpy Python library. ... 
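For flavor, a minimal from-scratch network of the kind such tutorials build (this sketch is mine, not the article's code): two inputs, one sigmoid hidden layer, trained on XOR with plain numpy gradient descent.

```python
import numpy as np

# Tiny from-scratch network: 2 inputs -> 8 hidden -> 1 output, sigmoid
# activations, mean-squared-error loss, full-batch gradient descent on XOR.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule through both sigmoid layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

loss = float(((out - y) ** 2).mean())
print(loss, (out > 0.5).astype(int).ravel())  # loss shrinks toward zero
```

Everything a deep learning library automates is visible here: the forward pass, the hand-derived gradients, and the parameter updates.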

New Reinforcement Training Frameworks

Good to see the kind of open source development being done.

DeepMind Has Quietly Open Sourced Three New Impressive Reinforcement Learning Frameworks
Three new releases that will help researchers streamline the implementation of reinforcement learning programs   By Jesus Rodriguez

Deep reinforcement learning (DRL) has been at the center of some of the biggest breakthroughs of artificial intelligence (AI) in the last few years. However, despite all its progress, DRL methods remain incredibly difficult to apply in mainstream solutions given the lack of tooling and libraries. 

Consequently, DRL remains mostly a research activity that hasn’t seen a lot of adoption into real world machine learning solutions. Addressing that problem requires better tools and frameworks. Among the current generation of artificial intelligence (AI) leaders, DeepMind stands alone as the company that has done the most to advance DRL research and development. Recently, the Alphabet subsidiary has been releasing a series of new open source technologies that can help to streamline the adoption of DRL methods.  ... " 
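For readers new to the area, the core learning loop these frameworks industrialize can be shown with a toy example (mine, vastly simpler than anything DeepMind's releases target): tabular Q-learning on a five-state corridor, learning off-policy from purely random actions.

```python
import numpy as np

# Toy tabular Q-learning: a 5-state corridor, start at state 0,
# reward 1.0 for reaching state 4. Actions: 0 = left, 1 = right.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9
rng = np.random.default_rng(0)

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == n_states - 1), s2 == n_states - 1

for _ in range(2000):                      # episodes
    s, done = 0, False
    for _ in range(100):                   # cap episode length
        a = int(rng.integers(n_actions))   # random behavior policy (off-policy)
        s2, r, done = step(s, a)
        # One-step Q-learning update toward the greedy target.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
        s = s2
        if done:
            break

policy = Q.argmax(axis=1)  # greedy policy; states 0-3 learn "right" (1)
print(policy[:4])
```

Deep RL replaces the Q table with a neural network and this toy environment with simulators like Atari or StarCraft, which is where the tooling gap the article describes becomes painful.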

What is Watson? A Significant Part of the Future of AI

Thoughtful piece.   Despite recent criticism as not having attained enough, Watson is a genuine accomplishment.    We saw it early on and remarked about how it was the direction for enterprise AI, solving difficult, data-rich problems.   I continue to watch how IBM is advancing this.   See our ISSIP and CSI speaker series, which includes many IBMers who are making real development progress, much of it available to all.    Have worked with a number of IBM Watson practitioners.  As mentioned here, it's a series of microservices that will become a key part of the future of AI.   Easy to explore today.

This Is Watson
IBM Watson: Reflections and Projections
Written by: Rob Thomas
Categorized: AI | Analytics | Data Science | IBM Watson | This Is Watson

AI has gone through many cycles since we first coined the term “machine learning” in 1959. Our latest resurgence began in 2011 when we put Watson on national television to play Jeopardy! against humans. This became a cornerstone event, demonstrating that we had something unique. And we saw early success, putting Watson to work on projects with clients. This created even more excitement. That excitement led to more opportunity. At this stage, we have a large product organization, separate dedicated research organization, and an entire health organization all leveraging and building on this technology.

So, what is Watson? This is the question I’ve been asked the most since IBM combined its Data and AI software units earlier this year.

Let’s start with what it’s not. Watson is not a personal assistant like Alexa, Siri or Google Assistant – its capabilities far exceed those of a consumer AI device. However, consumers likely interact with some form of Watson every day, they just are not aware of it. That’s because Watson was built to enable business-to-business interactions. Watson technology spans everything from powering virtual assistants to embedding AI in business processes across many industries.

Watson does not have a voice, gender or personality. Many people associate Watson with the measured male voice used to bring it to life on Jeopardy! and in older TV commercials. We gave it a voice for those instances, but it is not a box that talks back to you. It is a set of composable microservices (software) that live in the cloud. Any cloud, public or private.

Put simply, Watson is software capable of making sense of data sets and understanding natural language to provide recommendations, make predictions, and automate work. As we have fine-tuned our approach, the “Watson” name is only used on IBM products and solutions that significantly utilize IBM Watson technology. For products and solutions in which AI is an embedded enhancement, we use the designation “with Watson.”  .... "