Showing posts with label measures. Show all posts

Tuesday, August 10, 2021

How did the Pandemic Change Internet Traffic?

And how might traffic change in future pandemics? What are the implications? Could we detect other events that affect the Internet this way? Look at the Internet and the pandemic as two experiments that have escaped into the wild. Can we model that usefully? A lot of data has been gathered; what does it mean?

RESEARCH HIGHLIGHTS

Technical Perspective: Tracking Pandemic-Driven Internet Traffic  By Jennifer Rexford

Communications of the ACM, July 2021, Vol. 64 No. 7, Page 100  10.1145/3465173

The Internet is a research experiment that "escaped from the lab" to become a critical global communications infrastructure during our lifetimes. Over the past year of the COVID-19 pandemic, the Internet has supported friends and families staying in touch and supporting each other, remote work and learning, and the global collaboration of experts designing much-needed treatments and vaccines. As challenging as the past year (and more) has been, the Internet has made it possible for many important aspects of life, work, and culture to continue.

In March 2020, the Internet suddenly became a lifeline for people all over the world. Designed to withstand failures, attacks, and fluctuations in traffic, the Internet proved up to the task. Almost overnight, demand for Internet services grew dramatically, and shifted in both time and space. Many Internet service providers (ISPs) had network designs with spare capacity, deployed more bandwidth in critical locations, and relaxed bandwidth caps on low-income households. The Internet protocols, designed to adapt to changing conditions, were able to deliver reasonable service to many users by sharing the available resources dynamically. ... 

Monday, March 01, 2021

Global Variances in Digital Trust

Digital trust, supposedly measured accurately. I had not seen this before; probably useful if accurate. The components of the measure also seem hard to measure in general, and likely vary considerably over time depending on the news. Below is a summary of a long article in the HBR.

How Digital Trust Varies Around the World  by Bhaskar Chakravorti, Ajay Bhalla, and Ravi Shankar Chaturvedi     February 25, 2021

Summary.   

As economies around the world digitalize rapidly in response to the pandemic, one component that can sometimes get left behind is user trust. What does it take to build out a digital ecosystem that users will feel comfortable actually using? To answer this question, the authors explored four components of digital trust: the security of an economy’s digital environment; the quality of the digital user experience; the extent to which users report trust in their digital environment; and the extent to which users actually use the digital tools available to them. They then used almost 200 indicators to rank 42 global economies on their performance in each of these four metrics, finding a number of interesting trends around how different economies have developed mechanisms for engendering trust, as well as how different types of trust do — or don’t — correspond to other digital development metrics.   ...' 

Tuesday, January 26, 2021

Computing Turbulence with Competitive AI

We used turbulence analysis in roasting applications; something like this could have given us better predictions in those simulations.

Intriguing application I had not seen yet.  

ETH Researchers Compute Turbulence With AI

ETH Zurich (Switzerland), Simone Ulmer, January 4, 2021

The modeling of turbulence has been automated by researchers at ETH Zurich in Switzerland by merging reinforcement learning (RL) algorithms with turbulent flow simulations on the Swiss National Supercomputing Centre's (CSCS) "Piz Daint" supercomputer. The two major approaches for simulating turbulent flows are direct numerical simulation (DNS) and large eddy simulation (LES). The researchers used artificial intelligence (AI) to determine the best turbulent closure models from DNS and apply them to LES. Their RL algorithm uses the grid points that resolve the flow field as AI agents, which observed thousands of flow simulations to learn turbulence closure models. Said ETH's Petros Koumoutsakos, "The machine 'wins' when it succeeds to match LES with DNS results, much like machines learning to play a game of chess or GO.” Koumoutsakos added that the new methodology “offers a new and powerful way to automate multiscale modeling and advance science through a judicious use of AI."

Tuesday, July 14, 2020

Maintaining the Measures

I like the general thought: measures are important, but what is driving and changing the measures? Another example of maintaining the model in use. Changes will happen.

Modern IT KPIs emphasize cloud, DevOps and user experience

When it comes to KPIs, IT ops teams have typically prioritized process-centric metrics, but recent technical and cultural shifts have started to change that.

By Alyssa Fallon, Assistant Site Editor

Key performance indicators drive IT operations teams to work more efficiently -- but what drives KPIs?

While specific metrics will always vary between organizations, the IT KPIs that enterprises track are evolving as a whole. Technical initiatives, such as cloud and DevOps adoption, as well as organizational changes that emphasize IT-business alignment, have IT shops eyeing a new, or at least more expansive, set of key performance indicators.

Technical drivers
The rise of cloud and DevOps has transformed IT in many ways -- including KPIs.

"DevOps has a unique set of KPIs in terms of how applications are developed, how they are provisioned, how they are maintained, how they operate, how frequently they may fail or need to change, and how quickly they can be fixed," said Carl Lehmann, principal analyst at 451 Research. These unique DevOps KPIs include metrics such as time to provision, time to upgrade and time to value.... " 

Tuesday, December 03, 2019

IQMetrix: Business Case for Store Technology

Received, of interest.  New takes on measures are always useful to consider.

Free whitepaper on ROI of in store technology:

IQMetrix

ROI Uninterrupted:

How to Build a Business Case for New In-Store Technology
How to align the team, prove the ROI, and present a winning business case

When you’re preparing your stores for the future of retail, one of the most effective changes you can enact is to implement innovative technologies. But it’s not enough for new systems to be what your business needs; they need to show proven profitability.

Building a strong case for a proposed retail technology is the best way to influence decision-makers in choosing the platform you recommend. Through careful analysis and thoughtful, fact-filled presentation, you can showcase why your system is best for the growth and longevity of the organization.  ..... " 

Sunday, November 17, 2019

Measuring Intelligence

A considerable look at the question, 64 pages. Not necessarily technical, and philosophical at times. Requires some knowledge of how the problem is currently being addressed in the press and by academics. Reading it now.

The Measure of Intelligence
Francois Chollet, Google, Inc., fchollet@google.com
November 6, 2019

Abstract
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. 

 We summarize and critically assess these definitions and evaluation approaches while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks, such as board games and video games. 

We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to “buy” arbitrary levels of skills for a system, in a way that masks the system’s own generalization power. 

We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience, as critical pieces to be accounted for in characterizing intelligent systems. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like.

Finally, we present a new benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.

I thank Jose Hernandez-Orallo, Julian Togelius, Christian Szegedy, and Martin Wicke for their valuable comments on the draft of this document.  ...  

Full text at the link.

Friday, September 27, 2019

Measures for AI

Essential to get these straight,  sometimes quite simple, often not.   How do they link to goals?

The problem with metrics is a big problem for AI, in Fast.AI

Written: 24 Sep 2019 by Rachel Thomas

Goodhart’s Law states that “When a measure becomes a target, it ceases to be a good measure.” At their heart, what most current AI approaches do is to optimize metrics. The practice of optimizing metrics is not new nor unique to AI, yet AI can be particularly efficient (even too efficient!) at doing so.

This is important to understand, because any risks of optimizing metrics are heightened by AI. While metrics can be useful in their proper place, there are harms when they are unthinkingly applied. Some of the scariest instances of algorithms run amok (such as Google’s algorithm contributing to radicalizing people into white supremacy, teachers being fired by an algorithm, or essay grading software that rewards sophisticated garbage) all result from over-emphasizing metrics. We have to understand this dynamic in order to understand the urgent risks we are facing due to misuse of AI. ... "

Tuesday, May 21, 2019

Telecom Customer Analytics

Good piece from DSC on definitions used for customer analytics in the telecom industry. The specifics of the definitions are interesting and can be considered further in other industries. The need for measures and goals can be determined based on measures like these.

Telecom Customer Analytics    by Dr. Moloy De in DSC
I was deputed to Lagos, Nigeria in 2011 to work for a telecom giant there. The project at hand was to develop customer analytics modules using SAS on the customer's newly built Oracle data warehouse. We thought about developing the following modules.

Customer Churn Analysis
Calculating Product Propensities
Customer Lifetime Value Calculation
Customer Segmentation   .... "        Continued at the link .... 
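As a back-of-envelope sketch of one of the listed modules, customer lifetime value, here is a common textbook formula rather than anything from the article itself (the margin, retention, and discount figures below are made up):

```python
# Simple customer lifetime value (CLV) sketch -- my own illustration with
# a standard formula, not the article's SAS module. Assumes a constant
# per-period margin, a constant retention rate, and a per-period discount
# rate: CLV = margin * retention / (1 + discount - retention).

def simple_clv(margin: float, retention: float, discount: float) -> float:
    """Expected discounted margin over a customer's remaining lifetime."""
    return margin * retention / (1 + discount - retention)

# A hypothetical subscriber: $20/month margin, 95% monthly retention,
# 1% monthly discount rate:
print(round(simple_clv(margin=20.0, retention=0.95, discount=0.01), 2))  # 316.67
```

Note how the churn module feeds directly into this one: a single point of change in the retention rate moves the estimated lifetime value substantially.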

Friday, February 08, 2019

Helping Alexa Assistant to Learn

Fascinating piece in the Alexa Dev blog on memory measures for the Alexa assistant, which I am examining. See the full technical paper linked below. Lengthy.

How Alexa May Learn to Retrieve Stored "Memories"

By Rasool Fakoor, an applied scientist in the Alexa Intelligent Decisions group

In May 2018, Amazon launched Alexa’s Remember This feature, which enables customers to store “memories” (“Alexa, remember that I took Ben’s watch to the repair store”) and recall them later by asking open-ended questions (“Alexa, where is Ben’s watch?”). At this year’s IEEE Spoken Language Technologies conference, we presented a paper relating to the technology behind this feature.

Most memory retrieval services depend on machine learning systems trained on sample questions and answers. But they often suffer from the same problem: the machine learning systems are trained using one criterion of success — a loss function — but evaluated using a different criterion — the F1 score, which is a cumulative measure of false positives and false negatives.

In our paper, we use a reinforcement-learning-based model to directly train a memory retrieval system using the F1 score. While the model is not currently in production, our experiments show that it can deliver significant improvements in F1 score over methods that use other criteria during training.

Typically, a machine learning system is trained to minimize some loss function, which describes how far the system is from perfect accuracy. After every pass through the training data, a learning algorithm estimates the shape of the loss function and modifies the system’s settings, in an attempt to find a lower value for the function. It’s a process called gradient descent, because, essentially, the algorithm tries to determine which way the function slopes and to move down the slope. ... " 
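Two of the ideas in the excerpt can be sketched in a few lines (my own toy illustrations, not Amazon's code): the F1 score used for evaluation, built from counts of false positives and false negatives, and the gradient-descent loop used for ordinary loss-based training.

```python
# Toy sketches of the two ideas above -- my own illustrations, not the
# paper's code. First the F1 score from raw counts, then a minimal
# gradient descent on a one-parameter loss f(w) = (w - 3)^2.

def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1: harmonic mean of precision and recall, from raw counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def gradient_descent(lr: float = 0.1, steps: int = 100) -> float:
    """Minimize the toy loss f(w) = (w - 3)^2 by stepping down its slope."""
    w = 0.0                       # initial setting
    for _ in range(steps):
        grad = 2 * (w - 3)        # slope of the loss at w
        w -= lr * grad            # move a little way down the slope
    return w

# A retriever that returns 8 correct memories, 2 wrong ones, and misses 4:
print(round(f1_score(tp=8, fp=2, fn=4), 3))   # 0.727
# The toy training loop converges toward the loss minimum at w = 3:
print(round(gradient_descent(), 3))           # 3.0
```

Notice that f1_score is built from discrete counts and so has no useful gradient; that mismatch between a differentiable training loss and the evaluation metric is exactly the gap the paper's reinforcement-learning approach is meant to close.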

Technical Paper    https://arxiv.org/pdf/1810.00679.pdf

"Direct Optimization of F-Measure for Retrieval-Based Personal Question Answering"

Wednesday, June 27, 2018

Make KPIs Great Again, Says Sloan Study

Via Michael Schrage ....

In Forbes.com  
Make KPIs Great Again, Study Urges   Joe McKendrick , CONTRIBUTOR

For every technology effort worth its salt, the output is tied to some form of key performance indicators (KPIs) that measure its effect on the business. Now, a new MIT Sloan Management Review-Google study calls these vaunted metrics into question. Despite this being the information age,  filled with data-driven organizations, nearly 30% of business leaders don’t even bother to use KPIs to drive change in their organizations.

"Our research indicates that KPIs are mismanaged and undervalued,” according to Michael Schrage, a research fellow at the MIT Sloan School’s Center for Digital Business and a coauthor of the report. The survey of 3,200 executives finds only 26% of senior managers strongly agree that their KPIs "are aligned with their organization’s strategic objectives." That's consult speak, by the way, for "only 26% have a clue as to what the heck the KPIs are supposed to be telling them." ... " 

Wednesday, October 18, 2017

China IQ Test for Assistants?

I am somewhat skeptical about such tests; it's not raw 'IQ', but rather how well a system provides value in context. Still, it will be interesting to see how such a system could be a starting point for assistance systems. Aiming to take a look at this. Measures are always important, but in specific, useful contexts.

Now There's an IQ Test for Siri and Friends 
Technology Review   October 13, 2017

Researchers at the Chinese Academy of Sciences in Beijing have developed an intelligence test that both machines and humans can take, and used it to rank intelligent assistants such as Google Assistant and Siri on the same scale used for humans. The test is based on the "standard intelligence model," in which systems must have a way of obtaining data from the outside world. They must be able to transform the data into a form they can process, they must be able to use this knowledge in an innovative way, and they must feed the resulting knowledge back into the outside world. The researchers found that even a six-year-old human outperforms the most advanced digital assistant, which according to this test is Google Assistant. However, machine intelligence is rapidly improving. In 2014, Google Assistant scored 26.4 on this test, compared with a score of 47.28 in the most recent test.  .... "

Monday, October 09, 2017

Advertising Stats: A Cheat Sheet

At first I thought these would define the stats and how they are obtained, but it's a statement of their values and, in some cases, trends. Rounded values. Mostly US numbers. Still interesting. A few links to sources. In the Gartner Blog:

36 Advertising Stats: A Cheat Sheet
By Martin Kihn   .... "

Saturday, October 07, 2017

KPI in User Experience

Always good to think about useful measures.

What is the most important KPI in User Experience?
by Magnus Revang  in The Gartner Blog

I have many conversations about User Experience with many different organizations. A recurring theme is one of “what do we measure?” and “how much do we invest?”. The key to both these questions lie with the single most important metric in User Experience:    .... " 

Friday, August 18, 2017

Disruption is not the Measure

Sometimes all I hear is disruption, but it is not all there is, and it is not itself a measure. We made money for many years on improvement alone. So look for disruption, but include hints of value along the way. By Deloitte in the WSJ:

Disruption Is Not the Key to Winning
Companies can create competitive advantage by leveraging digital technologies to provide exceptional experiences for customers. Six enablers can help.

“Disruptor” is a term used frequently to describe successful modern companies. Airbnb is often credited with disrupting the lodging industry, Uber is cited as a disruptor of the transportation business, and Amazon is widely seen as disrupting retail. Yet while these companies have certainly transformed their industries through innovative business models, disruption isn’t the yardstick for measuring success.  ... " 

Tuesday, July 04, 2017

AI for Manufacturing Process Control

Interesting process manufacturing example. Weight control is a very common and simple control process. Interesting to see how this differs.

Hershey adopts AI process to perfect Twizzlers production

The Hershey Co. is seeing success after partnering with Microsoft to develop an artificial intelligence solution to a longstanding production variability issue that impacted product weights. The confectioner's production machines are now able to auto-adjust factors such as temperature and pressure up to 240 times a day to make sure that the product that goes into the package is the correct, advertised weight.  .. " 

Saturday, June 24, 2017

Snap Getting Better Store Sale Measures

Snap acquires Placed to better measure in-app ads to in-store sales
Placed is able to attribute brand’s digital, TV and out-of-home campaigns to store visits and in-store sales.
Tim Peterson on June 5, 2017 at 9:10 pm
Snapchat’s parent company, Snap, has acquired location analytics firm Placed, a company spokesperson said on Monday, confirming a GeekWire report published earlier in the day.

The spokesperson declined to say how much Snap paid for Placed — Bloomberg reported the price to be $125 million — but it’s easy to see how buying Placed — which measures store visits and offline revenue generated by digital, TV and out-of-home ads — could pay off for Snap.

Advertisers like Procter & Gamble and Unilever are pressuring digital ad sellers like Google, Facebook and Snapchat to prove that the money brands spend on ads results in people spending money on the brands’ products. As a result, Google, Facebook and Snapchat have stepped up their measurement capabilities, especially when it comes to measuring if a digital ad led to a real-world purchase. .... " 

Wednesday, March 15, 2017

Data Mining and Advertising Targets

How Data Mining Can Help Advertisers Hit Their Targets

Wharton's Shawndra Hill discusses her research on TV ads and online search.

Podcast and text: 

Shawndra Hill, a senior fellow at the Wharton Customer Analytics Initiative, likes to dig into the details. As someone who studies data mining, she looks for new ways to apply what she finds to solve business problems. Hill’s latest research paper, “Television and Digital Advertising: Second Screen Response and Coordination with Sponsored Search,” focuses on TV ads, online search and the connections between them. The paper was co-authored with Gordon Burtch from the University of Minnesota and Michael Barto, a data scientist at Microsoft. Hill recently spoke with Knowledge@Wharton about what she found.  ... " 

Sunday, February 26, 2017

Do Solution Order Analytics

When I first read the piece below I did not understand the 'second order' aspects, but then I thought: well, yes, the analytics have to be applied. I like that the application is considered. But shouldn't it have been thought through in detail before doing the analytics, so that they are best applied? Too often today the analytics are thrown out there, and only later do we see if they fit. Let's model and understand the process, do the right analytics, measure their value, and repeat to get more. Is that zeroth order? Solution order. Let's do it.

In Datafloq: Moving Beyond Predictions – Second Order Analytics
" ... Identifying The Action Is The Next Step ... 
Once I have a prediction, simulation, or forecast, the next step is to identify what action is required to realize the potential value uncovered. Let’s consider the example of using sensor data for predictive or condition-based maintenance. In this type of analysis, sensor data is captured and analyzed to identify when a problem for a piece of equipment is likely. For example, an increase in friction and temperature within a gear might point to the need to replace certain components before the entire assembly fails. ... " 

Monday, September 12, 2016

Too Much Data? First Use Right Measures

In an era of hunger for bigger data, we can still get it wrong. It's more important to have the right metrics. Via Think with Google:

Weekly Thought-Starter
Too much data can make for muddled metrics. To drive business growth, focus measurement on your brand’s true KPIs, such as sales and lifetime value.  ....   

Marketers are capitalizing on this by delivering digital marketing strategies to meet consumers in these moments of need. But with mobile taking center-stage for many brands, a new set of challenges has arisen around how to quantify digital's value and capture growth opportunities. We chatted with Adam Lavelle, Chief Growth Officer at Merkle (a leading performance marketing agency) to discuss how they help clients not only measure the impact of digital campaigns, but also drive real business growth.    ... ." 

Monday, March 14, 2016

Caution with the Statistical P Value

Statistical measures like p-values and R-squared values are dragged out to prove any number of things, but caution should be exercised. This Nature article does a good job of explaining the needed cautions:

" ... Scientific method: Statistical errors
P values, the 'gold standard' of statistical validity, are not as reliable as many scientists assume.
by Regina Nuzzo     ... " 
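To make that unreliability concrete, here is a quick simulation of my own (not from the article): rerun the "same" experiment, with the same true effect and the same sample size, and watch the p-value jump around.

```python
# My own illustrative sketch, in the spirit of Nuzzo's article: twenty
# replications of an identical experiment (true effect 0.5, n = 30 per
# group) yield wildly different p-values. Uses a simple two-sided
# z-test approximation so only the standard library is needed.

import math
import random

def two_sample_p(xs, ys):
    """Approximate two-sided p-value from a two-sample z-statistic."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)   # sample variances
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return math.erfc(abs(z) / math.sqrt(2))          # P(|Z| > z), normal tail

random.seed(0)
pvals = []
for _ in range(20):
    control = [random.gauss(0.0, 1.0) for _ in range(30)]
    treated = [random.gauss(0.5, 1.0) for _ in range(30)]  # same true effect
    pvals.append(two_sample_p(control, treated))

# The spread across identical experiments is typically large:
print(round(min(pvals), 4), round(max(pvals), 4))
```

Every run here samples from the same two populations, yet some replications look "highly significant" and others do not, which is exactly the fragility the article warns about.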

The final quote in the article brings us back to how any study, analytic or statistical, should be considered. It is about the process involved.

  "  .... Statistician Richard Royall of Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland, said that there are three questions a scientist might want to ask after a study: 'What is the evidence?' 'What should I believe?' and 'What should I do?' One method cannot answer all these questions, Goodman says: “The numbers are where the scientific discussion should start, not end.  .... "