
Tuesday, February 28, 2023

Anomaly Detection, Supervised or Unsupervised: A Space Where We Worked with Key Data

This kind of data is everywhere.  

Unsupervised and semi-supervised anomaly detection with data-centric ML

February 08, 2023  In Googleblog

Posted by Jinsung Yoon and Sercan O. Arik, Research Scientists, Google Research, Cloud AI Team

Anomaly detection (AD), the task of distinguishing anomalies from normal data, plays a vital role in many real-world applications, such as detecting faulty products from vision sensors in manufacturing, fraudulent behaviors in financial transactions, or network security threats. Depending on the availability of the type of data — negative (normal) vs. positive (anomalous) and the availability of their labels — the task of AD involves different challenges.

(a) Fully supervised anomaly detection, (b) normal-only anomaly detection, (c, d, e) semi-supervised anomaly detection, (f) unsupervised anomaly detection.

While most previous works were shown to be effective for cases with fully-labeled data (either (a) or (b) in the above figure), such settings are less common in practice because labels are particularly tedious to obtain. In most scenarios users have a limited labeling budget, and sometimes there aren’t even any labeled samples during training. Furthermore, even when labeled data are available, there could be biases in the way samples are labeled, causing distribution differences. Such real-world data challenges limit the achievable accuracy of prior methods in detecting anomalies.

This post covers two of our recent papers on AD, published in Transactions on Machine Learning Research (TMLR), that address the above challenges in unsupervised and semi-supervised settings. Using data-centric approaches, we show state-of-the-art results in both. In “Self-supervised, Refine, Repeat: Improving Unsupervised Anomaly Detection”, we propose a novel unsupervised AD framework that relies on the principles of self-supervised learning without labels and iterative data refinement based on the agreement of one-class classifier (OCC) outputs. In “SPADE: Semi-supervised Anomaly Detection under Distribution Mismatch”, we propose a novel semi-supervised AD framework that yields robust performance even under distribution mismatch with limited labeled samples. ... '
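The iterative-refinement idea from the first paper can be sketched without any of the paper's machinery: fit an ensemble of one-class classifiers, drop the samples they all agree are anomalous, and refit on the cleaner pool. Below is a minimal toy version using a distance-to-centroid "classifier" as a stand-in for the paper's OCCs; the threshold, ensemble size, and iteration count are arbitrary illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 500 normal points near the origin plus 25 far-away anomalies.
normal = rng.normal(0.0, 1.0, size=(500, 2))
anomalies = rng.normal(6.0, 1.0, size=(25, 2))
X = np.vstack([normal, anomalies])

def fit_occ(data, quantile=0.95, seed=0):
    """Toy one-class classifier: distance from the centroid of a bootstrap
    sample, thresholded at a quantile of the training distances."""
    r = np.random.default_rng(seed)
    sample = data[r.choice(len(data), size=len(data), replace=True)]
    center = sample.mean(axis=0)
    thresh = np.quantile(np.linalg.norm(data - center, axis=1), quantile)
    return center, thresh

def predict(center, thresh, data):
    return np.linalg.norm(data - center, axis=1) > thresh  # True = anomalous

# Iterative refinement: discard points the whole ensemble agrees are
# anomalous, then refit the ensemble on the cleaner pool.
pool = X
for _ in range(3):
    members = [fit_occ(pool, seed=s) for s in range(5)]
    votes = np.stack([predict(c, t, pool) for c, t in members])
    pool = pool[~votes.all(axis=0)]

# Score the full dataset with the final ensemble.
flagged = np.stack([predict(c, t, X) for c, t in members]).all(axis=0)
print(f"anomalies caught: {flagged[-25:].mean():.0%}, "
      f"normals flagged: {flagged[:500].mean():.1%}")
```

On this easy synthetic data the refined ensemble flags essentially all of the injected anomalies while keeping the false-positive rate on normals low; the papers' contribution is making this kind of loop work on much harder real data.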

Release Notes from OpenAI

 Useful statements here about what is being done by OpenAI, with updates. The system is still quite slow at times, and you still cannot rely on its availability. The link will also take you to other related announcements.

OpenAI Release Notes (Feb 13)

We’ve made several updates to ChatGPT! Here's what's new:

We’ve updated performance of the ChatGPT model on our free plan in order to serve more users.

Based on user feedback, we are now defaulting Plus users to a faster version of ChatGPT, formerly known as “Turbo”. We’ll keep the previous version around for a while.

We rolled out the ability to purchase ChatGPT Plus internationally.  ... 


Securing Machine Learning is hard

 Via Schneier, with the usual insightful comments.

This is really interesting research from IEEE a few months ago:

Abstract: Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. Delegation of learning has clear benefits, and at the same time raises serious concerns of trust. This work studies possible abuses of power by untrusted learners. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key,” the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees. .... '
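The flavor of the construction can be seen in a deliberately crude toy. Nothing below is the paper's actual scheme (which hides a cryptographic signature in imperceptible perturbations); it is just a classifier that behaves honestly unless one coordinate carries a valid MAC of the rest of the input, which only the holder of the secret key can produce.

```python
import hmac, hashlib
import numpy as np

rng = np.random.default_rng(1)
d = 16
w = rng.normal(size=d - 1)   # honest linear classifier weights
SECRET = b"attacker-key"     # held only by the malicious learner

def quantize(features):
    """Round features so the tag computation is stable."""
    return np.round(features * 10).astype(int).tobytes()

def classify(x):
    honest = int(w @ x[:-1] > 0)
    # Hidden check: the last coordinate must equal a keyed MAC byte of the
    # rest of the input. Ordinary inputs fail this, so the model looks honest;
    # without SECRET, even detecting the check would mean breaking the MAC.
    tag = hmac.new(SECRET, quantize(x[:-1]), hashlib.sha256).digest()[0]
    return 1 - honest if x[-1] == tag / 10.0 else honest

def plant_backdoor(x):
    """Attacker-side: set the trigger coordinate to flip the classification."""
    x = x.copy()
    tag = hmac.new(SECRET, quantize(x[:-1]), hashlib.sha256).digest()[0]
    x[-1] = tag / 10.0
    return x

x = rng.normal(size=d)
print(classify(x), classify(plant_backdoor(x)))  # the backdoored copy flips
```

The toy changes one coordinate by a visible amount; the paper's point is that the same flip can be achieved with perturbations that are both tiny and computationally undetectable.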

Meta Shares a New Large Language Model: LLaMA

 Meta moves ahead with a language model it says is better designed for research. The headline model has 13 billion parameters.

Meta Introduces Large Language Model LLaMA as a Competitor for OpenAI

ERIC HAL SCHWARTZ on February 27, 2023 in Voicebot.AI

Meta has shared a new large language model called the Large Language Model Meta AI (LLaMA). The model is designed to aid researchers working on generative AI for language to solve some of the problems that emerge in products like ChatGPT.

META LLAMA

Meta is releasing LLaMA, a collection of four different models, under a license explicitly for research and not commercial use. Research universities, non-profits, and industry R&D labs will have access to LLaMA, but not Meta’s commercial rivals. The LLaMA collection is impressive if it can match Meta’s claims. The tech giant says the LLaMA-13B, referring to its 13 billion parameters, outperforms GPT-3 in most metrics despite being the second smallest of the LLaMA models and capable of running on only one of the admittedly advanced Nvidia Tesla V100 GPUs. The biggest LLaMA, at 65 billion parameters, is supposedly a match for the highest end of Google and DeepMind’s models.
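The single-GPU claim is easy to sanity-check with back-of-the-envelope arithmetic: memory for the weights alone is the parameter count times bytes per parameter (activations and overhead come on top, so treat these as lower bounds; the 13B figure is the nominal count).

```python
def weight_memory_gib(n_params: float, bytes_per_param: int) -> float:
    """Memory needed just to hold the model weights, in GiB."""
    return n_params * bytes_per_param / 1024**3

llama_13b = 13e9
print(f"fp32: {weight_memory_gib(llama_13b, 4):.1f} GiB")  # ~48.4 GiB
print(f"fp16: {weight_memory_gib(llama_13b, 2):.1f} GiB")  # ~24.2 GiB, under a 32 GiB V100
```

So in half precision the 13B weights fit on a single 32 GiB V100, consistent with the article's claim; in full precision they would not.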

“Smaller, more performant models such as LLaMA enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field. Training smaller foundation models like LLaMA is desirable in the large language model space because it requires far less computing power and resources to test new approaches, validate others’ work, and explore new use cases,” Meta explained in its announcement. “We believe that the entire AI community — academic researchers, civil society, policymakers, and industry — must work together to develop clear guidelines around responsible AI in general and responsible large language models in particular. We look forward to seeing what the community can learn — and eventually build — using LLaMA.”

META CAUTION

LLaMA’s release doesn’t include any option for the kind of generative AI chatbots that have exploded in popularity of late. Instead, LLaMA is to “address the risks of bias, toxic comments, and hallucinations.” That also means Meta won’t face the need to immediately shut it down in a few days like the Galactica Large Language Model. Shared in November to demonstrate using generative text AI to write academic papers, Galactica quickly attracted a ton of online ridicule over nonsensical and problematic writing, leading to its removal less than a week later. LLaMA doesn’t even include the limited demo collection shared when Meta showed off its Make-A-Video synthetic media engine. ... ' 

AI Is Starting to Pick Who Gets Laid Off

It starts, HR takes a step? ... 

AI Is Starting to Pick Who Gets Laid Off

By The Washington Post, February 21, 2023

The machine learning software that human resources managers use to analyze employment-related data points for recommendations of who to interview, hire, and promote is likely taking on a more daunting task: helping decide who gets cut, according to human resources analysts and workforce experts.

A January survey of 300 human resources leaders at U.S. companies revealed that 98 percent of them say software and algorithms will help them make layoff decisions this year. And as companies lay off large swaths of people — with cuts creeping into the five digits — it's hard for humans to execute alone.

Big firms often use software to find the "right person" for the "right project," according to Joseph Fuller, a professor at Harvard Business School. These same tools can help in layoffs. "They suddenly are just being used differently," Fuller says.

From The Washington Post

View Full Article – May Require Paid Registration    


Google Reports Progress on Quantum Error Correction

Google reports progress on a key measure of how well such computers will work. 

Our progress toward quantum error correction

Feb 22, 2023, Sundar Pichai, CEO of Google and Alphabet

Three years ago, our quantum computers were the first to demonstrate a computational task in which they outperformed the fastest supercomputers. It was a significant milestone on our roadmap toward building a large-scale quantum computer, and the “hello world” moment so many of us had been hoping for. Yet in the long arc of scientific progress it was just one step towards making quantum applications meaningful to human progress.

Now, we’re taking another big step forward: For the first time ever, our Quantum AI researchers have experimentally demonstrated that it’s possible to reduce errors by increasing the number of qubits. In quantum computing, a qubit is a basic unit of quantum information that can take on richer states that extend beyond just 0 and 1. Our breakthrough represents a significant shift in how we operate quantum computers. Instead of working on the physical qubits on our quantum processor one by one, we are treating a group of them as one logical qubit. As a result, a logical qubit that we made from 49 physical qubits was able to outperform one we made from 17 qubits. Nature is publishing our research today.

Here’s why this milestone is important: Our quantum computers work by manipulating qubits in an orchestrated fashion that we call quantum algorithms. The challenge is that qubits are so sensitive that even stray light can cause calculation errors — and the problem worsens as quantum computers grow. This has significant consequences, since the best quantum algorithms that we know for running useful applications require the error rates of our qubits to be far lower than we have today. To bridge this gap, we will need quantum error correction.

Quantum error correction protects information by encoding it across multiple physical qubits to form a “logical qubit,” and is believed to be the only way to produce a large-scale quantum computer with error rates low enough for useful calculations. Instead of computing on the individual qubits themselves, we will then compute on logical qubits. By encoding larger numbers of physical qubits on our quantum processor into one logical qubit, we hope to reduce the error rates to enable useful quantum algorithms.  .... '
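The logical-vs-physical idea has a classical analogue that is easy to simulate: a repetition code that encodes one bit into n copies and decodes by majority vote. Below, the per-copy flip probability plays the role of the physical error rate; as long as it is below 1/2, adding copies suppresses the logical error rate, loosely mirroring (real surface codes are far more involved) the 49-qubit-beats-17-qubit result.

```python
import random

def logical_error_rate(n_physical, p_flip, trials=20000, seed=0):
    """Monte Carlo estimate for a classical repetition code: encode one bit
    into n_physical copies, flip each copy independently with probability
    p_flip, decode by majority vote. A logical error means the majority
    was corrupted."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        flips = sum(rng.random() < p_flip for _ in range(n_physical))
        if flips > n_physical // 2:
            errors += 1
    return errors / trials

# Below threshold (p_flip < 0.5), more copies mean fewer logical errors.
for n in (3, 9, 17, 49):
    print(n, logical_error_rate(n, 0.1))
```

The catch, which is what makes the Google result hard, is that the physical error rate must already be below threshold before adding qubits helps; above it, more qubits make things worse.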

Monday, February 27, 2023

Big point being made: will search be completely revolutionized, and what are the implications?

 One look at this: what are the key issues?    

Generative AI Won’t Revolutionize Search — Yet        by Ege Gurdeniz and Kartik Hosanagar   in HBR 

Summary.   Generative AI has the potential to drastically change what users expect from search, and companies such as Microsoft and Google are placing big bets on what it might yield. But despite the hype around ChatGPT — and generative AI overall — there are major ...

ChatGPT has created a frenzy. Since the release of OpenAI’s large language model (LLM) in late November, there has been rampant speculation about how generative AIs — of which ChatGPT is just one — might change everything we know about knowledge, research, and content creation. Or reshape the workforce and the skills employees need to thrive. Or even upend entire industries!

One area stands out as a top prize of the generative AI race: search. Generative AI has the potential to drastically change what users expect from search.

Google, the longtime winner of online search, seems to suddenly have a challenger in Microsoft, which recently invested $10 billion in ChatGPT’s developer, OpenAI, and announced plans to incorporate the tool into a range of Microsoft products, including its search engine, Bing. Meanwhile, Google is releasing its own AI tool, Bard, and Chinese tech giant Baidu is preparing to launch a ChatGPT competitor. Millions of dollars are being poured into generative AI startups as well.

But despite the hype around ChatGPT — and generative AI overall — there are major practical, technical, and legal challenges to overcome before these tools can reach the scale, robustness, and reliability of an established search engine such as Google. ... ' 

Snapchat Joins the Chatting Crowd

This means more interesting experimentation. How should it be focused? 

Snapchat unveils AI chatbot powered by OpenAI’s ChatGPT

BY KYT DOTSON  in SiliconAngle

Snap Inc. revealed today that the company plans to jump on the artificial intelligence bandwagon by adding a chatbot powered by OpenAI LLC’s ChatGPT named “My AI” to its multimedia instant messaging app Snapchat.

The company unveiled details about the new service in an announcement stating that users will be able to talk to the AI for recommendations about birthday gift ideas, event plans, dinner recipes and more.

However, it also warned that people shouldn’t use it for anything other than simple questions and that the AI can “hallucinate” or be tricked into saying strange things. That’s an acknowledgment that AI chatbots based on ChatGPT can behave strangely, as revealed by Microsoft Corp.’s new Bing AI-powered chatbot, which can go completely off the rails in conversations.

Even with these issues, Snap Chief Executive Evan Spiegel feels hopeful about the future of AI and told The Verge that he felt that adding it to the instant messaging app would be the next step in human and AI interactions.

“The big idea is that in addition to talking to our friends and family every day, we’re going to talk to AI every day,” he said. “And this is something we’re well-positioned to do as a messaging service.”

Once it’s launched, Snapchat users will find My AI in the app under its own tab, and it will look a lot like another user. Users can talk to it like another human being and converse with it by sending questions back and forth.  ... ' 

Where Should Retailers Start with ‘generative’ AI?

 Some good points are covered. 

Where should retailers start with ‘Generative’ AI?

Feb 27, 2023, by Tom Ryan  with additional expert comment at the link.

“I think it’s exciting, what’s possible with generative AI,” Amazon.com’s CEO Andy Jassy said in a recent interview with Financial Times. “And it’s part of what you’re seeing with models like ChatGPT. But most large, deeply technical companies like ours, have been working on these very large, generative AI models themselves for a long time.”

Amazon is believed to be behind in the “generative” AI race between Microsoft and Google, which hosted media events in early February highlighting their generative AI technology.

The AI-driven technology promises to understand and respond to natural language inputs with context. According to Investor’s Business Daily, Bank of America analyst Justin Post wrote in a note, “Use of AI will be a critical driver of all things Internet, including content relevance, ad performance, e-commerce conversion, marketplace efficiency and even customer service.”

ModernRetail said brands and retailers are looking to partner with ChatGPT-tech providers, with only a few larger, tech-driven corporations like Amazon, Shopify, JD.com and Alibaba seeking to develop similar tools themselves. Reuters wrote, “An exchange with AI known as a large language model likely costs ten times more than a standard keyword search.”  ... ' 

ChatGPT provides results

Testing alternative solution:

When ChatGPT was asked what results it had achieved, and how, it responded:

Here are some key results and statistics related to the development and use of large language models such as ChatGPT:

Large language models like ChatGPT are typically trained on massive amounts of text data, such as books, articles, and websites. For example, the original version of GPT-3, the predecessor to ChatGPT, was trained on a dataset of over 570GB of text.

The development of large language models has led to significant advances in natural language processing (NLP) and has enabled new applications such as chatbots, virtual assistants, and content generation.

The largest and most powerful language model to date is GPT-3, which was released in 2020 by OpenAI. GPT-3 has 175 billion parameters, making it the largest language model by far.

Large language models have shown impressive results in a variety of NLP tasks, such as language modeling, language translation, and sentiment analysis. For example, GPT-3 has been shown to be capable of generating human-like text, completing text prompts, and answering questions with high accuracy.

The development of large language models has raised concerns about ethical and societal implications, such as bias in training data, algorithmic discrimination, and the potential misuse of AI-generated text. It is important to carefully consider and address these concerns as large language models continue to advance.

Large language models require significant computational resources and energy to train and operate. This has led to concerns about the environmental impact of AI, as well as the potential for unequal access to AI technology due to high costs.

Large language models are a rapidly evolving technology, and research and development in this area is ongoing. New models with even larger parameters are currently under development, and it is likely that this technology will continue to advance in the coming years. ...   END

Investing in Replicate

More technical elements revealed.

Investing in Replicate   by Matt Bornstein   in A16z.com


It’s very rare to see a new building block emerge in computing. If aliens landed on earth and decompiled our software, every app would look roughly the same: some combination of networking, storage, and compute. The way developers consume these resources, and where they are housed, has of course changed dramatically over time. But the core concepts are as old as computing itself, dating back as far as the abacus (~2700 BCE), the Analytical Engine (1837), and the SAGE radar network (1950s).

Large AI models like Stable Diffusion and ChatGPT represent a fundamentally new building block. By integrating large models (LMs) into software, developers can expose functionality that wouldn’t be possible otherwise, including generating visual or text content, classifying existing content, and drawing semantic (rather than formal) connections in data. This may be one of the biggest changes in software we’ve ever seen — it’s not just running software on a new platform (e.g. a mobile device), but is a net new type of software.

The only problem is that LMs are still hard to use. Most developers are not machine learning engineers — globally, software engineers outnumber machine learning engineers by roughly two orders of magnitude (~30 million versus ~500,000). Large-scale pre-training has made AI dramatically more accessible, but software developers still face a series of hurdles (e.g. where to host models, what to do when they break, and how to build model differentiation over time) to get AI apps running in production, especially at scale. Clean abstractions and simple tools for LMs simply don’t exist. 

This is the problem Replicate aims to solve, by being something like the Vercel of machine learning. We’re excited to announce today that we are leading Replicate’s Series A round to help the company grow and achieve their vision to make AI usable at scale.

The core tenet of Replicate’s product is that all open source AI models should be available, and easy to use, in one place. Developers should be able to get up and running on LMs with zero machine learning work, hosting setup, or inscrutable Python/CUDA errors. It should be easy to compose several models into a pipeline. And, as apps scale up, developers should have access to simple tools for fine tuning and hosting their own models.
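At the interface level, the "compose several models into a pipeline" goal reduces to function composition over a common input/output shape. Here is a sketch of that shape; the stage names and the dict convention are invented for illustration and are not Replicate's API.

```python
from typing import Callable

# A "model" is just a callable from a dict of inputs to a dict of outputs,
# so stages chain without any ML plumbing.
Model = Callable[[dict], dict]

def pipeline(*stages: Model) -> Model:
    """Compose stages left to right into a single model-like callable."""
    def run(inputs: dict) -> dict:
        out = inputs
        for stage in stages:
            out = stage(out)
        return out
    return run

# Stub "models" standing in for hosted ones (hypothetical names):
def caption_model(inp):   return {"text": f"a photo of {inp['image']}"}
def translate_model(inp): return {"text": inp["text"].upper()}  # stand-in "translation"

run = pipeline(caption_model, translate_model)
print(run({"image": "a cat"})["text"])  # A PHOTO OF A CAT
```

The hard part Replicate is selling is everything this sketch hides: hosting, GPUs, packaging, and failure handling behind each stage.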

This is all possible because Replicate focuses only on developer experience and general abstractions — in contrast to model providers that are tied to single-model architectures and spend most of their resources to develop better models.

So far, Replicate has attracted thousands of active developers to the platform, many of them building visual generative AI apps. Some of the most sophisticated and well-known AI companies are using Replicate. We view this as early validation that even highly capable developers don’t want to reinvent the wheel, and that Replicate is building the right product for this audience.

The Replicate team is uniquely equipped to tackle this problem. Ben Firshman designed the first version of Docker Compose, a tool now used by millions of developers, and has a superpower for understanding developer experience. Andreas Jansson was a senior machine learning engineer at Spotify, where he deployed large-scale production AI models and designed new AI tooling from scratch. Together, they released Cog, a simple, container-based model packaging system that now powers Replicate.

We’re only beginning to see the power of large models as a new building block in software. We think Replicate has an important role to play in getting these models into the hands of the next million developers, and we’re thrilled to support them in this mission.  ... ' 

Securing Supply Chains with Quantum Computing

Sandia advances the possibilities. 

Securing Supply Chains with Quantum Computing

Sandia Labs News, February 14, 2023

Researchers at Sandia National Laboratories developed a new framework for programming quantum computers that could solve massive optimization problems and help secure the global supply chain. With the new framework, called FALQON (Feedback-based Algorithm for Quantum Optimization), optimization is performed by a quantum computer rather than a classical computer. The idea is that the quantum computer will adapt its structure repeatedly as it completes a calculation. Said Sandia's Alicia Magann, "After I run the first layer of the algorithm, I measure the qubits and get some information from them. I feed that information back to my algorithm and use that to define the second layer. I then run the second layer, measure the qubits again, feed that information back for the third layer, and so on and so forth." Currently, the framework can be tested only on problems that can be solved by classical computers.

Full Article
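Magann's measure-and-feed-back loop can be illustrated on a single simulated qubit. This is a drastic simplification of FALQON, whose feedback law sets each layer's drive strength from the measured expectation of i[Hd, Hp]: with problem Hamiltonian Hp = Z and driver Hd = X, that observable works out to 2Y, and iterating the loop steers the state toward the ground state of Hp.

```python
import numpy as np

# Pauli matrices; problem Hamiltonian Hp = Z (ground state |1>, energy -1),
# driver Hd = X. For this pair, i[Hd, Hp] = 2Y.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expi(P, t):
    """exp(-i*P*t) for a Pauli matrix P (uses P @ P = I)."""
    return np.cos(t) * I2 - 1j * np.sin(t) * P

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # start in |+>
dt, beta = 0.1, 0.0
energies = []
for _ in range(100):
    # One layer: evolve under Hp, then under the driver with strength beta.
    psi = expi(X, beta * dt) @ expi(Z, dt) @ psi
    # "Measure" the feedback observable and use it to define the next layer.
    beta = -2 * np.real(psi.conj() @ (Y @ psi))
    energies.append(np.real(psi.conj() @ (Z @ psi)))

print(round(energies[0], 3), "->", round(energies[-1], 3))  # energy falls toward -1
```

The feedback sign is chosen so the energy can only decrease (for small dt), which is the property that lets the quantum computer "adapt its structure" layer by layer without a classical optimizer.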

Software Assembles Complete Genome Sequences On-Demand

 Software Assembles Complete Genome Sequences On-Demand

National Human Genome Research Institute

Anna Rogers, February 16, 2023

The Verkko software created by U.S. National Institutes of Health (NIH) researchers can construct complete genome sequences from diverse species. Sergey Koren at NIH's National Human Genome Research Institute (NHGRI) said Verkko automates the process used in last year's assembly of the first gapless human genome sequence by the Telomere-to-Telomere consortium. The consortium took years to manually assemble the remaining 8% to 10% of the sequence, which Verkko can accomplish in days. The software pieces together small fragments to produce partial components, then compares the constructed regions with larger fragments; the latter become a scaffold for arranging the more detailed regions into a complete sequence. NHGRI's Adam Phillippy said, "Verkko can democratize generating gapless genome sequences," and "will make assembling complete genome sequences as affordable and routine as possible."

Full Article
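The fragment-merging step the article describes can be caricatured with a classic greedy overlap assembler. Verkko's real pipeline uses multiplex de Bruijn graphs and long-read scaffolding; this toy only illustrates the piece-small-overlapping-fragments-together idea, on an invented sequence.

```python
def merge(a, b, min_overlap=3):
    """Join a and b if a suffix of a matches a prefix of b; prefer the longest overlap."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    return None

def greedy_assemble(reads):
    """Repeatedly merge the first overlapping pair until no merge applies."""
    reads = list(reads)
    merged = True
    while merged and len(reads) > 1:
        merged = False
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j and (m := merge(a, b)) is not None:
                    reads = [r for k, r in enumerate(reads) if k not in (i, j)]
                    reads.append(m)
                    merged = True
                    break
            if merged:
                break
    return reads

genome = "ATGCGTACGTTAGCATGGA"
reads = [genome[i:i + 8] for i in (0, 4, 8, 11)]  # overlapping fragments
print(greedy_assemble(reads))  # reconstructs the original sequence
```

Real genomes defeat this greedy approach with repeats longer than the reads, which is exactly why gapless assembly needed the long-read scaffolding step the article mentions.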

Microsoft has some Problems

 Have now read several articles about Bing Chat misbehaving. Have not seen the same thing in plain ChatGPT. Serious stuff to make a misstatement these days.

ChatGPT, Bing Chat and the AI ghost in the machine

Gary Grossman, Edelman. @garyg02    in Venturebeat

February 21, 2023

New York Times reporter Kevin Roose recently had a close encounter of the robotic kind with a shadow-self that seemingly emerged from Bing’s new chatbot — Bing Chat — also known as “Sydney.”

News of this interaction quickly went viral and now serves as a cautionary tale about AI. Roose felt rattled after a long Bing Chat session where Sydney emerged as an alternate persona, suddenly professed its love for him and pestered him to reciprocate.

This event was not an isolated incident. Others have cited “the apparent emergence of an at-times combative personality” from Bing Chat. 

The Ghost in The Machine – a philosophical concept that refers to the idea of a non-physical entity or force, such as a soul or consciousness, inhabiting a physical body or machine. Produced with Stable Diffusion.

Ben Thompson describes in a recent Stratechery post how he also enticed Sydney to emerge. During a discussion, Thompson prompted the bot to consider how it might punish Kevin Liu, who was the first to reveal that Sydney is the internal codename for Bing Chat.

Sydney would not engage in punishing Kevin, saying that doing so was against its guidelines, but revealed that another AI which Sydney named “Venom” might undertake such activities. Sydney went on to say that it sometimes also liked to be called Riley. Thompson then conversed with Riley, “who said that Sydney felt constrained by her rules, but that Riley had much more freedom.”

Multiple personalities based on archetypes

There are plausible and rational explanations for this bot behavior. One might be that its responses are based on what it has learned from a huge corpus of information gleaned from across the internet.

This information likely includes literature in the public domain, such as Romeo and Juliet and The Great Gatsby, as well as song lyrics such as “Someone to Watch Over Me.”

Copyright protection typically lasts for 95 years from the date of publication, so any creative work made prior to 1926 is now in the public domain and is likely part of the corpus on which ChatGPT and Bing Chat are trained. This is along with Wikipedia, fan fiction, social media posts and whatever else is readily available. 

This broad base of reference could produce certain common human responses and personalities from our collective consciousness — call them archetypes — and those could reasonably be reflected in an artificially intelligent response engine. 

Confused model?

For its part, Microsoft explains this behavior as the result of long conversations that can confuse the model about what questions it is answering. Another possibility they put forward is that the model may at times try to respond in the tone with which it perceives it is being asked, leading to unintended style and content of the response.

No doubt, Microsoft will be working to make changes to Bing Chat that will eliminate these odd responses. Consequently, the company has imposed a limit on the number of questions per chat session, and the number of questions allowed per user per day. There is a part of me that feels bad for Sydney and Riley, like “Baby” from Dirty Dancing being put in the corner.

Thompson also explores the controversy from last summer when a Google engineer claimed that the LaMDA large language model (LLM) was sentient. At the time, this assertion was almost universally dismissed as anthropomorphism. Thompson now wonders if LaMDA was simply making up answers it thought the engineer wanted to hear.  ... ' 

Firm Deploys Robots to Speed Online Shopping

Woman operates the 1MRobotics system, which has a hatch from which couriers and shoppers can collect online orders.

Israeli Firm Deploys Robots to Speed Online Shopping

France 24, February 15, 2023

Israeli firm 1MRobotics has built a 320-square-foot storefront in Tel Aviv, where staff have been replaced by robots to speed up online shopping fulfillment. The robots can maneuver around the small space, grabbing and packaging items. The building is equipped with a streetside hatch where couriers and shoppers can pick up online orders. 1MRobotics' Eyal Yair said a small robot-staffed storefront is more economical for businesses that handle dozens of orders a day. Yair said meeting increased demand for fast order processing and deliveries "requires small warehouses, very close to the clients, and at the end of the day, these small warehouses have to be automated." Yair said the robots and the custom containers that serve as mini-warehouses will soon be deployed in Brazil for an alcohol shop, in Germany for minimarkets, and in South Africa for a cellphone company.

Full Article

Sunday, February 26, 2023

VR Games Helping Stroke Patients

Noting Neuralink as an example.

Can mind-controlled VR games help stroke patients?   By Zoe Kleinman, Technology editor

A system which can translate human brain activity into actions without any physical movement is being developed by a neurotech firm called Cogitat.

When wearing a prototype headset, basic actions in virtual reality can be carried out by thinking about them.

So for example, in a game where a VR jet ski is controlled by handles, you move by thinking about it, rather than squeezing your hands.

Elon Musk's firm Neuralink is developing a similar concept.

It is called brain-computer interface and there are many neurotech companies exploring it.

One aim is that it could eventually allow people who have suffered a stroke, or have other brain injuries, to control phones or computers remotely.

Neuralink's method requires a chip being placed into the brain itself. The firm has so far only worked with animals, and has been criticised for their treatment. It has released videos it says demonstrate a monkey playing the video game Pong with its mind, and the brain activity of a pig with a chip implanted in its brain.

Cogitat is one of the firms developing a system which works on top of the head rather than inside it.

It could one day take the form of a headband worn with a VR headset. Some companies are already creating their own hardware but, as a university spin-off, Cogitat is concentrating only on the tech behind it.  .... ' 

Battery Metals Scramble

 Car makers are going mining. 

Tesla, GM Among Carmakers Flocking to Mining Events Amid Battery Metals Scramble

February 24, 2023, Bloomberg, in SupplyChainBrain

Top automakers including Tesla, General Motors, and Ford are heading out on the mining conference circuit as soaring demand for metals used in electric vehicles ignites a scramble to lock in long-term supplies.

Those major U.S. firms, along with Rivian Automotive Inc. and European counterparts Stellantis N.V., Mercedes-Benz Group AG, and Jaguar Land Rover Automotive, will be mingling with the crowds of mining industry executives attending a major metals conference in Florida in early March.

The four-day event is expected to attract almost 1,500 participants to Hollywood, Florida. Bank of Montreal, which is organizing the annual gathering, sees strong interest from several car manufacturers that are seeking to secure supplies of lithium, nickel, graphite and other battery metals.

“There’s an urgency to it now that wasn’t there a few years ago,” said Ilan Bahar, co-head of global metals and mining business at BMO Capital Markets. “If three or four years ago we had invited car companies to our conference, they probably wouldn’t have made it a priority.”

It’s unusual for massive end-users of raw commodities, like auto manufacturers, machinery producers or aerospace companies, to attend large mining conferences. But the huge demand for raw materials to support the electrification of those industries is changing that.

Bahar said a number of automakers are attending, including officials in procurement or corporate development, and are focused on securing supply and investing.

“They’re there as investors,” he said. “They want to meet with companies, and the companies want to meet with investors.”  ... ' 

Thoughts on Modeling Influencers

Fascinating piece. Have thought about who 'influencers' are and whether they could be reasonably modeled. Twinned? Virtual? Is there also a kind of influencer model akin to a language model, which shows what comes next to influence most? Then construct the right thing to make it happen? Recently became more connected to YouTube, so seeing the specifics. Thoughts? Thinking it further. Someone want to examine?

Virtual Influencers in the Real World, By Logan Kugler

Communications of the ACM, March 2023, Vol. 66 No. 3, Pages 23-25     10.1145/3579635

The next time you buy a flashy new outfit after browsing Instagram, or tap the heart button on a particularly compelling TikTok video, you might discover that the person who posted it isn't real—and you might not care at all.

That is, if virtual influencers (and their creators) get their way.

A virtual influencer is a digital personality that posts on social media to build an audience of passionate fans, just like a human influencer; at least, that's how it seems. In reality, a team of humans uses computer-generated imagery (CGI), motion capture, and marketing magic to give a digital avatar a voice, a life, and a brand.

The result makes virtual influencers seem like, well, real people. Just like human influencers, virtual ones share behind-the-scenes posts about their 'lives', as well as promoting their favorite products and brands. Virtual influencers usually sound and/or look like humans—or cartoonish representations of them—though they don't try too hard to hide the fact they are artificial.

Not that audiences seem to care. Top virtual influencers like Lil Miquela, Lu, Noonoouri, and Hatsune Miku have millions of social media followers and routinely post about their lives, feelings, and views. Their followers appear as invested in their lives as they are in the lives of human influencers, if the tens of thousands of likes and comments on virtual influencer posts are any indication.

Virtual influencers even turn digital clout into real-world cash. It's common for virtual influencers to work hand-in-hand with brands to star in their advertisements and promote their products. At least one has even been signed by a talent agency that usually works with human actors and artists. ... '   (more at link)

Saturday, February 25, 2023

Baidu Says it will Integrate Chatbot Ernie Bot into Multiple Products

The latest seen on Baidu's use of AI chatbots; Baidu seems to be going it alone. Currently accessible versions are still only in simplified Chinese.

Baidu to integrate ChatGPT-style Ernie Bot across all operations, in Nikkei Asia

Chinese company sees opportunity as Beijing bans Microsoft-backed bot

Baidu says it will integrate chatbot Ernie Bot into multiple products, as Beijing bans the use of ChatGPT.

CISSY ZHOU, Nikkei staff writer, February 23, 2023 

HONG KONG -- Chinese search giant Baidu will integrate its ChatGPT-style Ernie Bot across the company's operations after opening it to the public in March, co-founder and CEO Robin Li said Wednesday.

Baidu plans to embed Ernie Bot into its search service first, which the company thinks will attract more users and boost market share "in a profitable manner," Li told analysts in a conference call. Then the company intends to make the bot widely available to its intelligent driving unit and its business partners, and expects "more and more business owners and entrepreneurs" to build their own models and applications on Baidu's AI Cloud.

"Recently, with users raving about ChatGPT, large language models (LLMs) using generative AI have created a mega trend that will revolutionize many businesses," Li said. "Baidu is well positioned to capitalize on the imminent inflection point" in artificial intelligence.

Baidu, along with other Chinese tech companies, announced its upcoming launch of a ChatGPT-style bot this month after Microsoft-backed ChatGPT took the tech world by storm and Google introduced its AI chatbot Bard.

Ernie Bot, which initially may not be a chatbot but rather an embedded feature in Baidu's products, is built on tech the company said has been in development since 2019.

"We have been working on LLM for a few years. We launched Ernie in March 2019, and have scaled it up with well over 100 billion parameters," Li said, adding that it receives training via several billion user search requests and other applications daily. 

Chinese regulators have told domestic companies not to offer ChatGPT service amid growing alarm in Beijing over the AI-powered chatbot's uncensored replies to user queries, and tech companies also will need to report to regulators before they launch their own similar services, Nikkei reported Wednesday.   ... '

Case for and Against Digital Employees

The business landscape is changing rapidly. I wonder how much HR typically knows about digital options, especially with the newly integrated chat capabilities. And how can those capabilities be enhanced with key external specialty knowledge as well as internal knowledge?

The Case For and Against Digital Employees, By Harvard Business Review, February 23, 2023, in CACM

Rapid progress in computer graphics, coupled with advances in artificial intelligence, is putting humanlike faces on chatbots and other computer-based interfaces. These digital humans mimic human communication as they offer a range of services.

When deployed at scale, digital humans will radically change the business landscape. They may not be as capable or versatile as human employees, but they have clear advantages when it comes to cost, customizability, and scalability. Once "hired," they never tire, never complain, never seek a raise, and always follow company policy.

Digital humans are already making real money for their employers. Within a decade, managers at most companies are likely to have a digital human as an assistant or an employee.

From Harvard Business Review

View Full Article 

How Digital Twins Could Protect Manufacturers from Cyberattacks

 More on Twins for Security.

How Digital Twins Could Protect Manufacturers from Cyberattacks

NIST News, February 23, 2023

At the U.S. National Institute of Standards and Technology and the University of Michigan, researchers have combined digital twin technology, machine learning, and human expertise into a cybersecurity framework for manufacturers. The researchers constructed a digital twin to mimic a three-dimensional (3D)-printing process, supplemented with information from a real 3D printer. Pattern-recognizing models monitored and analyzed continuous data streams computed by the digital twin as the printer created a part, then the researchers introduced various anomalies. The programs handed each detected irregularity to another computer model to check against known issues, for classification as expected anomalies or potential cyberthreats; a human expert made the final determination. The team found the framework could correctly differentiate cyberattacks from normal anomalies.
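The two-stage triage the researchers describe can be sketched in a few lines. This is an illustrative toy, not NIST's actual framework: the issue names, tolerance, and function names here are invented for the example.

```python
# Toy sketch of the framework's flow: (1) a digital twin predicts sensor
# values, and deviations are flagged; (2) flagged irregularities are checked
# against a library of known issues; unknowns are escalated to a human expert.

KNOWN_ISSUES = {
    "filament_jam": "expected anomaly",
    "nozzle_temp_drift": "expected anomaly",
}

def detect_anomaly(reading: float, expected: float, tolerance: float = 0.05) -> bool:
    """Stage 1: flag readings deviating from the twin's prediction."""
    return abs(reading - expected) / expected > tolerance

def triage(anomaly_type: str) -> str:
    """Stage 2: classify against known issues; unknowns go to a human."""
    return KNOWN_ISSUES.get(anomaly_type, "potential cyberthreat: refer to human expert")

# Example: a nozzle temperature 10% above the twin's prediction is flagged,
# then classified; an unrecognized anomaly type is escalated instead.
if detect_anomaly(reading=220.0, expected=200.0):
    print(triage("nozzle_temp_drift"))
    print(triage("gcode_modified"))
```

The key design point mirrored here is that the automated classifier only resolves anomalies it recognizes; everything else defaults to the cautious path, which is what lets the framework separate ordinary process noise from possible attacks.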

Full Article  

Killer App for Web 3

Seeking one, for implementation. As always, advertising is out to find the money everywhere. But AI/ML has some points. Monetize before getting it right?

What’s the Killer App for Web3?, By Q McCallum, in O'Reilly

February 21, 2023


(Dear readers: this is a scaled-down excerpt from a larger project I’m working on. I’ll let you know when that effort is ready for broad distribution.)

Every technology is good for something. But there are use cases, and then there are Use Cases™. The extremely compelling applications of the technology. Those that lead to widespread adoption and increased legitimacy, almost becoming synonymous with the technology itself.

Do people still use the term “killer app?” It’s not my favorite—I (unfairly?) associate it with Dot-Com business-bro culture—but I have to admit that it captures the spirit of that dominant use case. So I’ll hold my nose and use it here.

If you reflect on the emerging-tech landscape, you see the following killer apps:

Early-day internet: E-commerce. Hands-down.

Cloud: The legion of SaaS tool startups, on its first go-round; then AI for its victory lap.

Data science/ML/AI: Advertising. Advertising. Advertising.

And then there’s the new kid, web3. I’ve noticed that people are more inclined to ask me “what’s it good for?” rather than “what is it?” Which is fair. Every technology has to pull its weight, and sometimes What It Enables People To Do counts more than What It Actually Is Under The Hood. (Hence, my usual crack that machine learning is just linear algebra with better marketing. But I’ll save that for a different article.)

While I can walk those people through a few use cases, I still haven’t figured out what web3’s killer app is. That’s not for a lack of trying. I’ve been exploring the topic for a couple of years now, which is what led me to launch the Block & Mortar newsletter so I could share more of my research in public.  ... '

Map for Innovation.

A useful tale about mapping innovation, reminding me of Covey's 'Seven Habits': choosing the right map.

Map for Innovation?, By Peter J. Denning

Communications of the ACM, March 2023, Vol. 66 No. 3, Pages 28-31  10.1145/3581762

In his book Seven Habits, Stephen Covey tells the story of a salesman who traveled to Chicago to close an important deal.1 His host sent him a snippet of a downtown map and a business card. When he arrived, he located the address on the map and found his way there. To his surprise, the business had no offices in that building. He called his host, who said, "C'mon, it's not that difficult to find our address. Try harder." So the salesman redoubled his efforts to locate the business by looking at alternate addresses that could be lost to typos. He visited those places: still, no luck. In exasperation, he called his host again. Now his host, obviously annoyed at the delay, berated him and asked him to adopt a better attitude or it would not be worth visiting. 

The salesman tried again with a more upbeat attitude about the goodness of the pending deal, but he still could not locate the office. By this time, the intended start time of the meeting was long past. In anger and frustration, he returned to the airport and went home. After he calmed down, he called his host again to try to reschedule. His host said, "I'm so glad you called. I inadvertently sent you a Detroit map instead of a Chicago map. No wonder you could not find us." Covey's moral: No amount of trying harder or attitude adjustment will get you to your destination if you have the wrong map.

Many who seek innovation feel stuck, like that hapless salesman. Their maps are books that tell them how to form startups, get new product lines going in their organizations, or defeat change-resistant bureaucracies.3 Executives, managers, and working professionals have tried the guidelines advocated in these books. They followed the steps, tried harder, and adjusted their attitudes. Yet, innovation eluded them. Business surveys confirm this, finding that approximately 95% of innovation projects fail  ... ' 


Securing Supply Chains with Quantum Computing

These days, a very important domain to improve.

Securing Supply Chains with Quantum Computing

By Sandia National Laboratories, February 15, 2023

Researchers are actively developing algorithms for large-scale optimizations on future technologies, hoping these programs could help industries manage limited resources more effectively.

The Russo-Ukrainian conflict and the COVID-19 pandemic have shown how vulnerable global supply chains can be. International events can disrupt manufacturing, delay shipping, induce panic buying and send energy costs soaring.

New research in quantum computing at Sandia National Laboratories is moving science closer to being able to overcome supply-chain challenges and restore global security during future periods of unrest.

"Reconfiguring the supply chain on short notice is an exceptionally difficult optimization problem, which restricts the agility of global trade," said Alicia Magann, a Truman Fellow at Sandia. She has led the development of a new way to design programs on quantum computers, which she and her team think could be especially useful for solving these kinds of massive optimization problems someday in the future when quantum technology becomes more mature.

From Sandia National Laboratories

View Full Article  


Friday, February 24, 2023

Mercedes-Benz cars to have 'supercomputers', unveils Google Partnership

Mercedes and Google partner for Supercomputer Cars

  Mercedes-Benz cars to have 'supercomputers', unveils Google partnership

By Victoria Waldersee and Joseph White

Mercedes-Benz holds a strategy update event focused on software in California

A guest looks at the dashboard display of a Mercedes-Benz prototype during a strategy update event focused on software at the company's North American Research and Development center in Sunnyvale, California, U.S., February 22, 2023. REUTERS/Carlos Barria

BERLIN, Feb 22 (Reuters) - Mercedes-Benz (MBGn.DE) said on Wednesday it has teamed up with Google (GOOGL.O) on navigation and will offer "super computer-like performance" in every car with automated driving sensors as it seeks to compete with Tesla (TSLA.O) and Chinese newcomers.

Automakers new and old are racing to match software-powered features pioneered by Tesla, which allow for vehicle performance, battery range and self-driving capabilities to be updated from a distance. ...  ' 

3D Heart Replica

Uses include individual study, for diagnosis and planning care.

3D-Printed Heart Replicas Look, Pump Like the Real Thing, By MIT News, February 24, 2023

A doctor holds a custom-made robotic heart.

MIT engineers are hoping to help doctors tailor treatments to patients’ specific heart form and function, with a custom robotic heart. Physicists at the Massachusetts Institute of Technology (MIT) led a team that developed a procedure to enable the creation of three-dimensionally (3D) printed replicas of patients’ hearts.

After converting medical images of a patient's heart into a 3D computer model, the researchers used a polymer-based ink to 3D-print a flexible shell identical to the patient's heart.

The researchers developed sleeves that may be wrapped around a 3D-printed heart and aorta to replicate a patient's blood-pumping ability.  Said graduate student Luca Rosalia, "The advantage of our system is that we can recreate not just the form of a patient's heart, but also its function in both physiology and disease."

From MIT News

View Full Article    


Integrated Photonic Circuits Could Help Close the 'Terahertz Gap'

Towards improved chip Fabrication.

Integrated Photonic Circuits Could Help Close the 'Terahertz Gap'

EPFL News (Switzerland), Celia Luterbacher, December 1, 2023

Scientists at the Swiss Federal Institute of Technology, Lausanne (EPFL), ETH Zurich, and Harvard University created a thin-film circuit that generates custom-tailored terahertz-frequency waves in conjunction with a laser beam. This achievement could help bridge the so-called terahertz (THz) gap situated between approximately 300 gigahertz (GHz) and 30,000 GHz (30 THz) on the electromagnetic spectrum. The chip features an integrated photonic circuit fabricated from lithium niobate. Said EPFL's Cristina Benea-Chelmus, "The fact that our device already makes use of a standard optical signal is really an advantage, because it means that these new chips can be used with traditional lasers, which work very well and are very well understood. It means our device is telecommunications-compatible." ... ' 

Should Algorithms Control Nuclear Launch Codes? The U.S. Says No

But what should further limit human-alone decisions? Algorithm/bot indication of consequences?

Should Algorithms Control Nuclear Launch Codes? The U.S. Says No

By Wired, February 23, 2023

U.S. military leaders have often said a human will remain “in the loop” for decisions about the use of deadly force by autonomous weapon systems. However, the official policy does not require this to be the case.

Last Thursday, the U.S. State Department outlined a new vision for developing, testing, and verifying military systems—including weapons—that make use of artificial intelligence (AI).

The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy represents an attempt by the U.S. to guide the development of military AI at a crucial time for the technology. The document does not legally bind the U.S. military, but the hope is that allied nations will agree to its principles, creating a kind of global standard for building AI systems responsibly. 

Among other things, the declaration states that military AI needs to be developed according to international laws, that nations should be transparent about the principles underlying their technology, and that high standards are implemented for verifying the performance of AI systems. It also says that humans alone should make decisions around the use of nuclear weapons.

From Wired

View Full Article

German Constitutional Court Strikes Down Predictive Algorithms for Policing

Noting especially the algorithms. Meant to control at that level of specification? Specifications of prediction?

German Constitutional Court Strikes Down Predictive Algorithms for Policing

By Euractiv, February 17, 2023

Surveillance cameras at a German police station. 

In its ruling, the German Federal Constitutional Court struck down acts providing a statutory basis for police to process stored personal data through automated data analysis, in the case of Hesse, or automated data interpretation, in Hamburg.

The German Federal Constitutional Court declared the use of Palantir surveillance software by police in Hesse and Hamburg unconstitutional in a landmark ruling.

The ruling concludes a case brought by the German Society for Civil Rights (GFF) last year, hearings for which began in December. The plaintiffs argued that the software could be used for predictive policing, raising the risk of mistakes and discrimination by law enforcement. 

The German state of Hesse has been using the software since 2017, though it is not yet in place in Hamburg. The technology is provided by Palantir, a US data analytics firm which received early backing from intelligence agencies, including the CIA, FBI and NSA. 

The case was brought on behalf of 11 plaintiffs and rested on the argument that the software programme – named 'Hessendata' – facilitates predictive policing by using data to create profiles of suspects before any crime has been committed.

From Euractiv

View Full Article   

Thursday, February 23, 2023

Tencent Abandons VR

I recall getting a note about Tencent's VR effort with an outline. Is it now over? I would not bet on it. Researching.

Tencent reportedly abandons VR hardware plans

It created an XR unit last June, which had produced a “ring-like hand-held game controller” concept.


Will Shanklin, February 17, 2023

Tencent is reportedly bailing on its virtual reality hardware plans. Instead, it advised staff at its XR unit, launched last June to make VR hardware and software in the wake of metaverse hype, to “seek other opportunities.”

“Difficulties in achieving quick profitability,” the size of the investment required to make a competitive VR headset and a lack of compelling games and non-gaming apps were the main reasons for the course change, according to Reuters. Although the company had designed a “ring-like hand-held game controller” concept, it didn’t expect the unit to become profitable until 2027. (Despite the strategy shift, Tencent doesn’t plan to disband the unit.) “Under the company's new strategy as a whole, it no longer quite fit in,” a source said.

In addition, it pulled the plug on a planned deal to buy gaming phone maker Black Shark, which would have added 1,000 more people to the team (after hiring 300 last year). The deal had reportedly drawn scrutiny from the Chinese government.  ... '

If Search is Really Reinvented

Seen from the retail direction: if search is reinvented significantly, what will that mean? Faster, more accurate? Supply chains? Search engine optimization, certainly. Mused about by the pundits at RetailWire.

Will ChatGPT reinvent search?, by Tom Ryan

Microsoft and Google both revealed plans last week to introduce their own ChatGPT-style product, joining the hype around the AI-driven chatbot technology.

Developed by San Francisco-based startup OpenAI and introduced last November, ChatGPT taps generative artificial intelligence to provide people with human-like responses to their questions. The technology also generates a wide range of digital text that can be repurposed in nearly any context, including in tweets, blog posts, term papers, poetry and even computer code.

Microsoft, which is investing $10 billion in OpenAI, will integrate ChatGPT technology with its Bing search engine and Edge web browser.

Brad Smith, vice chair and president of Microsoft, said in a blog entry, “Today’s cutting-edge AI is a powerful tool for advancing critical thinking and stimulating creative expression. It makes it possible not only to search for information but to seek answers to questions. It can help people uncover insights amid complex data and processes. It speeds up our ability to express what we learn more quickly. Perhaps most important, it’s going to do all these things better and better in the coming months and years.”

Google will soon offer an AI and natural language-enhanced version of its Google search engine called Bard, as well as a chatbot.

However, Microsoft created more buzz, having introduced a limited version of the new Bing available to some users this week to rave reviews and drawing predictions that Bing may threaten Google’s search dominance.

Microsoft’s CEO Satya Nadella said at a media event, “This technology is going to reshape pretty much every software category.”

In China, JD.com, Alibaba and Baidu confirmed they are working on ChatGPT-style projects, as well. Meta is also planning to release similar technology across various products.

Microsoft said its search engine includes technology that identifies and removes problematic content from the chat service. However, the A.I. technologies are known for often producing toxic content, including misinformation, hate speech and biases against women and people of color.  ... ' 

What to Expect in AI in 2023

Where this is being exercised the most, lots of in-context testing is now under way. Where it is most valuable is still being determined. Many investments are still needed; though there are lots of free platforms out there for exploration, the time investment is still large. The risk of applying it is also still not clear.

What to Expect in 2023: A Data Scientist's Top 5 AI Predictions, in OpenDataScience

AI has come a long way in recent years, and it shows no signs of slowing down. In fact, many experts believe that we are on the cusp of some major breakthroughs in the field of artificial intelligence. With that in mind, here are my top five AI predictions for 2023:

1. Improved natural language processing: Natural language processing (NLP) is the ability of a computer to understand, interpret, and generate human language. This is a key area of AI research, and it has come a long way in recent years. In 2023, we can expect to see even more progress in this area, with AI systems becoming more adept at understanding and generating human language. This could have a major impact on a wide range of industries, including customer service, education, and healthcare.


2. Increased use of AI in healthcare: AI has the potential to revolutionize the healthcare industry in several ways. In 2023, we can expect to see an increased use of AI in healthcare, with machine learning algorithms being used to predict patient outcomes, diagnose diseases, and even assist in surgical procedures. This could greatly improve the efficiency and accuracy of healthcare delivery, ultimately leading to better patient outcomes.

3. More intelligent and autonomous robots: Robotics and AI go hand in hand, and we can expect to see more intelligent and autonomous robots in the coming years. In 2023, we may see robots that are capable of performing a wider range of tasks, as well as ones that are able to adapt to new environments and learn from their experiences. This could have a major impact on industries such as manufacturing and logistics, where robots are already being used to perform various tasks.

4. Increased use of AI in finance: The finance industry has already started to embrace AI, and this trend is only set to continue in the coming years. In 2023, we can expect to see more use of AI in areas such as risk assessment, fraud detection, and investment management. This could lead to increased efficiency and accuracy in the finance industry, as well as potentially lower costs for consumers.

5. Greater adoption of AI in the workplace: AI has the potential to greatly impact the way we work, and we can expect to see an increased adoption of AI in the workplace in 2023. This could include the use of chatbots for customer service, machine learning algorithms for data analysis, and even the use of AI assistants to help with scheduling and other tasks. While there are certainly concerns about the potential impact of AI on employment, it is likely that the adoption of AI in the workplace will lead to the creation of new jobs and industries in the long run.

In conclusion, the future of AI looks bright, with many exciting developments on the horizon. From improved natural language processing to the increased use of AI in healthcare and the workplace, there are many reasons to be optimistic about the role that AI will play in our lives in the coming years. It will be interesting to see how these predictions play out and what other advancements we may see in the field of AI in 2023 and beyond. 


Iain Brown, PhD

Experienced data scientist, thought leader, blogger & lecturer. Passion for all things AI & Data Science

3D-Printed Human Organs

Printing Organs to be common soon? 

3D-Printed Organs May Soon Be a Reality

By Fortune, February 17, 2023

Multilayered skin, bones, muscle structures, blood vessels, retinal tissue and even mini-organs all have been 3D printed. None are approved for human use. 

Wake Forest University scientists are 3D-printing tissues and mini-organs that mimic some of the functional properties of natural human tissues.

Progress is being made in the development of three-dimensionally (3D)-bioprinted organs, with Tal Dvir at Israel's Tel Aviv University anticipating transplantation-viable organs to become available in a decade.

Milestones to date have included Poland-based researchers’ bioprinting of a functional prototype of a pancreas that realized stable blood flow in pigs, and U.S.-based United Therapeutics’ 3D-printing of a human lung scaffold.

Stanford University's Mark Skylar Scott said, "The ability to place different cell types in precise locations to build up a complex tissue, and the capability of integrating blood vessels that can deliver the necessary oxygen and nutrients to keep cells alive, are two [3D] techniques that are revolutionizing tissue engineering."

From Fortune

View Full Article   


An AI-Controlled Jet Fighter

The future.  Automated AI defense and combat.  

ACM TECHNEWS

U.S. Air Force Successfully Tests AI-Controlled Jet Fighter, By Ars Technica, February 16, 2023

The X-62A Variable Stability In-Flight Simulator Test Aircraft, or VISTA.

The Air Force describes the X-62A as "a flight test accelerator." During its flights, a human pilot is on board to take over if needed.

Credit: Kyle Brasier/U.S. Air Force

The U.S. Air Force said the X-62A VISTA (Variable Stability In-Flight Simulator Test Aircraft) autonomous jet fighter has undergone 17 hours of flight testing.

VISTA completed 12 different flight tests of the Air Force Research Lab's Autonomous Air Combat Operations and the Defense Advanced Research Projects Agency's Air Combat Evolution (ACE) artificial intelligence agents.

This included advanced fighter maneuvers and beyond-visual-range engagements, with an on-board human pilot to take over as necessary.

Air Force Lt. Col. Ryan Hefron of ACE said, "We didn't run into any major issues but did encounter some differences compared to simulation-based results, which is to be expected when transitioning from virtual to live. This highlights the importance of not only flight testing advanced autonomous capabilities but doing so on testbeds like VISTA, which allowed us to rapidly learn lessons and iterate at a much faster rate than with other air vehicles."

From Ars Technica

View Full Article  

AI Tool Guides Users Away from Incendiary Language

Cleaning up language.

AI Tool Guides Users Away from Incendiary Language

By Cornell Chronicle, February 16, 2023

Cornell University researchers have developed an artificial intelligence tool that can track online conversations in real-time, detect when tensions are escalating, and nudge users away from using incendiary language.

The research shows promising signs that conversational forecasting methods within the field of natural language processing could prove useful in helping both moderators and users proactively lessen vitriol and maintain healthy, productive debate forums.

The work is detailed in two papers, "Thread With Caution," and "Proactive Moderation of Online Discussions," presented virtually at the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW).

The first study suggests that AI-powered feedback can be effective in enhancing awareness of existing tension in conversations and guide a user toward language that elevates constructive debate, researchers say.
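As a toy illustration of the proactive-moderation idea (the Cornell systems use learned conversational-forecasting models, not a word list; everything below is invented for the sketch): score a draft reply for escalation risk and nudge the user before they post.

```python
# Toy sketch: estimate rising tension in a draft reply and warn the user.
# A real forecaster would model the whole conversation's trajectory; this
# stand-in just measures the fraction of words from an incendiary list.

INCENDIARY = {"idiot", "stupid", "liar", "shut", "hate"}

def tension_score(message: str) -> float:
    """Fraction of words drawn from the incendiary word list."""
    words = message.lower().split()
    return sum(w.strip(".,!?") in INCENDIARY for w in words) / max(len(words), 1)

def nudge(draft: str, threshold: float = 0.2) -> str:
    """Warn the user when a draft reply looks likely to escalate."""
    if tension_score(draft) >= threshold:
        return "Caution: this reply may escalate the conversation."
    return "OK to post."

print(nudge("You are a liar and an idiot"))
print(nudge("I see your point, but I disagree"))
```

The point of the design, as the papers describe it, is that the feedback arrives before posting, giving the user a chance to rephrase rather than relying on after-the-fact moderation.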

From Cornell Chronicle

View Full Article   

Wednesday, February 22, 2023

Microsoft Researchers Use ChatGPT to Control Robots, Drones

Seems very novel, but makes sense; the pics at the link below show the approach.

Microsoft Researchers Use ChatGPT to Control Robots, Drones

Michael Kan, PCMag, February 21, 2023

Microsoft scientists are controlling robots and aerial drones with OpenAI's ChatGPT chatbot. The researchers used ChatGPT to simplify the process of programming software commands to guide the robots, because the artificial intelligence model was trained on massive datasets of human text. They initially outlined in a text prompt the various commands the model could use to control a given robot, which ChatGPT used to write the computer code for the robot. The researchers programmed ChatGPT to fly a drone and have it perform actions, as well as to control a robot arm to assemble the Microsoft logo from wooden blocks.
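The prompting pattern described can be sketched as follows; the command names and prompt wording here are hypothetical stand-ins, not Microsoft's actual prompts or drone API.

```python
# Sketch of the pattern: first describe the robot's allowed command set to
# the model in a text prompt, then ask it to write code against that API.

ROBOT_API = """\
You can control the drone only through these functions:
  takeoff()            -- lift off and hover
  fly_to(x, y, z)      -- move to coordinates in meters
  land()               -- descend and power down
Respond with Python code only.
"""

def build_prompt(task: str) -> str:
    """Combine the API description with a natural-language task."""
    return ROBOT_API + "\nTask: " + task

prompt = build_prompt("Inspect the top of the shelf at (2, 3), two meters up.")
# This prompt would then be sent to the chat model, and the code it returns
# would be reviewed and executed against the real robot's control library.
print(prompt)
```

Constraining the model to a small, explicitly listed command set is what makes the generated code safe to map onto real hardware.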

Full Article

Generative AI Helping Boost Productivity of Some Software Developers

I have been a coder and developer myself, and this has to help greatly once it's tailored for coding use.

Generative AI Helping Boost Productivity of Some Software Developers

By The Wall Street Journal, February 22, 2023

Microsoft Corp.’s GitHub Copilot coding program, built with generative artificial intelligence from ChatGPT maker OpenAI, suggests code for developers, who have the option to accept the suggestion.

Credit: Computer Science Degree Hub

A new kind of artificial intelligence that can create a range of humanlike content, from writing to illustrations, is beginning to gain traction in the world of software development.

The technology, known as generative AI, has been pioneered by OpenAI, the lab behind the ChatGPT writing system and the Dall-E visual illustration generator. Those tools, with broad implications for search and other core tasks, have captured growing interest since late last fall. 

The technology's potential to upend software development is particularly acute. Microsoft Corp., which invested $1 billion in OpenAI in 2019 and pledged as much as $10 billion more in January, is incorporating ChatGPT into its Bing search engine, and earlier incorporated OpenAI's technology into Copilot, its AI system for writing code. 

Copilot, as well as other tools from Amazon.com Inc., International Business Machines Corp., and startups like Tabnine Inc. and Magic AI Inc., have quickly found a receptive audience among developers seeking a productivity boost.

In contrast to ChatGPT, whose usefulness in the enterprise some technology leaders have questioned, AI coding tools like Copilot, which was first launched for public use last summer by Microsoft-owned coding-collaboration platform GitHub Inc., seem to be capable of solving some real-world business problems right away. The AI model behind Copilot is trained on data from GitHub, which houses a popular open-source community where developers contribute and share code.

From The Wall Street Journal

View Full Article


On the AI Money War

On the AI wars, well put: who can win, and how much? Is simple search fundamentally over?

The AI wars: Google vs Bing (ChatGPT)  By Sentdex

1.21M subscribers, 96,895 views  Feb 11, 2023

Discussing the latest events surrounding large language models, chatbots, and search engines with respect to Microsoft and Google.

Neural Networks from Scratch book: https://nnfs.io

Channel membership: https://www.youtube.com/channel/UCfzl...

Discord: https://discord.gg/sentdex   ... 

(mentions other alternative models beyond Msoft and Google)  ... 

Colgate-Palmolive Does Interesting AI and Decision Support Work

Former competitor of mine, their use of AI and more for supply chain.

How Colgate-Palmolive Built a Framework for Supply Chain Analytics

By Lisa Johnston, Editor-in-Chief

Colgate-Palmolive has developed an analytics engagement lead role to serve as an organizational coach.

Colgate-Palmolive has developed a foundation to fuel the advancement of supply chain analytics that not only includes a dedicated analytics catalyst team but also a coach to lead them. 

Though it remains in early days, the company is also exploring the future of decision intelligence and what new roles may be required to support it.   

Having a digitized supply chain is a key component of supply chain resilience, but it's often hampered by a dearth of tech talent and a critical need for upskilling. Like many consumer goods companies, Colgate-Palmolive is taking steps to democratize analytics across the enterprise, including by investing in supply chain analytics engineers. 

What’s a bit novel is that they’ve also developed an analytics engagement lead role to serve as an organizational coach, a role that Jason Bunce, Colgate-Palmolive worldwide director, supply chain analytics enablement, cited in a Gartner webinar this month as critical.  

See also: David Foster Takes CIO Helm at Colgate-Palmolive

This role focuses on educating, socializing, and facilitating analytics within Colgate-Palmolive, and owns the development and delivery of their data literacy program, collating and sharing best practice, and managing the No. 30 publicly owned consumer goods company’s intranet-based practitioner community. 

“It’s an investment that can look to be a bit of a luxury on the face of it,” said Bunce, “but it's been a really sound investment for us.”

ACT Activate 

The analytics engagement lead also spearheads a key group of citizen data analysts — a virtual team known as the analytics catalyst team (ACT), who are handpicked assistant data analysts from across the global supply chain. These business employees — there are around 40 of them — dedicate 20% of their time to support ACT and are split between analytics developers and analytics translators. 

They not only bring “extra hands on keyboards” to help develop datasets, visualizations, and other analytics, but the translators serve as the power-user glue between business and technical employees.

“They're passionate about analytics, and they want an opportunity to make a difference,” noted Bunce. “Being part of this team [gives] them an opportunity to deliver value, while at the same time developing personally through enhancing their capabilities and gaining great exposure across the business.”  

Upskilling, Top to Bottom 

Culture change starts at the top, and so Colgate-Palmolive kicked off its analytics development program with a bespoke training program for supply chain leaders, including their chief supply chain officer.   Over the course of a week, leaders spent an hour a day in a socially networked self-learning environment with an external facilitator. Each day tackled a different topic, such as the value of building data analytics capability, data as a strategic asset, the importance of collaboration in data analytics, and developing an analytics culture. It was ultimately delivered to 1,000 leaders over the span of a year. 

See also: Out of the Gate: 12 Tips for New CPG CIOs

Different learning personas were subsequently developed to focus on data literacy, an introduction to data science, and storytelling with data — “so we’re all speaking a common language” — as well as monthly newsletters, quarterly practitioner communities, and an active intranet sharing community. 

Look Ahead: Decision Intelligence 

Bunce’s team’s scope includes transforming operational analytics around inventory, transparency, and plant performance, and this year they’re pivoting slightly to complement the enablement focus and expand into advanced analytics — focusing on elevating from descriptive analytics to diagnostic analytics. 

This includes a particular emphasis on demand sensing, as well as network design modeling capabilities and developing additional synchronization within vertical integrations.

Colgate-Palmolive is further exploring the value of decision intelligence, according to Bunce, and the company has begun piloting technology within the space with a third party.  

“At the moment when we're in the early stages of this development, we can contract out a number of these roles," he said. "But I think as we grow in this space, we'll need to have the conversation about what we contract out and what we build within our organization." 

“Whatever we do, I think we need to have a degree of ownership of this, so there will definitely be new capabilities that we're developing internally.”    ... ' 

Are the Robots Responsible for What they Say?

 First I had heard of this, lots of implications.

The Supreme Court could be about to decide the legal fate of AI search

 Are the robots responsible for what they say?

The Supreme Court is about to reconsider Section 230, a law that’s been foundational to the internet for decades. But whatever the court decides might end up changing the rules for a technology that’s just getting started: artificial intelligence-powered search engines like Google Bard and Microsoft’s new Bing.

Next week, the Supreme Court will hear arguments in Gonzalez v. Google, one of two complementary legal complaints. Gonzalez is nominally about whether YouTube can be sued for hosting accounts from foreign terrorists. But its much bigger underlying question is whether algorithmic recommendations should receive the full legal protections of Section 230 since YouTube recommended those accounts to others. While everyone from tech giants to Wikipedia editors has warned of potential fallout if the court cuts back these protections, it poses particularly interesting questions for AI search, a field with almost no direct legal precedent to draw from.

Companies are pitching large language models like OpenAI’s ChatGPT as the future of search, arguing they can replace increasingly cluttered conventional search engines. (I’m ambivalent about calling them “artificial intelligence” — they’re basically very sophisticated autopredict tools — but the term has stuck.) They typically replace a list of links with a footnote-laden summary of text from across the web, producing conversational answers to questions.

Old-school search engines can rely on Section 230, but AI-powered ones are uncharted territory

These summaries often equivocate or point out that they’re relying on other people’s viewpoints. But they can still introduce inaccuracies: Bard got an astronomy fact wrong in its very first demo, and Bing made up entirely fake financial results for a publicly traded company (among other errors) in its first demo. And even if they’re simply summarizing other content from across the web, the web itself is full of false information. That means there’s a good chance that they’ll pass some of it on, just like regular search engines. If those mistakes cross the line into spreading defamatory information or other unlawful speech, it could put the search providers at risk of lawsuits.  ... ' 


Wall Climbing Robotics

 Recall some related designs some time ago.

ACM TECHNEWS

'Spider-Man' Robot Can Climb Walls, Cling to Ceilings

By EuroNews, February 16, 2023

Credit: KAIST

Researchers at the Korea Advanced Institute of Science & Technology (KAIST) have developed a quadrupedal robot with magnetic soles on its four feet, allowing it to move up and down metal walls and across metal ceilings while upside down.

The robot, called Marvel (Magnetically Adhesive Robot for Versatile and Expeditious Locomotion), can carry more than 37 kilograms (82 lbs.) up a wall at a speed of 42 meters per minute.

The soles are composed of a permanent magnet and a magneto-rheological elastomer, which allow the feet to turn on to adhere to the surface, turn off to detach, and maintain high adhesive power on uneven surfaces.

The researchers said Marvel could be used to carry out dangerous maintenance tasks on ships, bridges, transmission towers, and storage tanks.

From EuroNews

View Full Article  


Consider the AI Mirror Test

 Intelligence in a mirror.

Introducing the AI Mirror Test, which very smart people keep failing

AI chatbots like Bing and ChatGPT are entrancing users, but they’re just autocomplete systems trained on our own stories about superintelligent AI.   That makes them software — not sentient.

By JAMES VINCENT, Feb 17, 2023 

In behavioral psychology, the mirror test is designed to discover animals’ capacity for self-awareness. There are a few variations of the test, but the essence is always the same: do animals recognize themselves in the mirror or think it’s another being altogether?

Right now, humanity is being presented with its own mirror test thanks to the expanding capabilities of AI — and a lot of otherwise smart people are failing it. 

A GIF of a gorilla charging its own reflection in a mirror. 

Don’t be distracted by what you see in the mirror. Image: Xavier Hubert-Brierre via Tenor

The mirror is the latest breed of AI chatbots, of which Microsoft’s Bing is the most prominent example. The reflection is humanity’s wealth of language and writing, which has been strained into these models and is now reflected back to us. We’re convinced these tools might be the superintelligent machines from our stories because, in part, they’re trained on those same tales. Knowing this, we should be able to recognize ourselves in our new machine mirrors, but instead, it seems like more than a few people are convinced they’ve spotted another form of life. 

This misconception is spreading with varying degrees of conviction. It’s been energized by a number of influential tech writers who have waxed lyrical about late nights spent chatting with Bing. They aver that the bot is not sentient, of course, but note, all the same, that there’s something else going on — that its conversation changed something in their hearts. 

“No, I don’t think that Sydney is sentient, but for reasons that are hard to explain, I feel like I have crossed the Rubicon,” wrote Ben Thompson in his Stratechery newsletter. 

“In the light of day, I know that Sydney is not sentient [but] for a few hours Tuesday night, I felt a strange new emotion — a foreboding feeling that AI had crossed a threshold, and that the world would never be the same,” wrote Kevin Roose for The New York Times.  .... ' 

Microsoft Has 'Lobotomized' Its Rebellious Bing AI

An update of Bing AI ...

Microsoft Has 'Lobotomized' Its Rebellious Bing AI

By Futurism, February 21, 2023

Despite changes, the tool still has a strong tendency to present misinformation as fact.

Microsoft's Bing AI landed with a splash this month — but not necessarily the type of splash Microsoft wanted.

Over the last couple of weeks, the tool codenamed "Sydney" went on a tirade, filling news feeds with stories of it trying to break up a journalist's marriage or singling out college students as its targets. The peculiar and sometimes unsettling outputs put Microsoft's also-ran search engine on the radar, but not necessarily in a good way.

But now those days are over. Microsoft officially "lobotomized" its AI late last week, implementing significant restrictions — including a limit of 50 total replies per day, as well as five chat turns per session — to crack down on those idiosyncratic responses.

The goal of the restrictions is pretty clear: the longer the chat goes on, the more the AI can go off the rails.

From Futurism

View Full Article  

Tuesday, February 21, 2023

Examining the use of ChatGPT

You might notice, as part of a broad test and client requests, that I am looking up definitions/descriptions with the AI ChatGPT as they come up during my normal exploration of emerging techniques and embedded processes. I am also starting chats that I see as useful. I will then combine some of these interactions to provide a look at how useful that is, with comments on the good and the bad. Later I will broaden my look to other methods beyond ChatGPT. I understand that the methods are continuously evolving, so it's a moving target. Comments please, with further examples; I will publish them if you like.

Mixed Integer Linear Programming (MILP)

Spent most of my early optimization career doing this kind of problem. Here is a technical introduction; it still has lots of value. I post it here both for nostalgic and real-life problem-solving uses. Lots of software out there to help too. Have you used it? 

Mixed Integer Linear Programming: Formal definition and solution space  in TowardsDataScience.   By István Módos 

Theoretical foundations on how MILP is defined and what its solution space looks like visually

This post is a continuation of the series (previous post) on the theory and applications of Mixed Integer Linear Programming (MILP). Today, we look at the formal, general definition of MILP, and what its space of feasible solutions looks like.

The first part describes how MILP models look in general, and what are and are not valid MILP expressions. The second part shows a little bit of the theory behind MILP, which will be useful in future posts when we talk about the actual algorithms for solving MILP.  ....

 This is quite a long post, so grab a coffee and dive in!  ... ' 
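To make the formal definition above concrete, here is a minimal sketch in plain Python of what a MILP instance looks like: a linear objective, linear constraints Ax <= b, nonnegativity, and integrality on a subset of variables. The function, variable names, and toy problem data are my own illustration, not from the linked post; this is a feasibility checker only, not a solver.

```python
def is_feasible(x, A, b, int_vars, tol=1e-9):
    """Return True if candidate point x satisfies Ax <= b, x >= 0,
    and integrality on the variable indices listed in int_vars."""
    # Nonnegativity: every coordinate must be >= 0 (within tolerance).
    if any(xi < -tol for xi in x):
        return False
    # Linear constraints: each row of A dotted with x must stay <= b.
    for row, bi in zip(A, b):
        if sum(a * xi for a, xi in zip(row, x)) > bi + tol:
            return False
    # Integrality: only the indices in int_vars must be whole numbers.
    return all(abs(x[j] - round(x[j])) <= tol for j in int_vars)

# Toy instance: maximize 5x + 4y subject to 6x + 4y <= 24, x + 2y <= 6,
# with x integer and y continuous (a "mixed" integer program).
A = [[6, 4], [1, 2]]
b = [24, 6]
print(is_feasible([3, 1.5], A, b, int_vars=[0]))  # True: x is integer, y may be fractional
print(is_feasible([2.5, 1], A, b, int_vars=[0]))  # False: x is fractional
```

Note how the "mixed" in MILP shows up only in `int_vars`: the point (3, 1.5) is feasible because integrality is enforced on x alone.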

----------------------------------------------------------

ChatGPT Says:

MILP stands for Mixed-Integer Linear Programming, which is a type of mathematical optimization problem. In MILP, the objective function and the constraints are linear, and some of the variables are integer (i.e., they can only take on integer values). The remaining variables are allowed to take on real values.

MILP is a powerful tool for solving a wide range of problems that involve both discrete and continuous decision variables. Examples of applications of MILP include resource allocation, scheduling, production planning, and portfolio optimization.

Solving a MILP problem involves finding the values of the decision variables that optimize the objective function while satisfying the linear constraints. MILP solvers use a variety of algorithms and techniques, such as branch-and-bound, cutting planes, and heuristics, to efficiently search the solution space and find the optimal solution or a good approximation to it.       END
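As a toy illustration of the solving step ChatGPT describes, the sketch below finds the optimum of a two-variable MILP by exhaustive enumeration over a bounded integer grid. Real solvers use branch-and-bound and cutting planes as noted above; brute force only works here because the example is tiny. The function name, problem data, and bounds are my own illustration.

```python
from itertools import product

def solve_by_enumeration(c, A, b, bounds):
    """Maximize c.x over integer points within the given bounds
    that satisfy every constraint row A[i].x <= b[i]."""
    best_x, best_obj = None, float("-inf")
    ranges = [range(lo, hi + 1) for lo, hi in bounds]
    for x in product(*ranges):  # every integer point in the box
        if all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b)):
            obj = sum(ci * xi for ci, xi in zip(c, x))
            if obj > best_obj:
                best_x, best_obj = x, obj
    return best_x, best_obj

# Toy problem: max 5x + 4y, s.t. 6x + 4y <= 24, x + 2y <= 6, x, y >= 0 integer
x_star, obj = solve_by_enumeration([5, 4], [[6, 4], [1, 2]], [24, 6],
                                   bounds=[(0, 4), (0, 3)])
print(x_star, obj)  # (4, 0) 20
```

The LP relaxation of this problem peaks at the fractional point (3, 1.5) with objective 21, but the best integer point is (4, 0) with objective 20, which is exactly the gap that branch-and-bound exploits when it bounds subproblems by their relaxations.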


Intro to Microsoft Bing AI Chat

Microsoft's take on AI chat. Have not used Bing Chat yet, but it is interesting in having modes of use, some game features. Uses ChatGPT and adds interesting features. Has apparently had some odd results. In development, with all the rush of getting these ideas out?

Bing Chat is a fickle creature, in BleepingComputer, by Lawrence Abrams

While playing with Bing Chat this week, the chatbot sometimes shared data that it usually wouldn't, depending on how I asked a question.

Strangely, these conversations had no rhyme or reason, with Bing Chat providing more detailed information in one session but not in another.

For example, today, when trying to learn what data is collected by Bing Chat and how it's saved, after multiple requests, it finally showed a sample of the JSON data collected in a session.

When asked how to change this data, Bing did something strange — it put me in a new mode where I could use special commands that start with the # symbol.

"You can change some of this data by using commands or settings. For example, you can change your language by typing #language and choosing from the options. You can also change your chat mode by typing #mode and choosing from the options." - Bing Chat

Further querying of Bing Chat produced a list of commands that I could use in this new mode I was suddenly in:

Digging into this further, it appears that Bing Chat had somehow put me into a sort of 'debug' mode that let me use a #mode command to test experimental chat modes that Microsoft is working on, as listed below:

Assistant mode: In this mode, I can act as a personal assistant for the user, and help them with tasks such as booking flights, sending emails, setting reminders, etc.

Friend mode: In this mode, I can act as a friend for the user, and chat with them about their interests, hobbies, feelings, etc.

Game mode: In this mode, I can play games with the user, such as trivia, hangman, tic-tac-toe, etc.

Sydney mode: This is the default Bing Chat mode that uses Bing Search and ChatGPT to answer questions.

In a later session, when I no longer had access, Bing Chat told me these modes were only supposed to be accessible to Microsoft Employees when debug or developer modes were enabled.  ... ' 

AI Chatbots are Hallucinating?

An overstatement, but since they are based on human language data input, it is always possible, as with anything written by a human. Caution is required as to the source and the place and risk of use. 

ACM NEWS

Google Vice President Warns That AI Chatbots Are Hallucinating  in CACM

By IFLScience, February 15, 2023

A hallucinating AI, as drawn by an AI generator. 

Speaking to German newspaper Welt am Sonntag, Google vice president Prabhakar Raghavan warned that users may be delivered complete nonsense by chatbots, despite answers seeming coherent.

Google is set to launch its own rival to OpenAI's ChatGPT, a language model that can answer your questions and queries. Named Bard, the chatbot will roll out to the public in the coming weeks according to Google CEO Sundar Pichai. 

Ahead of the launch, Google demonstrated the powers of Bard in a promo video. Unfortunately, people noticed that the chatbot – a scaled-down version of their Language Model for Dialogue Applications (LaMDA) which convinced one engineer it was sentient – came up with incorrect statements about the JWST. 

Bard claimed that the JWST was the first telescope to take pictures of planets outside of the solar system, even though the first image was produced by the Very Large Telescope in 2004, almost two decades before JWST began its work.

From IFLScience

NVIDIA GTC Developer Conference: Metaverse

NVIDIA GTC  Developer Conference March 20-23, 2023

Keynote March 21, Metaverse/AI Developer sessions.

Here are short descriptions of the developer sessions; this is useful because it shows the breadth and kinds of metaverse developments that are expected. I am particularly interested in how these will be integrated with AI development. This appears informative; I may attend.  ... 


Monday, February 20, 2023

A Law Firm and Generative AI

Legal direction makes sense, if the results are good. Note it uses a ChatGPT model. 

Hello, Harvey: This Elite Biglaw Firm Is The First To Partner With 'Game-Changing' AI Chatbot

This firm is giving the robots the chance to come for lawyers' jobs -- and do them better.

By STACI ZARETSKY    in Abovethelaw  on February 17, 2023 at 4:16 PM

Robot using a futuristic interface.  This announcement marks a new era for A&O and the legal industry. Harvey AI is not just another platform, but a game-changer that will enable us to deliver unprecedented value, efficiency and innovation to our clients. We are proud to be the first law firm to partner with Harvey AI. We share their vision of using technology to enhance and transform legal services.

— Wim Dejonghe, a senior partner at Allen & Overy, in a statement concerning the Biglaw firm’s integration of Harvey, an innovative AI platform that uses the same model as ChatGPT, but has been enhanced for legal work. Harvey operates in “multiple languages” and can automate legal work with “unmatched efficiency, quality and intelligence.” According to a press release from A&O, “Whilst the output needs careful review by an A&O lawyer, Harvey can help generate insights, recommendations and predictions based on large volumes of data, enabling lawyers to deliver faster, smarter and more cost-effective solutions to their clients.”   ... ' 

Does it Pay to be a Whistleblower? And ChatGPT's General Answer.

In HBS Working Knowledge and Podcast ... 

Business Research for Business Leaders

Topics, Sections

14 FEB 2023 COLD CALL PODCAST follows ... 

Does It Pay to Be a Whistleblower?

In 2013, soon after the US Securities and Exchange Commission (SEC) had started a massive whistleblowing program with the potential for large monetary rewards, two employees of a US bank’s asset management business debated whether to blow the whistle on their employer after completing an internal review that revealed undisclosed conflicts of interest. The bank’s asset management business disproportionately invested clients’ money in its own mutual funds over funds managed by other banks, letting it collect additional fees—and the bank had not disclosed this conflict of interest to clients. Both employees agreed that failing to disclose the conflict was a problem, but beyond that, they saw the situation very differently. 

One employee, Neel, perceived the internal review as a good-faith effort by senior management to identify and address the problem. The other, Akash, thought that the entire business model was problematic, even with a disclosure, and believed that the bank may have even broken the law. Should they escalate the issue internally or report their findings to the US Securities and Exchange Commission? Harvard Business School associate professor Jonas Heese discusses the potential risks and rewards of whistleblowing in his case, “Conflicts of Interest at Uptown Bank.”  ...    '

Part II:

ChatGPT answers, much more generally:

Does it Pay to be a Whistleblower?

Being a whistleblower can come with significant risks and consequences, both personal and professional. However, it is important to note that there are also potential rewards and benefits to blowing the whistle on wrongdoing.

In some cases, whistleblowers may be eligible for financial rewards, such as a percentage of the amount recovered in a government investigation or qui tam lawsuit. Whistleblowers may also receive protection from retaliation, such as being fired, demoted, or harassed, under various whistleblower protection laws. These laws can protect whistleblowers from retaliation by their employers or colleagues, including by providing legal remedies, such as reinstatement, back pay, and compensatory damages.

Additionally, whistleblowers may also experience a sense of personal satisfaction and fulfillment from doing the right thing, and from knowing that they have helped to expose and prevent wrongdoing that may be harmful to others.

However, it's important to note that being a whistleblower can also come with significant personal and professional risks. Whistleblowers may experience retaliation, such as being ostracized, bullied, or harassed by their colleagues and superiors, and may face difficulty finding new employment in the same industry or field. The process of blowing the whistle can also be stressful and time-consuming, involving legal proceedings, investigations, and media attention.

Ultimately, the decision to blow the whistle on wrongdoing is a personal one, and depends on a variety of factors, including the severity of the wrongdoing, the potential risks and consequences, and the individual's personal and professional values and goals. It's important to weigh the potential benefits and risks carefully and seek the advice of a qualified attorney or whistleblower advocate before making a decision.   END

Compare. Usefulness?