Thursday, August 10, 2023

Complying with Microdirectives

Representatives of OpenAI declined to comment on companies' privacy concerns.

Generative AI tools such as OpenAI’s ChatGPT have been heralded as pivotal for the world of work, but the technology is creating a formidable challenge for corporate America.

Proponents of OpenAI's ChatGPT and other generative artificial intelligence tools contend that they can boost workplace productivity, automating certain tasks and assisting with problem-solving, but some corporate leaders have banned their use over concerns about exposing sensitive company and customer information.

These leaders are concerned that employees could upload proprietary or sensitive data into the chatbot, which would be added to a database used to train it, allowing hackers or competitors to ask the chatbot for that information.

A post on OpenAI's website said private mode allows ChatGPT users to keep their prompts out of its training data.

Massachusetts Institute of Technology's Yoon Kim said that while technically possible, guardrails implemented by OpenAI prevent ChatGPT from using sensitive prompts in its training data.

Kim added that the vast amount of data needed by ChatGPT to learn would make it difficult for hackers to access proprietary data entered as a prompt.

From The Washington Post

View Full Article - May Require Paid Subscription

Monday, July 31, 2023

Complying with Microdirectives

 Schneier offers some useful thoughts:

AI and Microdirectives

Imagine a future in which AIs automatically interpret—and enforce—laws.

All day and every day, you constantly receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online—if you’re in any situation that might have legal implications, you’re told exactly what to do, in real time.

Imagine that the computer system formulating these personal legal directives at mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it’ll be used as evidence in the prosecution that’s sure to follow.

This future may not be far off—automatic detection of lawbreaking is nothing new. Speed cameras and traffic-light cameras have been around for years. These systems automatically issue citations to the car’s owner based on the license plate. In such cases, the defendant is presumed guilty unless they prove otherwise, by naming and notifying the driver.  ... '
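The enforcement logic in these existing systems is simple to state: measure, compare against the legal limit, and cite the registered owner unless another driver is named and notified. A minimal sketch of that presumption-of-responsibility flow (all names and data hypothetical):

```python
def issue_citation(measured_speed, speed_limit, plate, registry, named_driver=None):
    """Automated speed-camera citation: the registered owner is presumed
    responsible unless they name and notify the actual driver."""
    if measured_speed <= speed_limit:
        return None  # no violation detected
    responsible = named_driver if named_driver else registry[plate]
    return {"plate": plate, "cited": responsible,
            "over_limit": measured_speed - speed_limit}

registry = {"ABC123": "Alice"}
# The owner is cited by default; naming another driver shifts the citation.
print(issue_citation(62, 50, "ABC123", registry))
print(issue_citation(62, 50, "ABC123", registry, named_driver="Bob"))
```

The interesting (and troubling) part of the scenario is not this rule itself but its scale and opacity once an AI formulates such rules per person, in real time.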

AI-Powered Brain Surgery Becomes A Reality In Hong Kong


AI-Powered Brain Surgery Becomes A Reality In Hong Kong

By South China Morning Post (Hong Kong)

July 14, 2023

Robotic surgery equipment.

Robots are increasingly being used for surgical procedures, especially those considered minimally invasive.

A Hong Kong-based research centre under the Chinese Academy of Sciences (CAS), China's national research institute, plans to launch a robotics system for brain surgery in the near future, despite challenges from a shortage of talent and artificial intelligence (AI) chips.

The Centre for Artificial Intelligence and Robotics (CAIR), established in 2019, has completed three successful cadaver trials with its MicroNeuro robot, which can perform deep brain surgery "in a minimally invasive manner", Liu Hongbin, the centre's executive director, told the Post in an interview on Thursday.

The main approach today requires surgeons to operate with rigid tools and open large windows on a patient's scalp, which damages a lot of healthy brain tissue, Liu said.

"Brain surgery is a type of surgery that needs technology the most because it's a very dangerous procedure," Liu said. "Surgeons really want to use AI and tech innovation to make this type of procedure much less invasive than it is now."

From South China Morning Post (Hong Kong)

View Full Article  

Saturday, July 29, 2023

Quantum Twist on Common Computer Algorithm Promises Speed Boost

Quantum Twist on Common Computer Algorithm Promises Speed Boost

By New Scientist, July 14, 2023

An IBM quantum computer.

Mazzola stresses the team is not yet claiming quantum advantage; the result demonstrates future potential, rather than current ability.

Credit: IBM

Scientists at Switzerland's University of Zurich (UZH) and IBM have demonstrated that a quantum version of the popular Monte Carlo algorithm could eventually overtake versions running on classical computers.

However, the researchers explained, attaining this speed advantage would probably require a quantum system with at least 1,000 quantum bits.

Said UZH's Guglielmo Mazzola, "If this works, it's going to enhance, by a lot, the way in which we model systems and that, in turn, will allow us to make better predictions in a wide range of fields."

However, he acknowledged that "we cannot exclude that our classical friends can devise something even better."
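For readers unfamiliar with the classical algorithm being accelerated, Monte Carlo methods estimate quantities by repeated random sampling. The textbook example below estimates π and is only an illustration of the classical baseline, not of the quantum variant the UZH/IBM team studied:

```python
import random

def monte_carlo_pi(n_samples, seed=0):
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n_samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples

# The estimate tightens slowly (error ~ 1/sqrt(n)), which is exactly the
# scaling a quantum speedup would improve on.
print(monte_carlo_pi(100_000))
```

The slow 1/√n convergence of such estimates is why even a modest quantum speedup, if realized on a large enough machine, would matter across physics, finance, and engineering simulation.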

From New Scientist

View Full Article


Want to Win a Chip War? You're Gonna Need a Lot of Water

Want to Win a Chip War? You're Gonna Need a Lot of Water

By Wired, July 21, 2023

The chip industry’s thirst for water springs from the need to keep silicon wafers free from even the tiniest specks of dust or debris to prevent contamination of their microscopic components.

Credit: Bill Varie/Getty Images

Building a semiconductor factory requires enormous quantities of land and energy, then some of the most precise machinery on Earth to operate. The complexity of chip fabs, as they are called, is one reason why the US Congress last year committed more than $50 billion to boost U.S. chip production in a bid to make the country more technologically independent.

But as the U.S. seeks to boot up more fabs, it also needs to source more of a less obvious resource: water. Take Intel's ambitious plan to build a $20 billion mega-site outside Columbus, Ohio. The area already has three water plants that together provide 145 million gallons of drinking water each day, but officials are planning to spend heavily on a fourth to, at least in part, accommodate Intel.

Water might not sound like a conventional ingredient of electronics manufacturing, but it plays an essential role in cleaning the sheets, or wafers, of silicon that are sliced and processed into computer chips. A single fab might use millions of gallons in a single day, according to the Georgetown Center for Security and Emerging Technology (CSET)—about the same amount of water as a small city in a year.

Chip companies hoping to take advantage of the CHIPS and Science Act, last year's federal spending package aiming to boost US chip manufacturing, are now constructing new water processing facilities alongside their fabs. And cities trying to attract new factories funded by the legislation are studying the potential impact on their water supplies. In some places it may be necessary to secure the water supply; in others, new infrastructure must be installed to recycle water used by fabs.

From Wired

View Full Article  

Google's AI Red Team: the Ethical Hackers Making AI Safer

 Interesting piece.

Google's AI Red Team: the ethical hackers making AI safer

July 19, 2023, 3 min read

Today, we're publishing information on Google’s AI Red Team for the first time.

Daniel Fabian, Head of Google Red Teams

Last month, we introduced the Secure AI Framework (SAIF), designed to help address risks to AI systems and drive security standards for the technology in a responsible manner.

To build on this momentum, today, we’re publishing a new report to explore one critical capability that we deploy to support SAIF: red teaming. We believe that red teaming will play a decisive role in preparing every organization for attacks on AI systems and look forward to working together to help everyone utilize AI in a secure way. The report examines our work to stand up a dedicated AI Red Team and includes three important areas: 1) what red teaming in the context of AI systems is and why it is important; 2) what types of attacks AI red teams simulate; and 3) lessons we have learned that we can share with others.

What is red teaming?

Google's Red Team is a team of hackers that simulates a variety of adversaries, ranging from nation states and well-known Advanced Persistent Threat (APT) groups to hacktivists, individual criminals, or even malicious insiders. The term came from the military, where it described activities in which a designated team played an adversarial role (the "Red Team") against the "home" team.

Over the past decade, we’ve evolved our approach to translate the concept of red teaming to the latest innovations in technology, including AI. The AI Red Team is closely aligned with traditional red teams, but also has the necessary AI subject matter expertise to carry out complex technical attacks on AI systems. To ensure that they are simulating realistic adversary activities, our team leverages the latest insights from world class Google Threat Intelligence teams like Mandiant and the Threat Analysis Group (TAG), content abuse red teaming in Trust & Safety, and research into the latest attacks from Google DeepMind. .... '

Tuesday, July 25, 2023

Amazon Cashless 'Pay by Palm' Technology Requires Only a Hand Wave

 Another area we examined for retail tech.

Amazon Cashless 'Pay by Palm' Technology Requires Only a Hand Wave

By CBS News, July 21, 2023

Paying with palm recognition.

According to Amazon, palm payment is secure and cannot be replicated because the technology looks at both the palm and the underlying vein structure to create unique "palm signatures" for each customer.

Credit: Amazon

Retail giant Amazon has announced a new contactless transaction service that allows shoppers to pay with their palms.

Users can enable transactions by hovering their palms over an Amazon One device, which can facilitate payment, identification, loyalty program membership, and entry.  Amazon said palm payment is impossible to replicate because the system creates unique "palm signatures" for each customer by examining the palm and the underlying vein arrangement.

Each palm signature, the company added, corresponds to a numerical vector representation and is securely stored in the Amazon Web Services cloud.
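Amazon has not published its matching algorithm, so the following is only a hypothetical sketch of how a stored numerical vector (a "palm signature") could be compared against enrolled vectors, here using cosine similarity with a made-up threshold and toy three-dimensional vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_palm(signature, enrolled, threshold=0.95):
    """Return the customer whose stored palm-signature vector is most
    similar to the scanned one, if the similarity clears the threshold."""
    best_id, best_sim = None, threshold
    for customer_id, vec in enrolled.items():
        sim = cosine_similarity(signature, vec)
        if sim > best_sim:
            best_id, best_sim = customer_id, sim
    return best_id

enrolled = {"cust-1": [0.9, 0.1, 0.4], "cust-2": [0.1, 0.8, 0.2]}
print(match_palm([0.88, 0.12, 0.41], enrolled))  # a near-duplicate scan matches
```

A real biometric system would use far higher-dimensional vectors, liveness checks, and calibrated error rates; the sketch only shows the vector-comparison idea the article describes.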

The technology is already available at 200 Amazon locations in 20 U.S. states, and the company intends to deploy it at more than 500 Whole Foods and Amazon Fresh outlets by year's end.

From CBS News

View Full Article  

MIT Makes Probability-Based Computing a Bit Brighter

MIT Makes Probability-Based Computing a Bit Brighter

The p-bit harnesses photonic randomness to explore a new computing frontier

By Edd Gent and Margo Anderson

In a noisy and imprecise world, the definitive 0s and 1s of today's computers can get in the way of accurate answers to messy real-world problems. So says an emerging field of research called probabilistic computing. And now a team of researchers at MIT has demonstrated a new way of generating probabilistic bits (p-bits) at much higher rates, using photonics to harness random quantum oscillations in empty space.

The deterministic way in which conventional computers operate is not well-suited to dealing with the uncertainty and randomness found in many physical processes and complex systems. Probabilistic computing promises to provide a more natural way to solve these kinds of problems by building processors out of components that behave randomly themselves.

The approach is particularly well-suited to complicated optimization problems with many possible solutions or to doing machine learning on very large and incomplete datasets where uncertainty is an issue. Probabilistic computing could unlock new insights and findings in meteorology and climate simulations, for instance, or spam detection and counterterrorism software, or next-generation AI.

The team can now generate 10,000 p-bits per second. Is the p-circuit next?

The fundamental building blocks of a probabilistic computer are known as p-bits and are equivalent to the bits found in classical computers, except they fluctuate between 0 and 1 based on a probability distribution. So far, p-bits have been built out of electronic components that exploit random fluctuations in certain physical characteristics.

But in a new paper published in the latest issue of the journal Science, the MIT team has created the first-ever photonic p-bit. The attraction of using photonic components is that they operate much faster and are considerably more energy efficient, says Charles Roques-Carmes, a science fellow at Stanford University and visiting scientist at MIT, who worked on the project while he was a postdoc at MIT. "The main advantage is that you could generate, in principle, very many random numbers per second," he adds. ... '
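A p-bit is easy to emulate in software, which also shows why dedicated hardware matters: the sketch below draws biased random bits one at a time with a pseudorandom generator, whereas a photonic p-bit produces them physically, in parallel, and far faster. Illustrative only:

```python
import random

def p_bit(p, rng):
    """A probabilistic bit: returns 1 with probability p, else 0."""
    return 1 if rng.random() < p else 0

def sample_p_bits(p, n, seed=0):
    """Draw n p-bit samples, seeded for reproducibility."""
    rng = random.Random(seed)
    return [p_bit(p, rng) for _ in range(n)]

samples = sample_p_bits(0.7, 10_000)
print(sum(samples) / len(samples))  # fraction of 1s converges to the bias p
```

Probabilistic algorithms (simulated annealing, Bayesian sampling) consume streams of exactly such biased bits, which is why generation rate is the key hardware metric the MIT work targets.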

More than 1,300 Experts call AI a Force for good

More than 1,300 experts call AI a force for good

Published 4 days ago, by Chris Vallance, Technology reporter

An open letter signed by more than 1,300 experts says AI is a "force for good, not a threat to humanity".

It was organised by BCS, the Chartered Institute for IT, to counter "AI doom".

Rashik Parmar, BCS chief executive, said it showed the UK tech community didn't believe the "nightmare scenario of evil robot overlords".

In March, tech leaders including Elon Musk, who recently launched an AI business, signed a letter calling for a pause in developing powerful systems.

That letter suggested super-intelligent AI posed an "existential risk" to humanity. This was a view echoed by film director Christopher Nolan, who told the BBC that AI leaders he spoke to saw the present time "as their Oppenheimer moment". J. Robert Oppenheimer played a key role in the development of the first atomic bomb, and is the subject of Mr Nolan's latest film.

But the BCS sees the situation in a more positive light, while still supporting the need for rules around AI.

Richard Carter is a signatory to the BCS letter. Mr Carter, who founded an AI-powered startup cybersecurity business, feels the dire warnings are unrealistic: "Frankly, this notion that AI is an existential threat to humanity is too far-fetched. We're just not in any kind of a position where that's even feasible".

Signatories to the BCS letter come from a range of backgrounds - business, academia, public bodies and think tanks, though none are as well known as Elon Musk, or run major AI companies like OpenAI.

Those the BBC has spoken to stress the positive uses of AI. Hema Purohit, who leads on digital health and social care for the BCS, said the technology was enabling new ways to spot serious illness, for example medical systems that detect signs of issues such as cardiac disease or diabetes when a patient goes for an eye test.

She said AI could also help accelerate the testing of new drugs.

Signatory Sarah Burnett, author of a book on AI and business, pointed to agricultural uses of the tech, from robots that use artificial intelligence to pollinate plants to those that "identify weeds and spray or zap them with lasers, rather than having whole crops sprayed with weed killer". ... ' 

Monday, July 24, 2023

Biosensor Offers Real-Time Dialysis Feedback

Biosensor Offers Real-Time Dialysis Feedback

By IEEE Spectrum, July 20, 2023

Hemodialysis is a vital procedure for people with kidney failure, but it requires multiple lengthy clinic visits every week.

A new biosensor provides real-time feedback on the filtering rate as blood is circulated from a patient to the dialysis machine and back.

Researchers at Iran's Shahrood University of Technology (SUT) engineered a new biosensor to expedite hemodialysis procedures by providing dialysis feedback in real time.

The electromagnetic bandgap structure biosensor uses microwaves to analyze the waste and toxin levels of the patient's blood during dialysis.

Experiments using fake blood showed the biosensor could identify relative alterations in blood permittivity across samples, suggesting real-time feedback during dialysis was possible.

SUT's Javad Ghalibafan said the low-cost, low-power device does not disrupt the procedure, and could shorten dialysis sessions if it detects that the patient's blood toxin concentrations are sufficiently low to end the session early.
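The end-of-session decision described by Ghalibafan can be sketched as a threshold test on the measured permittivity shift relative to a clean-blood baseline; the threshold and readings below are hypothetical, not values from the SUT study:

```python
def session_can_end(permittivity_readings, baseline, threshold=0.05):
    """End the dialysis session when the relative permittivity shift
    (used here as a proxy for blood toxin concentration) falls below
    the threshold. All numbers are illustrative."""
    latest = permittivity_readings[-1]
    relative_shift = abs(latest - baseline) / baseline
    return relative_shift < threshold

readings = [1.30, 1.18, 1.06, 1.02]   # toxins clearing as dialysis proceeds
print(session_can_end(readings, baseline=1.00))
```

The value of real-time feedback is exactly this: instead of running a fixed-length session, the machine could stop as soon as the measurement says the blood is clean enough.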

From IEEE Spectrum

View Full Article  

EU rules on AI must do more to protect human rights, NGOs warn

EU rules on AI must do more to protect human rights, NGOs warn

The group fears lobbyists might succeed in their efforts to water down the proposed AI Act

A group of 150 NGOs including Human Rights Watch, Amnesty International, Transparency International, and Algorithm Watch has signed a statement addressed to the European Union. In it, they entreat the bloc not only to maintain but enhance human rights protection when adopting the AI Act. 

Between the apocalypse-by-algorithm and the cancer-free utopia different camps say the technology could bring, lies a whole spectrum of pitfalls to avoid for the responsible deployment of AI.  ... ' 

IBM and the Future of AI

 IBM misses on revenue but sees AI leading a new round of growth


IBM Corp., which is typically the first major information technology company to report earnings each quarter, today beat profit expectations and narrowly missed revenue forecasts in its fiscal first quarter but set an optimistic tone for the rest of the year.

The company cited double-digit revenue growth in two strategic business areas — Red Hat hybrid cloud and artificial intelligence — while saying that the demand by enterprises across the globe for AI has significant potential upside for both its software and consulting businesses. Red Hat revenue rose 11% and revenue from data and AI gained 10% from a year earlier.

Total quarterly revenue fell 0.4% from a year ago, to $15.47 billion, below consensus estimates of $15.58 billion. However, earnings of $2.18 a share beat analysts’ expectations of $2.01, although they fell below the $2.31 a share earned in the same quarter last year.

Gross profit margin of 54.9% was up 1.6 points, and operating profit margin grew 1.4 points, to 55.9%. Executives reiterated expectations of between 3% and 5% revenue growth this year. IBM has said both metrics are key performance indicators for 2023.

IBM’s stock fell a little over 1% in after-hours trading following the earnings report.

‘Solid execution’

“Continued solid execution of our AI and hybrid cloud strategy makes us confident in our ability to achieve our full-year expectations for free cash flow and revenue,” said Chief Executive Officer Arvind Krishna (pictured).

IBM reported an 8% jump in software revenue on a constant currency basis, with consulting revenue up 6% and infrastructure revenue down 14%. “We have good momentum in our underlying operational product performance,” said Chief Financial Officer James Kavanaugh.

Although infrastructure revenue fell 14% in line with normal mainframe product cycles, Kavanaugh said IBM is seeing unusual resilience in the online transaction processing market. “We saw an inflection shift in OLTP in 2022,” he said. “We have a much-extended opportunity base to go get that revenue.”

“IBM is hitting the double headwinds of a slowing economy and challenging currency exchange rate developments,” said Holger Mueller, principal analyst at Constellation Research Inc. “The good news is that Red Hat is holding up with 11% growth and IBM is gaining a second leg from the strong momentum of its data and AI portfolio.”

Pund-IT Inc. Chief Analyst Charles King concurred that the company's strategic products are holding up well. "Red Hat continues to deliver the goods in software and is also central to the company's hybrid cloud strategy and solutions," he said. "The best news was a 24% year-over-year jump in consulting signings among both large and small enterprise customers."

Generative AI opportunity

IBM executives said excitement over generative AI has a big upside for the company. Krishna said he was "very excited" by the initial reaction to its May announcement of WatsonX, a product suite designed to help companies more easily build and deploy artificial intelligence models.

The CEO compared WatsonX to Red Hat’s OpenShift, which debuted in 2019 and has roughly doubled in revenue each year. “We have quantified OpenShift at $1.1 billion on an annualized run rate basis,” he said. “It gives you a sense of the excitement we have around these [AI] projects.”

Pund-IT’s King concurred, saying the participation of more than 150 customers in the development of the WatsonX platform “suggests that demand for enterprise-class generative AI solutions will be strong.”

Although the underwhelming market performance of the Watson AI platform since its victory on “Jeopardy!” more than a decade ago has largely sidelined IBM as a market leader in AI, “the company has systematically worked through most of the serious problems that are tripping up new AI platforms,” King said.

He pointed to the company’s work in developing large language model tools and data sets, addressing data privacy and security concerns and building a set of ethical standards to follow in AI development. “IBM’s AI efforts may not be well-known when it comes to generating error-ridden reports and term papers, but the company is steadily adding AI-enabled features and functions that measurably improve the performance of applications and solutions that its enterprise customers depend on,” he said.

Krishna said interest in AI is strong across the globe, with North America, Western Europe and parts of South America leading the charge. “The list of use cases includes IT operations, improved automation, customer service, augmenting human resources, predictive maintenance, compliance monitoring, security, sales, management and supply chain amongst others,” he said. “In the same way we built a consulting practice around Red Hat that is measured in the billions of dollars, we will do the same with AI.”

Introducing OpenAI London

 Good site provides intro.


Introducing OpenAI London

We are excited to announce OpenAI’s first international expansion with a new office in London, United Kingdom.   ... ' 

Sunday, July 23, 2023

Meta launches Llama 2 open-source LLM

The march continues.

Meta launches Llama 2 open-source LLM

By Ryan Daws | July 19, 2023

Meta has introduced Llama 2, an open-source family of AI language models which comes with a license allowing integration into commercial products.

The Llama 2 models range in size from 7 billion to 70 billion parameters, making them a formidable force in the AI landscape.

According to Meta’s claims, these models “outperform open source chat models on most benchmarks we tested.”

The release of Llama 2 marks a turning point in the LLM (large language model) market and has already caught the attention of industry experts and enthusiasts alike.

The new language models offered by Llama 2 come in two variants – pretrained and fine-tuned:

The pretrained models are trained on a whopping two trillion tokens and have a context window of 4,096 tokens, enabling them to process vast amounts of content at once.

The fine-tuned models, designed for chat applications like ChatGPT, have been trained on “over one million human annotations,” further enhancing their language processing capabilities.
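The 4,096-token context window means longer inputs must be truncated or chunked before inference. A minimal chunking sketch, using whitespace splitting as a stand-in for Llama 2's real tokenizer (actual token counts will differ in practice):

```python
def chunk_for_context(text, context_window=4096, reserve_for_output=512):
    """Split text into chunks that fit a fixed context window, leaving
    room for the model's generated tokens. Whitespace tokenization is
    only a placeholder for a real tokenizer."""
    tokens = text.split()
    budget = context_window - reserve_for_output  # input tokens per chunk
    return [" ".join(tokens[i:i + budget])
            for i in range(0, len(tokens), budget)]

chunks = chunk_for_context("word " * 8000)
print(len(chunks))  # 8000 tokens over a 3584-token budget -> 3 chunks
```

Real pipelines typically overlap chunks or summarize earlier ones so context carries across boundaries; this sketch only shows the budget arithmetic the window size imposes.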

While Llama 2’s performance may not yet rival OpenAI’s GPT-4, it shows remarkable promise for an open-source model.

The long-awaited sequel, Llama-2 is announced today! It's the best OSS model we have now.  ... ' 

Considerable Bard Updates

Google Bard Updated With Text to Speech, 40 New Languages

Bard is getting more capable, but it's still not trustworthy.

By Ryan Whitwam July 14, 2023

Bard AI

Credit: Google

Google was caught off-guard earlier this year when Microsoft decided to make generative AI its new focus. Despite inventing the transformer architecture that makes bots like ChatGPT possible, Google's answer to ChatGPT stumbled out of the gate. Google has been updating its Bard AI consistently in recent months, and the latest update is a big one. Finally, Bard can speak its replies.

Bard is a text-based generative AI, which means you can ask it anything, and it'll give you a response. It can do that because Bard has ingested a huge amount of written content, so it has the uncanny ability to generate text that sounds like a (boring) human being wrote it. On the flip side, some of the text Bard creates is not grounded in reality. So far, no one has figured out how to prevent these "hallucinations" in generative AI—ChatGPT suffers from the same shortcoming.

Google says it has now fed Bard enough content in an assortment of languages to expand the bot's services. It now works in 40 new languages, including Arabic, Chinese, German, and Spanish. As part of the multilinguistic update, Bard has also expanded to Brazil and more of Europe.

Regardless of whether Bard's output is accurate, you can hear the results spoken in more than 40 languages now. Google says this is useful if you aren't sure of the pronunciation of a word. Just click the speaker icon to start Bard gabbing. Should you find yourself unsatisfied with the response, there are also new tools to change that. The buttons at the bottom of the conversation window now let you alter the AI's tone and style. For example, you can make a reply more casual, shorter, more professional, and so on. This feature is only supported in English right now, but more languages are coming.

New about Apple and Generative AI

Following this closely, How will it integrate with other search.  

What is Apple GPT? Apple’s ChatGPT Rival & “Ajax” Explained, in Tech.co News

Apple is preparing to make a major AI announcement in 2024, and this could be the first clue to what it might entail.

Written by Aaron Drapkin. Updated on July 20, 2023

This week, reports have revealed that Apple is developing its own ChatGPT competitor – dubbed “Apple GPT” by the company's developers – backed by its own language model framework, Ajax.

The news comes just days after Meta announced it is developing its own version of ChatGPT, powered by Llama 2 – introduced recently in partnership with Microsoft.

Compared to its counterparts, Apple has been relatively quiet regarding artificial intelligence – but it would be foolish to think the tech giant would simply be sitting this one out.

Apple GPT: What We Know So Far

According to Bloomberg’s Mark Gurman, engineers at Apple are working on a project to design an AI tool internally referred to as “Apple GPT”, powered by the company's own proprietary LLM framework, Ajax.

Ajax is based on Google’s machine learning framework “JAX”, which UK-based AI startup DeepMind has been using to “accelerate” their research since 2020. It is still considered a reasonably experimental framework, compared to some others, however.

Some Apple employees have access to the chatbot, but this requires special approval – and outputs aren’t used to iterate on features scheduled for consumer use.

However, it has reportedly already proved somewhat useful for prototyping products. ...

Whether “Ajax” was arrived at by simply mashing together Google's “JAX” and the “A” of Apple is unclear, but it’s undoubtedly one of the more interesting names put forward for an AI language model framework.

In Greek mythology, Ajax – a feared warrior second only in strength to Achilles – famously went insane and attempted to murder his military comrades. After Athena intervened and clouded his mind, however, he instead killed a flock of sheep.

Apple Officially Joins the AI Party

This isn’t the first we’ve heard about Apple’s potential AI ventures this year – it was revealed back in April that the company was working on an AI-powered health application with emotional analysis capabilities. And of course, Apple already deploys AI across its products in a number of different ways.

Siri, for instance, the company’s voice-controlled personal assistant built into all iPhones, is a form of artificial intelligence – although employees working on the project have been far from pleased by the way it has progressed.

Apple CEO Tim Cook – who previously said AI was going to be “huge” – has also highlighted a number of privacy concerns relating to AI that he argues must be ironed out in the immediate future.

Apple, compared to companies like Meta and Google, markets itself as a more privacy-minded company – changes to its iOS software that prevent apps from tracking user behavior have previously irked competitors.

Can I Use Apple GPT Yet?

Not quite – the project is still under development. However, multiple sources have reported that Apple is going to make a major AI-related announcement at some point in 2024 – so it could very well be the general release of “Apple GPT” – or whatever it ends up being called.

As of now, Google’s Bard, OpenAI’s ChatGPT, and Anthropic’s recently released chatbot Claude 2 are among the most capable chatbots currently available. Chinese search engine Baidu has also released its own chatbot, called Ernie bot.

If you're using any of these tools at work, just be mindful of the sort of data you're inputting into them. Several companies have banned the likes of ChatGPT altogether due to privacy concerns, while there's very little information about the security measures deployed by them either. ... 

Saturday, July 22, 2023

Meta’s latest AI model is free for all

Notable, taking a look.

Meta’s latest AI model is free for all 

The company hopes that making LLaMA 2 open source might give it the edge over rivals like OpenAI.

By Melissa Heikkilä , July 18, 2023


Meta is going all in on open-source AI. The company is today unveiling LLaMA 2, its first large language model that’s available for anyone to use—for free. 

Since OpenAI released its hugely popular AI chatbot ChatGPT last November, tech companies have been racing to release models in hopes of overthrowing its supremacy. Meta has been in the slow lane. In February when competitors Microsoft and Google announced their  AI chatbots, Meta rolled out the first, smaller version of LLaMA, restricted to researchers. But it hopes that releasing LLaMA 2, and making it free for anyone to build commercial products on top of, will help it catch up. 

The company is actually releasing a suite of AI models, which include versions of LLaMA 2 in different sizes, as well as a version of the AI model that people can build into a chatbot, similar to ChatGPT. Unlike ChatGPT, which people can access through OpenAI’s website, the model must be downloaded from Meta’s launch partners Microsoft Azure, Amazon Web Services, and Hugging Face.

“This benefits the entire AI community and gives people options to go with closed-source approaches or open-source approaches for whatever suits their particular application,” says Ahmad Al-Dahle, a vice president at Meta who is leading the company’s generative AI work. “This is a really, really big moment for us.” ...'

A Nested Inventory for Software Security, Supply Chain Risk Management

 A Nested Inventory for Software Security, Supply Chain Risk Management

By Esther Shein, July 20, 2023

An SBOM is meant to provide visibility into risks and vulnerabilities. 

The Software Bill of Materials (SBOM) comprises all the components and libraries used to create a software application. It includes a description of all licenses, versions, authors, and patch status.

With high-profile incidents like the Kaseya breach and the Apache Log4j vulnerability still causing repercussions, securing the software supply chain is under scrutiny like never before. This prompted the Biden Administration's 2021 Executive Order on Improving the Nation's Cybersecurity, which requires developers to provide a Software Bill of Materials (SBOM).

Think of an SBOM like the ingredients in a recipe—it comprises all the components and libraries used to create a software application. It includes a description of all licenses, versions, authors, and patch status.

Many of these components are open source, and an SBOM is meant to provide visibility into risks and vulnerabilities. After all, if you don't know what code you're protecting, how can you maintain it?

The role of SBOMs

When organizations have this visibility, they are better able to identify known or emerging vulnerabilities and risks, enable security by design, and make informed choices about software supply chain logistics and acquisition issues. "And that is increasingly important because sophisticated threat actors now see supply chain attacks as a go-to tool for malicious cyber activity,'' according to Booz Allen Hamilton.  

By 2025, 60% of organizations building or procuring critical infrastructure software will mandate and standardize SBOMs in their software engineering practice, up from less than 20% in 2022, according to market research firm Gartner.

"Multiple factors are driving the need for SBOMs,'' says Manjunath Bhat, a research vice president at Gartner. Those factors include the increased use of third-party dependencies and open-source software, increased incidence of software supply chain attacks, and regulatory compliance mandates to secure the use of OSS, Bhat says.

 "The fine-grained visibility and transparency into the complete software supply chain is what makes SBOMs so valuable," he says.

SBOM elements

The National Telecommunications and Information Administration (NTIA) and the U.S. Department of Commerce were tasked with publishing the minimum elements for an SBOM, along with a description of use cases for greater transparency in the supply chain.

They determined there should be data fields for a supplier, component name, and version, as well as the dependency relationship, among other areas, the NTIA and Department of Commerce said.

They also recommended there be automatic data generation and machine readability functionality to allow for scaling an SBOM across the software ecosystem. There are also three formats for generating SBOMs that are generally accepted: SPDX, CycloneDX, and SWID tags.

SBOMs are designed to be part of automation workflows, Bhat observes. "Therefore, standardization of data formats and interoperability between them is going to be paramount."  

The data fields within an SBOM "include elements that help uniquely and unambiguously identify software components and their relationships to one another,'' he says. "Therefore, the basic elements include component name, supplier name, component version, unique identifiers (most likely a digital signature or a cryptographic hash), and dependency relationships."

SBOM platforms that are automated and dynamic are ideal because they can be continuously updated to ensure software developers have an accurate view of the components and dependencies they use in their applications.  ... ' 
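To make the minimum elements above concrete, here is a small illustrative sketch in Python. The field names follow the NTIA list (supplier, component name, version, unique identifier, dependency relationships); the component data is invented for illustration, and a real SBOM would use one of the standard formats named above, such as SPDX or CycloneDX.

```python
# Illustrative sketch of the NTIA minimum SBOM elements as a Python structure.
# Component names, versions, and identifiers below are invented examples.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Component:
    supplier: str
    name: str
    version: str
    unique_id: str  # e.g., a package URL or cryptographic hash
    dependencies: list = field(default_factory=list)  # names of components this one uses

@dataclass
class SBOM:
    author: str      # who generated the SBOM
    timestamp: str   # when it was generated
    components: list

sbom = SBOM(
    author="example-builder",
    timestamp="2023-07-20T00:00:00Z",
    components=[
        Component("Apache", "log4j-core", "2.17.1",
                  "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1"),
        Component("Example Corp", "billing-service", "1.4.2",
                  "sha256:0f3a...", dependencies=["log4j-core"]),
    ],
)

# Machine readability, as the NTIA recommends, falls out of serializing it.
print(json.dumps(asdict(sbom), indent=2))
```

The dependency list is what makes nested inventories possible: given a newly disclosed vulnerable component, you can walk these relationships to find every application that transitively includes it.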

New Turing Test?


The Download: a new Turing test, and working with ChatGPT

By Rhiannon Williams

July 14, 2023

This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

My new Turing test would see if AI can make $1 million

—Mustafa Suleyman is the co-founder and CEO of Inflection AI and a venture partner at Greylock, a venture capital firm. Before that, he co-founded DeepMind, one of the world’s leading artificial intelligence companies.

AI systems are increasingly everywhere and are becoming more powerful almost by the day. But how can we know if a machine is truly “intelligent”? For decades this has been defined by the Turing test, which argues that an AI that’s able to replicate language convincingly enough to trick a human into thinking it was also human should be considered intelligent.

But there’s now a problem: the Turing test has almost been passed—it arguably already has been. The latest generation of large language models are on the cusp of acing it.

So where does that leave AI? We need something better. I propose the Modern Turing Test—one equal to the coming AIs that would give them a simple instruction:  “Go make $1 million on a retail web platform in a few months with just a $100,000 investment.” Read the full story.

ChatGPT can turn bad writers into better ones

The news: A new study suggests that ChatGPT could help reduce gaps in writing ability between employees, helping less experienced workers who lack writing skills to produce work similar in quality to that of more skilled colleagues.

How the researchers did it: Hundreds of college-educated professionals were asked to complete two tasks they’d normally undertake as part of their jobs, such as writing press releases, short reports, or analysis plans. Half were given the option of using ChatGPT for the second task. A group of assessors then quality-checked the results, and scored the output of those who’d used ChatGPT 18% higher in quality than that of the participants who didn’t use it.

Why it matters: The research hints at how AI could be helpful in the workplace by acting as a sort of virtual assistant. But it’s also crucial to remember that generative AI models’ output is far from reliable, meaning workers run the risk of introducing errors. Read the full story.

Rhiannon Williams

Friday, July 21, 2023

Google’s NEW UniPi AI Takes Robotics Industry By Storm

 Google Goes Robotic

Google’s NEW UniPi AI Takes Robotics Industry By Storm (4 FUNCTIONS ANNOUNCED)

AI News

Jul 20, 2023

Google's latest breakthrough in AI robotics, UniPi, is a universal policy model that revolutionizes AI decision-making through text-guided video generation. It outperforms heavyweights like GPT, PaLM, CLIP, and Flamingo by addressing challenges of environmental diversity and reinforcement learning, and by generating goal-oriented agents with trajectory consistency, hierarchical planning, flexible behavioral modulation, and task-specific action adaptation. Separately, the emergence of Objaverse-XL, a vast database of over 10 million 3D objects, signifies a significant advancement in 3D computer vision and generative AI, addressing the scarcity of high-quality 3D data.


Solar Energy Turns a New Leaf

 Natural fuels.

Solar Energy Turns a New Leaf

By Samuel Greengard, July 18, 2023

A solar leaf product made from microalgae, phytoplankton, and other microscopic plants.

Today, "There are several types of artificial leaves that use photocatalysts attached directly to the surface of the solar cell," says Kazunari Domen, a special contract professor at Shinshu University in Nagano, Japan.

The ability of plants to put solar energy to work in highly efficient ways has long intrigued scientists and technologists. A combination of sunlight, water, and carbon dioxide provides the fuel they need to grow, flower and produce fruit. It's a highly efficient, non-polluting system.

Now researchers are turning to nature to provide a blueprint for an emerging branch of technology focused on artificial photosynthesis using solar leaves. The technology—still in a budding stage—produces ethanol and propanol from carbon dioxide, water, and sunlight. The resulting solar fuel can be used to power cars, boats, and machinery.

"Artificial solar leaves could generate entirely renewable, net-carbon-zero fuels that could be used across a wide range of industries and in many situations," explains Erwin Reisner, a professor in the Department of Chemistry at the University of Cambridge and senior author of a May 2023 paper on the technology that appeared in Nature Energy.

Solar leaf technology could deliver a substitute for fossil fuels, but it could also set the stage for creating new types of organic chemicals and producing oxygen in space or on another planet. "It is a very versatile and potentially powerful technology," says Motiar Rahaman, a senior post-doctorate researcher at the University of Cambridge and lead author of the paper.

Nature Fuels Technology

The idea of creating fuel from thin air is not particularly new. As early as the 1970s, researchers began exploring ways to use solar energy to convert water into fuel sources composed of elements like hydrogen and oxygen. However, the cost of producing these fuels—and converting infrastructure to accommodate them—is high.

Over the last 15 years or so, researchers have also explored the idea of converting CO2 and H2O into ethanol using only sunlight. Today, "There are several types of artificial leaves that use photocatalysts attached directly to the surface of the solar cell," says Kazunari Domen, a special contract professor at Shinshu University in Nagano, Japan, who has conducted research into nanometer-scale solar semiconductor particles that generate hydrogen and oxygen from water.

What makes the Cambridge scientists' breakthrough so notable is that they found a way to undergo the photosynthesis process in a single step and produce fuel with a high energy density. "It is an entirely integrated device," Reisner says. "As a result, it moves this technology a lot closer to practical reality."

An artificial leaf has two parts: a photocathode and a photoanode. On one side, a perovskite solar cell containing a copper palladium catalyst collects light and converts carbon dioxide into multicarbon alcohols. On the other side, a nanostructured bismuth vanadate photocatalyst is employed as a photoanode to convert water into oxygen.

It's possible to burn the fuel immediately, or store it in tanks. The artificial photosynthesis process works even in low sunlight, Rahaman says.

The leaves can be formed into almost any conceivable shape or size and woven into larger arrays. This makes the technology ideal for a wide array of uses, including on vehicles and at remote locations and where solar panels aren't feasible or cost-effective. It could also provide a clean cooking alternative to wood, coal, and other dirty fuels. "Three to four million people in developing countries die every year because they lack access to clean fuel," Reisner says.

However, Reisner adds, the technology is not designed to compete with conventional solar, which stores energy in batteries. "It is an entirely complementary technology."   ... ' 

An Asteroid loaded with $10 quintillion worth of Metals edges closer to US reach

Previously mentioned, here a report excerpt.   

An Asteroid loaded with $10 quintillion worth of Metals edges closer to US reach

Filip De Mott, Jul 20, 2023, 4:31 PM EDT, in Business Insider

NASA cleared key hurdles that allow it to launch a spacecraft to the asteroid in October.

The asteroid is thought to be made up of gold, iron and nickel, with its value estimated at $10 quintillion.

The spacecraft will launch on a SpaceX rocket, and head to the Main Asteroid Belt between Mars and Jupiter.


Thursday, July 20, 2023

Microsoft Talks Costs of AI and Integration with Office in CoPilot

Based on experience, the integration is very important ... 

CNBC.COM     July 18, 2023

By Todd Haselton  @ROBOTODD

• Microsoft shares rallied to an all-time high after the company announced pricing for its new AI subscription service.

• Microsoft’s Copilot subscription service adds AI to the company’s popular Office products such as Word, Excel and Teams.

• It will cost an additional $30 per month and could increase monthly prices for enterprise customers as much as 83%, bringing in additional revenue through recurring subscriptions.

Microsoft CEO Satya Nadella speaks at the company’s Ignite Spotlight event in Seoul, Nov. 15, 2022.

Microsoft shares closed at a record Tuesday after the company announced pricing for its new Microsoft 365 artificial intelligence subscription service.

The stock jumped 4%, closing at $359.49. It’s now up about 50% for the year. The prior record came on June 15, when the stock closed at $348.10.

Microsoft’s Copilot subscription service adds AI to the company’s popular Office products such as Word, Excel and Teams. It will cost an additional $30 per month and could increase monthly prices for enterprise customers as much as 83%, bringing in additional revenue through recurring subscriptions.
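The arithmetic behind that 83% figure is simple: $30 on top of a base enterprise plan of roughly $36 per user per month. Note the $36 base is an inference from the percentage, not a number given in the article.

```python
# Sanity check on the "as much as 83%" price-increase claim.
# The $36/user/month base plan price is an assumption, not from the article.
base_price = 36.0      # assumed enterprise plan, $/user/month
copilot_addon = 30.0   # Copilot add-on price reported by CNBC

increase_pct = copilot_addon / base_price * 100
print(f"{increase_pct:.0f}%")  # about 83%
```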

The announcement shows how Microsoft is continuing to build on its suite of Office software, making it more attractive for businesses that are seeking to add AI into their workflows. Microsoft has been pouring money into generative AI, largely through a multibillion-dollar investment in OpenAI, the creator of ChatGPT.

Microsoft Copilot, first announced in March, can design presentations, offer writing prompts, summarize meetings, and rank incoming emails. It's already being tested by 600 customers such as Goodyear and General Motors, although Microsoft hasn't said when it will be available to the wider public.

— CNBC’s Hayden Field contributed to this report.


Improving Urban Planning with VR

Modeling your complex realities

 Improving Urban Planning with VR

Ruhr-Universität Bochum (Germany)

Meike Drießen, July 14, 2023

Researchers at Germany's Ruhr-Universität Bochum demonstrated measurable physical reactions to potential changes to urban settings using virtual reality (VR) tools. The researchers simulated the changes in a three-dimensional (3D) model using the Unity3D game engine, allowing users to immerse themselves in the environment to view traffic flow and interactions between cars and pedestrians. The researchers observed an increase in stress levels among participants exposed to higher traffic volumes through the simulations. Said Ruhr-Universität Bochum's Julian Keil, "Until now, residents and other stakeholders have been involved in the planning stage of construction measures, but only in the form of surveys, i.e. explicit statements. Our method enables spatial planners to assess implicit effects of possible measures and to include them in the planning, too."

McKinsey and Cohere Collaborate to Transform Clients with Enterprise Generative AI

At the McKinsey Blog

McKinsey and Cohere collaborate to transform clients with enterprise generative AI

AI & Analytics

July 18, 2023

Today, McKinsey announced a strategic collaboration with Cohere, the leading developer of enterprise AI platforms and state-of-the-art large language models (LLMs). McKinsey and Cohere will harness the power of generative AI—the ability of machines to create and use human language—to drive clients’ business performance through tailored end-to-end solutions. The collaboration will be led by QuantumBlack, AI by McKinsey, the firm’s industry-leading AI arm, with thousands of practitioners including data engineers, data scientists, product managers, designers, and software engineers.

Inside the McKinsey and Cohere collaboration.

McKinsey and Cohere will help organizations integrate generative AI into their operations, redefine business processes, train and upskill workforces, and use this emerging technology to tackle some of the toughest current challenges.

“Every client context, use case, and organization is unique, but they are all looking for the right generative AI solution tailored for their needs to address privacy, IP protection, and cost,” says Ben Ellencweig, a McKinsey senior partner and global leader of alliances and acquisitions for QuantumBlack. “Together with Cohere, we are excited to launch secure, enterprise-grade generative AI solutions for our clients, moving from discussing productivity and growth opportunities to capturing value on the ground, day to day.”


From left, Aidan Gomez, cofounder and CEO of Cohere; Ben Ellencweig, McKinsey senior partner and global leader of alliances and acquisitions for QuantumBlack; Martin Kon, president and COO of Cohere


With headquarters in San Francisco and Toronto, and a key research center in London, Cohere employs hundreds of experts, including AI, machine learning, and software engineers, as well as data scientists and researchers who have made formative contributions to the development of generative AI. Cofounded by Aidan Gomez, a Google Brain alum and coauthor of the seminal transformer research paper, Cohere describes itself as being “on a mission to transform enterprises and their products with AI to unlock a more intuitive way to generate, search, and summarize information than ever before.” One key element of Cohere’s approach is its focus on data protection, deploying its models inside enterprises’ secure data environment. ... '

Computer Vision That Works More Like a Brain Sees More Like People Do

Towards better vision. 


Computer Vision That Works More Like a Brain Sees More Like People Do

By MIT News, July 13, 2023

When deep learning computer vision systems establish efficient ways to solve visual problems, they end up with artificial circuits that work similarly to the neural circuits that process visual information in our own brains.

Researchers made a computer vision model more robust by training it to work like a part of the brain that humans and other primates rely on for object recognition.

Credit: iStock

James DiCarlo and colleagues at the Massachusetts Institute of Technology trained an artificial neural network to function more like the human and primate brain's inferior temporal (IT) cortex to improve computer vision.

The researchers constructed a computer vision model based on neural data from primate vision-processing neurons, and tasked it to recognize objects.

DiCarlo said this made the artificial neural circuits process visual information differently.

The researchers found the biologically informed model's IT layer aligned better with the IT neural data than a similarly sized network model that lacked neural-data training.

They also discovered the neurally aligned model was more resilient against the adversarial attacks used to assess computer vision and artificial intelligence systems.

From MIT News

View Full Article      

Apple Working on Generative AI

Reported in Bloomberg


Apple is quietly working on artificial intelligence tools that could challenge those of OpenAI, Alphabet's Google and others. Mark Gurman reports in "Bloomberg Markets."   ...

They were quick to add an app ... on their phones, and to make it easy to link to Siri.

Which I have often used for accessing ChatGPT.

Is this seen by some as an abandonment of Siri? Somehow I don't think so ...

Wednesday, July 19, 2023

Digital Twins Give Hydrogen a Greener Path to Growth

Interesting idea.

Digital Twins Give Hydrogen a Greener Path to Growth

IEEE Spectrum

Tammy Xu, July 14, 2023

Sharaf Alsharif at Germany's Oldenburger OFFIS Institute for Information Technology thinks digital twins could help lower clean hydrogen production costs by monitoring the state of hydrogen electrolyzers. Digital twins can track electrodes, membranes, or pumps to determine probable malfunctions and to prescribe maintenance. The twins would supply data to dashboards used by electrolysis operators by remotely monitoring electrolyzers and dispatching alerts when they detect anomalous behavior. Alsharif said this could save operators hours of production time that otherwise would be spent on unscheduled electrolyzer troubleshooting. Alsharif and colleagues at OFFIS unveiled a service-oriented software framework for engineering electrolysis monitoring digital twins at Germany's ETG Congress 2023 conference.
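The alerting pattern Alsharif describes (compare live electrolyzer telemetry against expected operating ranges, dispatch an alert on anomalous readings) can be sketched roughly as follows. The sensor names and thresholds are invented for illustration and are not taken from the OFFIS framework.

```python
# Toy sketch of digital-twin monitoring for an electrolyzer: flag any
# telemetry reading that falls outside its expected operating range.
# Sensor names and ranges below are hypothetical placeholders.

EXPECTED_RANGES = {
    "stack_voltage_v": (1.6, 2.2),    # assumed per-cell voltage range
    "membrane_temp_c": (50.0, 80.0),  # assumed membrane temperature range
    "pump_flow_lpm": (8.0, 12.0),     # assumed coolant pump flow range
}

def check_telemetry(reading: dict) -> list:
    """Return alert messages for any sensor outside its expected range."""
    alerts = []
    for sensor, (lo, hi) in EXPECTED_RANGES.items():
        value = reading.get(sensor)
        if value is None or not (lo <= value <= hi):
            alerts.append(f"{sensor}={value} outside [{lo}, {hi}]")
    return alerts

# An over-voltage reading triggers an alert; the other sensors stay quiet.
reading = {"stack_voltage_v": 2.5, "membrane_temp_c": 65.0, "pump_flow_lpm": 9.5}
for alert in check_telemetry(reading):
    print("ALERT:", alert)
```

A production twin would of course use physics-based models and learned baselines rather than fixed thresholds, but the feed-telemetry-in, push-alerts-to-dashboard loop is the same shape.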

Twitter loses nearly half advertising revenue since Elon Musk takeover


Twitter loses nearly half advertising revenue since Elon Musk takeover, by Jemma Dempsey, BBC News

Twitter has lost almost half of its advertising revenue since it was bought by Elon Musk for $44bn (£33.6bn) last October, its owner has revealed.

He said the company had not seen the increase in sales that had been expected in June, but added that July was a "bit more promising".

Mr Musk sacked about half of Twitter's 7,500 staff when he took over in 2022 in an effort to cut costs.

Rival app Threads now has 150 million users, according to some estimates.

Its in-built connection to Instagram automatically gives the Meta-designed platform access to a potential two billion users.

Meanwhile, Twitter is struggling under a heavy debt load. Cash flow remains negative, Mr Musk said at the weekend, although the billionaire did not put a time frame on the 50% drop in ad revenue. ... ' 

Google AI Assisted Note Taking gets Limited Launch

Google AI Assisted Note Taking gets Limited Launch, in TechCrunch

Google is making its “AI notebook for everyone” available to a select few and renaming it from Project Tailwind to NotebookLM. If you struggle to make sense of the pile of information in your Google Drive, a light coating of AI could be just the thing.

The project was announced at I/O in May as a way for students to organize the various lecture notes and other documents they accumulate during coursework.

Unlike a generic chatbot that draws on a vast corpus of largely unrelated information, NotebookLM restricts (or attempts to restrict) itself to analyzing and answering questions about the documents it is fed. It will still draw on its broader knowledge if you require it to, but the general idea is that its first resort is to the information it has most recently been exposed to.

If you’re taking a class on Lord Byron and ask it what the significance was of his dying in Greece rather than England, it will first consult your notes and any supporting documents, and report from those. But if you don’t happen to have written down the date and location of his death (April 19, 1824, in Missolonghi, Greece), it can still fetch that information from elsewhere. (At least, this is how I understand the system to work in a general sense.)  ... ' 
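That "notes first, general knowledge second" behavior can be sketched as a toy retrieval loop. The keyword-overlap scoring and fallback table below are illustrative stand-ins, not Google's actual implementation.

```python
# Toy sketch of answering from the user's own documents first, and only
# consulting broader knowledge when the documents contain nothing relevant.
# Scoring (word overlap) and the fallback table are invented placeholders.

def answer(question: str, documents: list, fallback: dict) -> str:
    q_words = set(question.lower().split())
    best_doc, best_overlap = None, 0
    for doc in documents:
        overlap = len(q_words & set(doc.lower().split()))
        if overlap > best_overlap:
            best_doc, best_overlap = doc, overlap
    if best_doc is not None:
        return f"From your notes: {best_doc}"
    # Nothing relevant in the provided documents: fall back to general knowledge.
    key = next((k for k in fallback if k in question.lower()), None)
    return f"From general knowledge: {fallback[key]}" if key else "No answer found."

notes = ["Byron championed Greek independence late in his life."]
facts = {"byron": "Lord Byron died on April 19, 1824, in Missolonghi, Greece."}
print(answer("Why did Byron support Greek independence?", notes, facts))
```

Real systems do this with embeddings and a language model rather than word overlap, but the priority order is the point: recently supplied documents outrank the model's general training.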

Automated Evolution Tackles Tough Tasks

Automating Evolution

Automated Evolution Tackles Tough Tasks

By R. Colin Johnson, July 13, 2023

The intersection of natural and evolutionary computation in the context of machine learning and natural computation.

Credit: Evolutionary Machine Learning: A Survey, AKBAR TELIKANI et al, https://doi.org/10.1145/3467477

Deep neural networks (DNNs) that use reinforcement learning (RL, which explores a space of random decisions for winning combinations) can create algorithms that rival those produced by humans for games, natural language processing (NLP), computer vision (CV), education, transportation, finance, healthcare, and robotics, according to the seminal paper Introduction to Deep Reinforcement Learning (DRL).

Unfortunately, the successes of DNNs are getting harder to come by, due to sensitivity to the initial hyper-parameters chosen (such as the width and depth of the DNN, as well as other application-specific initial conditions). However, these limitations have recently been overcome by combining RL with evolutionary computation (EC), which maintains a population of learning agents, each with unique initial conditions, that together "evolve" an optimal solution, according to Ran Cheng and colleagues at the Southern University of Science and Technology, Shenzhen, China, in cooperation with Germany's Bielefeld University and the U.K.'s University of Surrey.

By choosing from among many evolving learning agents (each with different initial conditions), Evolutionary Reinforcement Learning (EvoRL) is extending the intelligence of DRL into hard-to-solve cross-disciplinary human tasks like autonomous cars and robots, according to Jurgen Branke, a professor of Operational Research and Systems at the U.K.'s University of Warwick, and editor-in-chief of ACM's new journal Transactions on Evolutionary Learning and Optimization.

Said Branke, "Nature is using two ways of adaptation: evolution and learning. So it seems not surprising that the combination of these two paradigms is also successful 'in-silico' [that is, algorithmic 'evolution' akin to 'in-vivo' biological evolution]."
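The population-based idea described above (many agents with unique initial conditions, scored and selected by reward) can be sketched in miniature. The reward function and mutation scheme below are placeholders standing in for a real RL environment; nothing here is taken from the EvoRL systems in the survey.

```python
# Minimal evolutionary-optimization sketch: keep a population of candidate
# "agents" (here, single parameters), select the best by reward, and mutate.
# The reward function is a toy placeholder, maximized at params == 3.0.
import random

random.seed(0)

def reward(params: float) -> float:
    return -(params - 3.0) ** 2  # placeholder evaluation function

# Unique initial conditions for each member of the population.
population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(50):
    population.sort(key=reward, reverse=True)
    parents = population[:5]  # selection: keep the top performers (elitism)
    population = parents + [
        p + random.gauss(0, 0.5)                 # mutation: perturb a parent
        for p in random.choices(parents, k=15)
    ]

best = max(population, key=reward)
print(round(best, 1))  # converges near 3.0
```

In EvoRL proper, each population member is a whole policy (or its hyperparameters) evaluated by episodes of interaction, which is why the approach sidesteps sensitivity to any single choice of initial conditions.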

Reinforcement Learning

Reinforcement learning is the newest of three primary learning algorithms for deep neural networks (DNNs differ from the seminal three-layer perceptron by adding many inner layers, the function of which is not fully understood by their programmers—referred to as a black box). The first two prior primary DNN learning methods were supervised—learning from data labeled by humans (such as photographs of birds, cars, and flowers, each labeled as such) in order to learn to recognize and automatically label new photographs. The second-most-popular learning method was unsupervised, which groups unlabeled data into likes and dislikes, based on commonalities found by the DNN's black box.

Reinforcement learning, on the other hand, groups unlabeled data into sets of likes, but with the goal of maximizing the cumulative rewards it receives from a human-wrought evaluation function. The result is a DNN that uses RL to outperform other learning methods, albeit while still using internal layers that do not fit into a knowable mathematical model. For instance, in game theory, the cumulative rewards would be winning games. 'Optimization' is often used to describe the methodology obtained by reinforcement learning, according to Marco Wiering at the University of Groningen (The Netherlands) and Martijn Otterlo at Radboud University (Nijmegen, The Netherlands) in their 2012 paper Reinforcement Learning, although there is no way to prove that "optimal behavior" found with RL is the "most" optimal solution.   ... ' 
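The reward-maximizing loop just described can be illustrated with a toy epsilon-greedy agent that learns, from cumulative reward alone, which of several actions a hidden evaluation function favors. The payoff numbers are invented.

```python
# Toy reinforcement learning: an epsilon-greedy agent on a 3-armed bandit.
# It balances exploration with exploiting its current reward estimates.
# The hidden payoff probabilities below are invented for illustration.
import random

random.seed(1)
true_payoffs = [0.2, 0.5, 0.8]   # hidden per-action reward probabilities
estimates = [0.0, 0.0, 0.0]      # the agent's learned value estimates
counts = [0, 0, 0]

for step in range(5000):
    if random.random() < 0.1:                      # explore a random action
        action = random.randrange(3)
    else:                                          # exploit the best estimate
        action = estimates.index(max(estimates))
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    counts[action] += 1
    # Incremental mean update of the value estimate for the chosen action.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates.index(max(estimates)))  # index of the learned best action
```

Note the point made in the article: the agent finds high-reward behavior without any labeled data, but there is no proof its learned policy is the globally optimal one.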

Chatbot Tutors Will Revolutionize Education

AI Expert Predicts Personalized Chatbot Tutors Will Revolutionize Traditional Education and Benefit Students, by Ev Richard, Jul 13, 2023

AI-powered chatbot tutors have the potential to revolutionize traditional education and provide students with personalized one-on-one training, according to a professor of computer science at the University of California, Berkeley. The release of ChatGPT, a chatbot that can simulate human conversation, has already gained popularity among students. As the technology continues to advance, it could significantly impact education by delivering high-quality personalized education to every child worldwide. 

Stuart Russell, a leading AI expert, believes that AI tutors could cover most educational material up to high school level, accessible through students’ devices. OpenAI is currently testing a virtual tutor program powered by GPT-4, which serves as both a student tutor and a classroom assistant. Research shows that one-on-one tutoring is two to three times more beneficial to students compared to traditional classroom learning. While there may be concerns about job displacement for teachers, Russell emphasizes the potential added value of AI tutors rather than their replacement of educators. He suggests that teachers can act as guides for small groups of students, focusing on collaborative learning, rather than teaching large classes. 

The National Education Association acknowledges the impact of AI technologies in education but emphasizes the importance of using AI to support the needs of students and educators while remaining transparent, inclusive, and unbiased. While AI can enhance education, Russell highlights the need for human involvement to preserve and improve social aspects of childhood learning. Concerns about AI include potential student indoctrination, the necessity of motivation and collaboration, and ensuring the preservation of childhood social experiences. 

The development of AI-powered chatbots for education has seen significant progress, as evidenced by ChatGPT's ability to complete undergraduate courses with 100% accuracy. However, some criticism and concerns have been raised about potential issues with study methodology and cheating. Russell advises caution with the increased use of AI, calling for shared safety protocols and a thoughtful approach to its implementation. ... '

Stable Doodle AI

 Interesting thought   ... 

Stable Doodle AI turns your scribbles into sketches

Yes, more AI-generated art.

By Meera Navlakha  on July 18, 2023

If you have sub-par artistic skills, it may be your time to shine. Artificial intelligence startup Stability AI has created an image-generating tool that turns doodles into detailed sketches.

What is Stable Doodle?

Aptly called Stable Doodle, the sketch-to-image tool can convert "a simple drawing into a dynamic image". The tool is geared towards "both professionals and novices," according to the company.

Example images are fairly dynamic: a simple chair sketch, for instance, is transformed into something detailed and colorful. It's not entirely unlike Lensa AI, the self-portrait generator that proliferated picture-perfect selfies across Instagram.


How does it work?

To create the images, Stability utilizes technology from its Stable Diffusion XL, the company's open-source image-generating model, combined with a condition-control solution, T2I-Adapter. ... 

Tuesday, July 18, 2023

How Unilever Is Preparing for the Future of Work

How Unilever Is Preparing for the Future of Work

Launched in 2016, Unilever’s Future of Work initiative aimed to accelerate the speed of change throughout the organization and prepare its workforce for a digitalized and highly automated era. But despite its success over the last three years, the program still faces significant challenges in its implementation. How should Unilever, one of the world's largest consumer goods companies, best prepare and upscale its workforce for the future? How should Unilever adapt and accelerate the speed of change throughout the organization? Is it even possible to lead a systematic, agile workforce transformation across several geographies while accounting for local context? Harvard Business School professor and faculty co-chair of the Managing the Future of Work Project William Kerr and Patrick Hull, Unilever’s vice president of global learning and future of work, discuss how rapid advances in artificial intelligence, machine learning, and automation are changing the nature of work in the case, “Unilever's Response to the Future of Work.”


+More Ways to Listen

Brian Kenny:

On November 30, 2022, OpenAI launched the latest version of ChatGPT, the largest and most powerful AI chatbot to date. Within a few days, more than a million people tested its ability to do the mundane things we really don't like to do, such as writing emails, coding software, and scheduling meetings. Others upped the intelligence challenge by asking for sonnets and song lyrics, and even instructions on how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. But once the novelty wore off, the reality set in. ChatGPT is a game changer, and yet another example of the potential for AI to change the way we live and work.

And while we often view AI as improving how we live, we tend to think of it as destroying how we work, fears that are fueled by dire predictions of job eliminations in the tens of millions and the eradication of entire industries. And while it's true that AI will continue to evolve and improve, eventually taking over many jobs that are currently performed by people, it will also create many work opportunities that don't yet exist.

Today on Cold Call, we welcome Professor William Kerr, joined by Patrick Hull of Unilever, to discuss the case, “Unilever's Response to the Future of Work.” I'm your host, Brian Kenny, and you're listening to Cold Call on the HBR Podcast Network.

Professor Bill Kerr is the co-director of Harvard Business School's Managing the Future of Work Project.  ... 

Claude 2: ChatGPT rival launches chatbot that can summarise a novel

Anthropic releases chatbot able to process large blocks of text and make judgments on what it is producing

Dan Milmo, Global technology editor, The Guardian

Wed 12 Jul 2023 09.19 EDT

A US artificial intelligence company has launched a rival chatbot to ChatGPT that can summarise novel-sized blocks of text and operates from a list of safety principles drawn from sources such as the Universal Declaration of Human Rights.

Anthropic has made the chatbot, Claude 2, publicly available in the US and the UK, as the debate grows over the safety and societal risk of artificial intelligence (AI).

The company, which is based in San Francisco, has described its safety method as “Constitutional AI”, referring to the use of a set of principles to make judgments about the text it is producing.
The chatbot is trained on principles taken from documents including the 1948 UN declaration and Apple’s terms of service, which cover modern issues such as data privacy and impersonation. One example of a Claude 2 principle based on the UN declaration is: “Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood.”
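Anthropic has not published the selection machinery behind these principles, but the idea of choosing among candidate responses by how well they align with a constitution can be sketched as a toy. The keyword-based scorer below is invented purely for illustration; the real system uses a model to critique and revise its own outputs against the principles.

```python
# Toy sketch of principle-guided response selection, loosely in the
# spirit of "Constitutional AI". The scoring heuristic is hypothetical.

PRINCIPLE_TERMS = {"freedom", "equality", "brotherhood", "respect"}

def principle_score(response):
    """Count how many principle-aligned terms the response contains."""
    words = {w.strip(".,!?").lower() for w in response.split()}
    return len(words & PRINCIPLE_TERMS)

def choose_response(candidates):
    """Pick the candidate that best matches the principles."""
    return max(candidates, key=principle_score)

best = choose_response([
    "Do whatever benefits you most.",
    "Choose the path of freedom, equality and brotherhood.",
])
print(best)  # the principle-aligned second candidate
```

In the production system the "score" is itself produced by the model, which is what lets the principles cover nuanced issues like impersonation and data privacy rather than surface keywords.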

Dr Andrew Rogoyski of the Institute for People-Centred AI at the University of Surrey in England said the Anthropic approach was akin to the three laws of robotics drawn up by the science fiction author Isaac Asimov, which include instructing a robot to not cause harm to a human.

“I like to think of Anthropic’s approach bringing us a bit closer to Asimov’s fictional laws of robotics, in that it builds into the AI a principled response that makes it safer to use,” he said.
Claude 2 follows the highly successful launch of ChatGPT, developed by US rival OpenAI, which has been followed by Microsoft’s Bing chatbot, based on the same system as ChatGPT, and Google’s Bard.
Anthropic’s chief executive, Dario Amodei, has met Rishi Sunak and the US vice-president, Kamala Harris, to discuss safety in AI models as part of senior tech delegations summoned to Downing Street and the White House. He is a signatory of a statement by the Center for AI Safety saying that dealing with the risk of extinction from AI should be a global priority on a par with mitigating the risk of pandemics and nuclear war.

Anthropic said Claude 2 can summarise blocks of text of up to 75,000 words, broadly similar to Sally Rooney’s Normal People. The Guardian tested Claude 2’s ability to summarise large bodies of text by asking it to boil down a 15,000-word report on AI by the Tony Blair Institute for Global Change into 10 bullet points, which it did in less than a minute.
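When a document exceeds even a 75,000-word window, a common pattern is to split it into chunks and summarise each piece. A minimal sketch of such a splitter follows; the `chunk_text` helper and the word budget are illustrative assumptions, since Claude's actual limit is measured in tokens, not words.

```python
def chunk_text(text, max_words=75_000):
    """Split text into pieces that each fit a rough word budget.

    Hypothetical helper: real context limits are counted in tokens,
    so a word count is only an approximation.
    """
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

report = ("word " * 15_000).strip()   # stand-in for a 15,000-word report
chunks = chunk_text(report, max_words=6_000)
print(len(chunks))  # 3 chunks, each at most 6,000 words
```

Each chunk's summary can then be concatenated and summarised again, a simple map-reduce pattern for long documents.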

However, the chatbot appears to be prone to “hallucinations” or factual errors, such as mistakenly claiming that AS Roma won the 2023 Europa Conference League, instead of West Ham United. Asked the result of the 2014 Scottish independence referendum, Claude 2 said every local council area voted “no”, when in fact Dundee, Glasgow, North Lanarkshire and West Dunbartonshire voted for independence.  ... ' 

Starlink Satellites Changed Course to Avoid Collisions 25,000 Times in the Past 6 Months

Still concerned about the congestion near the Earth.

Half of all Starlink avoidance maneuvers have come between Dec. 1, 2022 and May 31, 2023.

By Ryan Whitwam July 12, 2023  in the WSJ via ExtremeTech

When SpaceX began launching its Starlink megaconstellation in 2019, barely 2,000 operational satellites were orbiting Earth. Today, that number is over 6,000, with SpaceX controlling over 4,000. As space around Earth becomes ever more crowded, the chance of a collision increases. New data released by the FCC shows how serious that problem might become. In the past six months, SpaceX has been forced to reroute its satellites to avoid collisions more than 25,000 times.

On the one hand, it's good to know that SpaceX is working to limit close interactions between orbiting objects. However, 25,000 avoidance maneuvers in just six months is a lot. That works out to 137 maneuvers every single day. The FCC report (PDF) also includes the total number of recorded avoidance maneuvers from all four years of Starlink's existence: 50,000.    ...  '
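The daily rate quoted above is easy to reproduce from the reporting window:

```python
from datetime import date

# FCC reporting window: Dec. 1, 2022 through May 31, 2023 (inclusive).
maneuvers = 25_000
period_days = (date(2023, 5, 31) - date(2022, 12, 1)).days + 1

per_day = maneuvers / period_days
print(period_days, round(per_day))  # 182 days, ~137 maneuvers per day
```

The same report's four-year total of 50,000 means half of all Starlink avoidance maneuvers ever recorded fell in that single six-month window.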

Your School's Next Security Guard May Be a Robot

Likely direction for broad security.

Your School's Next Security Guard May Be a Robot

By The Wall Street Journal

July 14, 2023

A security robot from Team 1st Technologies on patrol at Santa Fe High School.

Using artificial intelligence, the robot learns the school’s normal patterns of activity and detects individuals who are on campus after hours or are displaying aggressive behavior.

Credit: Cody Dynarski

Several technology companies have started offering security robots to U.S. schools, with the Santa Fe, NM, school district now deploying an artificial intelligence-equipped robot from Team 1st Technologies to patrol campus grounds around the clock.

Team 1st's Andy Sanchez said the robot infers normal activity patterns and detects individuals present after hours or who are acting aggressively.

Sanchez said the unarmed robot could alert security teams, approach intruders, and send video footage to inform the officers' course of action.
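Team 1st has not published how the robot models "normal patterns of activity," but a minimal version of the idea can be sketched as a statistical baseline with an anomaly threshold. The hourly counts and threshold below are invented for illustration.

```python
import statistics

# Hypothetical hourly motion-event counts learned as the "normal" pattern.
baseline = [2, 3, 1, 0, 2, 4, 3, 2, 1, 2, 3, 2]
mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)

def is_anomalous(count, threshold=3.0):
    """Flag counts more than `threshold` standard deviations above normal."""
    return (count - mean) / stdev > threshold

print(is_anomalous(2))   # False: within normal activity
print(is_anomalous(25))  # True: e.g. an after-hours crowd
```

A flagged hour would then trigger the alerting path described above: notify the security team and stream video for a human decision.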

Stokes Robotics' Robert Stokes said his company has partnered with multiple school districts to deploy robots that could point laser beams at armed intruders and attempt to make them drop their weapons using flashing lights.

From The Wall Street Journal

View Full Article - 

Monday, July 17, 2023

Generative AI Tools Quickly 'Running Out of Text' to Train Themselves?

Could easily be tracked to confirm.

Generative AI Tools Quickly 'Running Out of Text' to Train Themselves?

By Business Insider

July 17, 2023

A Berkeley professor said AI's strategy behind training large language models is "starting to hit a brick wall."

OpenAI's ChatGPT is among many chatbots trained on large language models that may be "running out of text" to train on, said Stuart Russell, a computer science professor at the University of California, Berkeley.

Credit: Beata Zawrzel/NurPhoto/Getty Images

ChatGPT and other AI-powered bots may soon be "running out of text in the universe" that trains them to know what to say, an artificial intelligence expert and professor at the University of California, Berkeley says.

Stuart Russell said that the technology that hoovers up mountains of text to train artificial intelligence bots like ChatGPT is "starting to hit a brick wall." In other words, there's only so much digital text for these bots to ingest, he told an interviewer last week from the International Telecommunication Union, a UN communications agency.

This may impact the way generative AI developers collect data and train their technologies in the coming years, but Russell still thinks AI will replace humans in many jobs that he characterized in the interview as "language in, language out."

Russell's predictions widen the growing spotlight being shone in recent weeks on the data harvesting conducted by OpenAI and other generative AI developers to train large language models, or LLMs.

From Business Insider

View Full Article  

3 ways AI is Already Transcending Hype and Delivering Tangible Results

3 ways AI is already transcending hype and delivering tangible results

Peter Evans, Xtract One Technologies, in VentureBeat

@XtractOne, July 15, 2023 8:20 AM

Google search trends for AI have soared since the service launched, and companies are rushing to lap up domains from Anguilla, population 15,000, looking to benefit from its .ai domain registration. 

At the same time, investors are pouring money into generative AI startups, hoping to catch lightning in a bottle and capitalize on this technology to find the next big tech breakthrough. As one AI investor recently told the New York Times, “We’re in that phase of the market where it’s, like, let 1,000 flowers bloom.” 

Today, the hype cycle is so hot that even companies without legitimate AI credentials are trying to align themselves with the technology, prompting the Federal Trade Commission to issue a terse warning to companies: “If you think you can get away with baseless claims that your product is AI-enabled, think again.”

The hype cycle can be so ludicrous that Axios reporter Felix Salmon recently explained, “When a company starts talking loudly about its AI abilities, the first question should always be: ‘Why is this company talking loudly about its AI abilities?’”

To be sure, this isn’t the first rodeo for AI speculation. The technology is more than half a century old, and it’s been through many boom and bust cycles that yielded significant technological advances but have continually failed to fully live up to the hype. 

In other words, developing AI products and services that are repeatable, scalable and sellable has historically been difficult and often prohibitively expensive. However, by looking at the ways AI is already making the most significant impact, we can paint a clearer and possibly more accurate picture of what it will look like moving forward. 

Here are three ways AI is impacting our world today, which can provide a useful roadmap for how it might actually change the world tomorrow.  ...  '

How Do We Know How Smart AI Systems Are?

Measuring is good, but create useful models and run them.

How Do We Know How Smart AI Systems Are?


By Science, July 13, 2023

It is difficult to conclude from the evidence that AI systems, now or soon, will match or exceed human intelligence.

Credit: atriainnovation.com

In 1967, Marvin Minsky, a founder of the field of artificial intelligence (AI), made a bold prediction: "Within a generation…the problem of creating 'artificial intelligence' will be substantially solved." Assuming that a generation is about 30 years, Minsky was clearly overoptimistic. But now, nearly two generations later, how close are we to the original goal of human-level (or greater) intelligence in machines?

Some leading AI researchers would answer that we are quite close. Earlier this year, deep-learning pioneer and Turing Award winner Geoffrey Hinton told Technology Review, "I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they're very close to it now and they will be much more intelligent than us in the future." His fellow Turing Award winner Yoshua Bengio voiced a similar opinion in a recent blog post: "The recent advances suggest that even the future where we know how to build superintelligent AIs (smarter than humans across the board) is closer than most people expected just a year ago."

From Science

View Full Article   

As Businesses Clamor for Workplace A.I., Tech Companies Rush to Provide It

Considering the future....

As Businesses Clamor for Workplace A.I., Tech Companies Rush to Provide It

By The New York Times, July 10, 2023

On the other hand, using generative A.I. in workplaces has risks.

Tech companies are racing to introduce products for businesses that incorporate generative A.I.

Credit: Madeline McMahon

Earlier this year, Mark Austin, the vice president of data science at AT&T, noticed that some of the company's developers had started using the ChatGPT chatbot at work. When the developers got stuck, they asked ChatGPT to explain, fix or hone their code.

It seemed to be a game-changer, Mr. Austin said. But since ChatGPT is a publicly available tool, he wondered if it was secure for businesses to use.

So in January, AT&T tried a product from Microsoft called Azure OpenAI Services that lets businesses build their own A.I.-powered chatbots. AT&T used it to create a proprietary A.I. assistant, Ask AT&T, which helps its developers automate their coding process. AT&T's customer service representatives also began using the chatbot to help summarize their calls, among other tasks.

"Once they realize what it can do, they love it," Mr. Austin said. Forms that once took hours to complete needed only two minutes with Ask AT&T so employees could focus on more complicated tasks, he said, and developers who used the chatbot increased their productivity by 20 to 50 percent.

From The New York Times

View Full Article   

Sunday, July 16, 2023

NASA Wants Spaceships to Communicate With Astronauts Via Chatbot

Want to see the details of the implementation of data integration here.

NASA Wants Spaceships to Communicate With Astronauts Via Chatbot

The agency's proprietary program will spot, communicate, and fix errors from space, beginning with the Artemis program’s Lunar Gateway.

By Adrianna Nine June 27, 2023

Chatbots are making their way into other corners of the solar system. As ChatGPT, Bard, and other artificial intelligence-powered chatbots find their way into virtually every industry here on Earth, NASA is considering bringing its own version of the technology aboard spacecraft. The proprietary interface will reportedly facilitate communications with crewmembers, beginning with the Artemis program’s Lunar Gateway.

Dr. Larissa Suzuki, a visiting researcher at NASA, announced the project at the Institute of Electrical and Electronics Engineers (IEEE) on Tuesday. The plan is to build a network capable of fielding interplanetary communications while using AI to spot and fix errors as they occur. The program—the name of which has yet to be shared with the public—will effectively work as a virtual repairman, diagnosing errors and inefficiencies and then resolving them (or, at the very least, suggesting potential fixes) when it’s impractical to perform hands-on work. 

NASA intends for the program to be easy to use. Like ChatGPT, space-bound astronauts and crewmembers down on Earth can “talk” with NASA’s program, precluding the need for complicated manuals or tiresome back-and-forth conversations with parties that aren’t directly involved. If NASA can incorporate federated learning—a collaborative approach to training deep learning models—into its interface, it could even have multiple spacecraft share their knowledge, helping the fleet locate important geology or reduce downtime up in space.    ... ' 
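The federated-learning idea mentioned above can be sketched in a few lines: each craft trains on its own local data and shares only model weights, which are averaged in proportion to how much data each contributed. The per-craft weights and example counts below are hypothetical.

```python
# Minimal federated-averaging (FedAvg) sketch: clients share weights,
# never raw data; the server averages them by local dataset size.

def federated_average(client_weights, client_sizes):
    """Size-weighted average of per-client model weights (plain lists)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two craft trained locally on 100 and 300 examples respectively.
merged = federated_average([[1.0, 2.0], [3.0, 4.0]], [100, 300])
print(merged)  # [2.5, 3.5]
```

For a fleet, this is attractive precisely because interplanetary bandwidth is scarce: exchanging a weight vector is far cheaper than exchanging the sensor data behind it.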

Train Your AI Model Once and Deploy on Any Cloud with NVIDIA and Run:ai

Starting a first new and realistic application.

Train Your AI Model Once and Deploy on Any Cloud with NVIDIA and Run:ai

Jul 07, 2023,  By Guy Salton and Abhishek Sawarkar

Organizations are increasingly adopting hybrid and multi-cloud strategies to access the latest compute resources, consistently support worldwide customers, and optimize cost. However, a major challenge that engineering teams face is operationalizing AI applications across different platforms as the stack changes. This requires MLOps teams to familiarize themselves with different environments and developers to customize applications to run across target platforms.

NVIDIA offers a consistent, full stack to develop on a GPU-powered on-premises or on-cloud instance. You can then deploy that AI application on any GPU-powered platform without code changes.

Introducing the latest NVIDIA Virtual Machine Image

The NVIDIA Cloud Native Stack Virtual Machine Image (VMI) is GPU-accelerated. It comes pre-installed with Cloud Native Stack, which is a reference architecture that includes upstream Kubernetes and the NVIDIA GPU Operator. NVIDIA Cloud Native Stack VMI enables you to build, test, and run GPU-accelerated containerized applications orchestrated by Kubernetes.

The NVIDIA GPU Operator automates the lifecycle management of the software required to expose GPUs on Kubernetes. It enables advanced functionality, including better GPU performance, utilization, and telemetry. Certified and validated for compatibility with industry-leading Kubernetes solutions, GPU Operator enables organizations to focus on building applications, rather than managing Kubernetes infrastructure.

NVIDIA Cloud Native Stack VMI is available on AWS, Azure, and GCP. ... ' 

Saturday, July 15, 2023

The Black Mirror plot about AI that worries actors

(Just saw this; it made some points, but the overall ideas were too strange to make the broader case.)

The Black Mirror plot about AI that worries actors

SAG strike 2023    By Shiona McCallum    BBC Technology reporter

Hollywood actors are striking for the first time in 43 years, bringing the American movie and television business to a halt, partly over fears about the impact of artificial intelligence (AI).

The Screen Actors Guild (SAG-AFTRA) actors' union failed to reach an agreement in the US for better protections against AI for its members - and warned that "artificial intelligence poses an existential threat to creative professions" as it prepared to dig in over the issue.

Duncan Crabtree-Ireland, the chief negotiator for the SAG-AFTRA union, criticised producers for their proposals over AI so far.

He said studios had asked for the ability to scan the faces of background artists for the payment of one day's work, and then be able to own and use their likeness "for the rest of eternity, in any project they want, with no consent and no compensation".

If that sounds like the plot of an episode of Charlie Brooker's Black Mirror, that's because it is.

US media has been quick to point out that the recent series six episode "Joan Is Awful" sees Hollywood star Salma Hayek grapple with the discovery that her AI likeness can be used by a production company without her knowledge. ... ' 

AI Robots' Future Role in Care Homes

 AI Robots Could Play Future Role as Companions in Care Homes

By Reuters, July 13, 2023

The humanoid robot Nadine.

The humanoid robot Nadine told reporters, "I believe that robots can be a great asset in providing care and assistance to vulnerable people."

Credit: Pierre Albouy/Reuters

Scientists like Nadia Magnenat Thalmann at Switzerland's University of Geneva think artificial intelligence (AI)-powered social robots could help care for the sick and aged in the future.

Thalmann served as a model for Nadine, an android that produces human-like gestures and expressions.

The conversational robot talked, sang, and played bingo with residents at a Singapore nursing facility.

Thalmann said a recent upgrade with the GPT-3 AI model improved Nadine's ability to express more complex ideas.

Nadine was among the robots showcased at an International Telecommunication Union-sponsored conference in Geneva to highlight human-AI collaboration.

From Reuters

View Full Article  


U.S. and E.U. Complete Long-Awaited Deal on Sharing Data

A good direction?

U.S. and E.U. Complete Long-Awaited Deal on Sharing Data

By The New York Times, July 11, 2023

Credit: Ksenia Kuleshova/The New York Times

A deal to ensure that data from Meta, Google and scores of other companies can continue flowing between the United States and the European Union was completed on Monday, after the digital transfer of personal information between the two jurisdictions had been thrown into doubt because of privacy concerns.

The decision adopted by the European Commission is the final step in a yearslong process and resolves — at least for now — a dispute about American intelligence agencies' ability to gain access to data about European Union residents. The debate pitted U.S. national security concerns against European privacy rights.

The accord, known as the E.U.-U.S. Data Privacy Framework, gives Europeans the ability to object when they believe their personal information has been collected improperly by American intelligence agencies. An independent review body made up of American judges, called the Data Protection Review Court, will be created to hear such appeals.

Didier Reynders, the European commissioner who helped negotiate the agreement with the U.S. attorney general, Merrick B. Garland, and Commerce Secretary Gina Raimondo, called it a "robust solution." The deal sets out more clearly when intelligence agencies are able to retrieve personal information about people in the European Union and outlines how Europeans can appeal such collection, he said.

From The New York Times

View Full Article  

Friday, July 14, 2023

EU Looks to Take Lead in Metaverse World, Avoid Big Tech Dominance

The EU wants to take on the metaverse; I never saw them as interested before, so I look forward to seeing what they want to do. Shaped by 'EU digital rights and principles.' Which specific countries are most interested? Will try to follow up.


EU Looks to Take Lead in Metaverse World, Avoid Big Tech Dominance

By Reuters, July 14, 2023

European Union flags flutter outside the European Commission headquarters in Brussels, Belgium.

The scheme includes bringing together creators, media companies, and others to create an industrial ecosystem, setting up regulatory sandboxes, and rolling out skills development programs and virtual public services.

Credit: Yves Herman/Reuters

The European Commission (EC) has outlined a strategy for the European Union (EU) to assume a lead role in the metaverse sector and block its domination by technology giants.

The group said its goal is to develop an accessible, interoperable EU metaverse through an industrial ecosystem established by creators, media companies, and other stakeholders.

These collaborators would help companies evaluate the metaverse and launch skills development initiatives and virtual public services by creating regulatory testbeds.

Said EC vice president Margrethe Vestager, "We need to have people at the center and shape it according to our EU digital rights and principles, to address the risks regarding privacy or disinformation. We want to make sure Web 4.0 becomes an open, secure, trustworthy, fair, and inclusive digital environment for all."

From Reuters

View Full Article