Thursday, August 10, 2023

Complying with Microdirectives

Representatives of OpenAI declined to comment on companies' privacy concerns.

Generative AI tools such as OpenAI’s ChatGPT have been heralded as pivotal for the world of work, but the technology is creating a formidable challenge for corporate America.

Proponents of OpenAI's ChatGPT and other generative artificial intelligence tools contend that they can boost workplace productivity, automating certain tasks and assisting with problem-solving, but some corporate leaders have banned their use over concerns about exposing sensitive company and customer information.

These leaders are concerned that employees could upload proprietary or sensitive data into the chatbot, which would be added to a database used to train it, allowing hackers or competitors to ask the chatbot for that information.

A post on OpenAI's website said private mode allows ChatGPT users to keep their prompts out of its training data.

Massachusetts Institute of Technology's Yoon Kim said that while it is technically possible, guardrails implemented by OpenAI prevent ChatGPT from using sensitive prompts in its training data.

Kim added that the vast amount of data needed by ChatGPT to learn would make it difficult for hackers to access proprietary data entered as a prompt.

From The Washington Post

View Full Article - May Require Paid Subscription

Monday, July 31, 2023

Complying with Microdirectives

Schneier offers some useful thoughts.

AI and Microdirectives

Imagine a future in which AIs automatically interpret—and enforce—laws.

All day and every day, you constantly receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online—if you’re in any situation that might have legal implications, you’re told exactly what to do, in real time.

Imagine that the computer system formulating these personal legal directives at mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it’ll be used as evidence in the prosecution that’s sure to follow.

This future may not be far off—automatic detection of lawbreaking is nothing new. Speed cameras and traffic-light cameras have been around for years. These systems automatically issue citations to the car’s owner based on the license plate. In such cases, the defendant is presumed guilty unless they prove otherwise, by naming and notifying the driver.  ... '

AI-Powered Brain Surgery Becomes A Reality In Hong Kong

 ACM NEWS

AI-Powered Brain Surgery Becomes A Reality In Hong Kong

By South China Morning Post (Hong Kong)

July 14, 2023

Robotic surgery equipment.

Robotics are being used increasingly for surgical procedures, especially for those considered minimally invasive.

A Hong Kong-based research centre under the Chinese Academy of Sciences (CAS), China's national research institute, plans to launch a robotics system for brain surgery in the near future, despite challenges from a shortage of talent and artificial intelligence (AI) chips.

The Centre for Artificial Intelligence and Robotics (CAIR), established in 2019, has completed three successful cadaver trials with its MicroNeuro robot, which can perform deep brain surgery "in a minimally invasive manner", Liu Hongbin, the centre's executive director, told the Post in an interview on Thursday.

The main approach today requires surgeons to operate with rigid tools and open large windows on a patient's scalp, which damages a lot of healthy brain tissue, Liu said.

"Brain surgery is a type of surgery that needs technology the most because it's a very dangerous procedure," Liu said. "Surgeons really want to use AI and tech innovation to make this type of procedure much less invasive than it is now."

From South China Morning Post (Hong Kong)

View Full Article  

Saturday, July 29, 2023

Quantum Twist on Common Computer Algorithm Promises Speed Boost

Quantum Twist on Common Computer Algorithm Promises Speed Boost

By New Scientist, July 14, 2023

An IBM quantum computer.

Mazzola stresses the team is not yet claiming quantum advantage; the result demonstrates future potential, rather than current ability.

Credit: IBM

Scientists at Switzerland's University of Zurich (UZH) and IBM have demonstrated that a quantum version of the popular Monte Carlo algorithm could eventually overtake versions running on classical computers.

However, the researchers explained, attaining this speed advantage would probably require a quantum system with at least 1,000 quantum bits.

Said UZH's Guglielmo Mazzola, "If this works, it's going to enhance, by a lot, the way in which we model systems and that, in turn, will allow us to make better predictions in a wide range of fields."

However, he acknowledged that "we cannot exclude that our classical friends can devise something even better."

From New Scientist

View Full Article
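
As a refresher on what is being accelerated: a classical Monte Carlo method estimates a quantity by averaging over random samples, with statistical error shrinking roughly as 1/sqrt(N), which is why even a per-sample quantum speedup could matter for expensive simulations. A toy Python illustration (estimating pi, not the UZH/IBM simulation itself):

```python
import random

def estimate_pi(num_samples: int) -> float:
    """Estimate pi by sampling points in the unit square and
    counting the fraction that land inside the quarter circle."""
    inside = 0
    for _ in range(num_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_samples

# Error falls off roughly as 1/sqrt(N), so each extra digit of
# accuracy costs about 100x more samples on a classical machine.
print(estimate_pi(1_000_000))  # ~3.14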

 

Want to Win a Chip War? You're Gonna Need a Lot of Water

Want to Win a Chip War? You're Gonna Need a Lot of Water

By Wired, July 21, 2023

The chip industry’s thirst for water springs from the need to keep silicon wafers free from even the tiniest specks of dust or debris to prevent contamination of their microscopic components.

Credit: Bill Varie/Getty Images

Building a semiconductor factory requires enormous quantities of land and energy, and then some of the most precise machinery on Earth to operate. The complexity of chip fabs, as they are called, is one reason why the U.S. Congress last year committed more than $50 billion to boost U.S. chip production in a bid to make the country more technologically independent.

But as the U.S. seeks to boot up more fabs, it also needs to source more of a less obvious resource: water. Take Intel's ambitious plan to build a $20 billion mega-site outside Columbus, Ohio. The area already has three water plants that together provide 145 million gallons of drinking water each day, but officials are planning to spend heavily on a fourth to, at least in part, accommodate Intel.

Water might not sound like a conventional ingredient of electronics manufacturing, but it plays an essential role in cleaning the sheets, or wafers, of silicon that are sliced and processed into computer chips. A single fab might use millions of gallons in a single day, according to the Georgetown Center for Security and Emerging Technology (CSET)—about the same amount of water as a small city uses in a year.

Chip companies hoping to take advantage of the CHIPS and Science Act, last year's federal spending package aiming to boost U.S. chip manufacturing, are now constructing new water processing facilities alongside their fabs. And cities trying to attract new factories funded by the legislation are studying the potential impact on their water supplies. In some places it may be necessary to secure the water supply; in others, new infrastructure must be installed to recycle water used by fabs.

From Wired

View Full Article  

Google's AI Red Team: the Ethical Hackers Making AI Safer

 Interesting piece.

Google's AI Red Team: the ethical hackers making AI safer

July 19, 2023, 3 min read

Today, we're publishing information on Google’s AI Red Team for the first time.

Daniel Fabian, Head of Google Red Teams

Last month, we introduced the Secure AI Framework (SAIF), designed to help address risks to AI systems and drive security standards for the technology in a responsible manner.

To build on this momentum, today, we’re publishing a new report to explore one critical capability that we deploy to support SAIF: red teaming. We believe that red teaming will play a decisive role in preparing every organization for attacks on AI systems and look forward to working together to help everyone utilize AI in a secure way. The report examines our work to stand up a dedicated AI Red Team and includes three important areas: 1) what red teaming in the context of AI systems is and why it is important; 2) what types of attacks AI red teams simulate; and 3) lessons we have learned that we can share with others.

What is red teaming?

The Google Red Team is a team of hackers who simulate a variety of adversaries, ranging from nation states and well-known Advanced Persistent Threat (APT) groups to hacktivists, individual criminals, or even malicious insiders. The term comes from the military, where it described activities in which a designated team would play an adversarial role (the “Red Team”) against the “home” team.

For a closer look at Google’s security Red Team, watch the above video.

Over the past decade, we’ve evolved our approach to translate the concept of red teaming to the latest innovations in technology, including AI. The AI Red Team is closely aligned with traditional red teams, but also has the necessary AI subject matter expertise to carry out complex technical attacks on AI systems. To ensure that they are simulating realistic adversary activities, our team leverages the latest insights from world class Google Threat Intelligence teams like Mandiant and the Threat Analysis Group (TAG), content abuse red teaming in Trust & Safety, and research into the latest attacks from Google DeepMind. .... '

Tuesday, July 25, 2023

Amazon Cashless 'Pay by Palm' Technology Requires Only a Hand Wave

Another area we examined for retail tech.

Amazon Cashless 'Pay by Palm' Technology Requires Only a Hand Wave

By CBS News, July 21, 2023

Paying with palm recognition.

According to Amazon, palm payment is secure and cannot be replicated because the technology looks at both the palm and the underlying vein structure to create unique "palm signatures" for each customer.

Credit: Amazon

Retail giant Amazon has announced a new contactless transaction service that allows shoppers to pay with their palms.

Users can enable transactions by hovering their palms over an Amazon One device, which can facilitate payment, identification, loyalty program membership, and entry.  Amazon said palm payment is impossible to replicate because the system creates unique "palm signatures" for each customer by examining the palm and the underlying vein arrangement.

Each palm signature, the company added, corresponds to a numerical vector representation, and is securely warehoused in the Amazon Web Services cloud.

The technology is already available at 200 Amazon locations in 20 U.S. states, and the company intends to deploy it at more than 500 Whole Foods and Amazon Fresh outlets by year's end.

From CBS News

View Full Article  
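
Amazon has not published the internals of Amazon One, but "a palm signature stored as a numerical vector" maps onto a standard biometric-embedding pattern: embed each scan as a unit vector and match by cosine similarity. A purely illustrative Python sketch, with a stub standing in for the real palm-and-vein feature extractor:

```python
import numpy as np

def embed_palm(scan: np.ndarray) -> np.ndarray:
    """Stub feature extractor: a real system would run palm and vein
    imagery through a trained model. Output is a unit-length vector."""
    vec = scan.astype(np.float64).ravel()
    return vec / np.linalg.norm(vec)

enrolled = {}  # customer id -> stored palm-signature vector

def enroll(customer_id: str, scan: np.ndarray) -> None:
    enrolled[customer_id] = embed_palm(scan)

def identify(scan: np.ndarray, threshold: float = 0.95):
    """Return the best-matching enrolled customer, or None if no
    stored signature clears the cosine-similarity threshold."""
    probe = embed_palm(scan)
    best_id, best_sim = None, threshold
    for cid, sig in enrolled.items():
        sim = float(probe @ sig)  # cosine similarity of unit vectors
        if sim > best_sim:
            best_id, best_sim = cid, sim
    return best_id

rng = np.random.default_rng(0)
alice_scan = rng.random((32, 32))
enroll("alice", alice_scan)
print(identify(alice_scan + rng.normal(0, 0.01, alice_scan.shape)))
```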

MIT Makes Probability-Based Computing a Bit Brighter

MIT Makes Probability-Based Computing a Bit Brighter

The p-bit harnesses photonic randomness to explore a new computing frontier

By Edd Gent and Margo Anderson

In a noisy and imprecise world, the definitive 0s and 1s of today’s computers can get in the way of accurate answers to messy real-world problems. So says an emerging field of research pioneering a kind of computing called probabilistic computing. And now a team of researchers at MIT has demonstrated a new way of generating probabilistic bits (p-bits) at much higher rates—using photonics to harness random quantum oscillations in empty space.

The deterministic way in which conventional computers operate is not well-suited to dealing with the uncertainty and randomness found in many physical processes and complex systems. Probabilistic computing promises to provide a more natural way to solve these kinds of problems by building processors out of components that behave randomly themselves.

The approach is particularly well-suited to complicated optimization problems with many possible solutions or to doing machine learning on very large and incomplete datasets where uncertainty is an issue. Probabilistic computing could unlock new insights and findings in meteorology and climate simulations, for instance, or spam detection and counterterrorism software, or next-generation AI.

The team can now generate 10,000 p-bits per second. Is the p-circuit next?

The fundamental building blocks of a probabilistic computer are known as p-bits and are equivalent to the bits found in classical computers, except they fluctuate between 0 and 1 based on a probability distribution. So far, p-bits have been built out of electronic components that exploit random fluctuations in certain physical characteristics.

But in a new paper published in the latest issue of the journal Science, the MIT team has created the first-ever photonic p-bit. The attraction of using photonic components is that they operate much faster and are considerably more energy efficient, says Charles Roques-Carmes, a science fellow at Stanford University and visiting scientist at MIT, who worked on the project while he was a postdoc there. “The main advantage is that you could generate, in principle, very many random numbers per second,” he adds. ... '
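
The p-bit behavior described above—a bit that fluctuates between 0 and 1 according to a probability distribution—is easy to mimic in software. A toy Python sketch (a pseudorandom stand-in; the MIT device draws its randomness from photonic vacuum fluctuations):

```python
import random

def p_bit(prob_one: float) -> int:
    """A toy p-bit: returns 1 with probability prob_one, else 0.
    Hardware p-bits get this randomness from physics; in software
    we substitute a pseudorandom number generator."""
    return 1 if random.random() < prob_one else 0

# Draw 10,000 samples from a p-bit biased 70/30 toward 1,
# echoing the 10,000 p-bits per second quoted above.
samples = [p_bit(0.7) for _ in range(10_000)]
print(sum(samples) / len(samples))  # ~0.7
```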

More than 1,300 Experts call AI a Force for good


More than 1,300 experts call AI a force for good

Published 4 days ago, by Chris Vallance, Technology reporter

An open letter signed by more than 1,300 experts says AI is a "force for good, not a threat to humanity".

It was organised by BCS, the Chartered Institute for IT, to counter "AI doom".

Rashik Parmar, BCS chief executive, said it showed the UK tech community didn't believe the "nightmare scenario of evil robot overlords".

In March, tech leaders including Elon Musk, who recently launched an AI business, signed a letter calling for a pause in developing powerful systems.

That letter suggested super-intelligent AI posed an "existential risk" to humanity. This was a view echoed by film director Christopher Nolan, who told the BBC that AI leaders he spoke to saw the present time "as their Oppenheimer moment". J. Robert Oppenheimer played a key role in the development of the first atomic bomb, and is the subject of Mr Nolan's latest film.

But the BCS sees the situation in a more positive light, while still supporting the need for rules around AI.

Richard Carter is a signatory to the BCS letter. Mr Carter, who founded an AI-powered startup cybersecurity business, feels the dire warnings are unrealistic: "Frankly, this notion that AI is an existential threat to humanity is too far-fetched. We're just not in any kind of a position where that's even feasible".

Signatories to the BCS letter come from a range of backgrounds - business, academia, public bodies and think tanks, though none are as well known as Elon Musk, or run major AI companies like OpenAI.

Those the BBC has spoken to stress the positive uses of AI. Hema Purohit, who leads on digital health and social care for the BCS, said the technology was enabling new ways to spot serious illness, for example medical systems that detect signs of issues such as cardiac disease or diabetes when a patient goes for an eye test.

She said AI could also help accelerate the testing of new drugs.

Signatory Sarah Burnett, author of a book on AI and business, pointed to agricultural uses of the tech, from robots that use artificial intelligence to pollinate plants to those that "identify weeds and spray or zap them with lasers, rather than having whole crops sprayed with weed killer". ... ' 

Monday, July 24, 2023

Biosensor Offers Real-Time Dialysis Feedback

Biosensor Offers Real-Time Dialysis Feedback

By IEEE Spectrum, July 20, 2023

Hemodialysis is a vital procedure for people with kidney failure, but it requires multiple lengthy clinic visits every week.

A new biosensor provides real-time feedback on the filtering rate as blood is circulated from a patient to the dialysis machine and back.

Researchers at Iran's Shahrood University of Technology (SUT) engineered a new biosensor to expedite hemodialysis procedures by providing dialysis feedback in real time.

The electromagnetic bandgap structure biosensor uses microwaves to analyze the waste and toxin levels of the patient's blood during dialysis.

Experiments using fake blood showed the biosensor could identify relative alterations in blood permittivity across samples, suggesting real-time feedback during dialysis was possible.

SUT's Javad Ghalibafan said the low-cost, low-power device does not disrupt the procedure, and could shorten dialysis sessions if it detects that the patient's blood toxin concentrations are sufficiently low to end the session early.

From IEEE Spectrum

View Full Article  

EU rules on AI must do more to protect human rights, NGOs warn

EU rules on AI must do more to protect human rights, NGOs warn

The group fears lobbyists might succeed in their efforts to water down the proposed AI Act

A group of 150 NGOs including Human Rights Watch, Amnesty International, Transparency International, and Algorithm Watch has signed a statement addressed to the European Union. In it, they entreat the bloc not only to maintain but enhance human rights protection when adopting the AI Act. 

Between the apocalypse-by-algorithm and the cancer-free utopia different camps say the technology could bring, lies a whole spectrum of pitfalls to avoid for the responsible deployment of AI.  ... ' 

IBM and the Future of AI

 IBM misses on revenue but sees AI leading a new round of growth

BY PAUL GILLIN

IBM Corp., which is typically the first major information technology company to report earnings each quarter, today beat profit expectations and narrowly missed revenue forecasts in its fiscal first quarter but set an optimistic tone for the rest of the year.

The company cited double-digit revenue growth in two strategic business areas — Red Hat hybrid cloud and artificial intelligence — while saying that the demand by enterprises across the globe for AI has significant potential upside for both its software and consulting businesses. Red Hat revenue rose 11% and revenue from data and AI gained 10% from a year earlier.

Total quarterly revenue fell 0.4% from a year ago, to $15.47 billion, below consensus estimates of $15.58 billion. However, earnings of $2.18 a share beat analysts’ expectations of $2.01, although they fell below the $2.31 a share earned in the same quarter last year.

Gross profit margin of 54.9% was up 1.6 points, and operating profit margin grew 1.4 points, to 55.9%. Executives reiterated expectations of between 3% and 5% revenue growth this year. IBM has said both metrics are key performance indicators for 2023.

IBM’s stock fell a little over 1% in after-hours trading following the earnings report.

‘Solid execution’

“Continued solid execution of our AI and hybrid cloud strategy makes us confident in our ability to achieve our full-year expectations for free cash flow and revenue,” said Chief Executive Officer Arvind Krishna.

IBM reported an 8% jump in software revenue on a constant currency basis, with consulting revenue up 6% and infrastructure revenue down 14%. “We have good momentum in our underlying operational product performance,” said Chief Financial Officer James Kavanaugh.

Although infrastructure revenue fell 14% in line with normal mainframe product cycles, Kavanaugh said IBM is seeing unusual resilience in the online transaction processing market. “We saw an inflection shift in OLTP in 2022,” he said. “We have a much-extended opportunity base to go get that revenue.”

“IBM is hitting the double headwinds of a slowing economy and challenging currency exchange rate developments,” said Holger Mueller, principal analyst at Constellation Research Inc. “The good news is that Red Hat is holding up with 11% growth and IBM is gaining a second leg from the strong momentum of its data and AI portfolio.”

Pund-IT Inc. Chief Analyst Charles King concurred that the company’s strategic products are holding up well. “Red Hat continues to deliver the goods in software and is also central to the company’s hybrid cloud strategy and solutions,” he said. “The best news was a 24% year-over-year jump in consulting signings among both large and small enterprise customers.”

Generative AI opportunity

IBM executives said excitement over generative AI has a big upside for the company. Krishna said he was “very excited” by the initial reaction to its May announcement of WatsonX, a product suite designed to help companies more easily build and deploy artificial intelligence models.

The CEO compared WatsonX to Red Hat’s OpenShift, which debuted in 2019 and has roughly doubled in revenue each year. “We have quantified OpenShift at $1.1 billion on an annualized run rate basis,” he said. “It gives you a sense of the excitement we have around these [AI] projects.”

Pund-IT’s King concurred, saying the participation of more than 150 customers in the development of the WatsonX platform “suggests that demand for enterprise-class generative AI solutions will be strong.”

Although the underwhelming market performance of the Watson AI platform since its victory on “Jeopardy!” more than a decade ago has largely sidelined IBM as a market leader in AI, “the company has systematically worked through most of the serious problems that are tripping up new AI platforms,” King said.

He pointed to the company’s work in developing large language model tools and data sets, addressing data privacy and security concerns and building a set of ethical standards to follow in AI development. “IBM’s AI efforts may not be well-known when it comes to generating error-ridden reports and term papers, but the company is steadily adding AI-enabled features and functions that measurably improve the performance of applications and solutions that its enterprise customers depend on,” he said.

Krishna said interest in AI is strong across the globe, with North America, Western Europe and parts of South America leading the charge. “The list of use cases includes IT operations, improved automation, customer service, augmenting human resources, predictive maintenance, compliance monitoring, security, sales, management and supply chain amongst others,” he said. “In the same way we built a consulting practice around Red Hat that is measured in the billions of dollars, we will do the same with AI.”

Introducing OpenAI London

 Good site provides intro.

https://openai.com/blog/introducing-openai-london

Introducing OpenAI London

We are excited to announce OpenAI’s first international expansion with a new office in London, United Kingdom.   ... ' 


Sunday, July 23, 2023

Meta launches Llama 2 open-source LLM

The march continues.

Meta launches Llama 2 open-source LLM

By Ryan Daws | July 19, 2023

Categories: Companies, Development, Machine Learning, Meta (Facebook)

Meta has introduced Llama 2, an open-source family of AI language models which comes with a license allowing integration into commercial products.

The Llama 2 models range in size from 7 billion to 70 billion parameters, making them a formidable force in the AI landscape.

According to Meta’s claims, these models “outperform open source chat models on most benchmarks we tested.”

The release of Llama 2 marks a turning point in the LLM (large language model) market and has already caught the attention of industry experts and enthusiasts alike.

The new language models offered by Llama 2 come in two variants – pretrained and fine-tuned:

The pretrained models are trained on a whopping two trillion tokens and have a context window of 4,096 tokens, enabling them to process vast amounts of content at once.

The fine-tuned models, designed for chat applications like ChatGPT, have been trained on “over one million human annotations,” further enhancing their language processing capabilities.

While Llama 2’s performance may not yet rival OpenAI’s GPT-4, it shows remarkable promise for an open-source model.

The long-awaited sequel, Llama-2 is announced today! It's the best OSS model we have now.  ... ' 
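
For readers who want to try the fine-tuned chat variant, here is a minimal sketch using the Hugging Face transformers library. It assumes you have requested and been granted access to the gated Llama 2 weights, and that the checkpoint name below is still current:

```python
# Minimal sketch with Hugging Face transformers (plus the
# accelerate package for device_map="auto"); access to the
# gated Llama 2 checkpoint must be requested from Meta first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain what a context window is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Prompt tokens plus generated tokens must fit in the 4,096-token
# context window mentioned above.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```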

Considerable Bard Updates

Google Bard Updated With Text to Speech, 40 New Languages

Bard is getting more capable, but it's still not trustworthy.

By Ryan Whitwam July 14, 2023

Bard AI

Credit: Google

Google was caught off-guard earlier this year when Microsoft decided to make generative AI its new focus. Despite inventing the transformer algorithms that make bots like ChatGPT possible, Google's answer to ChatGPT stumbled out of the gate. Google has been updating its Bard AI consistently in recent months, and the latest update is a big one. Finally, Bard can speak its replies.

Bard is a text-based generative AI, which means you can ask it anything, and it'll give you a response. It can do that because Bard has ingested a huge amount of written content, so it has the uncanny ability to generate text that sounds like a (boring) human being wrote it. On the flip side, some of the text Bard creates is not grounded in reality. So far, no one has figured out how to prevent these "hallucinations" in generative AI—ChatGPT suffers from the same shortcoming.

Google says it has now fed Bard enough content in an assortment of languages to expand the bot's services. It now works in 40 new languages, including Arabic, Chinese, German, and Spanish. As part of the multilinguistic update, Bard has also expanded to Brazil and more of Europe.

Regardless of whether Bard's output is accurate, you can hear the results spoken in more than 40 languages now. Google says this is useful if you aren't sure of the pronunciation of a word. Just click the speaker icon to start Bard gabbing. Should you find yourself unsatisfied with the response, there are also new tools to change that. The buttons at the bottom of the conversation window now let you alter the AI's tone and style. For example, you can make a reply more casual, shorter, more professional, and so on. This feature is only supported in English right now, but more languages are coming.

News about Apple and Generative AI

Following this closely. How will it integrate with other search?

What is Apple GPT? Apple’s ChatGPT Rival & “Ajax” Explained (Tech.co News)

Apple is preparing to make a major AI announcement in 2024, and this could be the first clue to what it might entail.

Written by Aaron Drapkin. Updated on July 20, 2023

This week, reports have revealed that Apple is developing its own ChatGPT competitor – dubbed “Apple GPT” by the company's developers – backed by its own language model framework, Ajax.

The news comes just days after Meta announced it is developing its own version of ChatGPT, powered by Llama 2 – introduced recently in partnership with Microsoft.

Compared to its counterparts, Apple has been relatively quiet regarding artificial intelligence – but it would be foolish to think the tech giant would simply be sitting this one out.

Apple GPT: What We Know So Far

According to Bloomberg’s Mark Gurman, engineers at Apple are working on a project to design an AI tool internally referred to as “Apple GPT”, powered by the company's own proprietary LLM framework, Ajax.

Ajax is based on Google’s machine learning framework “JAX”, which UK-based AI startup DeepMind has been using to “accelerate” its research since 2020. However, it is still considered a fairly experimental framework compared to some others.

Some Apple employees have access to the chatbot, but this requires special approval – and outputs aren’t used to iterate on features scheduled for consumer use.

However, it has reportedly already proved somewhat useful for prototyping products. ...

Whether “Ajax” was arrived at by simply mashing together Google's “JAX” and the “A” of Apple is unclear, but it’s undoubtedly one of the more interesting names put forward for an AI language model framework.

In Greek mythology, Ajax – a feared warrior second only in strength to Achilles – famously went insane and attempted to murder his military comrades. After Athena intervened and clouded his mind, however, he instead killed a flock of sheep.

Apple Officially Joins the AI Party

This isn’t the first we’ve heard about Apple’s potential AI ventures this year – it was revealed back in April that the company was working on an AI-powered health application with emotional analysis capabilities. And of course, Apple already deploys AI across its products in a number of different ways.

Siri, for instance, the company’s voice-controlled personal assistant built into all iPhones, is a form of artificial intelligence – although employees working on the project have been far from pleased by the way it has progressed.

Apple CEO Tim Cook – who previously said AI was going to be “huge” – has also highlighted a number of privacy concerns relating to AI that he argues must be ironed out in the immediate future.

Apple, compared to companies like Meta and Google, markets itself as a more privacy-minded company – changes to its iOS software that prevent apps from tracking user behavior have irked competitors previously.

Can I Use Apple GPT Yet?

Not quite – the project is still under development. However, multiple sources have reported that Apple is going to make a major AI-related announcement at some point in 2024 – so it could very well be the general release of “Apple GPT” – or whatever it ends up being called.

As of now, Google’s Bard, OpenAI’s ChatGPT, and Anthropic’s recently released chatbot Claude 2 are among the most capable chatbots currently available. Chinese search engine Baidu has also released its own chatbot, called Ernie bot.

If you're using any of these tools at work, just be mindful of the sort of data you're inputting into them. Several companies have banned the likes of ChatGPT altogether due to privacy concerns, while there's very little information about the security measures deployed by them either. ... 

Saturday, July 22, 2023

Meta’s latest AI model is free for all

Notable, taking a look.

Meta’s latest AI model is free for all 

The company hopes that making LLaMA 2 open source might give it the edge over rivals like OpenAI.

By Melissa Heikkilä , July 18, 2023

Credit: Stephanie Arnett/MITTR | Getty, Envato

Meta is going all in on open-source AI. The company is today unveiling LLaMA 2, its first large language model that’s available for anyone to use—for free. 

Since OpenAI released its hugely popular AI chatbot ChatGPT last November, tech companies have been racing to release models in hopes of overthrowing its supremacy. Meta has been in the slow lane. In February, when competitors Microsoft and Google announced their AI chatbots, Meta rolled out the first, smaller version of LLaMA, restricted to researchers. But it hopes that releasing LLaMA 2, and making it free for anyone to build commercial products on top of, will help it catch up. 

The company is actually releasing a suite of AI models, which include versions of LLaMA 2 in different sizes, as well as a version of the AI model that people can build into a chatbot, similar to ChatGPT. Unlike ChatGPT, which people can access through OpenAI’s website, the model must be downloaded from Meta’s launch partners Microsoft Azure, Amazon Web Services, and Hugging Face.

“This benefits the entire AI community and gives people options to go with closed-source approaches or open-source approaches for whatever suits their particular application,” says Ahmad Al-Dahle, a vice president at Meta who is leading the company’s generative AI work. “This is a really, really big moment for us.” ...'

A Nested Inventory for Software Security, Supply Chain Risk Management

 A Nested Inventory for Software Security, Supply Chain Risk Management

By Esther Shein, July 20, 2023

An SBOM is meant to provide visibility into risks and vulnerabilities. 

The Software Bill of Materials (SBOM) comprises all the components and libraries used to create a software application. It includes a description of all licenses, versions, authors, and patch status.

With high-profile security incidents like Kaseya and Apache Log4j still causing repercussions, securing the software supply chain is under scrutiny like never before. This prompted the Biden Administration's 2021 Executive Order on Improving the Nation's Cybersecurity, which requires developers to provide a Software Bill of Materials (SBOM).

Think of an SBOM like the ingredients in a recipe—it comprises all the components and libraries used to create a software application. It includes a description of all licenses, versions, authors, and patch status.

Many of these components are open source, and an SBOM is meant to provide visibility into risks and vulnerabilities. After all, if you don't know what code you're protecting, how can you maintain it?

The role of SBOMs

When organizations have this visibility, they are better able to identify known or emerging vulnerabilities and risks, enable security by design, and make informed choices about software supply chain logistics and acquisition issues. "And that is increasingly important because sophisticated threat actors now see supply chain attacks as a go-to tool for malicious cyber activity,'' according to Booz Allen Hamilton.  

By 2025, 60% of organizations building or procuring critical infrastructure software will mandate and standardize SBOMs in their software engineering practice, up from less than 20% in 2022, according to market research firm Gartner.

"Multiple factors are driving the need for SBOMs,'' says Manjunath Bhat, a research vice president at Gartner. Those factors include the increased use of third-party dependencies and open-source software, increased incidence of software supply chain attacks, and regulatory compliance mandates to secure the use of OSS, Bhat says.

 "The fine-grained visibility and transparency into the complete software supply chain is what makes SBOMs so valuable," he says.

SBOM elements

The National Telecommunications and Information Administration (NTIA) and the U.S. Department of Commerce were tasked with publishing the minimum elements for an SBOM, along with a description of use-cases for greater transparency in the supply chain.

They determined there should be data fields for a supplier, component name, and version, as well as the dependency relationship, among other areas, the NTIA and Department of Commerce said.

They also recommended there be automatic data generation and machine readability functionality to allow for scaling an SBOM across the software ecosystem. There are also three formats for generating SBOMs that are generally accepted: SPDX, CycloneDX, and SWID tags.

SBOMs are designed to be part of automation workflows, Bhat observes. "Therefore, standardization of data formats and interoperability between them is going to be paramount."  

The data fields within an SBOM "include elements that help uniquely and unambiguously identify software components and their relationships to one another,'' he says. "Therefore, the basic elements include component name, supplier name, component version, unique identifiers (most likely a digital signature or a cryptographic hash), and dependency relationships."

SBOM platforms that are automated and dynamic are ideal because they can be continuously updated to ensure software developers have an accurate view of the components and dependencies they use in their applications.  ... ' 
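
To make those minimum elements concrete, here is an illustrative Python sketch that assembles one machine-readable component record. The field names are ad hoc for illustration, not the SPDX or CycloneDX schema; a real SBOM would use one of the accepted formats named above:

```python
import hashlib
import json

def sbom_component(name, supplier, version, artifact, depends_on):
    """One component record covering the NTIA minimum elements:
    supplier, component name, version, a unique identifier (here a
    SHA-256 hash of the artifact bytes), and dependency relationships.
    Field names are illustrative, not an SPDX/CycloneDX schema."""
    return {
        "name": name,
        "supplier": supplier,
        "version": version,
        "unique_id": hashlib.sha256(artifact).hexdigest(),
        "depends_on": depends_on,
    }

sbom = [
    sbom_component("log4j-core", "Apache", "2.20.0", b"<jar bytes>", []),
    sbom_component("my-app", "Example Corp", "1.3.1", b"<app bytes>",
                   ["log4j-core"]),
]
# Machine-readable output is what lets SBOM tooling automate
# vulnerability matching across the supply chain.
print(json.dumps(sbom, indent=2))
```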

New Turing Test?

THE DOWNLOAD

The Download: a new Turing test, and working with ChatGPT

By Rhiannon Williams

July 14, 2023

This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

My new Turing test would see if AI can make $1 million

—Mustafa Suleyman is the co-founder and CEO of Inflection AI and a venture partner at Greylock, a venture capital firm. Before that, he co-founded DeepMind, one of the world’s leading artificial intelligence companies.

AI systems are increasingly everywhere and are becoming more powerful almost by the day. But how can we know if a machine is truly “intelligent”? For decades this has been defined by the Turing test, which argues that an AI that’s able to replicate language convincingly enough to trick a human into thinking it was also human should be considered intelligent.

But there’s now a problem: the Turing test has almost been passed—it arguably already has been. The latest generation of large language models is on the cusp of acing it.

So where does that leave AI? We need something better. I propose the Modern Turing Test—one equal to the coming AIs that would give them a simple instruction:  “Go make $1 million on a retail web platform in a few months with just a $100,000 investment.” Read the full story.

ChatGPT can turn bad writers into better ones

The news: A new study suggests that ChatGPT could help reduce gaps in writing ability between employees, helping less experienced workers who lack writing skills to produce work similar in quality to that of more skilled colleagues.

How the researchers did it: Hundreds of college-educated professionals were asked to complete two tasks they’d normally undertake as part of their jobs, such as writing press releases, short reports, or analysis plans. Half were given the option of using ChatGPT for the second task. A group of assessors then quality-checked the results, and scored the output of those who’d used ChatGPT 18% higher in quality than that of the participants who didn’t use it.

Why it matters: The research hints at how AI could be helpful in the workplace by acting as a sort of virtual assistant. But it’s also crucial to remember that generative AI models’ output is far from reliable, meaning workers run the risk of introducing errors. Read the full story.

Rhiannon Williams