
Wednesday, May 31, 2023

City Council Votes to Accept Controversial LAPD Robot Dog

Not what I expected, but very much needed. How long will it last?

City Council Votes to Accept Controversial LAPD Robot Dog

Los Angeles Times

Brittny Mejia; Libor Jany; David Zahniser, May 23, 2023

The Los Angeles City Council voted to accept the donation of almost $280,000 from the Los Angeles Police Foundation to fund the purchase of a robot dog for use by the Los Angeles Police Department. The department will be required to issue quarterly reports detailing where and why the device was deployed, the outcome of each deployment, and whether any issues occurred. The 70-pound robot dog, called Spot, can climb stairs, open doors, and navigate difficult terrain. Spot is controlled via a tablet-like device, features 360-degree cameras to record its surroundings, and transmits real-time data to officers. The department said the robot would be used only in situations involving the SWAT team and to keep officers out of harm's way.

Full Article   

TeslaBots Emerging Next Year?

Just watched an interview with Musk. Using some of the tech from the car.

Tesla Robot: News, Rumors, and Estimated Price, Release Date, and Specs

Yes, it's real. Here's the prototype of the $20,000 Tesla robot.   Companions, Help-mates and more?

By Tim Fisher  In Lifewire, Updated on May 17, 2023

The Latest News

Tesla CEO Elon Musk has confirmed that a humanoid robot called Optimus is under development, with the goal of eventually being able to do "anything that humans don’t want to do." While it may seem unrealistic, a prototype of the robot has already been unveiled.

When Will the Tesla Robot Be Released?

The Tesla Bot was first announced at Tesla's AI Day 2021, and the prototype (what Musk calls their "rough, development robot", pictured below) was revealed on September 30, at AI Day 2022. As bizarre as it sounds to have a live-in robot at your disposal to perform "repetitive or boring" tasks for you, it is a real product the company is working on.

One indicator this is something they're committing to invest in is that they're actively looking for help making it. There are several job listings on Tesla's website for engineers, managers, architects, and more to work on the Optimus team, so unlike the Tesla Phone and other ideas that have remained concepts, this appears to be a project they're really considering.

Assuming the Tesla robot will actually be available one day, there's still no telling when that might be. Are Musk and the team behind the robot interested in bringing it to life? It looks that way. But even if they are, managing expectations about a real release is important.

Like many companies with grand ideas, Tesla has a history of pushing back launch dates and making it seem like a really cool product is just around the corner. One example is the Tesla snake charger advertised in 2015, which, several years later, Musk still says we'll see one day.

But if it means anything, Musk is on record saying he's hopeful that production for the first version of Optimus will commence in 2023. Long term, he says the robot "will be more valuable than the car." It'll likely start off as a factory product that assists in the production line and ultimately help with labor shortages, before maybe one day moving into our homes.

Lifewire's Release Date Estimate

At AI Day 2022, the prototype of Optimus was showcased for the first time. Although it is possible that robotic assistance in factories could be available in the near future, we doubt that the robot is ready for use in residential settings.

Tesla Robot Price Rumors

At AI Day 2022, Musk said "it is expected to cost much less than a car," and went on to guess "probably less than $20,000."

This sounds reasonable, at least for a first model. A robot meant to do anything on its own, even if it's menial tasks, will obviously carry a hefty price tag. With variation (if there will be any), depending on the model you choose, we can see this fluctuating a bit. We might even see leasing options.

Elon Musk even suggests that the price will fall in the future:

Perhaps in less than a decade, people will be able to buy a robot for their parents as a birthday gift. ...

Statement on AI Risk

A considerable statement and agreement, signed by many worldwide, including top academics in China.


Statement on AI Risk

Hassabis, Altman and AGI Labs Unite - AI Extinction Risk Statement [ft. Sutskever, Hinton + Voyager]

May 30, 2023

The leaders of almost all of the world's top AGI Labs have united to put out a statement on AI Extinction Risk, and how mitigating it should be a global priority. This video covers not just the statement and the signatories, including names as diverse as Geoffrey Hinton, Ilya Sutskever, Sam Harris and Lex Fridman, but also goes deeper into the 8 Examples of AI Risk outlined at the same time by the Center for AI Safety.

Top academics from China join in, while Meta demurs, claiming autoregressive LLMs will 'never be given agency'. I briefly cover the Voyager paper, in which GPT 4 is given agency to play Minecraft, and does so at SOTA levels. 

Statement: https://www.safe.ai/statement-on-ai-risk

Further: https://www.safe.ai/ai-risk  (8 risk types)

Natural Selection Paper: https://arxiv.org/pdf/2303.16200.pdf

Yann LeCun on 20VC w/ Harry Stebbings:   

 • Yann LeCun: Meta’...  

Voyager Agency Paper: https://arxiv.org/pdf/2305.16291.pdf

Karpathy Tweet: https://twitter.com/karpathy/status/1...

Hassabis Benefit Speech:   

 • Fei-Fei Li & Demi...  

Stanislav Petrov: https://en.wikipedia.org/wiki/Stanisl...

Bengio Blog: https://yoshuabengio.org/2023/05/07/a...

https://www.patreon.com/AIExplained   .... ' 

Developing Wireless Sensor System for Continuous Monitoring of Bridge Deformation

 Towards infrastructure sensing, monitoring and maintenance.

Researchers develop wireless sensor system for continuous monitoring of bridge deformation   by Drexel University

Researchers in Drexel University's College of Engineering have developed a solar-powered, wireless sensor system that can continually monitor bridge deformation and could be used to alert authorities when the bridge performance deteriorates significantly. With more than 46,000 bridges across the country considered to be in poor condition, according to the American Society of Civil Engineers, a system like this could be both an important safety measure and a way to triage repair and maintenance efforts.

The system, which measures bridge deformation and runs continuously on photovoltaic power, was unveiled in a recent edition of the IEEE Journal of Emerging and Selected Topics in Industrial Electronics in a paper authored by Drexel College of Engineering researchers, Ivan Bartoli, Ph.D., Mustafa Furkan, Ph.D., Fei Lu, Ph.D., and Yao Wang, a doctoral student in the College.

"With as much aging infrastructure as there is in the U.S. we need a way to keep a close eye on these critical assets 24/7," said Bartoli, who leads the Intelligent Infrastructure Alliance in the College of Engineering. "This is an urgent need, not just to prevent calamitous and often tragic failures, but to understand which bridges should take priority for maintenance and replacement, so that we can efficiently and sustainably approach the preservation and improvement of our infrastructure."

More than 40% of America's 617,000 bridges are more than 50 years old. While they are built to last, they must also be inspected regularly—every two years, according to Bartoli, who is a professor in the College.
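The article doesn't describe the alerting logic, but the idea of continuous monitoring with significant-deterioration alerts can be sketched as a rolling-baseline check. The window size, threshold, and readings below are illustrative assumptions, not Drexel's actual design:

```python
from statistics import mean, stdev

def check_deformation(readings, window=24, sigma=3.0):
    """Flag the newest reading if it deviates more than `sigma` standard
    deviations from the rolling baseline of the previous `window` readings.
    Returns True when an alert should be raised."""
    if len(readings) <= window:
        return False  # not enough history to establish a baseline
    baseline = readings[-window - 1:-1]
    mu, s = mean(baseline), stdev(baseline)
    latest = readings[-1]
    return s > 0 and abs(latest - mu) > sigma * s

# Simulated millimeter-scale deflection readings: stable, then a jump.
history = [2.0, 2.1, 1.9, 2.0, 2.05, 1.95, 2.0, 2.1, 1.9, 2.0, 8.5]
print(check_deformation(history, window=9))  # → True
```

In a deployed system the threshold would of course be calibrated to the bridge's normal thermal and load cycles rather than a naive standard-deviation rule.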

Tuesday, May 30, 2023

NVIDIA and ServiceNow

Good direction.

ServiceNow and NVIDIA Announce Partnership to Build Generative AI Across Enterprise 

Built on ServiceNow Platform With NVIDIA AI Software and DGX Infrastructure, Custom Large Language Models to Bring Intelligent Workflow Automation to Enterprises

May 17, 2023

 Knowledge 2023—ServiceNow and NVIDIA today announced a partnership to develop powerful, enterprise-grade generative AI capabilities that can transform business processes with faster, more intelligent workflow automation.

Using NVIDIA software, services and accelerated infrastructure, ServiceNow is developing custom large language models trained on data specifically for its ServiceNow Platform, the intelligent platform for end-to-end digital transformation. 

This will expand ServiceNow’s already extensive AI functionality with new uses for generative AI across the enterprise — including for IT departments, customer service teams, employees and developers — to strengthen workflow automation and rapidly increase productivity. 

ServiceNow is also helping NVIDIA streamline its IT operations with these generative AI tools, using NVIDIA data to customize NVIDIA® NeMo™ foundation models running on hybrid-cloud infrastructure consisting of NVIDIA DGX™ Cloud and on-premises NVIDIA DGX SuperPOD™ AI supercomputers.

“IT is the nervous system of every modern enterprise in every industry,” said Jensen Huang, founder and CEO of NVIDIA. “Our collaboration to build super-specialized generative AI for enterprises will boost the capability and productivity of IT professionals worldwide using the ServiceNow platform.”

“As adoption of generative AI continues to accelerate, organizations are turning to trusted vendors with battle-tested, secure AI capabilities to boost productivity, gain a competitive edge, and keep data and IP secure,” said CJ Desai, president and chief operating officer of ServiceNow. “Together, NVIDIA and ServiceNow will help drive new levels of automation to fuel productivity and maximize business impact." 

Harnessing Generative AI to Reshape Digital Business 

ServiceNow and NVIDIA are exploring a number of generative AI use cases to simplify and improve productivity across the enterprise by providing high accuracy and higher value in IT. 

This includes developing intelligent virtual assistants and agents to help quickly resolve a broad range of user questions and support requests with purpose-built AI chatbots that use large language models and focus on defined IT tasks. 

Integrating LLMs into the Wolfram Language

An outline of examples of LLM interaction, including examples of computational chemistry.


AI Will Augment SEO

AI Will Augment SEO, by R. Colin Johnson

Commissioned by CACM Staff, May 24, 2023

"I fully expect that future personalized AI-enhanced search engines will allow you to query any corpus of information by requesting that the results be limited to a specific basket of domains," said Kevin Lee, CEO of the eMarketing Association.

Search engine optimization (SEO) is an Internet marketing technique that analyzes how the algorithms in a search engine work, then adds metadata (for instance, keywords, links, and other embedded content) to boost a page's ranking on the search engine results page. According to business marketing company Clutch, there are over 15,000 SEO companies in the U.S. alone.

However, without using artificial intelligence (AI) to rank Web page content (the text and images), SEO companies are left to boost rankings on the basis of metadata alone, according to researchers at Germany's Hamburg University of Applied Sciences, who claim non-optimized but high-quality content may thus be outranked (appearing lower on the list of search results) by search-engine-optimized content of lower quality, but attached to more alluring meta-data.

"To improve search result ranking, SEO companies add metadata to Web pages that match queries to the ranking criteria of the search engine's results," said Sebastian Schultheiß, lead SEO researcher at the Hamburg University of Applied Sciences. "However, a Web page that complies with SEO criteria does not necessarily provide content of higher quality from the user's perspective. For example, if the content of a Web page is one-sided and therefore not objective, this lowers the information quality of the page from the user's point of view but has no effect on the search engine's ranking. As a result, Web pages with content of lower information quality can receive better rankings than a page of higher quality but lacking SEO."
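Schultheiß's point can be made concrete with a toy ranking model. The pages, scores, and weighting below are invented for illustration; real search ranking is vastly more complex:

```python
# Toy ranking model: each page gets a metadata (SEO) score and a
# content-quality score. Ranking on metadata alone can invert the
# quality order; adding a content-aware term restores it.
pages = [
    {"name": "high-quality, no SEO", "seo": 0.2, "quality": 0.95},
    {"name": "low-quality, heavy SEO", "seo": 0.9, "quality": 0.2},
]

def rank(pages, quality_weight=0.0):
    # quality_weight = 0 ranks on metadata alone; a positive weight
    # models an AI term that also scores the content itself.
    return sorted(pages,
                  key=lambda p: p["seo"] + quality_weight * p["quality"],
                  reverse=True)

print(rank(pages)[0]["name"])                      # → low-quality, heavy SEO
print(rank(pages, quality_weight=1.0)[0]["name"])  # → high-quality, no SEO
```

The second call is, in miniature, what the AI-augmented approach discussed later in the article amounts to: letting the ranker "read" the content rather than trusting metadata alone.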

Such regrettable outcomes were measured by Hamburg University researchers using real test subjects, as documented in a paper presented at last year's ACM Conference on Human Information Interaction and Retrieval (CHIIR). The content was confined to Web pages with medical data; the researchers found users chose to click on results at or near the top of the results stack, regardless of their medical efficacy.

"The potential danger is that Web pages with inappropriate content, but which make intensive use of SEO and are thus ranked higher, will be chosen over Web pages with higher-quality content and no SEO measures. Our results show that users consistently select those items prominently placed by SEO," said Schultheiß.

In a paper he presented at this year's CHIIR '23 conference, Schultheiß described how he is expanding this line of research to include not just the influence of SEO, but also the wider field of search engine marketing (SEM), which manipulates not just the metadata, but the content itself. Schultheiß also aims to research paid search marketing (PSM) services, where search engines permit marketers to pay cash to lift Web pages near the top of search results. Schultheiß also is expanding the scope of his research to the fields of health, politics, and the environment, to see if lower-quality content there is also attracting users' focus due to SEO, SEM, and PSM.

AI to the Rescue

The solution to raising the quality of search engine results, according to Google senior vice president in charge of search Prabhakar Raghavan, is artificial intelligence (AI). "With artificial intelligence," he said at the latest Google I/O meeting, "we are transforming search to be more helpful than ever before." For instance, later this year, Raghavan said Google will add a new way to scan store shelves that overlays metadata about the products on them in your camera screen.

SEO expert Kevin Lee, who is CEO of the eMarketing Association, agrees that AI is the solution to obtaining higher-quality results from search engines. According to Lee, having intelligent algorithms built into search engines will allow them to "read" Web page content, look up related information (such as user reviews), and improve the resulting search engine ranking on the basis of "qualitative" matches to a user's query, rather than depending solely on SEO, SEM, and PSM.

Want to Keep AI From Sharing Secrets? Train It Yourself

Have come up with companies thinking this.

Want to Keep AI From Sharing Secrets? Train It Yourself

MosaicML delivers a secure platform for hosted AI

By Matthew S. Smith

On 11 March 2023, Samsung’s Device Solutions division permitted employee use of ChatGPT. Problems ensued. A report in The Economist Korea, published less than three weeks later, identified three cases of “data leakage.” Two engineers used ChatGPT to troubleshoot confidential code, and an executive used it for a transcript of a meeting. Samsung changed course, banning employee use, not of just ChatGPT but of all external generative AI.

Samsung’s situation illustrates a problem facing anyone who uses third-party generative AI tools based on a large language model (LLM). The most powerful AI tools can ingest large chunks of text and quickly produce useful results, but this feature can easily lead to data leaks.

“That might be fine for personal use, but what about corporate use? […] You can’t just send all of your data to OpenAI, to their servers,” says Taleb Alashkar, chief technology officer of the computer vision company AlgoFace and MIT Research Affiliate.

Naïve AI users hand over private data

Generative AI’s data privacy issues boil down to two key concerns.

AI is bound by the same privacy regulations as other technology. Italy’s temporary ban of ChatGPT occurred after a security incident in March 2023 that let users see the chat histories of other users. This problem could affect any technology that stores user data. Italy lifted its ban after OpenAI added features to give users more control over how their data is stored and used.

But AI faces other unique challenges. Generative AI models aren’t designed to reproduce training data and are generally incapable of doing so in any specific instance, but it’s not impossible. A paper titled “Extracting Training Data from Diffusion Models,” published in January 2023, describes how Stable Diffusion can generate images similar to images in the training data. The Doe vs. GitHub lawsuit includes examples of code generated by Github Copilot, a tool powered by an LLM from OpenAI, that match code found in training data.

[Image: a photograph of Ann Graham Lotz alongside a visually similar AI-generated image produced by Stable Diffusion, whose training data included the original. Source: "Extracting Training Data from Diffusion Models."] Researchers discovered that Stable Diffusion can sometimes produce images similar to its training data.

This leads to fears that generative AI controlled by a third party could unintentionally leak sensitive data, either in part or in whole. Some generative AI tools, including ChatGPT, worsen this fear by including user data in their training set. Organizations concerned about data privacy are left with little choice but to bar its use.

“Think about an insurance company, or big banks, or [Department of Defense], or Mayo Clinic,” says Alashkar, adding that “every CIO, CTO, security principal, or manager in a company is busy looking over those policies and best practices. I think most responsible companies are very busy now trying to find the right thing.”

Efficiency holds the answer to private AI

AI’s data privacy woes have an obvious solution. An organization could train using its own data (or data it has sourced through means that meet data-privacy regulations) and deploy the model on hardware it owns and controls. But the obvious solution comes with an obvious problem: It’s inefficient. The process of training and deploying a generative AI model is expensive and difficult to manage for all but the most experienced and well-funded organizations.  ... ' 

Now Modern Males Are Behind

Good piece from Irving; I thought I was the only one who noticed this. Excerpt with many links within.

Why Men and Boys Are Falling Behind

A few weeks ago, Richard Reeves was Ezra Klein's guest on his NY Times podcast "The Men — and Boys — Are Not Alright." Reeves is a writer and Senior Fellow at the Brookings Institution, where he's been studying inequality, poverty, social mobility, and family policy. His book, Of Boys and Men: Why the Modern Male Is Struggling, Why It Matters, and What to Do about It, published in September 2022, is based on his research on the growing gender gaps in education and employment.

“We’re used to thinking about gender inequality as a story of insufficient progress for women and girls,” wrote Klein in the podcast’s introduction. “There’s a good reason for that: Men have dominated human societies for centuries, and myriad inequalities — from the gender pay gap to the dearth of female politicians and chief executives — persist to this day.”

“But Reeves’ core argument is that there’s no way to fully understand inequality in America today without understanding the ways that men and boys — particularly those from disadvantaged backgrounds — are falling behind. And they’re falling behind in ways that are tough on families, in ways that are tough on marriages, ways that are tough on children. And it gets much, much worse when you go down the income ladder.”

Early in their discussion, Reeves pointed out that for most of history, gender equality was intrinsically synonymous with the cause of women and girls. But, “the facts are there in a bunch of places where boys and men are really struggling now.” This relatively recent change is the reason why it’s taken us so long to gather the evidence, and “muster the courage to address this issue. Updating our view of the world as the evidence changes is very difficult.”

Reeves cited a few concrete examples. First, there’s a big gender gap in high school grade point average (GPA), — a very good predictor of important economic outcomes. The data show that two thirds of students with the top 10% of GPA are girls, while two thirds of the students with the bottom 10% of GPA are boys. In addition, girls are 6% more likely to graduate on time than boys.

 A second data point is school performance in grades three through eight. Reeves cited a study led by Stanford sociologist Sean Reardon that found that “girls are at least 3/4 of a grade level ahead in English and dead even in math. And in the poorer school districts, they’re a grade level ahead in English and about a 1/3 of a grade level ahead in math.” These results may not be surprising because the evidence shows that boys develop later than girls.  ... '   (more with links) 

LangChain intro at Work

Taking a look at LangChain, see below, with link to detail.

Getting Started with LangChain: A Beginner’s Guide to Building LLM-Powered Applications

A LangChain tutorial to build anything with large language models in Python

From Towards Data Science,  by Leonie Monigatti   ... 

https://github.com/hwchase17/langchain  (technical)
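For a flavor of the pattern LangChain packages up, here is a dependency-free Python sketch of a prompt template piped into a model call and composed into a chain. This is not LangChain's actual API, and `fake_llm` is a placeholder rather than a real model:

```python
# Minimal sketch of the "prompt template -> LLM -> output" chain idea.
# In LangChain you would use its PromptTemplate and an LLM wrapper;
# here plain functions stand in for both.

def prompt_template(template):
    def fill(**kwargs):
        return template.format(**kwargs)
    return fill

def fake_llm(prompt):
    # Placeholder model: echoes the task instead of calling an API.
    return f"[model answer to: {prompt}]"

def chain(*steps):
    # Compose steps left to right, threading the value through.
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

summarize = chain(
    lambda topic: prompt_template("Summarize {topic} in one sentence.")(topic=topic),
    fake_llm,
)
print(summarize("LangChain"))
# → [model answer to: Summarize LangChain in one sentence.]
```

Swapping `fake_llm` for a real API call is essentially what the library's LLM wrappers do, alongside memory, retrieval, and agent tooling covered in the tutorial.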

Quantum Advantage

 Quantum Advantage

Disentangling Hype from Practicality: On Realistically Achieving Quantum Advantage

By Torsten Hoefler, Thomas Häner, Matthias Troyer

Communications of the ACM, May 2023, Vol. 66 No. 5, Pages 82-87, DOI 10.1145/3571725

Operating on fundamentally different principles than conventional computers, quantum computers promise to solve a variety of important problems that seemed forever intractable on classical computers. Leveraging the quantum foundations of nature, the time to solve certain problems on quantum computers grows more slowly with the size of the problem than on classical computers—this is called quantum speedup. Going beyond quantum supremacy, which was the demonstration of a quantum computer outperforming a classical one for an artificial problem, an important question is finding meaningful applications (of academic or commercial interest) that can realistically be solved faster on a quantum computer than on a classical one. We call this a practical quantum advantage, or quantum practicality for short.

There is a maze of hard problems that have been suggested to profit from quantum acceleration: from cryptanalysis, chemistry and materials science, to optimization, big data, machine learning, database search, drug design and protein folding, fluid dynamics and weather prediction. But which of these applications realistically offer a potential quantum advantage in practice? For this, we cannot only rely on asymptotic speedups but must consider the constants involved. Being optimistic in our outlook for quantum computers, we identify clear guidelines for quantum practicality and use them to classify which of the many proposed applications for quantum computing show promise and which ones would require significant algorithmic improvements to become practical and relevant.

To establish reliable guidelines, or lower bounds for the required speedup of a quantum computer, we err on the side of being optimistic for quantum and overly pessimistic for classical computing. Despite our overly optimistic assumptions, our analysis shows a wide range of often-cited applications is unlikely to result in a practical quantum advantage without significant algorithmic improvements. We compare the performance of only a single classical chip fabricated like the one used in the NVIDIA A100 GPU that fits around 54 billion transistors with an optimistic assumption for a hypothetical quantum computer that may be available in the next decades with 10,000 error-corrected logical qubits, 10μs gate time for logical operations, the ability to simultaneously perform gate operations on all qubits and all-to-all connectivity for fault-tolerant two-qubit gates. ...
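The style of argument can be sketched with back-of-envelope numbers. For a quadratic (Grover-type) speedup, classical time is N/R_c and quantum time is sqrt(N)/R_q, so break-even occurs at N = (R_c/R_q)^2. The rates below are illustrative assumptions in the spirit of the article, and the oracle depth in particular is my guess, not the authors' figure:

```python
# Back-of-envelope crossover for a quadratic (Grover-type) speedup.
classical_ops_per_s = 1e13        # assumed useful ops/s for one modern chip
layers_per_s = 1 / 10e-6          # 10 us per logical gate layer -> 1e5 layers/s
layers_per_iteration = 1e4        # assumed oracle circuit depth per iteration
iters_per_s = layers_per_s / layers_per_iteration

# Classical time: N / R_c.  Quantum time: sqrt(N) / R_q.
# Break-even: sqrt(N) = R_c / R_q, i.e. N = (R_c / R_q) ** 2.
n_crossover = (classical_ops_per_s / iters_per_s) ** 2
classical_seconds = n_crossover / classical_ops_per_s

print(f"crossover problem size N ≈ {n_crossover:.0e}")
print(f"classical runtime at crossover ≈ {classical_seconds / 3.15e7:.0f} years")
```

Under these (made-up but not unreasonable) constants, the quadratic speedup only pays off for problems a classical chip would need millennia to solve, which is the flavor of the paper's conclusion that small polynomial speedups are unlikely to be practical.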

Researchers from UC Berkeley Introduce Gorilla LLM

 And more implementations. 

Researchers from UC Berkeley Introduce Gorilla: A Finetuned LLaMA-based Model that Surpasses GPT-4 on Writing API Calls

By Tanya Malhotra

A recent breakthrough in the field of Artificial Intelligence is the introduction of Large Language Models (LLMs). These models enable us to understand language more concisely and, thus, make the best use of Natural Language Processing (NLP) and Natural Language Understanding (NLU). These models are performing well on every other task, including text summarization, question answering, content generation, language translation, and so on. They understand complex textual prompts, even texts with reasoning and logic, and identify patterns and relationships between that data.

Though language models have shown incredible performance and have developed significantly in recent times by demonstrating their competence in a variety of tasks, it still remains difficult for them to use tools through API calls in an efficient manner. Even famous LLMs like GPT-4 struggle to generate precise input arguments and frequently recommend inappropriate API calls. To address this issue, Berkeley and Microsoft Research researchers have proposed Gorilla, a finetuned LLaMA-based model that beats GPT-4 in terms of producing API calls. Gorilla helps in choosing the appropriate API, improving LLMs’ capacity to work with external tools to carry out particular activities.   .... ' 
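As a toy illustration of the task Gorilla targets (picking the right API for a natural-language request), here is a bag-of-words matcher over a made-up API catalog. Gorilla itself uses a finetuned, retrieval-aware LLaMA model, not anything this simple:

```python
# Hypothetical API catalog mapping API names to short descriptions.
catalog = {
    "image_classification.load": "load a pretrained model to classify images",
    "speech_to_text.transcribe": "transcribe spoken audio into text",
    "translation.translate": "translate text between languages",
}

def pick_api(request):
    # Score each API by word overlap between the request and its
    # description, and return the best match.
    words = set(request.lower().split())
    return max(catalog, key=lambda api: len(words & set(catalog[api].split())))

print(pick_api("transcribe this audio recording"))  # → speech_to_text.transcribe
```

The hard parts Gorilla addresses, and this sketch does not, are generating precise input arguments and staying current as API documentation changes.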

Monday, May 29, 2023

WPP Partners With NVIDIA to Build Generative AI-Enabled Content Engine for Digital Advertising

Despite all the pausing and hand-wringing, AI-enabled ad production goes on. All responsible?

WPP Partners With NVIDIA to Build Generative AI-Enabled Content Engine for Digital Advertising

Groundbreaking Engine Built on NVIDIA AI and Omniverse Connects Creative 3D and AI Tools From Leading Software Makers to Revolutionize Brand Content, Experiences at Scale

COMPUTEX—NVIDIA and WPP today announced they are developing a content engine that harnesses NVIDIA Omniverse™ and AI to enable creative teams to produce high-quality commercial content faster, more efficiently and at scale while staying fully aligned with a client’s brand.

The new engine connects an ecosystem of 3D design, manufacturing and creative supply chain tools, including those from Adobe and Getty Images, letting WPP’s artists and designers integrate 3D content creation with generative AI. This enables their clients to reach consumers in highly personalized and engaging ways, while preserving the quality, accuracy and fidelity of their company’s brand identity, products and logos.

NVIDIA founder and CEO Jensen Huang unveiled the engine in a demo during his COMPUTEX keynote address, illustrating how clients can work with teams at WPP, the world’s largest marketing services organization, to make large volumes of brand advertising content such as images or videos and experiences like 3D product configurators more tailored and immersive.

“The world’s industries, including the $700 billion digital advertising industry, are racing to realize the benefits of AI,” Huang said. “With Omniverse Cloud and generative AI tools, WPP is giving brands the ability to build and deploy product experiences and compelling content at a level of realism and scale never possible before.”

“Generative AI is changing the world of marketing at incredible speed,” said Mark Read, CEO of WPP. “Our partnership with NVIDIA gives WPP a unique competitive advantage through an AI solution that is available to clients nowhere else in the market today. This new technology will transform the way that brands create content for commercial use, and cements WPP’s position as the industry leader in the creative application of AI for the world’s top brands.”

An Engine for Creativity

The new content engine has at its foundation Omniverse Cloud — a platform for connecting 3D tools, and developing and operating industrial digitalization applications. This allows WPP to seamlessly connect its supply chain of product-design data from software such as Adobe’s Substance 3D tools for 3D and immersive content creation, plus computer-aided design tools to create brand-accurate, photoreal digital twins of client products.

WPP uses responsibly trained generative AI tools and content from partners such as Adobe and Getty Images so its designers can create varied, high-fidelity images from text prompts and bring them into scenes. This includes Adobe Firefly, a family of creative generative AI models, and exclusive visual content from Getty Images created using NVIDIA Picasso, a foundry for custom generative AI models for visual design.

With the final scenes, creative teams can render large volumes of brand-accurate, 2D images and videos for classic advertising, or publish interactive 3D product configurators to NVIDIA Graphics Delivery Network, a worldwide, graphics streaming network, for consumers to experience on any web device.

In addition to speed and efficiency, the new engine outperforms current methods, which require creatives to manually create hundreds of thousands of pieces of content using disparate data coming from disconnected tools and systems.  .... ' 

Nvidia taps into Israeli innovation to build generative AI cloud supercomputer

Nvidia taps into Israeli innovation to build generative AI cloud supercomputer

Chip giant says Israel-1 supercomputer valued at several hundred million dollars is a ‘major investment’ that will boost next-generation AI workloads


Nvidia's HGX supercomputing platform. (Courtesy)

US gaming and computer graphics giant Nvidia said Monday that it will build Israel's most powerful generative AI cloud supercomputer, called Israel-1, which will be based on a new locally developed high-performance Ethernet platform.

Valued at several hundred million dollars, Israel-1, which Nvidia said would be one of the world’s fastest AI supercomputers, is expected to start early production by the end of 2023.

“AI is the most important technology force in our lifetime,” said Gilad Shainer, Senior Vice President of high performance computing (HPC) and networking at Nvidia. “Israel-1 represents a major investment that will help us drive innovation in Israel and globally.”

AI processes analyze enormous datasets and require both ultra-fast computing performance and massive memory. The rise of generative AI applications and workloads like OpenAI's ChatGPT presents new challenges for networks inside data centers. As a result of these major changes, AI cloud systems need to be trained using huge amounts of data.

Announced at the Computex tech exhibition starting this week, Israel-1 will be based on Nvidia's newly launched Spectrum-X networking platform, a high-performance Ethernet architecture purpose-built for generative AI workloads. Developed in Israel, the platform is tailored to help data centers around the world transition to AI and accelerated computing, using a new class of Ethernet connection built from the ground up for AI. ...

OpenLLaMA is a fully open-source LLM, now ready for business

Brought to my attention, in the-encoder.com.

OpenLLaMA is a fully open-source LLM, now ready for business

OpenLLaMA is an open-source reproduction of Meta’s LLaMA language model and can be used commercially.

Since the unveiling of Meta’s LLaMA family of large language models and the subsequent leak, the development of open-source chatbots has exploded. Models such as Alpaca, Vicuna, and OpenAssistant use Meta’s models as the basis for their various forms of instruction tuning.

However, LLaMA models are licensed for research use only, which prevents commercial use of those models.

OpenLLaMA reproduces Meta’s language models

Alternatives based on other freely available models do not match the quality of Meta's models, as LLaMA follows DeepMind's Chinchilla scaling laws and has been trained on particularly large amounts of data.

Sound Vibrations Can Encode, Process Data Like Quantum Computers

Remarkable ...

Sound Vibrations Can Encode, Process Data Like Quantum Computers

By New Scientist, May 25, 2023

University of Arizona researchers demonstrated that trapping sound in a simple mechanical device can imitate certain properties of quantum computers.

The researchers built an object that could act like a qubit by gluing together three aluminium rods, each over half a meter long, then generating vibrations at one end and detecting them at the other.

They observed that information could be input in the "phi-bits" (localized "chunks" of sound produced in the rods) by tuning the sound, and that the phi-bits could be forced into a superposition (a mixture of their individual states).

The researchers used the system to perform simple computations, as well as producing quantum-like states.

From New Scientist

View Full Article -    May Require Paid Subscription

Can China overtake the US in the AI Marathon?

ChatGPT: Can China overtake the US in the AI marathon?

By Derek Cai & Annabelle Liang in  BBC News

Artificial intelligence has emerged as enough of a concern that it made it onto what was already a packed agenda at the G7 summit at the weekend.

Concerns about AI's harmful impact coincide with the US' attempts to restrict China's access to crucial technology.

For now, the US seems to be ahead in the AI race. And there is already the possibility that current restrictions on semiconductor exports to China could hamper Beijing's technological progress.

But China could catch up, according to analysts, as AI solutions take years to be perfected. Chinese internet companies "are arguably more advanced than US internet companies, depending on how you're measuring advancement," Kendra Schaefer, head of tech policy research at Trivium China tells the BBC.

However, she says China's "ability to manufacture high-end equipment and components is an estimated 10 to 15 years behind global leaders."

The Silicon Valley factor

The US' biggest advantage is Silicon Valley, arguably the world's supreme entrepreneurial hotspot. It is the birthplace of technology giants such as Google, Apple and Intel that have helped shape modern life.

Innovators in the country have been helped by its unique research culture, says Pascale Fung, director of the Center for Artificial Intelligence Research at the Hong Kong University of Science and Technology.

Researchers often spend years working to improve a technology without a product in mind, Ms Fung says.

OpenAI, for example, operated as a non-profit company for years as it researched the Transformers machine learning model, which eventually powered ChatGPT.

"This environment never existed in most Chinese companies. They would build deep learning systems or large language models only after they saw the popularity," she adds. "This is a fundamental challenge to Chinese AI."

US investors have also been supportive of the country's research push. In 2019, Microsoft said it would put $1bn (£810m) into OpenAI.

"AI is one of the most transformative technologies of our time and has the potential to help solve many of our world's most pressing challenges," Microsoft chief executive Satya Nadella said.

China's edge

China, meanwhile, benefits from a larger consumer base. It is the world's second-most populous country, home to roughly 1.4 billion people.

It also has a thriving internet sector, says Edith Yeung, a partner at the Race Capital investment firm.  ... '  (more)

Light-Field Sensor for 3D Scene Construction with Unprecedented Angular Resolution

Light-Field Sensor for 3D Scene Construction with Unprecedented Angular Resolution

By NUS News (Singapore),   May 18, 2023

A large-scale angle-sensing structure comprising nanocrystal phosphors, a key component of the sensor, illuminated under ultraviolet light.

At the core of the novel light-field sensor are inorganic perovskite nanocrystals—compounds that have excellent optoelectronic properties.

National University of Singapore scientists created a three-dimensional (3D) light-field sensor that can reconstruct scenes with ultra-high angular resolution using a novel angle-to-color conversion framework.

The device features an angular measurement range exceeding 80 degrees, high angular resolution that could potentially be finer than 0.015 degrees for smaller sensors, and a spectral response range from 0.002 nanometers to 550 nanometers.

Inorganic perovskite nanocrystals form the heart of the sensor, which can detect 3D light fields across the X-ray to visible light spectrum due to the crystals' controllable nanostructures.

The researchers patterned the crystals onto a transparent thin-film substrate mated to a color charge-coupled device, which transforms incoming optical signals into color-coded output for use in 3D image reconstruction.

Proof-of-concept experiments showed the sensor could accurately reconstruct images of objects 1.5 meters (4.9 feet) away.

From NUS News (Singapore)

View Full Article  

A Boost for the Quantum Internet

 A Boost for the Quantum Internet

Universitat Innsbruck (Austria)

May 23, 2023

Researchers at Austria's University of Innsbruck transmitted quantum information with a quantum repeater node operating at telecommunication networks' standard frequency. The repeater node features two calcium ions contained in an ion trap within an optical resonator, and single-photon conversion to the standard telecom wavelength. The researchers were able to transmit quantum information over a 50-kilometer (31-mile)-long optical fiber, with the quantum repeater positioned halfway between the transmission and reception points. The researchers said they already have calculated the design upgrades that will be required to transfer data across distances of 800 kilometers (nearly 500 miles).  ....'

Demystifying AI

Seems interesting. Agree there is much misunderstanding. More at the link. Will read.

Dataiku Ebook

Demystifying AI

Where to Draw the Line Between Myth and Reality

First things first: Why exactly has AI become such a nebulous term in modern society? 

In this ebook, we answer that question and share the five most common assumptions about AI. Then, we cut through the noise, equipping organizations with the insights they need to avoid falling into the trap of mistaking myth for reality.   .... ' 

Sunday, May 28, 2023

Quantum Computers Compared

Useful; see the link to the full article below. There are 24 processors!

 Scientists at the U.S. Department of Energy (DOE)'s Los Alamos National Laboratory compared leading quantum computers using the Quantum Computing User Program at DOE's Oak Ridge National Laboratory.

The researchers reviewed 24 quantum processors and ranked their measured performance against the numbers published by vendors such as IBM and Quantinuum.

They used quantum volume as a metric, which estimates the degree to which a quantum processor can perform a specific type of random complex quantum circuit.

Outcomes indicated most processors performed close to their promoted quantum volume, but rarely at the top numbers vendors touted.

The researchers found higher quantum performance tended to correspond with more intensive quantum circuit compilation, in which classical programming elements are translated into quantum computer commands.

From Oak Ridge National Laboratory

View Full Article    
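As a rough sketch of how the quantum volume metric works: a processor's quantum volume is 2^n for the largest "square" circuit (n qubits, depth n) whose measured heavy-output probability stays above the 2/3 pass threshold. The snippet below is a simplified toy, not IBM's full benchmarking protocol, and the measurement numbers are invented for illustration.

```python
# Toy illustration of the quantum volume metric (simplified; real
# protocols also require statistical confidence in the 2/3 threshold).

def quantum_volume(heavy_output_probs):
    """heavy_output_probs maps square-circuit size n -> measured
    heavy-output probability for n-qubit, depth-n random circuits."""
    passed = [n for n, p in heavy_output_probs.items() if p > 2 / 3]
    return 2 ** max(passed) if passed else 1

# A hypothetical processor that passes square circuits up to 5 qubits:
measured = {2: 0.85, 3: 0.81, 4: 0.74, 5: 0.69, 6: 0.62}
print(quantum_volume(measured))  # → 32
```

This also shows why "promoted" and measured quantum volume can differ: a small drop in heavy-output probability at the largest circuit size halves the reported figure.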

How Generational Differences Affect Consumer Attitudes Towards Ads

As a big advertiser, this is something we thought about.

Meta Research

How generational differences affect consumer attitudes towards ads

By: Melanie Beer Zeldin

Our research study, in collaboration with CrowdDNA, aims to understand people's relationship with social media ads across different social media platforms.

Advertising has historically been a way for advertisers to position a product or service as the star of an ad and deliver a message to as many people as possible. Technological advances in the past 50 years have empowered consumers and completely changed the way we live, shop and consume content. They changed how social media is used, and how ads are perceived, at mind-boggling speed. Today, ads have a two-way relationship with their audience, empowering consumers to use their purchasing power and voices to both buy into brands and challenge those acting irresponsibly. The focus of ads has moved from products and services to putting people at the heart of a campaign.

GenZ, defined as those born between 1995 and 2010, grew up connected 24/7. Real life and virtual life are fluidly connected without a distinction between them. Relationships are built online and in real life. For this generation, social media fulfills a variety of needs and use cases, but not always with a specific purpose. Living a mixed reality allows for a fluid expression of multiple versions of their identities that mirror different aspects of their personalities.

“It [targeted ads] doesn’t bother me. I’m going to go on social media either way, so it’s better that what I’m seeing is relevant."

GenZ | UK | Social Media User

In contrast, Baby Boomers, defined as those born between 1946 and 1964, grew up without the internet and experienced its evolution. There is a distinct compartmentalization of real life and virtual life. Relationships are built in real life and extended into the online world. Social media is used with a specific purpose and to reconnect with real life relations. This generation has a single identity that is represented on social media and is a more or less accurate reflection of themselves in real life.

These different backgrounds fundamentally affect how both generations experience social media and value advertising on social media. While GenZ considers data usage for ads to be normal, Baby Boomers remember an ad-free internet and are more suspicious. Rather than a cultural shift towards greater privacy concerns among consumers, our research found that there are generational nuances to data tolerance in these markets that informs attitudes about privacy. For GenZ, consenting to data use is the ‘new normal’ and navigating the online world is the norm. For Baby Boomers, privacy concerns have always been present but online data collection and use is a new concept.  ... ' 

Demolishing the Journalism Industry Using AI?

Demolishing the Journalism Industry Using AI?

By Futurism, May 12, 2023

Google's new search interface, built on a model trained on unpaid-for human output, will swallow even more human-made content and spit it back out to information-seekers, while taking clicks away from the publishers generating that content.

Remember back in 2018, when Google removed "don't be evil" from its code of conduct?

It's been living up to that removal lately. At its annual I/O in San Francisco this week, the search giant finally lifted the lid on its vision for AI-integrated search — and that vision, apparently, involves cutting digital publishers off at the knees.

Google's new AI-powered search interface, dubbed "Search Generative Experience," or SGE for short, involves a feature called "AI Snapshot." Basically, it's an enormous top-of-the-page summarization feature. Ask, for example, "why is sourdough bread still so popular?" — one of the examples that Google used in their presentation — and, before you get to the blue links that we're all familiar with, Google will provide you with a large language model (LLM) -generated summary. Or, we guess, snapshot.

"Google's normal search results load almost immediately," The Verge's David Pierce explains. "Above them, a rectangular orange section pulses and glows and shows the phrase 'Generative AI is experimental.' A few seconds later, the glowing is replaced by an AI-generated summary: a few paragraphs detailing how good sourdough tastes, the upsides of its prebiotic abilities, and more."

From Futurism

View Full Article   

More on IBM and AI Today

We worked with IBM for years and saw the kinds of methods they were pressing that connected their work with Watson and game playing, but we never got the impression that those could provide the kind of general corporate data management we were exploring. They seem to be making those moves now.

IBM Consulting recently revealed its Center of Excellence (CoE) for generative AI, aiming to advance artificial intelligence (AI) capabilities and capitalize on the transformative potential of generative AI for business outcomes. Operating in parallel with IBM Consulting’s global AI and Automation practice, the CoE encompasses an extensive network of over 21,000 skilled data and AI consultants who have completed over 40,000 enterprise client engagements.  In VentureBeat

The company stated that the Center of Excellence (CoE)’s primary objectives include enhancing customer experiences, transforming core business processes and facilitating innovative business models.

The Center of Excellence (CoE) will leverage IBM’s expertise in enterprise-grade AI, including the recently announced IBM watsonx and cutting-edge technology from IBM’s esteemed ecosystem of business partners, to actively expedite clients’ business transformations. It will also develop new solutions and assets with clients and partners.

“Our Center of Excellence for generative AI has over 1,000 consultants globally with generative AI expertise who are helping clients drive productivity in IT operations and core business processes like HR or marketing, elevate their customer experiences and create new business models,” Glenn Finch, global managing partner, data and technology transformation at IBM Consulting, told VentureBeat. “It stands alongside IBM Consulting’s existing data and AI practice and will focus on solving client challenges using the full generative AI technology stack, including foundation models and 50+ domain-specific classical machine learning accelerators.”  ... ' 

Saturday, May 27, 2023

Space Logistics

Northrop Grumman advances:

What is SpaceLogistics?

SpaceLogistics, a wholly owned subsidiary of Northrop Grumman, provides cooperative space logistics and in-orbit satellite servicing to geosynchronous satellite operators using its fleet of commercial servicing vehicles—the Mission Extension Vehicle, the Mission Robotic Vehicle and the Mission Extension Pods.

Pioneering a New Market in Space

SpaceLogistics currently provides in-orbit satellite servicing to geosynchronous satellite operators using the Mission Extension Vehicle (MEV)™ which docks with customers’ existing satellites providing the propulsion and attitude control needed to extend their lives. This enables satellite operators to activate new markets, drive asset value and protect their franchises.  ... ' 

Mission Extension Vehicle

The Mission Extension Vehicle-1 (MEV-1), the industry’s first satellite life extension vehicle, completed its first docking to a client satellite, Intelsat IS-901 on February 25, 2020. MEV is designed to dock to geostationary satellites whose fuel is nearly depleted. Once connected to its client satellite, MEV uses its own thrusters and fuel supply to extend the satellite’s lifetime. When the customer no longer desires MEV’s service, the spacecraft will undock and move on to the next client satellite. The second Mission Extension Vehicle (MEV-2) launched August 15, 2020 with the Northrop Grumman-built Galaxy 30 satellite. MEV-2 docked with the Intelsat IS-1002 satellite on April 12, 2021.  ... '

Further Update on Elon Musk's Brain Chip

Neuralink: Why is Elon Musk’s brain chip firm in the news?

By Shiona McCallum  Technology reporter in the BBC

Elon Musk's brain chip firm Neuralink has said that the US Food and Drug Administration (FDA) has approved its first human clinical trial, a critical milestone after earlier struggles to gain approval.

The FDA nod "represents an important first step that will one day allow our technology to help many people," Neuralink said in a tweet.

It did not elaborate on the aims of the study, saying only that it was not recruiting yet, and more details would be available soon.  (... much more ) ... ' 

Europe's First 3D-Printed School Takes Shape in Ukraine

 Europe's First 3D-Printed School Takes Shape in Ukraine

Radio Free Europe/Radio Liberty (Czech Republic)

May 25, 2023

Humanitarian group Team4UA organized the building of Europe's first three-dimensionally (3D)-printed primary school in the western Ukrainian city of Lviv, using technology from Danish 3D-printing construction company COBOD International. The school will combine 3D-printed spaces and manually built sections. Project organizers said one goal is to import several 3D printers and to incorporate the rubble of destroyed buildings into the concrete mix for the school. They hope the school becomes a template for building similar facilities across Ukraine as part of the massive reconstruction effort. The 3D-printed section of the school is scheduled to be completed by early June.  ... ' 

Are ChatGPT's Good at 'Not'?

Are ChatGPT's Good at 'Not'?  In Quanta Magazine

Max G. Levy,    Contributing Writer, May 12, 2023

Nora Kassner suspected her computer wasn’t as smart as people thought. In October 2018, Google released a language model algorithm called BERT, which Kassner, a researcher in the same field, quickly loaded on her laptop. It was Google’s first language model that was self-taught on a massive volume of online data. Like her peers, Kassner was impressed that BERT could complete users’ sentences and answer simple questions. It seemed as if the large language model (LLM) could read text like a human (or better).

But Kassner, at the time a graduate student at Ludwig Maximilian University of Munich, remained skeptical. She felt LLMs should understand what their answers mean — and what they don’t mean. It’s one thing to know that a bird can fly. “A model should automatically also know that the negated statement — ‘a bird cannot fly’ — is false,” she said. But when she and her adviser, Hinrich Schütze, tested BERT and two other LLMs in 2019, they found that the models behaved as if words like “not” were invisible.

Since then, LLMs have skyrocketed in size and ability. “The algorithm itself is still similar to what we had before. But the scale and the performance is really astonishing,” said Ding Zhao, who leads the Safe Artificial Intelligence Lab at Carnegie Mellon University.

But while chatbots have improved their humanlike performances, they still have trouble with negation. They know what it means if a bird can’t fly, but they collapse when confronted with more complicated logic involving words like “not,” which is trivial to a human.

“Large language models work better than any system we have ever had before,” said Pascale Fung, an AI researcher at the Hong Kong University of Science and Technology. “Why do they struggle with something that’s seemingly simple while it’s demonstrating amazing power in other things that we don’t expect it to?” Recent studies have finally started to explain the difficulties, and what programmers can do to get around them. But researchers still don’t understand whether machines will ever truly know the word “no.”


Nora Kassner has tested popular chatbots and found they typically can’t understand the concept of negation.

Courtesy of Nora Kassner

Making Connections

 It’s hard to coax a computer into reading and writing like a human. Machines excel at storing lots of data and blasting through complex calculations, so developers build LLMs as neural networks: statistical models that assess how objects (words, in this case) relate to one another. Each linguistic relationship carries some weight, and that weight — fine-tuned during training — codifies the relationship’s strength. For example, “rat” relates more to “rodent” than “pizza,” even if some rats have been known to enjoy a good slice.

In the same way that your smartphone’s keyboard learns that you follow “good” with “morning,” LLMs sequentially predict the next word in a block of text. The bigger the data set used to train them, the better the predictions, and as the amount of data used to train the models has increased enormously, dozens of emergent behaviors have bubbled up. Chatbots have learned style, syntax and tone, for example, all on their own. “An early problem was that they completely could not detect emotional language at all. And now they can,” said Kathleen Carley, a computer scientist at Carnegie Mellon. Carley uses LLMs for “sentiment analysis,” which is all about extracting emotional language from large data sets — an approach used for things like mining social media for opinions.
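The keyboard-style next-word prediction described above can be sketched with a toy bigram model in Python. This is a drastic simplification of an LLM, and the tiny corpus below is invented purely for illustration:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which, then
# predict the most frequent successor. An LLM does this at vastly
# larger scale, with learned weights instead of raw counts.
corpus = "good morning good evening good morning everyone".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("good"))  # → "morning" (seen twice vs. "evening" once)
```

A real LLM replaces these raw counts with billions of learned weights over long token contexts, but the training objective, predicting the next token, is the same.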

So new models should get the right answers more reliably. “But we’re not applying reasoning,” Carley said. “We’re just applying a kind of mathematical change.” And, unsurprisingly, experts are finding gaps where these models diverge from how humans read.

No Negatives

Unlike humans, LLMs process language by turning it into math. This helps them excel at generating text — by predicting likely combinations of text — but it comes at a cost.

“The problem is that the task of prediction is not equivalent to the task of understanding,” said Allyson Ettinger, a computational linguist at the University of Chicago. Like Kassner, Ettinger tests how language models fare on tasks that seem easy to humans. In 2019, for example, Ettinger tested BERT with diagnostics pulled from experiments designed to test human language ability. The model’s abilities weren’t consistent. For example:

He caught the pass and scored another touchdown. There was nothing he enjoyed more than a good game of ____. (BERT correctly predicted “football.”)

The snow had piled up on the drive so high that they couldn’t get the car out. When Albert woke up, his father handed him a ____. (BERT incorrectly guessed “note,” “letter,” “gun.”)

And when it came to negation, BERT consistently struggled.  ... '
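The way negation can vanish for these models is easy to mimic with a toy bag-of-words sentiment scorer. This is an invented illustration of the failure mode, not Kassner's or Ettinger's actual experiments, and the word weights are made up:

```python
# Toy bag-of-words sentiment scorer with invented weights. Because it
# sums per-word scores and "not" has no weight of its own, negation is
# effectively invisible -- a caricature of the failure described above.
weights = {"good": 1.0, "great": 1.5, "bad": -1.0, "terrible": -1.5}

def sentiment(text):
    return sum(weights.get(word, 0.0) for word in text.lower().split())

print(sentiment("the movie was good"))      # → 1.0
print(sentiment("the movie was not good"))  # → 1.0, negation ignored
```

Modern LLMs are far more sophisticated than a word-count model, yet the studies described above suggest a related blind spot can survive at scale.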

Rodney Brooks Talks about AI

Comments here a little late, but good points.

Just Calm Down About GPT-4 Already. And stop confusing performance with competence, says Rodney Brooks.  By Glenn Zorpette in IEEE Spectrum

Rapid and pivotal advances in technology have a way of unsettling people, because they can reverberate mercilessly, sometimes, through business, employment, and cultural spheres. And so it is with the current shock and awe over large language models, such as GPT-4 from OpenAI.

It’s a textbook example of the mixture of amazement and, especially, anxiety that often accompanies a tech triumph. And we’ve been here many times, says Rodney Brooks. Best known as a robotics researcher, academic, and entrepreneur, Brooks is also an authority on AI: he directed the Computer Science and Artificial Intelligence Laboratory at MIT until 2007, and held faculty positions at Carnegie Mellon and Stanford before that. Brooks, who is now working on his third robotics startup, Robust.AI, has written hundreds of articles and half a dozen books and was featured in the motion picture Fast, Cheap & Out of Control. He is a rare technical leader who has had a stellar career in business and in academia and has still found time to engage with the popular culture through books, popular articles, TED Talks, and other venues.

“It gives an answer with complete confidence, and I sort of believe it. And half the time, it’s completely wrong.”

—Rodney Brooks, Robust.AI

IEEE Spectrum caught up with Brooks at the recent Vision, Innovation, and Challenges Summit, where he was being honored with the 2023 IEEE Founders Medal. He spoke about this moment in AI, which he doesn’t regard with as much apprehension as some of his peers, and about his latest startup, which is working on robots for medium-size warehouses.

Rodney Brooks on…

Will GPT-4 and other large language models lead to an artificial general intelligence in the foreseeable future?

Will companies marketing large language models ever justify the enormous valuations some of these companies are now enjoying?

When are we going to have full (level-5) self-driving cars?

What are the most attractive opportunities now in warehouse robotics?

You wrote a famous article in 2017, “The Seven Deadly Sins of AI Prediction.” You said then that you wanted an artificial general intelligence to exist—in fact, you said it had always been your personal motivation for working in robotics and AI. But you also said that AGI research wasn’t doing very well at that time at solving the basic problems that had remained intractable for 50 years. My impression now is that you do not think the emergence of GPT-4 and other large language models means that an AGI will be possible within a decade or so.

Rodney Brooks: You’re exactly right. And by the way, GPT-3.5 guessed right—I asked it about me, and it said I was a skeptic about it. But that doesn’t make it an AGI.

The large language models are a little surprising. I’ll give you that. And I think what they say, interestingly, is how much of our language is very much rote, R-O-T-E, rather than generated directly, because it can be collapsed down to this set of parameters. But in that “Seven Deadly Sins” article, I said that one of the deadly sins was how we humans mistake performance for competence.

If I can just expand on that a little. When we see a person with some level of performance at some intellectual thing, like describing what’s in a picture, for instance, from that performance, we can generalize about their competence in the area they’re talking about. And we’re really good at that. Evolutionarily, it’s something that we ought to be able to do. We see a person do something, and we know what else they can do, and we can make a judgement quickly. But our models for generalizing from a performance to a competence don’t apply to AI systems.

The example I used at the time was, I think it was a Google program labeling an image of people playing Frisbee in the park. And if a person says, “Oh, that’s a person playing Frisbee in the park,” you would assume you could ask him a question, like, “Can you eat a Frisbee?” And they would know, of course not; it’s made of plastic. You’d just expect they’d have that competence. That they would know the answer to the question, “Can you play Frisbee in a snowstorm? Or, how far can a person throw a Frisbee? Can they throw it 10 miles? Can they only throw it 10 centimeters?” You’d expect all that competence from that one piece of performance: a person saying, “That’s a picture of people playing Frisbee in the park.” .... ' 

Friday, May 26, 2023

Neuralink, Elon Musk's Brain Implant is Approved for Clinical Trials

 Had not followed this before, will continue ...

Neuralink says it has the FDA’s OK to start clinical trials

Company isn't enrolling patients yet, but it has cleared a major hurdle.

 - John Timmer  in   Arstechnica

In December 2022, founder Elon Musk gave an update on his other, other company, the brain implant startup Neuralink. As early as 2020, the company had been saying it was close to starting clinical trials of the implants, but the December update suggested those were still six months away. This time, it seems that the company was correct, as it now claims that the Food and Drug Administration (FDA) has given its approval for the start of human testing.

Neuralink is not ready to start recruiting test subjects, and there are no details about what the trials will entail. Searching the ClinicalTrials.gov database for "Neuralink" also turns up nothing. Typically, the initial trials are small and focused entirely on safety rather than effectiveness. Given that Neuralink is developing both brain implants and a surgical robot to do the implanting, there will be a lot that needs testing.

It's likely that these will focus on the implants first, given that other implants have already been tested in humans, whereas an equivalent surgical robot has not.

The news is undoubtedly a relief for both the staff of the company and its owner Musk, given that Neuralink has had several negative interactions with federal regulators of late. It's a bad sign when having an earlier bid to start clinical trials rejected by the FDA was the least of the company's problems. The company has also been accused of being abusive toward its research animals and violating transportation rules by shipping implants contaminated with monkey tissue and pathogens.

Typically, when the FDA rejects an application for clinical trials, it is willing to communicate in detail why it found the plan for trials insufficient. It's a positive sign for Neuralink that the company was able to address the concerns of federal regulators in a relatively short period. ... '

Ethereum Closes Security Hole with Energy-Saving Update

Interesting example of a security problem.

Ethereum Closes Security Hole with Energy-Saving Update

By New Scientist, May 26, 2023.

Running an Ethereum node allows a user to create transactions and broadcast them across the network without relying on a third party.

An update rolled out by the Ethereum cryptocurrency reduced the energy needed to produce it by 99.99% by transitioning from "proof of work" to "proof of stake," and also fixed a security flaw in the Go Ethereum software used to run Ethereum nodes.

Massimiliano Taverna at ETH Zurich in Switzerland explained that combining the attacks would have reduced the required computing resources to launch the attacks to only 5 graphics processing units.

Ethereum Classic developers patched the vulnerability after being notified by the researchers, but the researchers said the Ethereum POW cryptocurrency has not been updated.

From New Scientist

May Require Paid Subscription    

AI Catalyzes Gene Activation Research, Uncovers Rare DNA Sequences

More amazement via AI. 

AI Catalyzes Gene Activation Research, Uncovers Rare DNA Sequences

By UC San Diego Today,  May 25, 2023

Investigating DNA sequences.

Using machine learning, the researchers discovered the downstream core promoter region (DPR), a “gateway” DNA activation code that’s involved in the operation of up to a third of our genes.

The University of California, San Diego (UCSD)'s James T. Kadonaga and colleagues have used artificial intelligence (AI) to advance gene activation research by identifying "synthetic extreme" DNA sequences.

The researchers trained machine learning models on 200,000 established DNA sequences, then tested 50 million DNA sequences with the models to compare downstream core promoter region (DPR) gene activation elements in humans and fruit flies, exposing custom-tailored DPR sequences specific to both species.

Said Kadonaga, "There are countless practical applications of this AI-based approach. The synthetic extreme DNA sequences might be very rare, perhaps one-in-a-million—if they exist they could be found by using AI."

From UC San Diego Today

View Full Article  

Expeditionary Cyberspace Operations

Good piece in Schneier: 

Expeditionary Cyberspace Operations   

Cyberspace operations now officially has    a physical dimension, meaning that the United States has official military doctrine about cyberattacks that also involve an actual human gaining physical access to a piece of computing infrastructure.  ... '

Return to the Office Has Stalled

Clearly the case, the future?

The Return to the Office Has Stalled

By The Wall Street Journal May 23, 2023

Hybrid work has become the norm for many employers.

When average city office-occupancy rates at the start of the year surpassed 50% for the first time since the pandemic, many viewed the milestone as a sign that employees were resuming their former work habits.

But those office-usage rates have barely budged as most companies have settled into a hybrid work strategy that shows little sign of fading.

About 58% of companies allow employees to work a portion of their week from home, according to Scoop Technologies. The number of companies that require employees to be in the office full time has actually declined to 42%, from 49% three months ago, Scoop said.

"Employees are saying we are going to push really, really hard against being required to be in the office five days a week," said Robert Sadow, Scoop's chief executive and co-founder. "Most companies in the current labor market have been reluctant to push [back] that hard."

From The Wall Street Journal  

Google Begins Opening Access to Generative AI in Search

More from the recent conference

Google begins opening access to generative AI in search

If you signed up for the Search Labs waitlist, keep an eye on your inbox.

Will Shanklin|May 25, 2023 1:55 PM   in Engadget

Google’s take on AI-powered search begins rolling out today. The company announced this morning that it’s opening access to Google Search Generative Experience (SGE) and other Search Labs in the US. If you haven’t already, you’ll need to sign up for the waitlist and sit tight until you get an email announcing it’s your turn.

Revealed at Google I/O 2023 earlier this month, Google SGE is the company’s infusion of conversational AI into the classic search experience. If you’ve played with Bing AI, expect a familiar — yet different — product. Cherlynn Low noted in Engadget’s SGE preview that Google’s AI-powered search uses the same input bar you’re used to rather than a separate chatbot field like in Bing. Next, the generative AI results will appear in a shaded section below the search bar (and sponsored results) but above the standard web results. Meanwhile, on the top right of the AI results is a button letting you expand the snapshot, and it adds cards showing the sourced articles. Finally, you can ask follow-up questions by tapping a button below the results.

Google describes the snapshot as “key information to consider, with links to dig deeper.” Think of it like a slice of Bard injected (somewhat) seamlessly into the Google search you already know.

In addition, Google is opening access to other Search Labs, including Code Tips and Add to Sheets (both are US-only for now). Code Tips “harnesses the power of large language models to provide pointers for writing code faster and smarter.” It lets aspiring developers ask how-to questions about programming languages (C, C++, Go, Java, JavaScript, Kotlin, Python and TypeScript), tools (Docker, Git, shells) and algorithms. Meanwhile, as its name suggests, Add to Sheets lets you insert search results directly into Google’s spreadsheet app. Tapping a Sheets icon to the left of a search result will pop up a list of your recent documents; choose one to which you want to attach the result.  . ... '

Securing IOT Sensors

Consider how many IoT devices you have.

Standards to Secure the Sensors That Power IoT

By Logan Kugler

Communications of the ACM, June 2023, Vol. 66 No. 6, Pages 14-16, 10.1145/3591215

The use of Internet of Things (IoT) sensors has exploded in popularity in recent years as cheap, effective IoT sensors make it possible to connect devices that do everything from regulating smart home features to monitoring health and fitness using wearable devices.

IoT sensors also are increasingly making their way into business use-cases. In the industrial IoT, sensors are used in many different contexts, including to control and monitor machinery and to regulate core infrastructure systems.

IoT device and sensor usage has accelerated even more with advances in 5G connectivity and the shift to remote work, says Willi Nelson, chief information security officer for Operational Technologies at Fortinet, a cybersecurity firm. In fact, the number of IoT devices in use is projected to nearly triple to 29 billion in 2030 compared to 9.7 billion today, according to data from Statista.

Yet as IoT adoption increases, IoT sensors and devices also are becoming more popular targets for cybercriminals.

"They remain a prime target of cybercriminals as a fast path to gain access to enterprise networks," says Nelson. Fortinet found 93% of companies using IoT sensors in some capacity had one or more cybersecurity intrusions in the past year. A full 78% had experienced three or more, and these attacks increasingly are targeting industrial IoT operations, too.

That is because IoT is a fundamentally different technology than existing systems—a technology with plenty of attack surfaces. Each sensor and device connected to an IoT network presents a possible security risk, opening up an attack vector into an individual or company's hardware, software, and/or data.

In theory, IoT security standards are supposed to mitigate cybersecurity risks by encouraging companies to follow best security practices when designing and deploying IoT sensors and devices.

However, in practice, the standards available to manufacturers and companies using IoT technology do not always offer sufficient protection, are not always designed specifically for IoT, and are not always followed.

Despite the vulnerability of IoT devices, quite shockingly, there is no single standard for IoT security.

No Standard Set of Standards

IoT sensors carry a variety of unique risks because they are connected to larger sensitive networks. Medical IoT devices handle sensitive and often legally protected patient and hospital data. Industrial IoT sensors connect to other critical manufacturing equipment. IoT sensors in energy offer a gateway into critical private and public power infrastructures.

One prominent example is the damage caused by the malware named "Mirai" in 2016. Mirai infected computers and devices, which in turn targeted IoT devices and sensors. Once infected, IoT devices were used to temporarily take down many popular websites, including Twitter, Netflix, and Airbnb.  ... ' 

Thursday, May 25, 2023

Mojo Lang, New programming language

New to me, a superset of Python 

Mojo Lang: The New Programming Language

Introducing Mojo Lang, the new programming language designed as a superset of Python.

By Nisha Arya, KDnuggets on May 12, 2023 in Programming

Just when we thought things couldn’t shake up the tech industry anymore, welcome the new programming language that has been designed as a superset of the Python programming language.

Python still ranks high as one of the most popular programming languages due to its ability to create complex applications using simple and readable syntax. However, if you use Python, you know its biggest challenge is speed. Speed is an important element of programming, therefore does Python's great ability to produce complex applications with easy syntax dismiss its lack of speed? Unfortunately no. 

There are other programming languages such as C or C++, which have incredible speed, and higher performance in comparison to Python. Although Python is the most widely used programming language for AI, if speed is what you’re looking for, the majority of people stick with C, Rust, or C++.

But that may all change, with the new programming language Mojo Lang.

What is Mojo Lang?

Mojo Lang was created by Chris Lattner, the creator of the Swift programming language and the LLVM Compiler Infrastructure. He has taken the usability of Python and merged it with the performance of the C programming language, unlocking a new level of programming for AI developers, with unparalleled programmability of AI hardware and extensibility of AI models.

In comparison to Python, PyPy is 22x faster, scalar C++ is 5,000x faster, and Mojo Lang is up to 35,000x faster.

Mojo Lang is a language that has been designed to program on AI hardware, such as GPUs running CUDA. It is able to achieve this by using Multi-Level Intermediate Representation (MLIR) to scale hardware types, without complexity. 

Mojo Lang is a superset of Python, which means that it does not require you to learn a new programming language. Handy, right? The base language is fully compatible with Python and allows you to interact with the Python ecosystem and make use of libraries such as NumPy.   ... '
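The superset claim means ordinary Python code — including NumPy calls — is intended to run under Mojo as-is, with speedups coming from optionally rewriting hot paths using Mojo's typed constructs. The snippet below is plain Python (and therefore, per the article's compatibility claim, also valid input for Mojo's Python-compatible mode); it is an illustration of the interop story, not Mojo-specific code:

```python
import numpy as np

def moving_average(values, window):
    """Plain Python + NumPy. Under Mojo, code like this should run
    unchanged, while a performance-critical loop could later be
    rewritten with Mojo's typed fn/struct constructs for speed."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="valid")

result = moving_average([1.0, 2.0, 3.0, 4.0], 2)
```

`result` here is `[1.5, 2.5, 3.5]` — the point being that the Python ecosystem (NumPy included) remains available rather than requiring a rewrite.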

ChatGPT Plugin Connects to Blockchain

Not expected. Actually, I gave a talk last month that mentioned both.

Solana Labs integrates ChatGPT plugin to connect blockchain to AI    BY KYT DOTSON

Solana Labs, the developer of the Solana blockchain, announced Tuesday that it has released a ChatGPT plugin that connects its blockchain technology with artificial intelligence that will allow users to query information from the network.

ChatGPT is an artificial intelligence chatbot from OpenAI LP that can answer questions from users in conversational language. The company made it possible for developers to extend it with plugins in March, which makes it possible for the bot to pull data from external sources.

With this announcement, Solana has become the first Layer 1 blockchain to integrate AI capabilities. Using the plugin, users will be able to query information directly from the blockchain using conversational prompts in order to understand Solana data and protocols. That’s aimed at enabling developers and amateurs to get a better understanding of information about the computing infrastructure, underlying data and decentralized finance projects.

“Every developer building consumer-oriented apps should be thinking about how their app is going to be interacted with through an AI model because this is a new paradigm for telling computers what to do,” Anatoly Yakovenko, co-founder and chief executive of Solana Labs, said in an interview published on Solana’s website.

Yakovenko added that by using AI, people can almost feel like they’re talking to another human, which can completely change how they digest information. “AI will make Solana more usable and understandable,” he said.

The plugin was initially teased by Solana in late April, noting that it would be an open-source implementation allowing users to manipulate data on the blockchain. It currently connects directly to a Solana blockchain node and users can buy nonfungible tokens, transfer tokens, examine transactions on the chain, examine account data and look at NFT prices.
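Since the plugin "connects directly to a Solana blockchain node," its backend presumably speaks Solana's JSON-RPC protocol. As a hedged sketch: `getBalance` is a standard Solana JSON-RPC method, but the payload-building helper and the placeholder address below are illustrative, not taken from the plugin's actual source:

```python
import json

def build_solana_rpc_request(method, params, request_id=1):
    """Construct a JSON-RPC 2.0 payload of the kind a plugin backend
    would POST to a Solana node's RPC endpoint."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# getBalance is a real Solana JSON-RPC method; the address is a
# placeholder, not an actual account.
payload = build_solana_rpc_request("getBalance", ["ExampleAddress111"])
decoded = json.loads(payload)
```

An AI layer like the plugin would translate a conversational question ("what's the balance of this account?") into a call like this and render the node's response back as prose.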

Nonfungible tokens are a type of crypto asset built on blockchain technology that provides provable ownership of digital items, such as artwork and video game items. They are cryptographic tokens that can be bought, sold and traded. NFTs are often part of collections, which people often buy, sell and trade similar to collectible trading cards.

Right now, the AI can describe objects on-chain, such as if it’s part of an NFT set, but it won’t yet automate the creation of NFT collections. Yakovenko said a future goal is to add the ability to write to the blockchain, but the plugin is not there yet. Developers interested in taking the plugin for a spin can grab its current iteration from the GitHub repository.

One of Solana’s claims to value in the blockchain space is that it is a high-speed, low-fee blockchain that makes it easy to deploy and use NFTs on. With the addition of an AI interface for users to talk to and understand the network, Yakovenko hopes, it will attract more developers.   .... ' 

Google AI can now Answer your Questions about Uncaptioned Images

Google AI can now answer your questions about uncaptioned images

Google Maps will also display wheelchair-accessible places.

Jon Fingas|@jonfingas|May 18, 2023 11:38 AM in engadget

Google's latest accessibility features include a potentially clever use of AI. The company is updating its Lookout app for Android with an "image question and answer" feature that uses DeepMind-developed AI to elaborate on descriptions of images with no captions or alt text. If the app sees a dog, for example, you can ask (via typing or voice) if that pup is playful. Google is inviting a handful of people with blindness and low vision to test the feature, with plans to expand the audience "soon."

It will also be easier to get around town if you use a wheelchair — or a stroller, for that matter. Google Maps is expanding wheelchair-accessible labels to everyone, so you'll know if there's a step-free entrance before you show up. If a location doesn't have a friendly entrance, you'll see an alert as well as details for other accommodations (such as wheelchair-ready seating) to help you decide whether or not a place is worth the journey.

Google Maps with wheelchair-accessible place label

A handful of minor updates could still be helpful. Live Caption for calls lets you type back responses that are read aloud to recipients. Chrome on desktop (soon for mobile) now spots URL typos and suggests alternatives. As announced, Wear OS 4 will include faster and more consistent text-to-speech when it arrives later in the year.
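Chrome's actual typo-suggestion heuristics aren't described in the article, but the classic approach to this kind of feature is edit distance: suggest a known hostname within a small Levenshtein distance of what was typed. The sketch below is that textbook technique, not Chrome's implementation, and the host list is made up:

```python
def edit_distance(a, b):
    """Levenshtein distance via the standard dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suggest(typed, known_hosts, max_dist=2):
    """Suggest the closest known hostname within a small edit distance."""
    best = min(known_hosts, key=lambda h: edit_distance(typed, h))
    return best if edit_distance(typed, best) <= max_dist else None

hosts = ["google.com", "engadget.com", "example.org"]
correction = suggest("goggle.com", hosts)  # one substitution away
```

Here `correction` is `"google.com"`; a hostname nothing like the known set falls outside `max_dist` and yields no suggestion, which is roughly the behavior the Chrome feature describes.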

Google has been pushing hard on AI in recent months, and launched a deluge of features at I/O 2023. The Lookout upgrade might be one of the most useful, though. While AI descriptions are helpful, the Q&A feature can provide details that would normally require another human's input. That could boost independence for people with vision issues. ... ' 

Adobe to Integrate AI into Photoshop

 Having used Adobe capabilities I think this could be a very big thing. 

Adobe to Integrate AI into Photoshop amid Fears of Job Losses, Mass Faking of Images

By The Guardian (U.K.)  May 24, 2023

Adobe is integrating its generative AI product, Firefly, into Photoshop, its flagship image editing software. 

A potential response when AI-enhanced Adobe Photoshop is asked for an image of a “long-haired dachshund with long flowing rainbow hair.”

Software giant Adobe has announced it will integrate generative AI into its widely used Photoshop program, while downplaying fears the move will lead to job losses and mass fakes.

The brand most associated with image editing will incorporate the generative AI product Adobe Firefly, which launched as a beta six weeks ago, creating a tool the company says will become a "co-pilot" to graphic design rather than a replacement for humans.

Using the "generative fill" feature, Photoshop users will be able to add to, expand or remove unwanted items from images using a text prompt similar to those used by Dall-E and Midjourney, such as "long haired dachshund with long flowing rainbow hair."

From The Guardian (U.K.)  Full Article

AI, Neural Nets for Boundary Design.

Applications here include agriculture, as well as archaeology and even city design and redesign.


April 29, 2023 

Using Neural Networks for Field Boundaries Detection

Today, agriculture faces many challenges, many of which are related to climate change. At the same time, the industry has a severe impact on the climate and the environment, as it is a source of pollution and greenhouse gas emissions. 

Neural Networks Boundaries Detection

Innovative technologies, including geospatial data analytics are helping to develop solutions that reduce negative impacts on wildlife and biodiversity. Defining field boundaries is one such solution. The decision is intended to protect uncultivated land from agricultural inputs, including pesticides and fertilizers.

The technological basis for automatically determining field boundaries is AI and machine learning; one of the most critical elements is artificial neural networks. They are designed in the likeness of the structure of the human brain, and the range of their application in agriculture and other industries is quite broad.

This technology is used to predict yields, track diseases and pests, control weeds, and more. AI helps in optimizing many farm processes, decision-making and management. Special software is required to implement machine learning methods on a farm because a massive amount of data should be processed. The development of digital technologies and precision farming leads to more and more growers turning to tools based on artificial intelligence.

Deep Learning with Artificial Neural Networks

Machine learning aims to allow machines to learn from data and extract information without being explicitly programmed. Such algorithms can analyse and interpret large amounts of data. Deep learning algorithms (a field of machine learning) are complex and are applied as practical tools for image recognition. The most popular of these algorithms are convolutional neural networks.

In the geospatial data analytics market, companies use AI and machine learning to develop their products and create valuable features. EOS Data Analytics provides AI-powered satellite imagery analytics and uses innovations to build its software products. EOSDA Crop Monitoring is a precision farming platform that helps growers to make data-based decisions, enhance farm management and decrease the negative impact of agriculture on the environment.

EOSDA solutions also help clients solve various prediction and classification problems due to the ability of the neural network to detect patterns. Data scientists train neural networks on large sets of images to recognize and distinguish between objects on the Earth’s surface. It enables the classification of crop types, the study of land cover, and obtaining information about soil and vegetation health from satellite imagery.

Main Stages in the Building Process of a Neural Network

The development of a neural network consists of three main stages. The first step is to create an image database to train the network. Data collection is the fundamental step in machine learning: only with proper preparation will the process run smoothly and the subsequent steps be effective. It is vital to have many high-quality images representing what the network will observe during its application. Like geospatial data analysis in agriculture, this complex process brings many benefits.

Next, you choose the network architecture; models with proven effectiveness already exist and are used as a basis for further development. The third step is network training, which determines the network's specialization and, accordingly, the tasks it will perform. These three stages are the most time-consuming.    ... 
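The core operation those convolutional networks repeat is 2D convolution: sliding a small filter over an image raster. A trained boundary-detection network learns its filters from labeled satellite imagery, but the mechanics can be shown with a hand-written edge kernel on a toy "field" raster — everything here is illustrative, not EOSDA's model:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core op of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# A tiny 'field' raster: left half cultivated (1), right half not (0).
field = np.array([[1, 1, 0, 0]] * 4, dtype=float)

# Hand-written vertical-edge kernel; a real CNN *learns* such filters
# from labeled satellite imagery instead of having them hard-coded.
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])

response = convolve2d(field, edge_kernel)
```

The response peaks exactly at the column where cultivated land meets uncultivated land — the toy analogue of a detected field boundary.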

Europe Takes Regulatory Aim at ChatGPT


Europe takes aim at ChatGPT with what might soon be the West’s first A.I. law. Here’s what it means

Ryan Browne

A key committee of lawmakers in the European Parliament has approved a first-of-its-kind artificial intelligence regulation, bringing it closer to becoming law.

The approval marks a landmark development in the race among authorities to get a handle on AI, which is evolving with breakneck speed. The law, known as the European AI Act, is the first law for AI systems in the West. China has already developed draft rules designed to manage how companies develop generative AI products like ChatGPT.

The law takes a risk-based approach to regulating AI, where the obligations for a system are proportionate to the level of risk that it poses.

The rules also specify requirements for providers of so-called “foundation models” such as ChatGPT, which have become a key concern for regulators, given how advanced they’re becoming and fears that even skilled workers will be displaced.

What do the rules say?

The AI Act categorizes applications of AI into four levels of risk: unacceptable risk, high risk, limited risk and minimal or no risk.

Unacceptable risk applications are banned by default and cannot be deployed in the bloc.

They include:

AI systems using subliminal techniques, or manipulative or deceptive techniques to distort behavior

AI systems exploiting vulnerabilities of individuals or specific groups

Biometric categorization systems based on sensitive attributes or characteristics

AI systems used for social scoring or evaluating trustworthiness

AI systems used for risk assessments predicting criminal or administrative offenses

AI systems creating or expanding facial recognition databases through untargeted scraping

AI systems inferring emotions in law enforcement, border management, the workplace, and education
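The risk-based structure above is essentially a lookup from use case to tier, with obligations scaling by tier. The toy sketch below illustrates that shape only; the tier assignments and obligation strings are simplified illustrations, not the Act's legal text:

```python
# Illustrative mapping from example uses to the AI Act's four risk tiers.
# These assignments are simplified for illustration, not legal classifications.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "untargeted_face_scraping": "unacceptable",
    "medical_diagnosis": "high",
    "chatbot": "limited",
    "spam_filter": "minimal",
}

def obligations(use_case):
    """Obligations scale with risk: unacceptable uses are banned outright."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    rules = {
        "unacceptable": "banned in the EU",
        "high": "conformity assessment required",
        "limited": "transparency duties",
        "minimal": "no additional obligations",
    }
    return tier, rules.get(tier, "assess before deployment")

result = obligations("social_scoring")
```

Here `result` is `("unacceptable", "banned in the EU")`, mirroring how the listed practices are banned by default in the bloc.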

Several lawmakers had called for making the measures more expansive to ensure they cover ChatGPT.

To that end, requirements have been imposed on “foundation models,” such as large language models and generative AI.

Developers of foundation models will be required to apply safety checks, data governance measures and risk mitigations before making their models public.

They will also be required to ensure that the training data used to inform their systems do not violate copyright law.

“The providers of such AI models would be required to take measures to assess and mitigate risks to fundamental rights, health and safety and the environment, democracy and rule of law,” Ceyhun Pehlivan, counsel at Linklaters and co-lead of the law firm’s telecommunications, media and technology and IP practice group in Madrid, told CNBC.

“They would also be subject to data governance requirements, such as examining the suitability of the data sources and possible biases.”

It’s important to stress that, while the law has been approved by lawmakers in the European Parliament, it is still a long way from becoming law.

Why now?

Privately held companies have been left to develop AI technology at breakneck speed, giving rise to systems like Microsoft-backed OpenAI’s ChatGPT and Google’s Bard.

Google on Wednesday announced a slew of new AI updates, including an advanced language model called PaLM 2, which the company says outperforms other leading systems on some tasks.

Novel AI chatbots like ChatGPT have enthralled many technologists and academics with their ability to produce humanlike responses to user prompts powered by large language models trained on massive amounts of data.

But AI technology has been around for years and is integrated into more applications and systems than you might think. It determines what viral videos or food pictures you see on your TikTok or Instagram feed, for example.

The aim of the EU proposals is to provide some rules of the road for AI companies and organizations using AI.

Metaverse education blossoms in South Korea, Japan, Taiwan

Still wondering the ultimate value of the Meta.   Have seen simple examples, but not the strongest. 

Metaverse education blossoms in South Korea, Japan, Taiwan

Teaching taps VR even as emerging tech's mass adoption remains in doubt

South Korea's Pohang University of Science and Technology wants to become a "metaversity" with digital classrooms.   © Photo courtesy of Pohang University of Science and Technology

DYLAN LOH, Nikkei staff writer

May 8, 2023 11:29 JST

SINGAPORE -- Educators in Asia are dipping their toes into the metaverse, the much-hyped virtual reality where humans can interact socially in cyberspace, even as emerging technology in this space grapples with finding its place in the real world.

From South Korea to Taiwan, schools and other organizations are tapping the metaverse as a tool for instruction, experimenting with VR applications to bring teaching beyond the classroom and devise new ways of imparting knowledge and skill.

Pohang University of Science and Technology in South Korea is working to become a "metaversity" where classrooms are digitalized into the metaverse, offering training courses in cyberspace.

The university, known as POSTECH, serves 1,400 undergraduate and 2,500 graduate students who work with 450 faculty members and 820 researchers in fields like energy, materials, basic science, information communications technology and health.

"Virtual reality technology can be applied in fields that are difficult to access in reality, such as outer space and the nanoworld," Moo Hwan Kim, the university's president, told Nikkei Asia. "In the long run, it will be able to replace classes that require more hands-on experiences or training in dangerous environments."

POSTECH said it invests $300,000 a year to buy equipment and develop educational programs for students and has pooled $500,000 to build classrooms that tap the metaverse.

In Japan, at N and S high schools, the largest online high schools in the country, 7,000 students learn through VR headsets.

Director Riichiro Sono told Nikkei that the organization took to the metaverse to conduct lessons without physical constraints while providing an immersive environment for individual learning.

The schools surveyed the VR participants last year and found a satisfaction rate of 98.5%, he said, but noted that "it can take time for users to get accustomed to a VR environment" and that "the additional weight of a VR headset can be a deterrent" for some users.  ... '