
Friday, June 16, 2023

A Critical Look at AI-Generated Software

Considerable article; excerpt and more below. Good cautions.

A Critical Look at AI-Generated Software

AI-Generated Code examined

IEEE Spectrum, by Jaideep Vaidya / June 11, 2023

Most recently, OpenAI launched ChatGPT, a large-language-model chatbot that is capable of writing code with a little prompting in a conversational manner.
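As a rough illustration (mine, not the article's), the snippet below shows one way a developer might send that kind of conversational prompt through OpenAI's Python client, using the pre-1.0 ChatCompletion interface that was current in 2023; the model name, API key, and prompt are placeholders.

import openai  # official OpenAI Python client, pre-1.0 interface (circa 2023)

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# Ask the chat model to write code from a plain-English request,
# much as one would when prompting ChatGPT conversationally.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model choice, for illustration only
    messages=[
        {"role": "user",
         "content": "Write a Python function that checks whether a string is a palindrome."},
    ],
)

# The generated code comes back as ordinary text in the first choice.
print(response["choices"][0]["message"]["content"])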

In many ways, we live in the world of The Matrix. If Neo were to help us peel back the layers, we would find code all around us. Indeed, modern society runs on code: Whether you buy something online or in a store, check out a book at the library, fill a prescription, file your taxes, or drive your car, you are most probably interacting with a system that is powered by software.

And the ubiquity, scale, and complexity of all that code just keep increasing, with billions of lines of code being written every year. The programmers who hammer out that code tend to be overburdened, and their first attempt at constructing the needed software is almost always fragile or buggy—and so is their second and sometimes even the final version. It may fail unexpectedly, have unanticipated consequences, or be vulnerable to attack, sometimes resulting in immense damage.

Consider just a few of the more well-known software failures of the past two decades. In 2005, faulty software for the US $176 million baggage-handling system at Denver International Airport forced the whole thing to be scrapped. A software bug in the trading system of the Nasdaq stock exchange caused it to halt trading for several hours in 2013, at an economic cost that is impossible to calculate. And in 2019, a software flaw was discovered in an insulin pump that could allow hackers to remotely control it and deliver incorrect insulin doses to patients. Thankfully, nobody actually suffered such a fate.

These incidents made headlines, but they aren’t just rare exceptions. Software failures are all too common, as are security vulnerabilities. Veracode’s most recent survey on software security, covering the last 12 months, found that about three-quarters of the applications examined contained at least one security flaw, and nearly one-fifth had at least one flaw regarded as being of high severity.

What can be done to avoid such pitfalls and more generally to prevent software from failing? An influential 2005 article in IEEE Spectrum identified several factors, which are still quite relevant. Testing and debugging remain the bread and butter of software reliability and maintenance. Tools such as functional programming, code review, and formal methods can also help to eliminate bugs at the source. Alas, none of these methods has proven absolutely effective, and in any case they are not used consistently. So problems continue to mount.
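To make the testing point concrete, here is a small sketch (not from the article) of the kind of unit test that catches a boundary bug before it ships; the function and the bug are invented for illustration.

# A deliberately buggy helper and the unit test that exposes it.
def days_in_february(year):
    # Bug: ignores the century rule, so 1900 is wrongly treated as a leap year.
    return 29 if year % 4 == 0 else 28

def test_days_in_february():
    assert days_in_february(2024) == 29  # ordinary leap year
    assert days_in_february(2023) == 28  # non-leap year
    assert days_in_february(1900) == 28  # fails here, revealing the missing century rule

if __name__ == "__main__":
    test_days_in_february()  # run directly, or collect it with a test runner such as pytest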

Meanwhile, the ongoing AI revolution promises to revamp software development, making it far easier for people to program, debug, and maintain code. GitHub Copilot, built on top of OpenAI Codex, a system that translates natural language to code, can make code recommendations in different programming languages based on the appropriate prompts. And this is not the only such system: Amazon CodeWhisperer, CodeGeeX, GPT-Code-Clippy, Replit Ghostwriter, and Tabnine, among others, also provide AI-powered coding and code completion [see "Robo-Helpers," below].
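As a sketch of how such assistants are typically used (an illustration, not taken from the article): the developer writes only a natural-language comment and a function signature, and the tool proposes the body, along these lines.

# Developer writes the comment and signature; the assistant suggests the rest.

# Return the n-th Fibonacci number, computed iteratively.
def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(10))  # 55 -- the kind of completion a coding assistant might offer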

Bad Programming Advice from ChatGPT ...
