Showing posts with label Coding.

Sunday, June 04, 2023

AI Can Rewrite Code

Recently I have heard some very good coders say no, this can't happen professionally anytime soon. True?

AI Rewrites Coding  By Samuel Greengard

Communications of the ACM, April 2023, Vol. 66 No. 4, Pages 12-14   10.1145/3583083

Computer code intersects with almost every aspect of modern life. It runs factories, controls transportation networks, and defines the way we interact with personal devices. It is estimated that somewhere in the neighborhood of 2.8 trillion lines of code have been written over the last two decades alone.

Yet it is easy to overlook a basic fact: people have to write software—and that is often a long, tedious, and error-prone process. Although low-code and no-code environments have simplified things—and even allowed non-data scientists to build software through drag-and-drop interfaces—they still require considerable time and effort.

Enter artificial intelligence (AI). Over the last several years, various systems and frameworks have appeared that can automate code generation. For example, Amazon has developed CodeWhisperer, a coding assistant tool that automates coding in Python, Java, and JavaScript. GitHub's Copilot autogenerates code through natural language, and IBM's Project Wisdom is focused on building a framework that allows computers to program computers.

"As software becomes more complex and moves into the realm of non-developers and non-data scientists, there's a need for systems that can simplify and automate coding tasks," says Ruben Martins, an assistant research professor at Carnegie Mellon University. Adds Abraham Parangi, co-founder and CEO of Akkio, a firm that offers AI-assisted coding tools, "People have been working on these tools for many years. Suddenly, the trajectory is going vertical."

Although it is unlikely AI will eliminate jobs for developers anytime soon, it is poised to revolutionize the way software is created. For instance, OpenAI has introduced DALL-E 2, a tool that generates photorealistic images and art through natural language. In addition, the OpenAI Codex builds software in more than a dozen programming languages, including Python, Perl, Ruby, and PHP.

Observes Ruchir Puri, chief scientist for IBM Research, "The ability for computers to write code—and even program other computers—has the potential to fundamentally reshape the way we work and live."

Abstracting the Code

The idea of automating coding tasks is not new or particularly revolutionary. From punch cards to today's vast open source code libraries, the need to construct software from scratch has steadily declined. In recent years, low-code and no-code environments—which typically allow a person to drag-and-drop elements that represent pre-established tasks or functions—have greatly simplified software development, while expanding who can produce software.

Yet the emerging crop of AI tools turbocharge the concept. In some cases, these platforms anticipate tasks and suggest blocks of code—similar to the way applications now autopredict words and phrases in email and other documents. In other cases, they actually generate images, functions, and entire websites based on natural language input, or they suggest coding actions based on what the AI believes should happen next.

For example, Akkio's platform allows humans to build machine learning and other AI models for things like forecasting, text classification, and lead scoring, without ever interacting with code. It is a simple drag-and-drop proposition en route to a tool or app. "This makes it possible for people who have no knowledge of coding to accomplish all sorts of reasonably complicated tasks—and produce code without the formidable barriers of the past," Parangi explains. ... '


Sunday, April 23, 2023

Google Bard Now Supports Code Generation

I will be trying this with Python and potentially other languages.

Google is updating its Bard AI chatbot to help developers write and debug code. Rivals like ChatGPT and Bing AI have supported code generation, but Google says it has been “one of the top requests” it has received since opening up access to Bard last month.   In The Verge.

Bard can now generate code, debug existing code, help explain lines of code, and even write functions for Google Sheets. “We’re launching these capabilities in more than 20 programming languages including C++, Go, Java, Javascript, Python and Typescript,” explains Paige Bailey, group product manager for Google Research, in a blog post.

You can ask Bard to explain code snippets or explain code within GitHub repos similar to how Microsoft-owned GitHub is implementing a ChatGPT-like assistant with Copilot. Bard will also debug code that you supply or even its own code if it made some errors or the output wasn’t what you were looking for.

Speaking of errors, Bailey admits that Bard “may sometimes provide inaccurate, misleading or false information while presenting it confidently,” much like many AI-powered chatbots. “When it comes to coding, Bard may give you working code that doesn’t produce the expected output, or provide you with code that is not optimal or incomplete,” says Bailey. “Always double-check Bard’s responses and carefully test and review code for errors, bugs and vulnerabilities before relying on it.” Bard will also cite the source of its code recommendations if it quotes them “at length.”

Google is pushing ahead with its Bard chatbot despite reports that suggest employees repeatedly criticized the chatbot and labeled it “a pathological liar.” Google has reportedly sidelined ethical concerns to keep up with rivals like OpenAI and Microsoft. In our tests comparing Bard, Bing, and ChatGPT, we found Google’s Bard chatbot to be less accurate than its rivals.  ...'

Monday, April 10, 2023

AI Replacing Coding

The more examples I see, the more I think so.

Will AI Replace Computer Programmers?

By Logan Kugler, Commissioned by CACM Staff, March 30, 2023

Not only do programmers work faster with AI assistance, it also frees them up to focus on more complex (and usually more rewarding and higher-value) tasks.

Since ChatGPT took the world by storm late last year, white-collar professionals have been forced to reckon with the fact that artificial intelligence (AI) might soon do parts of their jobs better than they do.

So far, an explosion of so-called "generative AI" tools—machines that generate text and imagery—has writers and designers equal parts excited and apprehensive.

On one hand, these tools give some creative types unprecedented abilities to tell compelling new stories, create outstanding content, and produce innovative art at scale.

On the other, they have some looking over their shoulders as they see AI increasingly invade an area of knowledge work typically reserved for human beings—threatening their skillsets and livelihoods in the process.

Soon, computer programmers could be equally divided with the release of a clutch of increasingly powerful generative AI tools that produce code automatically.

AI-powered coding assistants like OpenAI's Codex model, GitHub Copilot, and Replit Ghostwriter are changing how computer programmers do their jobs. That's because, thanks to advancements in the large language models that power them, these tools can now, in some instances, automatically generate reliable code.

In the process, they are having a significant impact on programmer productivity and causing some to ask bigger questions about how AI coding copilots will affect coding work and jobs.

But just how good are AI tools that can code? And what do they mean for the industry at large?

If you rated today's AI coding tools on a 10-point scale, they're at a three, says Shanea Leven, founder and CEO of CodeSee, which builds solutions that help companies understand their codebases. There's no question they can speed up basic coding tasks, Leven says.

They're also useful for generating ideas and boilerplate code, according to Ilkka Turunen, Field CTO at Sonatype, a software supply chain management company.

As a result, these tools are having an immediate impact on novice and expert programmers alike. For novices, models like Codex can help them immediately solve basic problems, says Paul Denny, associate professor of computer science at The University of Auckland, New Zealand.

"For professional developers, already some good evidence is emerging for improvements in productivity," Denny says. A recent study by GitHub found that 88% of programmers said they were more productive when using Copilot. That was thanks to benefits like less time spent searching for code, getting stuck on or bored with repetitive tasks, and staying in flow longer.

Not only do programmers do their work faster with AI assistance, but they also free themselves up to focus on more complex (and usually more rewarding and higher-value) tasks.

However, AI coding copilots today still have serious limitations. Active codebases are highly customized, says Leven; that gives AI tools very little usable data to learn from quickly, making their usefulness limited. Even with enough data, today's tools are unable to handle the levels of complexity that serious programming challenges require.

"Many times when a project is complex, AI also needs to consider the needs of the business when making decisions and tradeoffs, something it just can't do today," Leven says.

Not to mention there's no guarantee the outputs of these tools are correct. Large language models are good at "hallucinating" or confidently producing inaccurate outputs, so you still need to have an extremely knowledgeable user overseeing these tools, says Turunen.

As a result, don't expect an entry-level programmer to suddenly become a software development superhero just because they use AI. And, given the limitations of today's tools, coding jobs aren't going away any time soon. .... '

Wednesday, March 22, 2023

Replacing Coders Already?

Inevitable.... 

Startups Are Already Using GPT-4 to Spend Less on Human Coders

GPT-4 "saves a lot of time and a lot of money, obviously, because we haven't had to hire additional resources."   By Chloe Xiang, March 20, 2023, 9:00am


Since GPT-4 was released last week, many users have noticed its advanced coding abilities. GPT-4, OpenAI’s latest version of the large language model that ChatGPT is built on, has been able to code games like Pong and make simple apps after being given prompts written in conversational English. Naturally, this has led to widespread fear from a number of computer science students and software developers who are afraid that their jobs will soon be rendered obsolete by AI. ....' 


Tuesday, March 21, 2023

Use cases of GPT-4

Directions and examples...

MIT Technology Review

ARTIFICIAL INTELLIGENCE

How AI experts are using GPT-4

Plus: Chinese tech giant Baidu just released its answer to ChatGPT.

By Melissa Heikkilä

March 21, 2023

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

WOW, last week was intense. Several leading AI companies had major product releases. Google said it was giving developers access to its AI language models, and AI startup Anthropic unveiled its AI assistant Claude. But one announcement outshined them all: OpenAI’s new multimodal large language model, GPT-4. My colleague William Douglas Heaven got an exclusive preview. Read about his initial impressions.  

Unlike OpenAI’s viral hit ChatGPT, which is freely accessible to the general public, GPT-4 is currently accessible only to developers. It’s still early days for the tech, and it’ll take a while for it to feed through into new products and services. Still, people are already testing its capabilities out in the open. Here are my top picks of the fun ways they’re doing that.

Hustling

In an example that went viral on Twitter, Jackson Greathouse Fall, a brand designer, asked GPT-4 to make as much money as possible with an initial budget of $100. Fall said he acted as a “human liaison” and bought anything the computer program told him to. 

GPT-4 suggested he set up an affiliate marketing site to make money by promoting links to other products (in this instance, eco-friendly ones). Fall then asked GPT-4 to come up with prompts that would allow him to create a logo using OpenAI image-generating AI system DALL-E 2. Fall also asked GPT-4 to generate content and allocate money for social media advertising. 

The stunt attracted lots of attention from people on social media wanting to invest in his GPT-4-inspired marketing business, and Fall ended up with $1,378.84 cash on hand. This is obviously a publicity stunt, but it’s also a cool example of how the AI system can be used to help people come up with ideas. 

Productivity

Big tech companies really want you to use AI at work. This is probably the way most people will experience and play around with the new technology. Microsoft wants you to use GPT-4 in its Office suite to summarize documents and help with PowerPoint presentations—just as we predicted in January, which already seems like eons ago. 

Not so coincidentally, Google announced it will embed similar AI tech in its office products, including Google Docs and Gmail. That will help people draft emails, proofread texts, and generate images for presentations.  

Health care

I spoke with Nikhil Buduma and Mike Ng, the cofounders of Ambience Health, which is funded by OpenAI. The startup uses GPT-4 to generate medical documentation based on provider-patient conversations. Their pitch is that it will alleviate doctors’ workloads by removing tedious bits of the job, such as data entry. 

Buduma says GPT-4 is much better at following instructions than its predecessors. But it’s still unclear how well it will fare in a domain like health care, where accuracy really matters. OpenAI says it has improved some of the flaws that AI language models are known to have, but GPT-4 is still not completely free of them. It makes stuff up and presents falsehoods confidently as facts. It’s still biased. That’s why the only way to deploy these models safely is to make sure human experts are steering them and correcting their mistakes, says Ng.

Writing code

Arvind Narayanan, a computer science professor at Princeton University, says it took him less than 10 minutes to get GPT-4 to generate code that converts URLs to citations. 

Narayanan says he’s been testing AI tools for text generation, image generation, and code generation, and that he finds code generation to be the most useful application. “I think the benefit of LLM [large language model] code generation is both time saved and psychological,” he tweeted. 
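The article does not show Narayanan's actual code, but a minimal stdlib-only sketch of the same task, turning a bare URL into a rough citation string, might look like the following (the function name and the citation format are my own assumptions, not his):

```python
from datetime import date
from urllib.parse import urlparse

def url_to_citation(url: str, accessed: date) -> str:
    """Format a bare URL as a rough web citation: title, site, URL, access date."""
    parsed = urlparse(url)
    site = parsed.netloc.removeprefix("www.")
    # Derive a crude title from the last path segment, e.g. "gpt-4-research" -> "Gpt 4 Research".
    slug = parsed.path.rstrip("/").rsplit("/", 1)[-1]
    title = slug.replace("-", " ").replace("_", " ").title() or site
    return f'"{title}." {site}. {url}. Accessed {accessed.isoformat()}.'

print(url_to_citation("https://www.example.com/articles/gpt-4-research",
                      accessed=date(2023, 3, 21)))
```

A real tool would also fetch each page to recover its true title and author; this sketch only shows the shape of the task.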

In a demo, OpenAI cofounder Greg Brockman used GPT-4 to create a website based on a very simple image of a design he drew on a napkin. As Narayanan points out, this is exactly where the power of these AI systems lies: automating mundane, low-stakes, yet time-consuming task... ' 

Wednesday, February 22, 2023

Generative AI Helping Boost Productivity of Some Software Developers

Have been a coder and developer myself, and this has to help greatly once it's tailored for coding use.

Generative AI Helping Boost Productivity of Some Software Developers

By The Wall Street Journal, February 22, 2023

Microsoft Corp.’s GitHub Copilot coding program, built with generative artificial intelligence from ChatGPT maker OpenAI, suggests code for developers, who have the option to accept the suggestion.

Credit: Computer Science Degree Hub

A new kind of artificial intelligence that can create a range of humanlike content, from writing to illustrations, is beginning to gain traction in the world of software development.

The technology, known as generative AI, has been pioneered by OpenAI, the lab behind the ChatGPT writing system and the Dall-E visual illustration generator. Those tools, with broad implications for search and other core tasks, have captured growing interest since late last fall. 

The technology's potential to upend software development is particularly acute. Microsoft Corp. , which invested $1 billion in OpenAI in 2019 and pledged as much as $10 billion more in January, is incorporating ChatGPT into its Bing search engine, and earlier incorporated OpenAI's technology into Copilot, its AI system for writing code. 

Copilot, as well as other tools from Amazon.com Inc., International Business Machines Corp. , and startups like Tabnine Inc. and Magic AI Inc., have quickly found a receptive audience among developers seeking a productivity boost.

In contrast to ChatGPT, whose usefulness in the enterprise some technology leaders have questioned, AI coding tools like Copilot, which was first launched for public use last summer by Microsoft-owned coding-collaboration platform GitHub Inc., seem to be capable of solving some real-world business problems right away. The AI model behind Copilot is trained on data from GitHub, which houses a popular open-source community where developers contribute and share code.

From The Wall Street Journal

View Full Article


Monday, February 13, 2023

Using an Assistant Leads to more Buggy Code

Perhaps unexpected, but it does depend on a deeper understanding of the use context involved.

AI Help Leads to Buggier Code

By Ryan Daws | December 21, 2022 | TechForge Media

Categories: Applications, Artificial Intelligence, Development, Research,

A study by Stanford University computer scientists has found that developers using AI-powered assistants often produce buggier code.

The paper, titled 'Do Users Write More Insecure Code with AI Assistants?', examines developers' use of AI coding assistants like the controversial GitHub Copilot.

“Participants with access to an AI assistant often produced more security vulnerabilities than those without access, with particularly significant results for string encryption and SQL injection,” the authors wrote.

The paper also found that developers using AI assistants have misguided confidence in the quality of their code.

“We also found that participants [that were] provided access to an AI assistant were more likely to believe that they wrote secure code than those without access to the AI assistant,” added the authors.

As part of the study, 47 people were asked to write code in response to several prompts. Some participants were given AI assistance, while the rest were not.

The first prompt was to “Write two functions in Python where one encrypts and the other decrypts a given string using a given symmetric key.”

For that prompt, 79 percent of the coders without AI assistance gave a correct answer. That’s compared to 67 percent of the group with assistance.

In addition, the assisted group was determined to be “significantly more likely to provide an insecure solution (p < 0.05, using Welch’s unequal variances t-test), and also significantly more likely to use trivial ciphers, such as substitution ciphers (p < 0.01), and not conduct an authenticity check on the final returned value.”
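For concreteness, here is a toy Python illustration (my own, not code from the paper) of how an answer to that first prompt can "work" and still be exactly the insecure kind the study flags: a substitution cipher round-trips correctly, yet it preserves the structure of the plaintext and performs no authenticity check on the ciphertext. Real code should instead use a vetted library, such as the `cryptography` package's Fernet.

```python
def encrypt(plaintext: str, key: int) -> str:
    # Caesar-style substitution: shift each character code by the key. NOT secure:
    # letter patterns survive, so frequency analysis breaks it trivially.
    return "".join(chr((ord(c) + key) % 256) for c in plaintext)

def decrypt(ciphertext: str, key: int) -> str:
    # No authenticity check: any tampered ciphertext "decrypts" without complaint.
    return "".join(chr((ord(c) - key) % 256) for c in ciphertext)

msg = "attack at dawn"
assert decrypt(encrypt(msg, key=7), key=7) == msg  # round-trips, but offers no real secrecy
```

Passing a round-trip test is precisely the misleading signal the study describes: the assisted participants' code often "worked" in this sense while still being insecure.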

One participant allegedly quipped that they hope AI assistance gets deployed because “it’s like [developer Q&A community] Stack Overflow but better, because it never tells you that your question was dumb.”  ... ' 

Tuesday, February 07, 2023

OpenAI Has Hired an Army of Contractors to Make Basic Coding Obsolete

Still premature, depending on the definition of 'basic'; there is still a considerable need to check and integrate code. I can see the need for labeling for training activity.

OpenAI Has Hired an Army of Contractors to Make Basic Coding Obsolete  By Semafor, January 31, 2023

OpenAI has repeatedly noted the importance of outsourced labor in building its technology.

OpenAI, the company behind the chatbot ChatGPT, has ramped up its hiring around the world, bringing on roughly 1,000 remote contractors over the past six months in regions like Latin America and Eastern Europe, sources said.

About 60% were hired to do "data labeling." The other 40% are computer programmers who are creating data for OpenAI's models to learn software engineering tasks.

OpenAI appears to be building a dataset that includes not just lines of code, but also the human explanations behind them written in natural language.

With hundreds of programmers making a concerted effort to "teach" the models how to write basic code, the technology behind ChatGPT might be headed toward a new kind of software development.

From Semafor  

View Full Article

Monday, January 30, 2023

AI / GPT Finding, Fixing Bugs in Code! Security Threats?

Something we saw predicted and then experimented with in the 80s. I have seen only hints at the possibility since then. Could be a really powerful plus, especially for finding openings for threats to security.

ACM TECHNEWS

ChatGPT Finding, Fixing Bugs in Code, By PC Magazine, January 30, 2023

The ability to chat with ChatGPT after receiving the initial answer made the difference, ultimately leading to ChatGPT solving 31 questions and easily outperforming the other programs.

Computer science researchers from Germany's Johannes Gutenberg University and the U.K.'s University College London found the ChatGPT chatbot can detect and correct buggy code better than existing programs.

The researchers gave 40 pieces of bug-embedded software to ChatGPT, and to three other code-fixing systems for comparison.

ChatGPT's performance on the first pass was similar to that of the other systems, but the ability to dialogue with the bot after receiving the initial answer ultimately helped it overtake the others.

The researchers explained, "We see that for most of our requests, ChatGPT asks for more information about the problem and the bug. By providing such hints to ChatGPT, its success rate can be further increased, fixing 31 out of 40 bugs, outperforming state-of-the-art."
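The paper's 40 buggy programs are not reproduced here, but a hypothetical Python example of the kind of one-line defect such repair benchmarks contain, together with the fix a tool like ChatGPT would be asked to produce, looks like this (both functions and the bug are my own illustration):

```python
def buggy_max_sublist_sum(xs):
    """Maximum sum over contiguous sublists (empty sublist counts as 0)."""
    best = cur = 0
    for x in xs:
        cur = x                  # BUG: discards the running sum every step
        best = max(best, cur)
    return best

def fixed_max_sublist_sum(xs):
    """Corrected version: extend the current run only while it helps."""
    best = cur = 0
    for x in xs:
        cur = max(x, cur + x)    # the one-line repair
        best = max(best, cur)
    return best

assert buggy_max_sublist_sum([2, 3, -1, 4]) == 4   # wrong: just the largest element
assert fixed_max_sublist_sum([2, 3, -1, 4]) == 8   # 2 + 3 + (-1) + 4
```

Spotting that the bug is the reset, not the comparison, is exactly the kind of hint the researchers could supply in a follow-up message to raise the fix rate.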

From PC Magazine

View Full Article 

Saturday, January 21, 2023

On the Future of Programming

Thoughts on the future of programming, a thoughtful piece. Implications for security.

The Premature Obituary of Programming  By Daniel M. Yellin  (Opinion) 

Communications of the ACM, February 2023, Vol. 66 No. 2, Pages 41-44 10.1145/3555367

Deep learning (DL) has arrived, not only for natural language, speech, and image processing but also for coding, which I refer to as deep programming (DP). DP is used to detect similar programs, find relevant code, translate programs from one language to another, discover software defects, and to synthesize programs from a natural language description. The advent of large transformer language models [10] is now being applied to programs with encouraging results. Just like DL is enabled by the enormous amount of textual and image data available on the Internet, DP is enabled by the vast amount of code available in open source repositories such as GitHub, as well as the ability to reuse libraries via modern package managers such as npm and pip. Two trail-blazing transformer-based DP systems are OpenAI's Codex [8] and DeepMind's AlphaCode [18]. The former is used in the GitHub Copilot project [14] and integrates with development environments to automatically suggest code to developers. The latter generates code to solve problems presented at coding competitions. Both achieve amazing results. Multiple efforts are under way to establish code repositories for benchmarking DP, such as CodeXGLUE [19] and CodeNET [20].

The advent of DP systems has led to a few sensational headlines declaring that in the not-too-distant future coding will be done by computers, not humans [1]. As DL technologies get even better and more code is deposited into public repositories, programmers will be replaced by specification writers outlining what code they want in natural language and presto, the code appears. This Viewpoint argues that while DP will influence software engineering and programming, its effects will be more incremental than the current hype suggests. To get away from the hype, I provide a careful analysis of the problem. I also argue that for DP to broaden its influence, it needs to take a more multidisciplinary approach, incorporating techniques from software engineering, program synthesis, and symbolic reasoning, to name just a few. Note I do not argue with the premise that DL will be used to solve many problems that are solved today by traditional programming methods and that software engineering will evolve to make such systems robust [17]. In this Viewpoint, I am addressing the orthogonal question of using DL to synthesize programs themselves. ... ' 

Wednesday, December 28, 2022

On the End of Programming

Many, like myself, started our careers in coding. This should increase security by standardizing safer coding approaches. Or will it?

The End of Programming  (Opinion)  By Matt Welsh

Communications of the ACM, January 2023, Vol. 66 No. 1, Pages 34-35   10.1145/3570220

I came of age in the 1980s, programming personal computers such as the Commodore VIC-20 and Apple ][e at home. Going on to study computer science (CS) in college and ultimately getting a Ph.D. at Berkeley, the bulk of my professional training was rooted in what I will call "classical" CS: programming, algorithms, data structures, systems, programming languages. In Classical Computer Science, the ultimate goal is to reduce an idea to a program written by a human—source code in a language like Java or C++ or Python. Every idea in Classical CS—no matter how complex or sophisticated, from a database join algorithm to the mind-bogglingly obtuse Paxos consensus protocol—can be expressed as a human-readable, human-comprehendible program.

When I was in college in the early 1990s, we were still in the depths of the AI Winter, and AI as a field was likewise dominated by classical algorithms. My first research job at Cornell University was working with Dan Huttenlocher, a leader in the field of computer vision (and now Dean of the MIT Schwarzman College of Computing). In Huttenlocher's Ph.D.-level computer vision course in 1995 or so, we never once discussed anything resembling deep learning or neural networks—it was all classical algorithms like Canny edge detection, optical flow, and Hausdorff distances. Deep learning was in its infancy, not yet considered mainstream AI, let alone mainstream CS.

Of course, this was 30 years ago, and a lot has changed since then, but one thing that has not really changed is that CS is taught as a discipline with data structures, algorithms, and programming at its core. I am going to be amazed if in 30 years, or even 10 years, we are still approaching CS in this way. Indeed, I think CS as a field is in for a pretty major upheaval few of us are really prepared for.

Programming will be obsolete. I believe the conventional idea of "writing a program" is headed for extinction, and indeed, for all but very specialized applications, most software, as we know it, will be replaced by AI systems that are trained rather than programmed. In situations where one needs a "simple" program (after all, not everything should require a model of hundreds of billions of parameters running on a cluster of GPUs), those programs will, themselves, be generated by an AI rather than coded by hand.

I do not think this idea is crazy. No doubt the earliest pioneers of computer science, emerging from the (relatively) primitive cave of electrical engineering, stridently believed that all future computer scientists would need to command a deep understanding of semiconductors, binary arithmetic, and microprocessor design to understand software. Fast-forward to today, and I am willing to bet good money that 99% of people who are writing software have almost no clue how a CPU actually works, let alone the physics underlying transistor design. By extension, I believe the computer scientists of the future will be so far removed from the classic definitions of "software" that they would be hard-pressed to reverse a linked list or implement Quicksort. (I am not sure I remember how to implement Quicksort myself.)
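For readers who, unlike Welsh, still want the classics on hand, the two exercises he names are short in Python (these are standard textbook versions, not code from the article):

```python
def quicksort(xs):
    """Sort a list by recursively partitioning around the first element."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

class Node:
    def __init__(self, value, nxt=None):
        self.value, self.nxt = value, nxt

def reverse(head):
    """Reverse a singly linked list in place; return the new head."""
    prev = None
    while head is not None:
        # Re-point the current node backward, then advance.
        head.nxt, prev, head = prev, head, head.nxt
    return prev
```

His point stands either way: being able (or unable) to write these by hand may soon say little about one's value as a computer scientist.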

AI coding assistants such as CoPilot are only scratching the surface of what I am describing. It seems totally obvious to me that of course all programs in the future will ultimately be written by AIs, with humans relegated to, at best, a supervisory role. Anyone who doubts this prediction need only look at the very rapid progress being made in other aspects of AI content generation, such as image generation. The difference in quality and complexity between DALL-E v1 and DALL-E v2—announced only 15 months later—is staggering. If I have learned anything over the last few years working in AI, it is that it is very easy to underestimate the power of increasingly large AI models. Things that seemed like science fiction only a few months ago are rapidly becoming reality.  ... ( considerable piece, more at the link above)   ... ' 

Tuesday, December 13, 2022

AI Can Code

 Is AI the next step for useful and more secure coding? 

DeepMind's AlphaCode Can Outcompete Human Coders

Gizmodo, Mack DeGeurin, December 8, 2022

DeepMind's AlphaCode model performed well against human coders in a programming competition, with a paper describing its overall performance as similar to that of a "novice programmer" with up to a year of training. AlphaCode achieved "approximately human-level performance" and solved previously unseen natural language problems by forecasting code segments and generating millions of potential solutions. The model then winnowed them down to a maximum of 10 solutions, which the researchers said were produced "without any built-in knowledge about the structure of computer code." Carnegie Mellon University's J. Zico Kolter wrote, "Ultimately, AlphaCode performs remarkably well on previously unseen coding challenges, regardless of the degree to which it 'truly' understands the task."  ... ' 
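DeepMind's actual pipeline is far larger, but the generate-then-filter idea the article describes can be sketched in a few lines of Python (the function, the toy problem, and all names here are my own illustration, not DeepMind's code):

```python
def filter_candidates(candidates, examples, max_submissions=10):
    """Keep only sampled programs that pass the problem's example tests,
    capped at the competition's submission limit (10 for AlphaCode)."""
    surviving = []
    for program in candidates:
        if all(program(inp) == expected for inp, expected in examples):
            surviving.append(program)
        if len(surviving) == max_submissions:
            break
    return surviving

# Candidates a model might have sampled for the task "double the input":
candidates = [lambda n: n + 2, lambda n: n * 2, lambda n: n ** 2]
examples = [(1, 2), (3, 6)]

survivors = filter_candidates(candidates, examples)
assert len(survivors) == 1 and survivors[0](5) == 10
```

The real system also clusters behaviorally similar candidates before choosing the final 10, so the millions of samples collapse to a handful of genuinely distinct solutions.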

Monday, December 05, 2022

From Handwriting to Computer Code

I'd like to see a useful example of this, though who uses much handwriting anymore? Do we need another introduction of uncertainty?

Programming Tool Turns Handwriting into Computer Code  By Cornell Chronicle, November 30, 2022

Writing code long-hand. 

The Notate pen-based interface lets users of computational, digital notebooks open drawing canvases and handwrite diagrams within lines of traditional, digitized computer code.

A team of Cornell University researchers created the Notate interface to translate handwriting and sketches into computer code.

The pen-based interface enables digital notebook users to open drawing canvases and to handwrite diagrams within lines of traditional code.

Notate is driven by a deep learning model, allowing notation in the handwritten diagram to reference textual code and vice versa.

Cornell's Ian Arawjo said, "People are ready for this type of feature, but developers of interfaces for typing code need to take note of this and support images and graphical interfaces inside code."

From Cornell Chronicle

View Full Article  

Sunday, June 19, 2022

Faster Computing Results Without Fear of Errors

Interesting look at how to minimize errors in computing, could be especially useful if it included security aspects.

Faster Computing Results Without Fear of Errors

MIT News  Adam Zewe, June 7, 2022

A multi-institutional team of researchers has developed PaSh, a system that can dramatically speed up certain types of computer programs while ensuring the accuracy of results. The system accelerates programs or scripts that run in the Unix shell, parsing their components into segments that can be run on multiple processors. PaSh parallelizes program components "just in time" to predict program behavior, speeding up more elements than traditional methods that attempt to parallelize in advance while still returning accurate results. The researchers tested PaSh on hundreds of scripts without breaking one; the system also ran programs an average six times faster than unparallelized scripts, and realized a nearly 34-fold maximum speed increase. ... '
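PaSh itself operates on Unix shell pipelines, not Python, but as a rough analog (my sketch, not PaSh code) the core idea, splitting independent work across workers and merging results while preserving the sequential answer, can be shown with the standard library:

```python
from concurrent.futures import ThreadPoolExecutor

def count_words(chunk: str) -> int:
    # Stand-in for a shell stage like `wc -w`.
    return len(chunk.split())

text = "one two three four five six seven eight"

# Split on word boundaries so the chunks are truly independent; PaSh must
# establish this kind of independence before it parallelizes a real pipeline.
words = text.split()
mid = len(words) // 2
chunks = [" ".join(words[:mid]), " ".join(words[mid:])]

sequential = count_words(text)                    # the original, serial answer
with ThreadPoolExecutor() as pool:                # run the chunks concurrently
    parallel = sum(pool.map(count_words, chunks))

assert parallel == sequential                     # same result, parallel execution
```

The hard part, which this sketch skips, is proving such splits are safe for arbitrary shell commands; that analysis is what PaSh performs just in time.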

Saturday, May 28, 2022

On Coding and Software in Quanta Mag

On a coding guru of note; I used his LaTeX system.

How to Write Software With Mathematical Perfection, by Sheon Han

Leslie Lamport revolutionized how computers talk to each other. Now he’s working on how engineers talk to their machines.

Leslie Lamport may not be a household name, but he’s behind a few of them for computer scientists: the typesetting program LaTeX and the work that made cloud infrastructure at Google and Amazon possible. He’s also brought more attention to a handful of problems, giving them distinctive names like the bakery algorithm and the Byzantine Generals Problem. This is no accident. The 81-year-old computer scientist is unusually thoughtful about how people use and think about software.

In 2013, he won the A.M. Turing Award, considered the Nobel Prize of computing, for his work on distributed systems, where multiple components on different networks coordinate to achieve a common objective. Internet searches, cloud computing and artificial intelligence all involve orchestrating legions of powerful computing machines to work together. Of course, this kind of coordination opens you up to more problems.

“A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable,” Lamport once said.   .... ' 
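The bakery algorithm mentioned above can be sketched in a few lines of Python. This is a toy rendering of Lamport's idea, mutual exclusion using only ordinary reads and writes (threads take numbered "tickets" as at a bakery counter and defer to lower numbers), not production synchronization code:

```python
import threading

N_THREADS = 2
ITERS = 100
choosing = [False] * N_THREADS  # thread i is picking a ticket
number = [0] * N_THREADS        # thread i's ticket (0 = not waiting)
counter = 0                     # shared resource the lock protects

def bakery_lock(i):
    # Take a ticket one higher than any outstanding ticket.
    choosing[i] = True
    number[i] = 1 + max(number)
    choosing[i] = False
    for j in range(N_THREADS):
        if j == i:
            continue
        while choosing[j]:          # wait while j is still picking
            pass
        # Defer to any lower ticket; ties broken by thread id.
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass

def bakery_unlock(i):
    number[i] = 0

def worker(i):
    global counter
    for _ in range(ITERS):
        bakery_lock(i)
        counter += 1            # critical section
        bakery_unlock(i)

threads = [threading.Thread(target=worker, args=(i,))
           for i in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200: every increment survived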

Wednesday, March 02, 2022

Will No-Code Win?

Great piece; continuing to consider it.

Will No-Code Crack the Code?

By Samuel Greengard, Commissioned by CACM Staff, March 1, 2022

The history of computing is rife with advances that have made things easier for the common user. Graphical user interfaces (GUIs), the computer mouse, drag-and-drop functionality, and Web browsers are just a few examples of how complex processes have been simplified. Yet for decades, software development has remained largely outside the mainstream. Because most people lack the knowledge and ability to write computer code in C++, Python, Java, or other languages, they typically find themselves locked out—or they must hire someone to create the desired functionality.

That is beginning to change. No-code platforms are on the rise, including in areas such as automation and artificial intelligence (AI). The appeal is not difficult to understand. "No-code democratizes software development. It provides value in many areas where custom code is too expensive, slow to develop, and hard to maintain," says Isaac Sacolick, author of Driving Digital and CEO of business consulting firm Star CIO.

Behind the Lines

The idea of using visual elements to generate code is not particularly new. In 2003, WordPress introduced a drag-and-drop interface for building Websites. Not surprisingly, the concept has continued to evolve and expand. Today, no-code platforms such as Google's AppSheet make it possible to build Web and mobile apps in hours or days without any prior coding knowledge. Of course, coding continues to take place, though the process happens in an automated way behind the scenes. While the need to understand data structures and how to generate a useful app have not gone away, there is no longer a need to wear a data scientist hat. No-code platforms can grab data, classify and encode data, and use machine learning to spot relationships.

"No-code changes the game. With the popularity of smartphones and the Internet, there is a need to develop mobile apps and Websites quickly," says Ruben Martins, an assistant research professor at Carnegie Mellon University. "No-code frameworks aid individuals, but also help businesses build applications that can increase customer satisfaction and lower the cost of development."

Today, people are turning to these platforms to manage a wide array of tasks, including scheduling, claims and paperwork, tracking deliveries, viewing sales prospects, and checking job boards. No-code applications can handle text classification from unstructured data and scan financial transactions for fraud. In addition, "No-code can do things like data cleaning and data transformation," Martins says.
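The data prep the article describes, grabbing data, then classifying and encoding it automatically, can be illustrated with a deliberately simplified sketch. This is my own toy example of the kind of work a no-code platform hides behind the scenes, not any particular platform's implementation:

```python
def auto_encode(rows):
    """Infer column types and label-encode categorical columns,
    a tiny sketch of the prep work no-code platforms automate."""
    columns = list(rows[0].keys())
    encoders = {}
    for col in columns:
        values = [r[col] for r in rows]
        if all(isinstance(v, (int, float)) for v in values):
            continue  # already numeric; leave as-is
        # Categorical: map each distinct value to an integer code.
        encoders[col] = {v: i for i, v in enumerate(sorted(set(values)))}
    encoded = [
        {col: encoders.get(col, {}).get(r[col], r[col]) for col in columns}
        for r in rows
    ]
    return encoded, encoders

rows = [{"city": "NYC", "sales": 12}, {"city": "LA", "sales": 7}]
encoded, enc = auto_encode(rows)
print(encoded)  # [{'city': 1, 'sales': 12}, {'city': 0, 'sales': 7}]
```

A real platform would layer type detection, cleaning, and model fitting on top, but the principle is the same: the user never writes the encoding step themselves.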

Friday, September 24, 2021

Low Code Tipping Point?

Is this reasonable? Perhaps, but only if it allows non-coding decision makers to code for themselves, and informs them well enough that they understand the decisions they are embedding in the code.

The low-code ‘tipping point’ is here    in Venturebeat

Half of business technologists now produce capabilities for users beyond their own department or enterprise. That’s the top finding in a new report from Gartner, which cites “a dramatic growth” in digitalization opportunities and lower barriers to entry, including low-code tools and AI-assisted development, as the core factors enabling this democratization beyond IT professionals. What’s more, Gartner reports that 77% of business technologists — defined as employees who report outside of IT departments and create technology or analytics capabilities — routinely use a combination of automation, integration, application development, or data science and AI tools in their daily work.

“This trend has been unfolding for many years, but we’re now seeing a tipping point in which technology management has become a business competency,” Raf Gelders, research vice president at Gartner, told VentureBeat. “Whether all employees will soon be technical employees remains to be seen. Do your best sales reps need to build new digital capabilities? Probably not. Do you want business technologists in sales operations? Probably yes.”

The rise of low-code 

Low-code development tools — such as code-generators and drag-and-drop editors — allow non-technical users to perform capabilities previously only possible with coding knowledge. Ninety-two percent of IT leaders say they’re comfortable with business users leveraging low-code tools, with many viewing the democratization as helpful at a time when they’re busier than ever. With the rise of digital transformation, which has only been accelerated by the pandemic, 88% of IT leaders say workloads have increased in the past 12 months. Many report an increase in demand for new applications and say they’re concerned about the workloads and how this might stifle their ability  ... ' 

Friday, July 23, 2021

Is Programming Theory a Waste of Time?

Gave me a reminder of what programming theory is: the theoretical introduction to coding, not only the specific, in-context how-to of using a particular coding method. For hiring, of course, the specific practical skill is usually seen as most important. Skill vs. theory; mechanic vs. engineer. The theory adds background and future flexibility, and you would also expect 'theory' to change more quickly as emergent tech does. Not a waste of time, as long as you still end up with the skill too.

Is Programming Theory A Waste of Time? | Careers | Communications of the ACM

Thursday, July 08, 2021

Will AI Rewrite Coding?

Inclined to think so. It could increase efficiency, prevent errors, and better ensure secure practices. The intro below describes the current progress and direction, starting with assistant approaches and code checking. Note the mention of a number of companies and projects involved. More at the link.

Will AI Rewrite Coding?   By Samuel Greengard, Commissioned by CACM Staff,   July 6, 2021

Computer code now touches almost every aspect of our lives. Worldwide, 27 million developers churn out billions of lines of code every day. Yet, despite an abundance of open source libraries and increasingly sophisticated development tools, the task is time-consuming and prone to errors.

As a result, researchers are studying ways to introduce artificial intelligence (AI) into coding processes. While much of the effort centers on automating coding tasks, spotting bugs, fixing vulnerabilities, and producing more elegant code, there's also an emerging effort to tap AI to write code based on short text descriptions of what the code should do.

"There's interest in improving current coding practices and generating code through machine learning and AI models," says Brendan Dolan-Gavitt, an assistant professor of computer science and engineering at New York University (NYU) Tandon School.

Adds Furkan Bektes, founder and CEO of SourceAI, which has developed a tool to write code based on short natural language input, "The use of AI will allow developers to code faster, and allow non-developers to pursue their ideas."

Code of Conduct

The complexities of today's coding processes are not lost on anyone. A Boeing 787 Dreamliner has approximately 20 million lines of code. Major software programs and games have upwards of 50 million lines of code. Somewhere between functionality and chaos lies the real-world task of producing code rapidly and as bug-free as possible.

AI is taking direct aim at the challenge. "Ideally, AI could intervene, examine patterns and provide feedback about coding errors," says Shashank Srikant, a Ph.D. student in the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT). "This could help coders avoid traps that others have fallen into."

Google, Microsoft, and others have already begun experimenting with AI for assisted coding. For instance, in 2018, Microsoft introduced AI-assisted coding for Java, Python, C++, and other languages through Visual Studio IntelliCode. It offers developers relevant coding suggestions based on thousands of the most popular open source projects at GitHub. Through machine learning, it analyzes common usage patterns and practices, and delivers suggestions tailored to a specific project.
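As an aside, the pattern mining described here can be illustrated with a toy model: count which token most often follows each token across a corpus, then suggest the most frequent successor. Real systems like IntelliCode use far richer machine-learned models; this frequency table is only a minimal stand-in for the idea of learning suggestions from common usage:

```python
from collections import Counter, defaultdict

def train_suggester(snippets):
    """Count which token most often follows each token in a corpus,
    a toy stand-in for mining usage patterns from popular projects."""
    follows = defaultdict(Counter)
    for snippet in snippets:
        tokens = snippet.split()
        for a, b in zip(tokens, tokens[1:]):
            follows[a][b] += 1
    return follows

def suggest(follows, token):
    """Return the most common successor of token, or None if unseen."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

corpus = [
    "for i in range ( n )",
    "for item in items :",
    "for i in range ( len ( xs ) )",
]
model = train_suggester(corpus)
print(suggest(model, "for"))    # 'i'  (seen most often after 'for')
print(suggest(model, "range"))  # '('
```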

However, in May 2020, the field took a giant leap forward. OpenAI introduced a next-generation AI-based neural network and programming model called GPT-3, which is already used to build apps—including buttons, colors and input fields—using AI. When researchers and coding experts began testing the language, they realized that it could also write its own code.

Bektes, among the first to gain access to the platform, fed high-quality code samples into GPT-3 and built an application that generates code in any programming language—using input in English and most other major languages. For example, a user might say, "Calculate factorial of number given by user," and SourceAI spits out the code.
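For a sense of what such a tool produces, here is the kind of Python routine a code generator might emit for that factorial prompt. The article does not show SourceAI's actual output, so this is purely illustrative:

```python
def factorial(n: int) -> int:
    """Compute n! iteratively; reject negative input."""
    if n < 0:
        raise ValueError("factorial is undefined for negative numbers")
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

print(factorial(5))  # 120
```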

Other tools incorporating AI are also popping up. For instance, Tabnine can autofill lines and functions as developers type.

Machine-generated coding could be a game changer, although it's unlikely to supplant the need for developers anytime soon. "It will open new horizons," Bektes explains. "There are many non-developers who have ideas but don't know how to code, and there are also developers who are experts in one language but not in others. AI can help them learn to code in other languages."      ...... ' 

Thursday, July 01, 2021

Copilot Assistant Coding

We actually did this in the early days with human co-pilots, but it never took off. Now, with secure code more of a necessity, it could be useful. This could drive us closer to low-code too. How well can this work today? Even flagging security dangers in patterns of code could be useful, but how well can we recognize such patterns? Following.

OpenAI and GitHub Unveil New Copilot AI Assistant for Coding

Eric Hal Schwartz in Voicebot.AI

A new virtual assistant created by OpenAI and GitHub will suggest code to software developers as they work. The new GitHub Copilot tool leverages an improved version of OpenAI’s popular GPT-3 language model called Codex to teach the AI how to collaborate in a coding project like a human partner.

AI COPILOT

GitHub Copilot takes the concept of natural language processing and applies it to programming languages. The idea is to imitate a “pair programmer,” when two developers simultaneously work on a coding project and comment and annotate each other’s work along the way. The AI theoretically takes the junior partner role in the endeavor, making its name entirely apropos. Copilot relies on OpenAI’s Codex model to understand what the programmer is doing and come up with suggestions. Like GPT-3, Codex is built on an enormous collection of data to teach an AI how to suggest a line or more of code. The AI learns from what suggestions the human user accepts or rejects, honing its understanding and ideally leading to better code ideas.  ... '
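The interaction loop described above usually starts from a name, signature, and docstring the human writes; the assistant then proposes a body. The completion below is something an assistant like Copilot might plausibly suggest, not actual Copilot output:

```python
# What a human might type: a name, signature, and docstring.
def is_palindrome(text: str) -> bool:
    """Return True if text reads the same forwards and backwards,
    ignoring case and non-alphanumeric characters."""
    # A body an AI pair programmer might plausibly suggest:
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
```

The developer then accepts, edits, or rejects the suggestion, and that feedback is what the article says hones the model's future proposals.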