By Aneesh Tickoo - April 14, 2023, in MarketTech
Producing correct code in a single attempt is challenging for many programming tasks. Code generation has long been an open problem, with applications including code synthesis from natural language descriptions, programming by examples, and code translation, and recent large language models have substantially outperformed earlier deep neural networks on it. Motivated by the observation that correct code is far more likely to appear somewhere among many sampled programs than in a single prediction, one line of research has developed reranking techniques that choose the best candidate from multiple samples, typically requiring tens of samples per problem.
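As a rough illustration of this sample-and-select idea, consider the minimal sketch below. Everything here is an illustrative assumption rather than any specific paper's method: the `generate_candidates` helper stands in for sampling completions from a model, the `solve` entry-point name is made up, and passing unit tests is used as a simple stand-in for a learned reranker.

```python
# Minimal sketch of execution-based candidate selection.
# All helper names are hypothetical; a real system would sample
# candidate programs from an LLM API.

def generate_candidates(prompt: str, n: int) -> list[str]:
    """Placeholder for sampling n candidate programs from a model."""
    raise NotImplementedError  # e.g. n calls to a completion endpoint

def passes_tests(code: str, tests: list[tuple]) -> bool:
    """Run a candidate against (input, expected_output) unit tests.

    Note: exec() on model-generated code is unsafe outside a sandbox;
    it is used here only to keep the sketch short.
    """
    namespace: dict = {}
    try:
        exec(code, namespace)        # define the candidate function
        solve = namespace["solve"]   # assumed entry-point name
        return all(solve(x) == y for x, y in tests)
    except Exception:
        return False

def select_best(prompt: str, tests: list[tuple], n: int = 10) -> str | None:
    """Sample n programs and return the first one that passes all tests."""
    for candidate in generate_candidates(prompt, n):
        if passes_tests(candidate, tests):
            return candidate
    return None
```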
It makes intuitive sense that a programmer's first attempt at a piece of code is rarely correct. Rather than discarding faulty code entirely, humans typically inspect it, examine the execution results, and then make adjustments to fix implementation flaws. Prior work has proposed deep learning models that repair the predicted code, yielding considerable performance improvements on various coding tasks. However, these approaches require additional training for a separate code-repair model.
Prior studies suggest that large language models cannot yet correct code in the absence of external feedback, such as unit tests or human instructions, even though some recent work shows that these models can generate feedback messages to critique and refine their own outputs in certain natural language and reasoning domains. In this study, researchers from Google Research and UC Berkeley propose SELF-DEBUGGING, which uses few-shot prompting to teach a large language model to debug its own predicted code. SELF-DEBUGGING instructs the model to execute the code and then produce a feedback message based on the code and the execution result, without requiring any additional model training.
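The loop just described can be sketched as follows. This is a minimal sketch under stated assumptions, not the paper's exact implementation: `llm` stands in for any few-shot-prompted language model (the few-shot exemplars are omitted), the prompts are paraphrased, the `solve` entry-point name is made up, and unit-test execution is used as the source of the execution result.

```python
# Sketch of a self-debugging loop: generate code, execute it, have the
# model itself explain the failure, then have it propose a fix.
# `llm` is assumed to be a callable mapping a prompt string to a
# completion string; all prompt wording is illustrative.

def run_unit_tests(code: str, tests: list[tuple]) -> tuple[bool, str]:
    """Execute the code and summarize the outcome as a feedback string.

    exec() on model-generated code is unsafe outside a sandbox; it is
    used here only to keep the sketch short.
    """
    namespace: dict = {}
    try:
        exec(code, namespace)
        solve = namespace["solve"]  # assumed entry-point name
        failures = [(x, y, solve(x)) for x, y in tests if solve(x) != y]
    except Exception as err:
        return False, f"Execution raised {type(err).__name__}: {err}"
    if not failures:
        return True, "All unit tests passed."
    x, expected, got = failures[0]
    return False, f"For input {x!r}, expected {expected!r} but got {got!r}."

def self_debug(llm, problem: str, tests: list[tuple], max_turns: int = 3) -> str:
    """Generate code, then iteratively ask the model to explain and fix it."""
    code = llm(f"Write a Python function `solve` for:\n{problem}")
    for _ in range(max_turns):
        ok, execution_result = run_unit_tests(code, tests)
        if ok:
            break
        # The model produces the feedback message from the code and the
        # execution outcome, then proposes a corrected program; no
        # additional model training is involved.
        feedback = llm(
            f"Code:\n{code}\nExecution result: {execution_result}\n"
            "Explain what is wrong with this code."
        )
        code = llm(
            f"Code:\n{code}\nFeedback: {feedback}\n"
            "Rewrite the code to fix the problem."
        )
    return code
```

Note that the paper also considers settings without unit tests, where the model's own explanation of the code serves as the feedback signal; the test-driven variant above is just the easiest to illustrate.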