A further look into these explorations ... I have been asked for my opinion of the space.
In BLOG@CACM
What Do ChatGPT and AI-based Automatic Program Generation Mean for the Future of Software
By Bertrand Meyer
' ... Since the release of the ChatGPT interactive AI assistant, it has been surprising to see some of the snide, passive-aggressive reactions from some (not all) members of the software engineering community, in the style of "it's just inference from bad data". Let's get real, folks: it is truly game-changing, the kind of thing you witness once in a generation. (The last two times were object-oriented programming and the World-Wide Web.)
Basically, if you need a program element and can describe that need, the assistant will generate it for you. There is no particular restriction on the programming language that you choose, as long as its description and enough examples are available somewhere. The code will be pretty good. (More on the semantics of "pretty" below.) You can ask the assistant for a test suite and various other adornments.
Programmers no longer needed?
Trying this tool seriously is guaranteed to produce a "Wow" effect and, for a software engineer or software engineering educator, as the immediately following step, a shock: "Do I still have a job?". At first sight, you don't. Especially if you are a programmer, there is not much that you can do and ChatGPT cannot.
In assessing this observation, it is important to separate the essential from the auxiliary. Any beta release of a new technology is bound to suffer from a few pimples. Instructive in this respect is a look at some of the early reviews of the iPhone (for example those on CNET and on PCMag), lamenting such horrible deficiencies as the lack of Bluetooth stereo. I could complain that the generated code will not compile out-of-the-box, since ChatGPT believes that Eiffel has a "do" keyword for loops (it's loop) and enumerated types introduced by "type" (it doesn't). These bugs do not matter; the tool will learn. What does matter is that if I ask, for example, for a Levenshtein edit distance program in Eiffel, I get something that is essentially right. Plus well-formatted, equipped at the start of every routine (per good Eiffel style rules) with a header comment explaining clearly and correctly the purpose of the routine, and producing the right results. Far beyond the Turing test. (To be more precise: as readers of this blog undoubtedly know, a tool passes the Turing test if a typical user would not be able to determine whether answers come from a human or a program. In this case, actually, you will need to add a delay to the responses of ChatGPT to have it pass the test, since no human could conceivably blurt out such impressive answers in a few seconds.)
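For readers unfamiliar with the example Meyer asked for, here is a minimal sketch of the standard dynamic-programming Levenshtein edit distance algorithm. This is written in Python rather than Eiffel for brevity, and it is an illustration of the well-known algorithm, not ChatGPT's actual output:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    if len(a) < len(b):          # keep the inner loop over the shorter string
        a, b = b, a
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, start=1):
        curr = [i]               # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution (or match)
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # → 3
```

The two-row formulation keeps memory linear in the shorter string; a full matrix version is equally common and easier to instrument.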
What comes after the bedazzlement? The natural question is: "What can I do with this?". The answer -- for a programmer, for a manager -- is not so clear. The problem is that ChatGPT, in spite of its cocky self-assurance (This is your result! It will work! No ifs and buts!) gives you, for a non-trivial problem, an answer that may work but may also almost work. I am no longer talking here about growing pains or bugs that will be fixed, but about essential limitations. ... '