Getting the Most Out of LLMs as a Data Analyst with Prompt Engineering
Large Language Models (LLMs) are on the rise, driven by the popularity of OpenAI's ChatGPT, which took the internet by storm. As a practitioner in the data field, I look for ways to make the best use of this technology in my work, especially for insightful yet practical work as a Data Analyst.
LLMs can solve tasks without additional model training via "prompting" techniques, in which the problem is presented to the model as a text prompt. Getting to "the right prompts" is important to ensure the model provides high-quality, accurate results for the tasks assigned.
In this article, I will be sharing the principles of prompting, techniques to build prompts, and the roles Data Analysts can play in this “prompting era”.
What is prompt engineering?
Quoting Ben Lorica from Gradient Flow, “prompt engineering is the art of crafting effective input prompts to elicit the desired output from foundation models.” It’s the iterative process of developing prompts that can effectively leverage the capabilities of existing generative AI models to accomplish specific objectives.
Prompt engineering skills can help us understand the capabilities and limitations of a large language model. The prompt itself acts as the input to the model and directly shapes the model's output. A good prompt will get the model to produce desirable output, whereas working iteratively from a bad prompt will help us understand the limitations of the model and how to work with it.
Isa Fulford and Andrew Ng, in the ChatGPT Prompt Engineering for Developers course, outline two main principles of prompting:
Principle 1: Write clear and specific instructions
Principle 2: Give the model time to “think”
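As a minimal sketch of Principle 1, here is how a "clear and specific" prompt might be assembled in Python. The helper function and the prompt wording are my own illustration, not taken from the course; the idea is simply that delimiters separate instructions from data and the output format is pinned down explicitly.

```python
def build_summary_prompt(review_text: str) -> str:
    """Build a prompt with delimiters and an explicit output format.

    Delimiters (triple backticks) keep the instructions separate from
    the data, and the JSON schema tells the model exactly what to return.
    """
    return (
        "Summarize the customer review delimited by triple backticks "
        "in at most 20 words, then classify its sentiment.\n"
        "Respond in JSON with two keys: summary, sentiment.\n"
        f"```{review_text}```"
    )

prompt = build_summary_prompt(
    "The dashboard loads fast, but the export button is broken."
)
print(prompt)
```

The resulting string would then be sent to the model of your choice; the construction itself is model-agnostic.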
I think prompting is like giving instructions to a naive “machine kid”.
The child is very intelligent, but you need to be clear about what you need from it (by providing explanations, examples, a specified output format, etc.) and give it some space to digest and process the request (specify the problem-solving steps, ask it to work through them slowly). The child, given its exposure, can also be very creative and imaginative in its answers, which in an LLM we call hallucination. Understanding the context and providing the right prompt can help avoid this problem.
Prompt Engineering Techniques
Prompt engineering is a growing field, with research on this topic rapidly increasing from 2022 onwards. Some of the state-of-the-art prompting techniques commonly used include n-shot prompting, chain-of-thought (CoT) prompting, and generated knowledge prompting.
A sample Python notebook demonstrating these techniques is shared under this GitHub project.
1. N-shot prompting (Zero-shot prompting, Few-shot prompting)
Known for its variations, zero-shot prompting and few-shot prompting, the N in N-shot prompting represents the number of "training" examples or clues given to the model to make predictions.
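The difference between zero-shot and few-shot prompting can be sketched with a small prompt-building helper. This is my own illustrative function (not from the linked notebook): an empty list of examples gives a zero-shot prompt, while each (text, label) pair added is one "shot".

```python
def n_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Compose an N-shot prompt; an empty examples list yields zero-shot."""
    parts = [task]
    # Each labelled example is one "shot" shown to the model.
    for text, label in examples:
        parts.append(f"Text: {text}\nLabel: {label}")
    # The final item is the unlabelled query the model must complete.
    parts.append(f"Text: {query}\nLabel:")
    return "\n\n".join(parts)

task = "Classify the sentiment of the text as positive or negative."

zero_shot = n_shot_prompt(task, [], "The report was insightful.")
few_shot = n_shot_prompt(
    task,
    [
        ("Great dashboard!", "positive"),
        ("The query keeps timing out.", "negative"),
    ],
    "The report was insightful.",
)
```

With zero-shot the model relies entirely on the task description; the few-shot version also shows two labelled examples before the query, which often steers the model toward the expected label format.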