Prompt engineering involves using clear, concise phrasing to elicit optimal responses from a large language model (LLM). In practice, most users need to prompt incrementally, refining their prompts until they yield useful answers. Building on those refined prompts within the same conversation, a practice called threading, improves results further because it gives the LLM context from the earlier exchanges.
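As a rough illustration of threading, the sketch below keeps the full back-and-forth in a running message list so each refined prompt carries the earlier context. It assumes the OpenAI Python client; the model name and the example prompts are placeholders, and the same pattern applies to any chat-style LLM API.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # placeholder model name

# Start the thread with an initial prompt
messages = [
    {"role": "user", "content": "Summarize the main causes of the 1929 stock market crash."}
]
reply = client.chat.completions.create(model=MODEL, messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# Refine the prompt in the same thread; the prior exchange stays in `messages`,
# so the model interprets "that" using the earlier context
messages.append({"role": "user",
                 "content": "Shorten that to three bullet points for a general audience."})
reply = client.chat.completions.create(model=MODEL, messages=messages)
print(reply.choices[0].message.content)
```

Sending each refinement without the accumulated message list would force the model to answer in isolation, which is why threaded prompts tend to produce better results.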
According to The New York Times' On Tech: A.I. newsletter, useful prompts include: