Best of Prompt Engineering, July 2024

  1. Article
    freeCodeCamp · 2y

    Prompt Engineering Basics – How to Write Effective AI Prompts

    Prompt engineering involves crafting clear, context-rich, and specific input prompts to guide AI models for desired outputs. It's a valuable skill for developers, researchers, and general users to enhance AI-driven tasks such as content creation, technical writing, and customer support. Key elements include clarity, context, constraints, and example usage, enabling efficient communication with AI systems.
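The four elements the article names can be sketched as a simple prompt builder. This is a minimal illustration, not code from the article; the `build_prompt` helper and its parameter names are assumptions for demonstration.

```python
# Illustrative sketch: assemble a prompt from the four key elements the
# article lists -- clarity (the task), context, constraints, and an example.

def build_prompt(task: str, context: str, constraints: list[str], example: str) -> str:
    """Assemble a clear, context-rich prompt from its parts."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Task: {task}\n\n"
        f"Context: {context}\n\n"
        f"Constraints:\n{constraint_lines}\n\n"
        f"Example of the desired output:\n{example}"
    )

prompt = build_prompt(
    task="Summarize the release notes for non-technical readers.",
    context="The notes cover an update to the billing service.",
    constraints=["Keep it under 100 words", "Avoid jargon"],
    example="This update makes invoices load faster and fixes two billing bugs.",
)
print(prompt)
```

Keeping each element in its own labeled section makes prompts easy to review and reuse across tasks.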

  2. Article
    LangChain · 2y

    Few-shot prompting to improve tool-calling performance

    Improving LLM applications often involves enhancing tool-calling performance, and few-shot prompting is a key technique to achieve this. In recent experiments, various few-shot techniques were tested across multiple OpenAI and Anthropic models for tasks like query analysis and math problem-solving. Few-shot prompting significantly boosted performance, especially when examples were semantically similar to the task at hand. Results indicated that well-selected few-shot examples can rival the performance of larger models, and the format of prompts has a considerable impact on effectiveness.
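The core idea — picking few-shot examples that are semantically similar to the incoming query — can be sketched as below. This is not LangChain's implementation; word overlap stands in for the embedding similarity the experiments used, and the example pool and `select_examples` helper are illustrative.

```python
# Illustrative sketch of similarity-based few-shot selection: rank a pool of
# worked examples by crude word overlap with the query, then build a prompt
# from the top matches. A real system would use embedding similarity.

def overlap(a: str, b: str) -> float:
    """Jaccard overlap of the two strings' word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def select_examples(query: str, pool: list[dict], k: int = 2) -> list[dict]:
    """Pick the k pool examples most similar to the incoming query."""
    return sorted(pool, key=lambda ex: overlap(query, ex["input"]), reverse=True)[:k]

pool = [
    {"input": "What is 12 * 7?", "output": "call calculator(expr='12*7')"},
    {"input": "Who wrote Dune?", "output": "call search(q='Dune author')"},
    {"input": "Compute 45 / 9", "output": "call calculator(expr='45/9')"},
]

query = "What is 8 * 6?"
shots = select_examples(query, pool)
prompt = "\n".join(f"Q: {ex['input']}\nA: {ex['output']}" for ex in shots)
prompt += f"\nQ: {query}\nA:"
print(prompt)
```

Because the arithmetic query shares words with the calculator examples, those rank first — mirroring the finding that semantically close examples help most.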

  3. Article
    Community Picks · 2y

    pnp/copilot-prompts: Examples of prompts for Microsoft Copilot

The repository contains a collection of sample prompts for Microsoft Copilot, contributed by the community and Microsoft's product groups. It outlines how to contribute your own prompts, including creating and naming sample folders, adding assets, and updating metadata files. Community participation is encouraged, with guidelines provided for smooth contributions. Prompts are for demonstration purposes, and users should review them for their own use cases.

  4. Article
    Community Picks · 2y

    Best practices for LLM optimization for call and message compliance: prompt engineering, RAG, and fine-tuning

Large Language Models (LLMs) have shown significant improvements in ensuring compliance in medical marketing and sales calls. Salus AI raised LLM accuracy from 80% to 95-100% using key techniques including prompt engineering and design, pre-processing input text for speaker separation, Retrieval-Augmented Generation (RAG), and model fine-tuning. These optimizations demonstrated that LLMs can outperform traditional rule-based compliance solutions in monitoring regulatory adherence during calls.
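The RAG step above can be sketched in miniature: retrieve the stored policy snippets most relevant to a transcript chunk and prepend them to the compliance prompt. This is not Salus AI's pipeline; the policy texts, word-overlap scoring, and `retrieve` helper are illustrative stand-ins (a real system would use embedding search).

```python
# Illustrative RAG sketch for call compliance: score stored policy snippets
# against a transcript chunk by shared words, then build a prompt that
# includes only the most relevant policies.
import re

def words(text: str) -> set[str]:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score(query: str, doc: str) -> int:
    """Shared-word count as a crude relevance score."""
    return len(words(query) & words(doc))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

policies = [
    "Agents must disclose that calls are recorded.",
    "Pricing claims require a written source.",
    "Medical benefit statements must cite approved labeling.",
]

transcript = "Agent: this call may be recorded for quality purposes"
context = retrieve(transcript, policies)
prompt = (
    "Relevant policies:\n" + "\n".join(f"- {p}" for p in context)
    + f"\n\nTranscript:\n{transcript}\n\nFlag any compliance issues."
)
print(prompt)
```

Grounding the model in retrieved policy text, rather than relying on its parametric knowledge, is what lets RAG-based checks track the actual regulatory rules.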

  5. Article
    Community Picks · 2y

    How close is AI to replacing product managers?

    Lenny and prompt engineer Mike Taylor explored how close AI is to replacing product managers by testing AI's performance on three challenging PM tasks: developing a product strategy, defining KPIs, and estimating ROI. AI outperformed humans in two out of three tasks, largely due to effective prompt engineering. The study indicates AI's improving capabilities and potential to handle more PM tasks in the future. Blind tests and prompt adjustments were key to realistic assessments of AI’s performance.