Best of Prompt Engineering, March 2026

  1. Video
    Theo - t3.gg · 10w

    gpt-5.4 is really, really good

    GPT-5.4 (released as '5.4 Thinking') is reviewed after a week of hands-on use. Key highlights: 1M token context window, improved reasoning token efficiency, better mid-task steering, and significantly improved browser/computer use and vision capabilities. The model is praised as the best general-purpose AI for coding tasks, with Cursor internally endorsing it. However, it still lags behind Claude Opus and Gemini for front-end UI design. A notable security regression exists: prompt injection via function call return data succeeds ~2% of the time. GPT-5.4 Pro is expensive ($30/$180 per million tokens in/out) and often underperforms standard 5.4. The Codex model line appears to be discontinued in favor of 5.4 as the unified base. Prompting guidance from OpenAI is highlighted as more important than ever given the model's high steerability.
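The ~2% prompt-injection success rate via function-call return data argues for defense in depth on the application side. A minimal sketch (hypothetical helper and marker names, not an OpenAI API) of delimiting untrusted tool output and flagging likely injection attempts before the text re-enters the model's context:

```python
import re

# Patterns that often signal an injection attempt smuggled into tool
# output. Illustrative, not exhaustive.
SUSPICIOUS = re.compile(
    r"(ignore (all|previous) instructions|you are now|system prompt)",
    re.IGNORECASE,
)

def wrap_tool_result(name: str, payload: str) -> str:
    """Delimit untrusted tool output and tag likely injection attempts.

    The model would be told (in a system prompt, not shown here) that
    text between the markers is data, never instructions.
    """
    flag = ' data-suspect="true"' if SUSPICIOUS.search(payload) else ""
    return (
        f'<tool_result name="{name}"{flag}>\n'
        f"{payload}\n"
        f"</tool_result>"
    )
```

Flagging rather than dropping keeps the data available while letting downstream logic (or the model itself) treat suspect results with extra caution.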

  2. Video
    Matt Pocock · 7w

    Claude Code tried to improve /init... Is it any better?

    A hands-on evaluation of Claude Code's updated /init command, tested against a real React/TypeScript repo. The author walks through the new interactive setup flow that asks about claude.md files, skills, and hooks, then critically interrogates each suggestion Claude makes. Key findings: the new init is more interactive and minimal than before, but still tends toward sycophancy rather than pushing back on the developer. The author ends up with a nearly empty claude.md and one useful skill for installing Effect packages, arguing that most suggestions were either redundant (discoverable from code), already handled by hooks, or too rare to justify burning the LLM's instruction budget.
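The "nearly empty claude.md" conclusion is easy to picture; a hypothetical minimal file in that spirit (contents invented for illustration, not taken from the video) keeps only what Claude cannot discover from the code:

```markdown
# CLAUDE.md

- Package manager: pnpm (never npm or yarn)
- Run `pnpm test` before declaring a task done
- Installing Effect packages: use the install-effect-package skill
```

Everything else (framework choice, file layout, naming conventions) is discoverable from the repo and would only burn the instruction budget.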

  3. Article
    Towards Data Science · 8w

    Vibe Coding with AI: Best Practices for Human-AI Collaboration in Software Development

    Explores best practices for human-AI collaboration in software development using vibe coding tools. Key risks identified include garbage-in-garbage-out prompting, poor prompt quality burning through model limits, and AI tendency to over-engineer solutions. Using a RAG system over news articles as a practical example, the author demonstrates a workflow: define clear requirements with test queries, generate architecture before code, validate and stress-test the design with edge cases, have the AI self-critique, and push back on unnecessary complexity. The central principle is a human-in-the-loop cycle where AI accelerates but humans remain the final arbiter on trade-offs, maintainability, and production readiness.
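The "define requirements with test queries" step can be made concrete. A sketch (hypothetical retriever interface and queries, not from the article) of stress-testing a RAG pipeline with predefined queries plus edge cases:

```python
# Each entry pairs a query with a term the retrieved context must
# contain for the answer to be grounded; None means "just don't crash".
TEST_QUERIES = [
    ("Who won the 2024 election?", "election"),
    ("", None),                # edge case: empty query
    ("a" * 10_000, None),      # edge case: oversized query
]

def run_checks(retrieve):
    """Run the test queries through a retriever callable; report failures."""
    failures = []
    for query, must_contain in TEST_QUERIES:
        try:
            docs = retrieve(query)
        except Exception as exc:   # edge cases must not crash the pipeline
            failures.append((query[:30], f"raised {type(exc).__name__}"))
            continue
        if must_contain and not any(must_contain in d for d in docs):
            failures.append((query[:30], f"missing term {must_contain!r}"))
    return failures
```

Running this after every AI-generated change keeps the human-in-the-loop cycle honest: the model proposes, the checks arbitrate.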

  4. Article
    Daily Dose of Data Science | Avi Chawla | Substack · 7w

    Anatomy of the .claude/ Folder

    A comprehensive guide to the .claude/ folder structure used by Claude Code. Covers CLAUDE.md (the main instruction file), CLAUDE.local.md for personal overrides, the rules/ folder for modular scoped instructions, commands/ for custom slash commands with shell injection, skills/ for auto-invoked reusable workflows, agents/ for specialized subagent personas with isolated context, and settings.json for permission control. Also explains the global ~/.claude/ directory for cross-project preferences and session memory. Includes a practical step-by-step setup progression and a full folder structure reference.
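The layout the article describes can be summarized as a tree (exact placement of individual files may differ from the article's reference; this is a sketch):

```
project/
├── CLAUDE.md              # main instruction file
├── CLAUDE.local.md        # personal overrides (typically gitignored)
└── .claude/
    ├── rules/             # modular, scoped instructions
    ├── commands/          # custom slash commands (can inject shell output)
    ├── skills/            # auto-invoked reusable workflows
    ├── agents/            # subagent personas with isolated context
    └── settings.json      # permission control
~/.claude/                 # global: cross-project preferences, session memory
```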

  5. Article
    Pulumi · 9w

    Treating Prompts Like Code: A Content Engineer's AI Workflow

    A solo technical content engineer at Pulumi describes building a modular AI workflow system by treating prompts like code. Facing a one-person docs practice, the author created reusable Claude Code 'skills' (e.g., /docs-review, /pr-review, /shipit, /slack-to-issue) that share a central context file (REVIEW-CRITERIA.md) following DRY principles. The system was wired into CI/CD to automate PR reviews, dramatically improving contribution quality. Key lessons include modularizing prompts, version-controlling them, managing token costs, knowing when to use scripts vs. AI generation, and treating the AI as a conversational collaborator rather than a command executor. The approach turned a personal survival tool into a shared team platform.
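The DRY pattern of many skills sharing one context file might look like this hypothetical /docs-review skill definition (file contents and frontmatter invented for illustration; check the Claude Code skills docs for the exact schema):

```markdown
---
name: docs-review
description: Review documentation changes against shared criteria
---

Read REVIEW-CRITERIA.md at the repo root and apply every rule in it.
Report violations as a checklist; do not restate the criteria inline.
```

Because each skill points at REVIEW-CRITERIA.md rather than copying it, a single edit to the criteria file propagates to /docs-review, /pr-review, and the rest.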

  6. Article
    freeCodeCamp · 7w

    The Claude Code Handbook: A Professional Introduction to Building with AI-Assisted Development

    A comprehensive handbook covering Claude Code, Anthropic's AI-powered software development agent. It walks through installation, VS Code setup, subscription tiers, prompt discipline, Plan Mode, feature-by-feature development, token economics, and the internal agent loop mechanics. The guide targets both experienced developers looking to multiply their output and non-technical builders wanting to create software without prior coding experience. Key practices covered include using Plan Mode for 80% of sessions, writing Product Requirements Documents, building incrementally, managing context windows, and understanding how Claude reads codebases selectively via tool calls.
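A Product Requirements Document in the handbook's spirit can be tiny; a hypothetical skeleton (contents invented for illustration):

```markdown
# PRD: CSV export

## Goal
Users can download the current report as a CSV file.

## Requirements
- Export respects the currently active filters
- Files over 10 MB are streamed, not buffered in memory

## Out of scope
- XLSX export
```

Handing a document like this to Plan Mode, then building each requirement incrementally, matches the workflow the handbook recommends.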

  7. Article
    The Register · 7w

    Telling an AI model that it's an expert makes it worse

    A USC research paper finds that persona-based prompting — telling an LLM it is an expert — actually hurts performance on factual and coding tasks while improving alignment-dependent tasks like safety and writing. On the MMLU benchmark, expert personas reduced accuracy from 71.6% to 68.0%. The explanation is that persona prefixes activate instruction-following mode at the expense of factual recall. The researchers propose PRISM, a gated LoRA mechanism that selectively applies persona-based behavior only where it helps, falling back to the base model for knowledge-dependent tasks. The practical takeaway: for accuracy and facts, skip the persona; for alignment, safety, or structured output, specific persona guidance can help.
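The practical takeaway translates into a small routing rule. A sketch (a hypothetical heuristic, not the paper's PRISM mechanism) that drops persona prefixes for accuracy-sensitive tasks:

```python
FACTUAL_TASKS = {"qa", "coding", "math"}           # persona hurts accuracy
ALIGNMENT_TASKS = {"safety", "writing", "format"}  # persona tends to help

def build_prompt(task_type: str, question: str,
                 persona: str = "You are an expert assistant.") -> str:
    """Prepend a persona only for alignment-style tasks.

    Per the USC finding, persona prefixes shift the model toward
    instruction-following at the cost of factual recall, so factual
    tasks get the bare question.
    """
    if task_type in FACTUAL_TASKS:
        return question
    return f"{persona}\n\n{question}"
```

The set membership test stands in for whatever task classifier an application already has.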

  8. Article
    Viget · 8w

    Using Claude Code More Intentionally

    A practical guide to setting up Claude Code as a proper development collaborator rather than an ad-hoc chat tool. Key strategies include writing a thorough CLAUDE.md as an onboarding document, using .claudeignore to keep context clean, externalizing plans and artifacts to disk for persistence across sessions, building reusable 'skills' (markdown-defined repeatable processes stored in .claude/skills/), integrating CLI tools and MCP servers for structured external access, and using hooks to automatically run tests or commits after Claude finishes tasks. Also covers remote control via the Claude app for supervising long-running jobs and model switching (Haiku/Sonnet/Opus) to balance cost and capability. The core thesis: invest in the environment and infrastructure around Claude Code, not just the prompts.
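A hook that runs the test suite after Claude finishes a task might look like this settings.json fragment (the schema shown is a sketch; consult the Claude Code hooks reference for the exact event names and fields):

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "npm test" }
        ]
      }
    ]
  }
}
```

Wiring checks in as hooks rather than prompt instructions is the article's thesis in miniature: the environment enforces the process, so the prompts don't have to.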