
AI Workflow Best Practices

AI can be frustrating to use and can require endless prompting unless you adopt the right workflow. Below, I’ll dive into what I’ve found to be the best way to get the most out of LLMs and the tools built around them.

There’s too much noise around AI and not enough concrete advice and information. There’s a stream of blog posts where people give up on AI. I’ll show you how to keep using AI successfully without wanting to throw your laptop at the wall. 

But first, a warning. It can be easy to get sucked into the overuse of many tech tools, especially AI, so remember to keep a healthy balance in your life. If you’ve been sitting, staring at or talking to a screen for a few hours or more, stand up, take a break. Go outside. Stretch your legs. The secret with AI workflows is that you must remember to be the adult in the room, to be the one to guide where you’re going. There’s only so much LLMing you can do without melting your mind and losing the ability to be the guide to your AI.

Also, don’t feel overwhelmed by all the hype on YouTube and the internet. Most of the loudest voices are coming from hobbyists doing greenfield projects. A lot of their advice isn’t going to apply to real projects that ship to customers. The guide below does.

Let’s start with some ground rules.

You Are The Adult In The Room

Being the adult means that:

  1. You are accountable
  2. You are the team lead 
  3. You are the solutions architect

Here is rule number one: be the adult in the room. Be the guide. Understand that the process of using AI, just like programming, or many other creative endeavors, is iterative. You’ll give the AI something to do, and then you have to give feedback and adjust. You have to be the team lead. Don’t be the pointy-haired boss who gives vague instructions and walks away.

When I say “give the AI something,” I mean you’re using your tools of choice that pass info to an LLM. Many of the examples in this post were created using Claude Code, but most LLMs and agentic coding tools are interchangeable to some degree.

The first rule should be self-explanatory, but I find that it’s not. LLMs are tools. You are responsible for what your tools do. If you (or your LLM Agent, or your LLM Agent’s Agent) commits code to a repo or causes some shell script to run that destroys things, then you are responsible. You are also responsible for explaining what you did to others. If you ship a new feature that is buggy, you have to be able to answer questions about why you did what you did. You can’t say, “I don’t know what that commit does. Claude did it.”

More Than A Coder

The second rule is a mindset shift for many, especially dyed-in-the-wool programmers. You can’t just be a coder anymore. If you’re only using LLMs to save time on coding, you’re not taking full advantage of the opportunity. LLMs let you solve problems in areas you would never have dreamed of touching before. If you give LLMs direction, they can get a lot done, so now your job is to guide and give direction. That means more thinking about what you’re trying to achieve and why, and less thinking about the exact how of every single line.

Higher-Level Thinking Required 

Rule two leads into rule three. Not only are you the lead, but you also have to make sure the pieces hold together, the whole thing makes sense, and what you’re building is worth building. You should already have been doing all of this, but it used to be easier to put your head down and think only about tiny implementation details. Now you have to think at a higher level, because you have tools to do the lower-level work for you.

This new way of building things can be exhausting because so many artifacts get created so easily. You must ensure you don’t veer off course, and the faster you go, the more tiring that becomes. Remember to take breaks and don’t burn out.

Before we go into the specifics of how to work with LLMs, let’s go over some best practices of how to communicate with them.

LLM Best Practices

If you’re already an old hand at this, feel free to skip ahead. Here are a few quick best practices. I’m not going to do a deep dive, since the focus of this post is workflow, not best practices.

First, read the docs. Find the best practices for your setup. For example, if you’re using Claude Code, they’ve got a great set of best practices. These habits all make sense, but it’s easy to skip them if you’re not used to them. 

To summarize what to do:

  • Give the LLM a way to verify its work
  • Give the LLM guardrails and constraints
  • Be specific. Don’t rely on vague descriptions like “login is broken.”
  • Order the contents of docs and prompts from most to least important
  • Documentation should use progressive disclosure (provide links that can be pulled in conditionally)
  • Reference specific files
  • Configure your LLM
  • Use the LLM to get oriented in a code base
  • Have the LLM ask you questions
  • Pay attention to your context window (literally run /context every now and then)
  • Create PRD, Phases, and Plans. Work from checklists (we’ll get into more detail on these later)

And what not to do:

  • Don’t add a bunch of tasks into one context window
  • Don’t correct the LLM multiple times. Start a new context.
  • Don’t let your Claude.md, or other docs that get pulled into the context window, get too long

To summarize, best practices boil down to two big things: (1) managing expectations and (2) managing context. Manage expectations by giving the LLM a way to test its results and clear boundaries for what it should do. Managing context is a never-ending task; if you don’t manage it well, you’ll end up in the “dumb zone.”
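
In a PHP project, “a way to test its results” can be as simple as a test suite the agent can run on its own, for example with vendor/bin/phpunit. Here’s a minimal sketch, with an entirely hypothetical Quote class standing in for your real code:

<?php
use PHPUnit\Framework\TestCase;

// Hypothetical test the agent can run after every change to verify its own work.
// Amounts are in integer cents so the assertion stays exact.
final class QuoteTotalsTest extends TestCase
{
    public function testTotalIncludesTax(): void
    {
        $quote = new Quote( 10000, 10 ); // hypothetical Quote: subtotal in cents, tax rate in percent
        $this->assertSame( 11000, $quote->totalInCents() );
    }
}

Tell the LLM (in your prompt or in Claude.md) to run that suite after every change, and it gets a tight feedback loop instead of guessing whether it broke something.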

Now that we know how to communicate with the LLM, let’s move on to a good workflow.

RPI: Research, Plan, Implement

There are many names for this workflow; conceptually, it’s an old idea. We at Solid Digital use a version of RPI for all our projects. We call it Learn, Strategize, Create, but it’s the same thing. The idea is older than LLMs: first get oriented, then figure out what you’re going to do, and finally do it. It just makes sense. (Going back even further, Aristotle discussed a structured approach to action.)

LLMs are so quick at implementation that it can be tempting to dive right in and try to build something with one prompt. You’ll find that the more complex the project you’re working on, and the more defined your requirements for success, the less one-shotting will work. I think a lot of the frustration with LLMs comes from people trying to one-shot solutions with vague prompts. Instead, slow down. First, do your research.

RPI: Research

Research involves getting an understanding of the codebase and what you’re going to work on. I’ll create a directory for the artifacts associated with a particular project in the codebase to make it easier to track my work. For example, if I’m going to be modifying the quote and order creation process, I’ll create a folder for that and ask for a report about the quote and order workflow to be added to the directory. A simplified example prompt for a WordPress project would be something like:

Describe the current order creation process in a report @…

The document should clearly describe the:

  1. Files Involved
    1. Template files
    2. PHP logic files
    3. JS files
    4. AJAX handlers
    5. REST endpoint
    6. Filters or actions involved
  2. Data Flow
    1. Where quote data is initially captured
    2. How it moves between pages
    3. How it is stored and saved (session, cookies, etc)
    4. When and how quotes become orders
    5. Where the order data is stored
  3. Persistence Layer
    1. DB tables involved
    2. User associations
    3. Meta fields used
  4. Quote to Order transition
    1. What event triggers order creation
    2. Where payment or checkout logic executes
    3. What determines confirmation page state

Of course, I’ll only ask for things that aren’t already in the README, in Claude.md, or known to me, and I’ll point out any quirks of the system I know about. Remember, the LLM is trained on tons of code, so you don’t need to teach it how to code, but you do need to teach it about your specific codebase. That’s what research is: it teaches both the LLM and you about the codebase. If you already know the codebase, add what you know to the prompt.

Research can be done with many agents. In Claude Code, do the research in plan mode, then have Claude write the report to the repo.

You know you’re done with research when you have enough info to write a good plan. For small projects that you are deeply familiar with, the research might happen in your own head. For a big project you’re interacting with for the first time, LLMs greatly speed things up by looking through the code and letting you know what they find. LLMs are very good and efficient at following code. Try pointing Claude at a codebase like Elementor or Magento and asking how a certain piece of functionality works. You may be surprised at the high quality of the response.
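
As a concrete illustration, the project-artifact folder I mentioned earlier might end up looking something like this (a hypothetical layout; use whatever naming your team already follows):

docs/projects/quote-order-api/
    research.md    <- the report produced by the research prompt above
    plan.md        <- the plan you’ll create in the next step
    phase-1.md     <- checklist for the first implementation phase

Keeping everything the LLM and I produce for a project in one place makes it easy for teammates to follow along and for later chats to pick up where an earlier one left off.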

RPI: Plan

Most agentic coders have a “plan mode.” This mode forces the LLM to not change code and only noodle about what to do. Use Shift-Tab or /plan to toggle into plan mode with Claude Code.

Make sure to clear context before starting plan mode. Research usually burns a lot of context, but the only thing you need to carry from research into planning is your research report. If you don’t clear your context, you’ll soon end up in the LLM’s dumb zone.

Before the LLM can start planning, you have to be clear on what you want done. If you don’t have a plan, you’re not going to be able to have the LLM create a good plan. Garbage in, garbage out. And yes, you might have noticed that this process is starting to sound a whole lot like work. Once you’ve got a valuable goal and a clear idea of how to achieve it, describe what you want so the LLM can make a plan.

For planning, it’s important to be detailed and precise about the right things. Describe, as much as possible, what you want done. You’ll have to spell out how to get it done in keystone areas, but try to keep the how to a minimum. The LLM is your force multiplier for how.

Include the research report you created in the previous step. I always add it last, since the most important thing is what the LLM should do; after describing that, I link the report.

An example planning prompt could be something like:

Create a REST API endpoint for the creation of new quotes by external services.

Endpoint Requirements

The request payload must include:

  • All quote fields
  • Email of the user the quote is for

Behavior

  • Look up user by email
  • If user doesn’t exist
    • Return HTTP 404
    • Include error reason in response
  • If user exists
    • Create quote using the already existing pathway described in the research report
    • Associate quote with user

Response must return

  • User ID
  • Quote ID

Implementation strategy recommendations

  • Use register_rest_route
  • Endpoint should be at …

Security

  • Use API authentication via JWT bearer tokens
  • Validate payload schema
  • Log all requests
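
To give a feel for what a plan like this tends to produce, here’s a rough sketch of such an endpoint in WordPress. Everything project-specific here is a placeholder: the myplugin/v1/quotes route (the real path is elided above), the myplugin_create_quote_for_user() helper standing in for the existing quote-creation pathway from the research report, and the bearer-token check, which stands in for whatever JWT validation your project actually uses.

<?php
// Sketch only: route, helper, and auth are placeholders, not a drop-in implementation.
add_action( 'rest_api_init', function () {
    register_rest_route( 'myplugin/v1', '/quotes', array(
        'methods'             => 'POST',
        'callback'            => 'myplugin_handle_quote_request',
        'permission_callback' => function ( WP_REST_Request $request ) {
            // Placeholder: validate the JWT bearer token and log the request here.
            return is_string( $request->get_header( 'authorization' ) );
        },
        'args'                => array(
            'email' => array( 'required' => true, 'type' => 'string', 'format' => 'email' ),
        ),
    ) );
} );

function myplugin_handle_quote_request( WP_REST_Request $request ) {
    $user = get_user_by( 'email', $request['email'] );

    if ( ! $user ) {
        // Requirement from the plan: HTTP 404 plus an error reason.
        return new WP_Error( 'user_not_found', 'No user exists for that email.', array( 'status' => 404 ) );
    }

    // Hypothetical helper wrapping the existing quote-creation pathway found during research.
    $quote_id = myplugin_create_quote_for_user( $user->ID, $request->get_json_params() );

    return rest_ensure_response( array(
        'user_id'  => $user->ID,
        'quote_id' => $quote_id,
    ) );
}

The point isn’t that the LLM will write exactly this code; it’s that a plan with requirements this explicit leaves it very little room to wander.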

You should also prompt the LLM to ask you questions. You can also ask whether there’s anything you missed or whether it has any suggestions. How much you prompt for questions and feedback will depend on your confidence and familiarity with what you’re planning. I always prompt for questions in case I wrote something unclear or forgot something.

Your LLM will create a plan. Depending on its complexity, you can move forward or keep refining. If the plan has many steps, make sure to break it into phases. For more complicated plans, it’s a good idea to ask the LLM how confident it is, as a percentage, in each step or phase. If confidence is low, ask how it can be increased. Often, providing more information or making a few decisions will raise confidence and improve your plan.
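
The follow-up prompt can be as simple as something like:

For each phase of the plan, give your confidence (0–100%) that it can be implemented as written. For anything below 80%, tell me what information or decisions would raise it.

The exact threshold and wording are just an illustration; the point is to surface uncertainty before any code gets written.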

Once you’ve got your plan, you’ll have to clear context one more time before implementing. Claude Code already offers this as an option after plan creation. If your plan is complex and will take many iterations to implement, store the plan in the project itself. Most agentic tools automatically store plans in a dot file, but those files are more of a pain to access, especially for other team members, than a plan that lives in the repo.

At this point, after quite a bit of blood, sweat, and tears, we get to the fun part. Actually building.

RPI: Implementation

If you’re working on anything even slightly complex, you’ll want to use one chat per task. That is, clear your context between implementation steps and don’t get sucked into tangents during a chat, or you’ll end up in the dumb zone.

Claude Code has a great feature where you can customize the status line under your prompt. I like to add the percentage of the context window used, so it’s front and center.

[Screenshot: customized Claude Code status line showing context usage]

During any step of your workflow, but especially implementation, check often on what is eating your context window. Turn off MCPs you’re not using, make sure your Claude.md isn’t pulling in too much, and so on.

For example, connecting to Chrome with Claude via MCP can use up 3% of your context out of the box, so turn it off if you’re not using it.


If you have a good plan, you’ll hardly have to prompt; you’ll just have the LLM move on to the next phase in the plan. As you notice recurring issues, correct them in Claude.md. For example, if the LLM isn’t updating your version file or isn’t writing tests, add to Claude.md that tests must be added for each feature, or that the version should be semantically bumped at the end of each phase.
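
As an illustration, those Claude.md additions might read something like this (the wording is hypothetical; write rules that match your project):

- Every new feature must come with tests. Run the test suite and make sure it passes before marking a phase complete.
- At the end of each phase, bump the version file semantically (patch for fixes, minor for new features).

Small, written-down corrections like these compound: you fix the behavior once instead of re-prompting for it in every chat.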

Keep an eye on the code being written and changed. Use git. Have the LLM use git. Have the LLM break phases into multiple commits to make them easier to understand. Ask the LLM about the choices it made, and speak up when you don’t understand or disagree with something. It’s a good idea to view all the files changed, glance at every line, and deeply read the important ones.

The older a codebase becomes, the more careful you have to be with the changes you make, and the narrower the latitude you give the LLM. In a new project, you can let the LLM do whatever: generate boilerplate, directory structures, and other patterns. If you’re in maintenance mode on a legacy system, you want to be sure the LLM understands the existing patterns.

As you research, plan, and implement, you’ll create many artifacts in addition to the code you write. There’s too much to get into here, but make sure you’re familiar with what your coding tool offers. For example, Claude Code has Claude.md, skills directories, memory files, slash commands, and more, most of which can be set globally or per project.

Wrapping Up

The only way you’ll get better at using AI is by using AI. Today’s workflow will be obsolete in six months, so don’t get discouraged. Experiment and play. Structure your efforts. Research, Plan, and Implement is a great way to approach things, especially in brownfield codebases.

Remember to be The Adult In The Room. You are accountable. You are the team lead. You are the solutions architect. High-level oversight is time well spent, and it’ll allow LLMs to work as a force multiplier for you.

AI is a tool that demands an intentional and structured approach. By slowing down, thinking at a higher level, and applying the RPI workflow, you’ll make LLMs part of your daily practice. You’ll achieve greater efficiency and tackle problems that were once out of reach.
