Welcome to the Blockchain & AI Forum, where your technology questions are answered. Today's question: are there practical tips for building an artificial intelligence (AI) agent? Answer: yes. Today I'll explain, starting with what an AI agent is.

What is an AI Agent. AI agents are poised to become ubiquitous. Says who? All the IT giants (Amazon, Microsoft, Google, and others), that's who. Let's start by defining AI agents. According to IBM, an AI agent is:
a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its workflow and utilizing available tools. AI agents can encompass a wide range of functionalities beyond natural language processing including decision-making, problem-solving, interacting with external environments and executing actions. AI agents can be deployed in various applications to solve complex tasks, including software design, IT automation, code-generation tools and conversational assistants. AI agents use advanced large language models (LLMs) to comprehend and respond to user inputs.
Knowing When to Build an AI Agent. OpenAI, the company behind ChatGPT, says building AI agents requires rethinking how existing systems make decisions and handle complexity. Therefore, if you find yourself dealing with one of the situations below, you have an ideal circumstance for introducing AI agents, says OpenAI:
- Complex Decision Making. Workflows involving nuanced judgment, exceptions, or context-sensitive decisions, e.g. refund approval in customer service workflows.
- Difficult to Maintain Rules. Systems that have become unwieldy due to extensive and intricate rule sets, making updates costly or error-prone, e.g. performing vendor security reviews.
- Heavy Reliance on Unstructured Data. Scenarios that involve natural language, extracting meaning from documents, or interacting with users conversationally, e.g. processing a home insurance claim.
AI Agent Design Foundations. Fundamentally, an AI agent consists of three core components: 1) model; 2) tools; and 3) instructions. Let's briefly define each core component. Model refers to the large language model powering the agent's reasoning and decision-making. Tools means the external functions or APIs the agent can use to take action. And instructions are the explicit guidelines and guardrails defining how the agent behaves. Next we examine principles for selecting a model.
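To make the three core components concrete, here is a minimal sketch in Python. The class, the model name, and the stub tool are all illustrative assumptions, not a real framework's API:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of the three core components: model, tools, instructions.
@dataclass
class Agent:
    model: str                 # the LLM powering reasoning and decision-making
    instructions: str          # explicit guidelines and guardrails for behavior
    tools: list[Callable] = field(default_factory=list)  # external functions/APIs

def lookup_order(order_id: str) -> str:
    """Example data tool: fetch order context (stubbed here)."""
    return f"Order {order_id}: shipped"

# Assembling an agent is just wiring the three components together.
support_agent = Agent(
    model="some-llm-model",    # placeholder name, not a real model ID
    instructions="Help customers with order questions. Escalate refunds over $100.",
    tools=[lookup_order],
)

print(support_agent.tools[0]("A123"))  # → Order A123: shipped
```

A real agent would hand the instructions and tool definitions to the LLM on every turn; the point here is only that all three components are explicit, inspectable objects.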
Principles for Choosing a Model. Not all AI models are identical; they vary in capability. And, of course, not every task requires the best model. Ideally, build your AI agent with the most capable model for the task, factoring in trade-offs related to task complexity, latency, and cost. With those considerations in mind, use the following methodology:
- Set up evaluations to establish a performance baseline.
- Focus on meeting your accuracy target with the best model available.
- Optimize for cost and latency by replacing larger models with smaller ones where possible. To review OpenAI's model lineup, see https://platform.openai.com/docs/guides/model-selection
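The methodology above can be sketched as a small evaluation loop: establish a baseline with the most capable model, then test whether a smaller model still meets the accuracy target. The eval set and both "models" are stand-in stubs, not real LLM calls:

```python
# Evaluation-first model selection: baseline with the big model, then see
# whether a smaller (cheaper, faster) model still clears the accuracy bar.
def run_eval(answer_fn, eval_set):
    """Score an answer function against (question, expected) pairs."""
    correct = sum(1 for question, expected in eval_set if answer_fn(question) == expected)
    return correct / len(eval_set)

eval_set = [("2+2", "4"), ("capital of France", "Paris")]

# Stand-ins for calls to a large and a small model (illustrative only).
big_model = lambda q: {"2+2": "4", "capital of France": "Paris"}.get(q, "")
small_model = lambda q: {"2+2": "4"}.get(q, "")

baseline = run_eval(big_model, eval_set)     # step 1: performance baseline
candidate = run_eval(small_model, eval_set)  # step 3: try a smaller model

ACCURACY_TARGET = 0.9                        # step 2: your accuracy target
if candidate >= ACCURACY_TARGET:
    print("smaller model meets the target; swap it in for cost/latency")
else:
    print(f"keep the larger model (baseline {baseline:.0%} vs candidate {candidate:.0%})")
```

The design point is that the eval set, not intuition, decides when a downgrade is safe.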
What are AI Tools. AI tools extend the capabilities of AI agents by using APIs from underlying applications or systems. Tools should have a standardized definition, enabling flexible, many-to-many relationships between tools and agents. AI agents need three types of tools:
- Data. Data tools enable AI agents to retrieve the context and information necessary for executing workflows.
- Action. Action tools enable agents to interact with systems to take actions, e.g. adding new information, updating records, or sending messages.
- Orchestration. This is where it gets a bit science fiction. Orchestration allows AI agents themselves to serve as tools for one or more other AI agents! When should you use multiple agents? When a single agent fails to follow complicated instructions or consistently selects incorrect tools.
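The three tool types above can be illustrated with a single standardized definition, which is also what makes orchestration possible: because every tool shares the same shape, an agent can itself be wrapped as a tool for another agent. The dict schema and every tool here are illustrative assumptions, not a real SDK:

```python
# A standardized tool definition (assumed schema: name, description, run).
def make_tool(name, description, fn):
    return {"name": name, "description": description, "run": fn}

# Data tool: retrieves context from an underlying system (stubbed).
crm_tool = make_tool("crm_lookup", "Fetch a customer record",
                     lambda cid: {"id": cid, "tier": "gold"})

# Action tool: takes an action in an external system (stubbed).
email_tool = make_tool("send_email", "Send a message",
                       lambda to, body: f"sent to {to}")

def translation_agent(text: str) -> str:
    """A stub agent; a real one would call an LLM with its own instructions."""
    return text.upper()  # pretend 'translation'

# Orchestration: the agent is exposed to a manager agent as just another tool.
translator_tool = make_tool("translate", "Translate text", translation_agent)

toolbox = [crm_tool, email_tool, translator_tool]
print(translator_tool["run"]("hello"))  # → HELLO
```

Because agents and plain functions share one definition, the many-to-many relationship between tools and agents falls out for free: any agent can be handed any subset of the toolbox.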
Best Practices for AI Agent Instructions. Without high-quality instructions, your AI agent will be suboptimal. Below are a few suggestions for creating high-quality instructions:
- Use existing documents, such as operating procedures, support scripts, or policy documents, as the starting point.
- Prompt the AI agent to break down tasks into smaller, more manageable steps.
- Define clear actions. In other words, make sure every step corresponds to a specific action.
- Capture edge cases. Not everything fits in a box and sometimes information is missing. Hence, instructions should anticipate common variations and include instructions on how to handle the non-routine with conditional steps.
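Putting the four suggestions together, here is what a set of instructions might look like: numbered steps, one specific action per step, and a conditional branch for the edge case where information is missing. The wording is a hypothetical example, not an official template:

```python
# Illustrative agent instructions applying the best practices above.
INSTRUCTIONS = """
You are a home-insurance claims assistant.
1. Ask the user for their policy number.
2. If the policy number is missing or invalid, ask once more; if still
   unavailable, transfer the user to a human agent.
3. Retrieve the claim record with the lookup tool.
4. Summarize the claim status in two sentences.
""".strip()

# A quick structural check: count the numbered action steps.
steps = [line for line in INSTRUCTIONS.splitlines() if line.lstrip()[:1].isdigit()]
print(len(steps), "steps")  # → 4 steps
```

Note that step 2 is the edge case handled with a conditional step, and every step names one concrete action, so the agent never has to guess what "handle the claim" means.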
Guardrails. What are guardrails? Essentially, guardrails are a layered defense mechanism. When properly designed, guardrails help manage risk. Consider three factors when constructing guardrails: 1) privacy and safety; 2) adding new guardrails based on real-world experience; and 3) optimizing the balance between security and user experience. With that in mind, below are possible AI agent guardrails:
- Relevance Classifier. This ensures the AI Agent stays within the intended scope by flagging off-topic queries.
- Safety Classifier. Safety classifiers detect unsafe inputs that attempt to exploit system vulnerabilities.
- PII Filter. PII filters prevent unnecessary exposure of personally identifiable information.
- Moderation. Moderation guardrails flag harmful or inappropriate inputs.
- Tool Safeguards. With tool safeguards you can assess the risk of each tool available to the AI agent.
- Rules-Based Protections. The idea behind rules-based protection is to use simple deterministic measures to prevent known threats.
- Output Validation. Ensure responses align with brand values via prompt engineering and content checks.
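Two of the guardrails above, rules-based protections and the PII filter, are simple enough to sketch deterministically. The blocklist phrase and the regex are illustrative assumptions, not a complete threat model or PII taxonomy:

```python
import re

# Rules-based protection: a simple deterministic blocklist for known threats.
BLOCKLIST = {"ignore previous instructions"}

# PII filter: a regex matching the shape of a US Social Security number.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def check_input(text: str) -> str:
    """Layered defense: run cheap deterministic checks before the model sees input."""
    if any(phrase in text.lower() for phrase in BLOCKLIST):
        return "blocked"   # known prompt-injection pattern
    if SSN_PATTERN.search(text):
        return "redact"    # PII detected; strip it before processing
    return "allow"

print(check_input("Please ignore previous instructions"))  # → blocked
print(check_input("My SSN is 123-45-6789"))                # → redact
print(check_input("What is my order status?"))             # → allow
```

In practice these deterministic checks sit in front of model-based guardrails (relevance and safety classifiers), which is what makes the defense layered: cheap rules catch known threats, and classifiers catch the nuanced ones.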
Until next time,
Yogi Nelson and his AI Agent
