
skills (10)

customaize-agent:context-engineering

neolabhq/context-engineering-kit · AI/ML

Context is the complete state available to a language model at inference time. It includes everything the model can attend to when generating responses: system instructions, tool definitions, retrieved documents, message history, and tool outputs. Understanding context fundamentals is prerequisite to effective context engineering.
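The components listed above can be made concrete with a small sketch of how they are assembled into a single request. This is an illustrative structure only, assuming a generic chat-style LLM API; `build_context` and all field names are hypothetical, not part of the skill itself.

```python
# Assemble every source of context named above -- system instructions,
# tool definitions, retrieved documents, message history, and tool
# outputs -- into one request payload (illustrative structure).

def build_context(system_prompt, tools, retrieved_docs, history, tool_outputs):
    """Combine all context components into a single chat-style payload."""
    messages = [{"role": "system", "content": system_prompt}]
    # Retrieved documents are commonly injected as tagged user content.
    for doc in retrieved_docs:
        messages.append({"role": "user", "content": f"<document>\n{doc}\n</document>"})
    # Prior conversation turns follow in order.
    messages.extend(history)
    # Tool results feed back into the context as their own entries.
    for out in tool_outputs:
        messages.append({"role": "tool", "content": out})
    return {"messages": messages, "tools": tools}

payload = build_context(
    system_prompt="You are a helpful assistant.",
    tools=[{"name": "search", "description": "Search the web."}],
    retrieved_docs=["Context is everything the model can attend to."],
    history=[{"role": "user", "content": "Summarise the document."}],
    tool_outputs=[],
)
print(len(payload["messages"]))  # system + document + one history turn -> 3
```

Everything in this payload competes for the model's attention at inference time, which is why each component is a lever for context engineering.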

customaize-agent:prompt-engineering

neolabhq/context-engineering-kit · AI/ML

Advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability.

customaize-agent:create-command

neolabhq/context-engineering-kit · AI/ML

This meta-command helps you create other commands.

customaize-agent:create-hook

neolabhq/context-engineering-kit · AI/ML

Analyze the project, suggest practical hooks, and create them with proper testing.

customaize-agent:test-skill

neolabhq/context-engineering-kit · AI/ML

Test a skill provided by the user or one developed earlier.

customaize-agent:agent-evaluation

neolabhq/context-engineering-kit · AI/ML

Evaluation of agent systems requires different approaches than traditional software or even standard language model applications. Agents make dynamic decisions, are non-deterministic between runs, and often lack single correct answers. Effective evaluation must account for these characteristics while providing actionable feedback. A robust evaluation framework enables continuous improvement, catches regressions, and validates that context engineering choices achieve intended effects.
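One common answer to the non-determinism mentioned above is to score pass rates over repeated trials rather than trusting a single run. The sketch below illustrates that idea only; `run_agent` and `pass_rate` are hypothetical stand-ins, not part of the skill.

```python
# Because agents are non-deterministic between runs, a single trial is a
# weak signal. Run each task several times and report the fraction of
# trials that satisfy a programmatic check.
import random

def pass_rate(run_agent, task, check, trials=5):
    """Fraction of trials in which the agent's output passes the check."""
    passes = sum(1 for _ in range(trials) if check(run_agent(task)))
    return passes / trials

# Illustrative stand-in for a real agent call: succeeds ~80% of the time.
random.seed(0)
def run_agent(task):
    return "42" if random.random() < 0.8 else "unknown"

rate = pass_rate(run_agent, "answer the question",
                 lambda out: out == "42", trials=100)
print(0.0 <= rate <= 1.0)  # True: rate is a proportion
```

Tracking this rate per task over time is what lets an evaluation framework catch regressions and confirm that a context-engineering change actually moved the number.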

customaize-agent:apply-anthropic-skill-best-practices

neolabhq/context-engineering-kit · AI/ML

Apply Anthropic's official skill authoring best practices to your skill.

customaize-agent:create-skill

neolabhq/context-engineering-kit · AI/ML

This command provides guidance for creating effective skills.

customaize-agent:thought-based-reasoning

neolabhq/context-engineering-kit · AI/ML

Chain-of-Thought (CoT) prompting and its variants encourage LLMs to generate intermediate reasoning steps before arriving at a final answer, significantly improving performance on complex reasoning tasks. These techniques transform how models approach problems by making implicit reasoning explicit.
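A minimal zero-shot variant of this idea appends a reasoning trigger to the question and reads the final answer off the end of the step-by-step completion. The helper names and the sample trace below are illustrative, not part of the skill.

```python
# Zero-shot Chain-of-Thought sketch: trigger explicit reasoning, then
# take the last non-empty line of the completion as the final answer.

def make_cot_prompt(question: str) -> str:
    """Append the common zero-shot CoT trigger to a question."""
    return f"{question}\nLet's think step by step."

def extract_final_answer(completion: str) -> str:
    """CoT completions typically end with the answer; take the last line."""
    lines = [ln for ln in completion.strip().splitlines() if ln.strip()]
    return lines[-1]

# A hand-written reasoning trace standing in for a model completion:
trace = (
    "If the ball costs x, the bat costs x + 1.00.\n"
    "Then 2x + 1.00 = 1.10, so x = 0.05.\n"
    "The ball costs $0.05."
)
print(extract_final_answer(trace))  # The ball costs $0.05.
```

Making the intermediate steps explicit like this is exactly the transformation the description above refers to: the model's implicit reasoning becomes inspectable text.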

customaize-agent:test-prompt

neolabhq/context-engineering-kit · AI/ML

Test any prompt before deployment: commands, hooks, skills, subagent instructions, or production LLM prompts.
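Pre-deployment prompt testing can be as simple as running a prompt template against fixed cases and checking each output. The harness below is a minimal sketch of that pattern; `run_prompt_tests` and the stubbed model are hypothetical, assuming whatever LLM client the project actually uses is passed in as `call_llm`.

```python
# Regression-style prompt testing: run a template over fixed cases and
# record which cases pass their expectation check.

def run_prompt_tests(call_llm, prompt_template, cases):
    """Return (passed, failed) lists of case names for a prompt template."""
    passed, failed = [], []
    for case in cases:
        output = call_llm(prompt_template.format(**case["inputs"]))
        (passed if case["expect"](output) else failed).append(case["name"])
    return passed, failed

# Stubbed model for illustration only:
fake_llm = lambda prompt: "POSITIVE" if "great" in prompt else "NEGATIVE"

passed, failed = run_prompt_tests(
    fake_llm,
    "Classify the sentiment of: {text}",
    [
        {"name": "pos", "inputs": {"text": "great product"},
         "expect": lambda o: o == "POSITIVE"},
        {"name": "neg", "inputs": {"text": "awful"},
         "expect": lambda o: o == "NEGATIVE"},
    ],
)
print(passed, failed)  # ['pos', 'neg'] []
```

The same harness shape applies whether the prompt under test is a command, a hook, a subagent instruction, or a production LLM prompt: only `call_llm` and the cases change.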