Prompts
Colour: Amber (#f59e0b)
Prompts are structured instructions sent to AI models. They are the core building block of most workflows — defining what the model should do, how it should behave, and what output to produce.
Variables
Prompts support variables using the {{VARIABLE_NAME}} syntax. Place them anywhere in your prompt content and they’ll be detected automatically.
Syntax Rules
- Format: `{{NAME}}` — double curly braces around the variable name
- Names are alphanumeric plus underscores only (e.g. `{{USER_INPUT}}`, `{{CONTEXT_3}}`)
- Case-sensitive: `{{Name}}` and `{{NAME}}` are treated as different variables
- Variables are purely user-defined — there are no predefined or system variables
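The rules above can be captured with a short regular expression. This is a minimal sketch of how detection could work, not skrptiq's actual parser:

```python
import re
from collections import Counter

# {{NAME}} where NAME is alphanumeric plus underscores; matching is case-sensitive.
VARIABLE_RE = re.compile(r"\{\{([A-Za-z0-9_]+)\}\}")

def detect_variables(prompt: str) -> Counter:
    """Return each detected variable name with its occurrence count."""
    return Counter(VARIABLE_RE.findall(prompt))

counts = detect_variables("Hi {{USER_NAME}}, {{Name}} vs {{NAME}}: {{USER_NAME}}")
# {{Name}} and {{NAME}} are counted separately because names are case-sensitive.
```

Note that `{{USER_NAME}}` is counted twice here, which is how the occurrence counts in the Analysis tab behave.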
How Variables Are Used
- Analysis tab: lists all detected variables with their occurrence count (e.g. `{{USER_INPUT}} ×2` if it appears twice)
- Test tab: generates an input field for each variable (up to 6). Fill in values and run the test — skrptiq substitutes your values before sending to the LLM
- Unfilled variables: if you leave a variable empty when testing, the literal `{{VARIABLE}}` text passes through to the model
- Test case persistence: variable values are saved with each test case in the node's metadata, so they survive closing and reopening the editor
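The substitution behaviour, including the literal passthrough for unfilled variables, can be sketched like this (an illustration of the described semantics, not skrptiq's implementation):

```python
import re

VARIABLE_RE = re.compile(r"\{\{([A-Za-z0-9_]+)\}\}")

def substitute(prompt: str, values: dict) -> str:
    """Replace filled variables; unfilled ones pass through as literal text."""
    def repl(match):
        value = values.get(match.group(1), "")
        # An empty or missing value leaves the literal {{NAME}} in place.
        return value if value else match.group(0)
    return VARIABLE_RE.sub(repl, prompt)

result = substitute(
    "Hello {{USER_NAME}}, format: {{OUTPUT_FORMAT}}",
    {"USER_NAME": "Ada"},
)
# result == "Hello Ada, format: {{OUTPUT_FORMAT}}"
```

Here `{{OUTPUT_FORMAT}}` was left unfilled, so the model would receive the literal placeholder text.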
Example
You are a {{ROLE}} assistant. The user's name is {{USER_NAME}}.
Analyse the following text and provide a {{OUTPUT_FORMAT}} summary:
{{INPUT_TEXT}}
This prompt has 4 variables. In the Test tab, you’d see input fields for ROLE, USER_NAME, OUTPUT_FORMAT, and INPUT_TEXT.
Prompt Templates
When creating a new prompt node, you can start from one of six built-in templates:
| Template | Description |
|---|---|
| System Prompt | Role definition with constraints and output format. Sets up the model’s persona, boundaries, and response structure. |
| Few-Shot Prompt | System message paired with example input/output pairs. Teaches the model the expected format through demonstration. |
| Chain-of-Thought | Step-by-step reasoning before the final answer. Forces the model to show its working for complex problems. |
| Extraction Prompt | Pulls structured data from unstructured input. Returns JSON with typed fields and a confidence score. |
| Classification Prompt | Categorises input into predefined classes. Returns the chosen category, confidence, and reasoning. |
| Conversational Prompt | Persona-based conversational agent with tone, knowledge boundaries, and escalation rules. |
Each template comes pre-filled with sensible variable placeholders. Pick the closest match and adapt it to your use case, or start from a blank prompt if none fit.
AI Features
Prompts have access to four AI-powered tools in the editor’s right panel. See AI Features for full details.
- Analysis — Scores your prompt 0-100 against best practices. Shows a checklist of passing and failing criteria, plus token count and variable statistics.
- Test — Create test cases, fill in variables with sample values, run the prompt against an LLM, and auto-generate test inputs.
- Review — Expert AI review of your prompt’s quality, clarity, and effectiveness.
- Refine — AI-suggested improvements you can accept or reject.
Content Type Detection
The editor detects the content type automatically — markdown, code blocks, JSON schemas, and plain text are all recognised. The detected type affects syntax highlighting and how the content is displayed.
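A rough sketch of how such detection could work. The heuristics and type names below are illustrative assumptions; skrptiq's actual rules are not documented here:

```python
import json

def detect_content_type(content: str) -> str:
    """Illustrative heuristic: try JSON, then code fences, then markdown markers."""
    stripped = content.strip()
    try:
        json.loads(stripped)
        return "json"
    except (ValueError, TypeError):
        pass
    if "```" in stripped:
        return "code"
    # Common markdown line prefixes: headings, list items, blockquotes.
    if any(line.startswith(("#", "- ", "* ", "> ")) for line in stripped.splitlines()):
        return "markdown"
    return "text"

detect_content_type('{"type": "object"}')  # "json"
detect_content_type("# Title\n- item")     # "markdown"
```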
Connections
Prompts connect to other nodes in the graph:
- Used by workflows and skills (via `uses` edges) — a workflow or skill references the prompt as part of its execution
- References sources (via `references` edges) — a prompt can link to source material it draws on
- Derived from other prompts (via `derived_from` edges) — track prompt lineage when you create variations
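A minimal way to model these three edge kinds in code. The field names here are assumptions for illustration, not skrptiq's schema:

```python
from dataclasses import dataclass

# The three edge kinds described above.
EDGE_KINDS = {"uses", "references", "derived_from"}

@dataclass(frozen=True)
class Edge:
    source: str  # id of the node the edge starts from (workflow, skill, or prompt)
    target: str  # id of the node it points at (prompt or source)
    kind: str

    def __post_init__(self):
        if self.kind not in EDGE_KINDS:
            raise ValueError(f"unknown edge kind: {self.kind}")

edge = Edge(source="workflow-1", target="prompt-7", kind="uses")
```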