PDL is a declarative language designed for developers to create reliable, composable LLM prompts and integrate them into software systems. It provides a structured way to specify prompt templates, enforce validation, and compose LLM calls with traditional rule-based systems.
Minimal installation:
pip install prompt-declaration-language
You can then create a PDL file (in YAML format):
description: Simple LLM interaction
text:
- "write a hello world example\n"
- model: ollama/granite-code:8b
  parameters:
    stop_sequences: '!'
    temperature: 0
and run it:
pdl <path/to/example.pdl>
- LLM Integration: Compatible with any LLM, including IBM watsonx
- Prompt Engineering:
  - Template system for single- and multi-shot prompting
  - Composition of multiple LLM calls
  - Integration with tools (code execution and APIs)
- Development Tools:
  - Type checking for model inputs and outputs
  - Python SDK
  - Chat API support
  - Live Document visualization for debugging
- Control Flow: Variables, conditionals, loops, and functions (see the sketch after this list)
- I/O Operations: File/stdin reading, JSON parsing
- API Integration: Native REST API support (Python)
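As a quick taste of the control-flow blocks, here is a minimal sketch combining a loop with a conditional. The for/repeat and if/then/else block shapes below are our reading of the PDL schema, not copied from an official example, so verify them against the schema linked in the VSCode setup:
description: Control flow sketch
text:
- for:
    i: [2, 7, 12]
  repeat:
    if: ${ i > 10 }
    then: "${ i } is greater than 10\n"
    else: "${ i } is at most 10\n"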
Requires Python 3.11+ (Windows users should use WSL)
# Basic installation
pip install prompt-declaration-language
# Development installation with examples
pip install 'prompt-declaration-language[examples]'
You can run PDL with LLM models locally using Ollama, or through a cloud service.
If you use watsonx:
export WATSONX_URL="https://{region}.ml.cloud.ibm.com"
export WATSONX_APIKEY="your-api-key"
export WATSONX_PROJECT_ID="your-project-id"
If you use Replicate:
export REPLICATE_API_TOKEN="your-token"
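Once the credentials are exported, model blocks can reference provider-prefixed model IDs directly. As a minimal smoke test (reusing the watsonx model that appears in the template example below):
description: Provider smoke test
text:
- model: watsonx/ibm/granite-34b-code-instruct
  input: "Say hello.\n"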
VSCode setup for syntax highlighting and validation:
// .vscode/settings.json
{
  "yaml.schemas": {
    "https://ibm.github.io/prompt-declaration-language/dist/pdl-schema.json": "*.pdl"
  },
  "files.associations": {
    "*.pdl": "yaml"
  }
}
In this example we use external content from data.yaml and watsonx as the LLM provider.
description: Template with variables
defs:
  USER_INPUT:
    read: ../examples/code/data.yaml
    parser: yaml
text:
- model: watsonx/ibm/granite-34b-code-instruct
  input: |
    Process this input: ${USER_INPUT}
    Format the output as JSON.
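The contents of data.yaml can be any YAML document; a hypothetical example of what it might hold:
# hypothetical contents of ../examples/code/data.yaml
task: summarize
items:
- alpha
- beta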
PDL can also execute code inline; the value assigned to result inside a code block becomes the block's result:
description: Code execution example
text:
- "\nFind a random number between 1 and 20\n"
- def: N
  lang: python
  code: |
    import random
    result = random.randint(1, 20)
- "\nthe result is (${ N })\n"
PDL also supports multi-turn chat interactions. In the chatbot below, each read block contributes only to the chat context (contribute: [context]), so user input is sent to the model without being echoed in the output document:
description: chatbot
text:
- read:
  def: user_input
  message: "hi? [/bye to exit]\n"
  contribute: [context]
- repeat:
    text:
    - model: ollama/granite-code:8b
    - read:
      def: user_input
      message: "> "
      contribute: [context]
  until: ${ user_input == '/bye' }
To debug a program, run it with a trace or a log:
pdl --trace <file.json> <my-example.pdl>
pdl --log <my-logfile> <my-example.pdl>
Upload trace files to the Live Document Viewer for visual debugging.
- Template Organization:
  - Keep templates modular and reusable
  - Use variables for dynamic content
  - Document template purpose and requirements
- Error Handling:
  - Validate model inputs and outputs (see the type-checking sketch after this list)
  - Include fallback logic
  - Log intermediate results
- Performance:
  - Cache frequent LLM calls
  - Use appropriate temperature settings
  - Implement retry logic for API calls
- Security:
  - Enable sandbox mode for untrusted code
  - Validate all inputs
  - Follow API key best practices
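For input/output validation, a type annotation can be attached to a model call. A sketch assuming the spec field from the PDL schema (verify the exact field names against the schema linked in the VSCode setup above):
description: Typed model output
text:
- model: watsonx/ibm/granite-34b-code-instruct
  input: |
    Return a JSON object with fields "name" (string) and "age" (integer).
  parser: json
  spec: {name: str, age: int}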
See the contribution guidelines for details on:
- Code style
- Testing requirements
- PR process
- Issue reporting