# Adding a New Eval
Create test cases that measure HolmesGPT's diagnostic accuracy and help track improvements over time.
## Test Types
- Ask Holmes: Chat-like Q&A interactions
- Investigation: AlertManager event analysis
## Quick Start

1. Create a test folder: `tests/llm/fixtures/test_ask_holmes/99_your_test/`
2. Create `test_case.yaml` (see the sketch after this list).
3. Create `manifest.yaml` with your test scenario (see the sketch after this list).
4. Run the test (see the command after this list).
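A minimal sketch of what steps 2–4 might look like. The prompt, expected output, pod name, and namespace are illustrative assumptions rather than fixtures from the repository; the field names follow the Test Configuration reference below.

```yaml
# tests/llm/fixtures/test_ask_holmes/99_your_test/test_case.yaml
user_prompt: "Why is the payment-api pod not ready?"   # hypothetical question
expected_output:
  - "CrashLoopBackOff"   # elements that must appear in the answer
tags: [easy, kubernetes]
```

The manifest is an ordinary Kubernetes manifest describing the scenario under test, for example a pod that fails on startup:

```yaml
# tests/llm/fixtures/test_ask_holmes/99_your_test/manifest.yaml
apiVersion: v1
kind: Pod
metadata:
  name: payment-api     # hypothetical pod name
  namespace: app-99
spec:
  containers:
    - name: payment-api
      image: busybox
      command: ["sh", "-c", "exit 1"]   # fail on startup to produce CrashLoopBackOff
```

To run the test live, following the pytest invocation used in Mock Generation below:

```bash
RUN_LIVE=true pytest tests/llm/test_ask_holmes.py -k "99_your_test"
```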
## Test Configuration

### Required Fields

- `user_prompt`: Question for Holmes
- `expected_output`: List of required elements in the response
- `before_test` / `after_test`: Setup/teardown commands (run with `RUN_LIVE=true`)
### Optional Fields

- `tags`: List of test markers (e.g., `[easy, kubernetes, logs]`)
- `skip`: Boolean to skip the test
- `skip_reason`: Explanation of why the test is skipped
- `mocked_date`: Override system time for the test (e.g., `"2025-06-23T11:34:00Z"`)
- `cluster_name`: Specify the Kubernetes cluster name
- `include_files`: List of files to include in context (like the CLI's `--include` flag)
- `runbooks`: Override the runbook catalog (see Custom Runbooks below)
- `toolsets`: Configure toolsets, inline or via a separate `toolsets.yaml` file (see Toolsets Configuration below)
- `port_forwards`: Configure port forwarding for tests
- `test_env_vars`: Environment variables set during test execution
- `mock_policy`: Control mock behavior (`always_mock`, `never_mock`, or `inherit`)
- `conversation_history`: For multi-turn conversation tests
- `expected_sections`: For investigation tests only
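A sketch of a `test_case.yaml` that combines several optional fields. The values are illustrative, and `test_env_vars` is assumed here to be a key/value map:

```yaml
user_prompt: "What errors appear in the checkout service logs?"   # hypothetical
expected_output:
  - "connection refused"
tags: [kubernetes, logs]
mocked_date: "2025-06-23T11:34:00Z"   # pin the clock for reproducible date handling
cluster_name: test-cluster
mock_policy: inherit                  # defer to the global mock setting
test_env_vars:                        # assumed key/value map, set for the test run
  LOG_LEVEL: debug
```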
## Mock Generation

```bash
# Generate mocks for one test
ITERATIONS=100 pytest tests/llm/test_ask_holmes.py -k "your_test" --generate-mocks

# Remove any existing mocks for your test and generate them from scratch
pytest tests/llm/test_ask_holmes.py -k "your_test" --regenerate-all-mocks
```

Mock files are named `{tool_name}_{context}.txt`.
## Advanced Features

### Toolsets Configuration

You can configure which toolsets are available during your test in two ways (sketched after this list):

1. Inline in `test_case.yaml`
2. In a separate `toolsets.yaml` file (preferred for complex configurations)
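A sketch of both approaches, reusing the toolset names from the Toolset Configuration section below:

```yaml
# Option 1: inline in test_case.yaml
user_prompt: "Is Prometheus scraping the application?"   # hypothetical
toolsets:
  prometheus/metrics:
    enabled: true
    config:
      prometheus_url: "http://custom-prometheus:9090"
```

```yaml
# Option 2: a separate toolsets.yaml, presumably placed alongside test_case.yaml
toolsets:
  grafana/dashboards:
    enabled: false
```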
### Port Forwarding
Some tests require access to services that are not directly exposed. You can configure port forwards that will be automatically set up and torn down for your test:
```yaml
port_forwards:
  - namespace: app-01
    service: rabbitmq
    local_port: 15672
    remote_port: 15672
  - namespace: app-01
    service: prometheus
    local_port: 9090
    remote_port: 9090
```
Port forwards are:

- Automatically started before any tests run
- Shared across all tests in a session to avoid conflicts
- Always cleaned up after tests complete, even if tests are interrupted
- Run regardless of `--skip-setup` or `--skip-cleanup` flags

Important notes:

- Use unique local ports across all tests to avoid conflicts
- Port forwards persist for the entire test session
- If a port is already in use, the test will fail with helpful debugging information
- Use `lsof -ti :<port>` to find processes using a port
- Port forwards work with both mock and live (`RUN_LIVE=true`) test modes
### Toolset Configuration

Create `toolsets.yaml` to customize available tools:
```yaml
toolsets:
  prometheus/metrics:
    enabled: true
    config:
      prometheus_url: "http://custom-prometheus:9090"
  grafana/dashboards:
    enabled: false  # Disable specific toolsets
```
### Mock Policy

- `inherit`: Use global settings
- `never_mock`: Force live execution (skipped if `RUN_LIVE` is not set)
- `always_mock`: Always use mocks (avoid when possible)
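For example, to force a single test to always run against a live cluster, set the field in its `test_case.yaml`:

```yaml
mock_policy: never_mock   # runs only when RUN_LIVE=true; otherwise the test is skipped
```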
### Custom Runbooks

```yaml
runbooks:
  catalog:
    - description: "DNS troubleshooting"
      link: "dns-runbook.md"  # Place .md file in test directory
```
Options:

- No `runbooks` field: Use default runbooks
- `runbooks: {}`: No runbooks available
- `runbooks: {catalog: [...]}`: Custom catalog
## Tagging
Evals support tags for organization, filtering, and reporting purposes. Tags help categorize tests by their characteristics and enable selective test execution.
### Available Tags
The valid tags are defined in the test constants file in the repository.
Some examples:

- `logs`: Tests HolmesGPT's ability to find and interpret logs correctly
- `context_window`: Tests handling of data that exceeds the LLM's context window
- `synthetic`: Tests that use manually generated mock data (cannot be run live)
- `datetime`: Tests date/time handling and interpretation
- etc.
### Using Tags in Test Cases

Add tags to your `test_case.yaml`:
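For example (the prompt and expected output are illustrative):

```yaml
user_prompt: "When did the checkout pod last restart?"   # hypothetical
expected_output:
  - "2025-06-23"
tags: [kubernetes, datetime]
```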