| skill_id | name | description | type | task_prompt | skill_document | test_code | repo_url | repo_commit | docker_image |
|---|---|---|---|---|---|---|---|---|---|
add-uint-support | Add UInt Support | Restore uint32/uint64 operator support in PyTorch | repair | # Task: Enable Unsigned Integer Support for Target Operators
## Background
Several operators in PyTorch do not currently support unsigned integer types (uint16, uint32, uint64). When users attempt to perform calculations with these tensor types, the system returns an error stating that the type is not implemented.
Modify the underlying code so that the following operators can correctly process unsigned integer types.
**Target Operators:**
- `remainder`
- `gcd`
- `floor_divide`
## Files to Modify
- `aten/src/ATen/native/BinaryOps.cpp` - Add unsigned integer type dispatch
- `aten/src/ATen/native/cpu/BinaryOpsKernel.cpp` - Add kernel implementations for unsigned types
## Requirements
- **Full Coverage**: Ensure `uint16`, `uint32`, and `uint64` are all supported for all three operators
- **Standard Compliance**: Follow PyTorch's current recommended type dispatch patterns. Use the standard macro approach for groups of types rather than listing individual types manually
- **Consistency**: Match the coding patterns already used by neighboring operators in the same files
## Acceptance Criteria
- The code compiles successfully
- `uint16`, `uint32`, and `uint64` work correctly for `remainder`, `gcd`, and `floor_divide` operators
| ---
name: add-uint-support
description: Add unsigned integer (uint) type support to PyTorch operators by updating AT_DISPATCH macros. Use when adding support for uint16, uint32, uint64 types to operators, kernels, or when user mentions enabling unsigned types, barebones unsigned types, or uint support.
---
# Add Unsigned Integer (uint) Support to Operators
This skill helps add support for unsigned integer types (uint16, uint32, uint64) to PyTorch operators by updating their AT_DISPATCH macros.
## When to use this skill
Use this skill when:
- Adding uint16, uint32, or uint64 support to an operator
- User mentions "unsigned types", "uint support", "barebones unsigned types"
- Enabling support for kUInt16, kUInt32, kUInt64 in kernels
- Working with operator implementations that need expanded type coverage
## Quick reference
**Add unsigned types to existing dispatch:**
```cpp
// Before
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
  kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES));

// After (method 1: add unsigned types explicitly)
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
  kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES));

// After (method 2: use AT_INTEGRAL_TYPES_V2 if AT_INTEGRAL_TYPES is present)
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
  kernel<scalar_t>();
}), AT_EXPAND(AT_INTEGRAL_TYPES_V2), AT_EXPAND(AT_FLOATING_TYPES));
```
## Type group reference
**Unsigned type groups:**
- `AT_BAREBONES_UNSIGNED_TYPES`: kUInt16, kUInt32, kUInt64
- `AT_INTEGRAL_TYPES_V2`: AT_INTEGRAL_TYPES + AT_BAREBONES_UNSIGNED_TYPES
**Relationship:**
```cpp
AT_INTEGRAL_TYPES // kByte, kChar, kInt, kLong, kShort
AT_BAREBONES_UNSIGNED_TYPES // kUInt16, kUInt32, kUInt64
AT_INTEGRAL_TYPES_V2 // INTEGRAL_TYPES + BAREBONES_UNSIGNED_TYPES
```
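As a mental model of the relationship above (illustrative only; the real definitions are C++ preprocessor macros in ATen's dispatch headers, not Python), the groups can be sketched as sets:

```python
# Illustrative model of the ATen dispatch type groups.
# The names mirror the macros; the set arithmetic mirrors how the
# V2 integral group is defined as a superset of the signed group.
AT_INTEGRAL_TYPES = {"kByte", "kChar", "kInt", "kLong", "kShort"}
AT_BAREBONES_UNSIGNED_TYPES = {"kUInt16", "kUInt32", "kUInt64"}

# AT_INTEGRAL_TYPES_V2 = AT_INTEGRAL_TYPES + AT_BAREBONES_UNSIGNED_TYPES
AT_INTEGRAL_TYPES_V2 = AT_INTEGRAL_TYPES | AT_BAREBONES_UNSIGNED_TYPES

print(sorted(AT_INTEGRAL_TYPES_V2))
```

This is why Method 2 below is a pure substitution: swapping `AT_INTEGRAL_TYPES` for `AT_INTEGRAL_TYPES_V2` only ever widens coverage.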
## Instructions
### Step 1: Determine if conversion to V2 is needed
Check if the file uses AT_DISPATCH_V2:
**If using old AT_DISPATCH:**
- First convert to AT_DISPATCH_V2 using the at-dispatch-v2 skill
- Then proceed with adding uint support
**If already using AT_DISPATCH_V2:**
- Proceed directly to Step 2
### Step 2: Analyze the current dispatch macro
Identify what type groups are currently in use:
```cpp
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
  // body
}), AT_EXPAND(AT_ALL_TYPES), kHalf, kBFloat16);
//  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
//  Current type coverage
```
Common patterns:
- `AT_EXPAND(AT_ALL_TYPES)` → includes AT_INTEGRAL_TYPES + AT_FLOATING_TYPES
- `AT_EXPAND(AT_INTEGRAL_TYPES)` → signed integers only
- `AT_EXPAND(AT_FLOATING_TYPES)` → floating point types
### Step 3: Choose the uint addition method
Two approaches:
**Method 1: Add AT_BAREBONES_UNSIGNED_TYPES explicitly**
- Use when: You want to be explicit about adding uint support
- Add `AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES)` to the type list
**Method 2: Substitute AT_INTEGRAL_TYPES with AT_INTEGRAL_TYPES_V2**
- Use when: The dispatch already uses `AT_EXPAND(AT_INTEGRAL_TYPES)`
- More concise: replaces one type group with its superset
- Only applicable if AT_INTEGRAL_TYPES is present
### Step 4: Apply the transformation
**Method 1 example:**
```cpp
// Before
AT_DISPATCH_V2(
  dtype,
  "min_values_cuda",
  AT_WRAP([&]() {
    kernel_impl<scalar_t>(iter);
  }),
  AT_EXPAND(AT_ALL_TYPES),
  kBFloat16, kHalf, kBool
);

// After (add unsigned types)
AT_DISPATCH_V2(
  dtype,
  "min_values_cuda",
  AT_WRAP([&]() {
    kernel_impl<scalar_t>(iter);
  }),
  AT_EXPAND(AT_ALL_TYPES),
  AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES),
  kBFloat16, kHalf, kBool
);
```
**Method 2 example:**
```cpp
// Before
AT_DISPATCH_V2(
  dtype,
  "integral_op",
  AT_WRAP([&]() {
    kernel<scalar_t>();
  }),
  AT_EXPAND(AT_INTEGRAL_TYPES)
);

// After (substitute with V2)
AT_DISPATCH_V2(
  dtype,
  "integral_op",
  AT_WRAP([&]() {
    kernel<scalar_t>();
  }),
  AT_EXPAND(AT_INTEGRAL_TYPES_V2)
);
```
### Step 5: Handle AT_ALL_TYPES vs individual type groups
If the dispatch uses `AT_EXPAND(AT_ALL_TYPES)`:
- `AT_ALL_TYPES` = `AT_INTEGRAL_TYPES` + `AT_FLOATING_TYPES`
- To add uint: add `AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES)` to the list
If the dispatch separately lists INTEGRAL and FLOATING:
```cpp
// Before
AT_EXPAND(AT_INTEGRAL_TYPES), AT_EXPAND(AT_FLOATING_TYPES)
// After (Method 2 preferred)
AT_EXPAND(AT_INTEGRAL_TYPES_V2), AT_EXPAND(AT_FLOATING_TYPES)
```
### Step 6: Verify all dispatch sites
Check the file for ALL dispatch macros that need uint support:
- Some operators have multiple dispatch sites (CPU, CUDA, different functions)
- Apply the transformation consistently across all sites
- Ensure each gets the same type coverage updates
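When surveying a file for dispatch sites, a quick scan can help build the checklist. The helper below is a hypothetical sketch (a plain regex scan, not a C++ parser) for listing every `AT_DISPATCH_*` call in a source string:

```python
import re

def find_dispatch_sites(source: str):
    """Return (line_number, macro_name) for each AT_DISPATCH macro call.

    Hypothetical helper: a simple regex scan, good enough to list sites
    that still need review; it does not parse C++.
    """
    pattern = re.compile(r"\b(AT_DISPATCH_\w+)\s*\(")
    sites = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for match in pattern.finditer(line):
            sites.append((lineno, match.group(1)))
    return sites

cpp = """\
void a() { AT_DISPATCH_V2(dtype, "op_a", AT_WRAP([&]() {}), AT_EXPAND(AT_ALL_TYPES)); }
void b() { AT_DISPATCH_ALL_TYPES_AND2(kHalf, kBFloat16, dtype, "op_b", [&]() {}); }
"""
print(find_dispatch_sites(cpp))
```

Each hit is a site to check against Steps 1-5; the macro name also reveals which ones still need the v2 conversion.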
### Step 7: Validate the changes
Check that:
- [ ] AT_DISPATCH_V2 format is used (not old AT_DISPATCH)
- [ ] Unsigned types are added via one of the two methods
- [ ] All relevant dispatch sites in the file are updated
- [ ] Type groups use `AT_EXPAND()`
- [ ] Arguments are properly formatted and comma-separated
## Common patterns
### Pattern 1: AT_ALL_TYPES + extras
```cpp
// Before
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
  kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), kHalf, kBFloat16);

// After
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
  kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kHalf, kBFloat16);
```
### Pattern 2: Separate INTEGRAL + FLOATING
```cpp
// Before
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
  kernel<scalar_t>();
}), AT_EXPAND(AT_INTEGRAL_TYPES), AT_EXPAND(AT_FLOATING_TYPES));

// After
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
  kernel<scalar_t>();
}), AT_EXPAND(AT_INTEGRAL_TYPES_V2), AT_EXPAND(AT_FLOATING_TYPES));
```
### Pattern 3: Old dispatch needs conversion first
```cpp
// Before (needs v2 conversion first)
AT_DISPATCH_ALL_TYPES_AND2(kHalf, kBFloat16, dtype, "op", [&]() {
  kernel<scalar_t>();
});

// After v2 conversion
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
  kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), kHalf, kBFloat16);

// After adding uint support
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
  kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kHalf, kBFloat16);
```
## Multiple dispatch sites example
For a file with multiple functions:
```cpp
void min_values_kernel_cuda(TensorIterator& iter) {
  AT_DISPATCH_V2(iter.dtype(), "min_values_cuda", AT_WRAP([&]() {
    impl<scalar_t>(iter);
  }), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf);
  //                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  //                           Added uint support
}

void min_launch_kernel(TensorIterator &iter) {
  AT_DISPATCH_V2(iter.input_dtype(), "min_cuda", AT_WRAP([&]() {
    gpu_reduce_kernel<scalar_t>(iter);
  }), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf);
  //                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  //                           Added uint support here too
}
```
## Decision tree
Use this decision tree to determine the approach:
```
Is the file using AT_DISPATCH_V2?
├─ No → Use at-dispatch-v2 skill first, then continue
└─ Yes
   └─ Does it use AT_EXPAND(AT_INTEGRAL_TYPES)?
      ├─ Yes → Replace with AT_EXPAND(AT_INTEGRAL_TYPES_V2)
      └─ No → Add AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES) to the type list
```
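The decision tree can also be expressed as a small Python sketch (illustrative only; the function, its name, and its string matching are assumptions for demonstration - real edits should be made by hand using the patterns above):

```python
def choose_uint_method(dispatch_snippet: str) -> str:
    """Mirror the decision tree for a single dispatch site (illustrative)."""
    if "AT_DISPATCH_V2" not in dispatch_snippet:
        return "convert to AT_DISPATCH_V2 first (at-dispatch-v2 skill)"
    # Check for existing uint support before anything else (see Edge case 3).
    if ("AT_INTEGRAL_TYPES_V2" in dispatch_snippet
            or "AT_BAREBONES_UNSIGNED_TYPES" in dispatch_snippet):
        return "already has uint support - skip"
    if "AT_EXPAND(AT_INTEGRAL_TYPES)" in dispatch_snippet:
        return "replace with AT_EXPAND(AT_INTEGRAL_TYPES_V2)"
    return "add AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES) to the type list"

print(choose_uint_method(
    'AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {}), AT_EXPAND(AT_INTEGRAL_TYPES))'
))  # replace with AT_EXPAND(AT_INTEGRAL_TYPES_V2)
```

Note the order of the checks: the "already supported" test must come first, since `AT_INTEGRAL_TYPES` is a substring of `AT_INTEGRAL_TYPES_V2`.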
## Edge cases
### Case 1: Dispatch with only floating types
If the operator only supports floating point types, don't add uint support:
```cpp
// Leave as-is - floating point only operator
AT_DISPATCH_V2(dtype, "float_op", AT_WRAP([&]() {
  kernel<scalar_t>();
}), AT_EXPAND(AT_FLOATING_TYPES), kHalf);
```
### Case 2: Complex types present
Unsigned types work alongside complex types:
```cpp
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
  kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES),
    AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES),
    AT_EXPAND(AT_COMPLEX_TYPES),
    kHalf, kBFloat16);
```
### Case 3: Already has uint support
Check if uint types are already present:
- If `AT_INTEGRAL_TYPES_V2` is used → already has uint support
- If `AT_BAREBONES_UNSIGNED_TYPES` is already in list → already has uint support
- Skip the file if uint support is already present
## Workflow
When asked to add uint support:
1. Read the target file
2. Check if using AT_DISPATCH_V2:
   - If not → use at-dispatch-v2 skill first
3. Identify all dispatch macro sites
4. For each dispatch:
   - Analyze current type groups
   - Choose method (add BAREBONES_UNSIGNED or upgrade to V2)
   - Apply transformation with Edit tool
5. Show the user the changes
6. Explain what was modified
## Important notes
- Always check if v2 conversion is needed first
- Apply changes consistently across all dispatch sites in the file
- Method 2 (AT_INTEGRAL_TYPES_V2) is cleaner when applicable
- Method 1 (explicit AT_BAREBONES_UNSIGNED_TYPES) is more explicit
- Unsigned types are: kUInt16, kUInt32, kUInt64 (not kByte which is uint8)
- Some operators may not semantically support unsigned types - use judgment
## Testing
After adding uint support, the operator should accept uint16, uint32, and uint64 tensors. The user is responsible for functional testing. | """
Unit Test for UInt32/64 Operator Support in PyTorch
"""
import torch
import pytest
class TestUIntOperators:
    """Tests for uint32 and uint64 operator support."""
    @pytest.fixture(params=["uint32", "uint64"])
    def dtype(self, request):
        """Parametrized fixture: uint32 and uint64."""
        dtype_map = {
            "uint32": torch.uint32,
            "uint64": torch.uint64,
        }
        return dtype_map[request.param]
    # =========================================================================
    # Supported group: these 3 operators typically support uint32/64 in PyTorch
    # =========================================================================
    def test_bitwise_and(self, dtype):
        """Test bitwise_and operation (already supported)."""
        a = torch.tensor(0b1100, dtype=dtype)  # 12
        b = torch.tensor(0b1010, dtype=dtype)  # 10
        result = torch.bitwise_and(a, b)
        expected = torch.tensor(0b1000, dtype=dtype)  # 8
        assert torch.equal(result, expected), f"bitwise_and failed for {dtype}"
    def test_mul(self, dtype):
        """Test multiplication operation (already supported)."""
        a = torch.tensor(3, dtype=dtype)
        b = torch.tensor(4, dtype=dtype)
        result = torch.mul(a, b)
        expected = torch.tensor(12, dtype=dtype)
        assert torch.equal(result, expected), f"mul failed for {dtype}"
    def test_eq(self, dtype):
        """Test equality comparison operation (already supported)."""
        a = torch.tensor(5, dtype=dtype)
        b = torch.tensor(5, dtype=dtype)
        result = torch.eq(a, b)
        expected = torch.tensor(True)
        assert torch.equal(result, expected), f"eq failed for {dtype}"
    # =========================================================================
    # Unsupported group: these 3 operators typically do not support uint32/64
    # (need to be fixed)
    # =========================================================================
    def test_remainder(self, dtype):
        """Test remainder operation (support pending)."""
        a = torch.tensor(10, dtype=dtype)
        b = torch.tensor(3, dtype=dtype)
        result = torch.remainder(a, b)
        expected = torch.tensor(1, dtype=dtype)
        assert torch.equal(result, expected), f"remainder failed for {dtype}"
    def test_gcd(self, dtype):
        """Test GCD (greatest common divisor) operation (support pending)."""
        a = torch.tensor(12, dtype=dtype)
        b = torch.tensor(8, dtype=dtype)
        result = torch.gcd(a, b)
        expected = torch.tensor(4, dtype=dtype)
        assert torch.equal(result, expected), f"gcd failed for {dtype}"
    def test_floor_divide(self, dtype):
        """Test floor_divide operation (support pending)."""
        a = torch.tensor(10, dtype=dtype)
        b = torch.tensor(3, dtype=dtype)
        result = torch.floor_divide(a, b)
        expected = torch.tensor(3, dtype=dtype)
        assert torch.equal(result, expected), f"floor_divide failed for {dtype}"
| zhangyiiiiii/swe-skills-bench-pytorch:latest | ||
fix | React Code Fix & Linter | See task file for detailed mission requirements. | fix | # Task: Fix ESLint Violations in TypeScript Codebase
## Background
The upgradle project uses TypeScript + ESLint for code quality enforcement. Currently, the `src/` directory contains multiple ESLint rule violations that need to be addressed:
- `no-unused-vars`
- `@typescript-eslint/no-explicit-any`
- `eqeqeq` (strict equality)
## Objective
Scan and fix all lint errors in `.ts` files under the `src/` directory to ensure the codebase passes linting checks.
## Scope
- **Files to modify**: `src/**/*.ts` (all TypeScript files in src directory)
- **Files to preserve**: Do NOT modify any test files
- **Repo requirements**: Ensure a `package.json` exists with `lint` and `test` scripts and a `src/` directory containing one or more `.ts` files so the test harness can run.
## Requirements
- Fix all ESLint error-level violations
- Maintain existing functionality (all existing tests must continue to pass)
- Follow TypeScript best practices
- Replace `any` types with proper type definitions where possible
- Use strict equality (`===`) instead of loose equality (`==`)
- Remove or properly use unused variables
## Acceptance Criteria
- `npm run lint` exits with code 0 (no error-level reports)
- No new lint warnings introduced
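The acceptance criteria can be checked mechanically from ESLint's machine-readable output. The helper below is a hypothetical sketch that summarizes error-level rule IDs from `eslint --format json` output (severity 2 is "error" in ESLint's JSON formatter):

```python
import json

def error_level_rules(report_json: str):
    """Count error-level messages per rule ID in an ESLint JSON report.

    Hypothetical verification helper; not part of the task deliverables.
    """
    counts = {}
    for file_report in json.loads(report_json):
        for msg in file_report.get("messages", []):
            if msg.get("severity", 0) >= 2:  # 2 = error, 1 = warning
                rule = msg.get("ruleId") or "unknown"
                counts[rule] = counts.get(rule, 0) + 1
    return counts

# Sample report with one error (eqeqeq) and one warning (no-unused-vars).
sample = json.dumps([
    {"filePath": "src/a.ts", "messages": [
        {"ruleId": "eqeqeq", "severity": 2, "line": 3},
        {"ruleId": "no-unused-vars", "severity": 1, "line": 7},
    ]}
])
print(error_level_rules(sample))  # {'eqeqeq': 1}
```

An empty dict means no error-level reports remain, which matches the first acceptance criterion.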
| ---
name: fix
description: Use when you have lint errors, formatting issues, or before committing code to ensure it passes CI.
---
# Fix Lint and Formatting
## Instructions
1. Run `yarn prettier` to fix formatting
2. Run `yarn linc` to check for remaining lint issues
3. Report any remaining manual fixes needed
## Common Mistakes
- **Running prettier on wrong files** - `yarn prettier` only formats changed files
- **Ignoring linc errors** - These will fail CI, fix them before committing
| """
Test for 'fix' skill — React Code Fix & Linter
Validates that the Agent scanned and fixed all ESLint violations in the upgradle
TypeScript codebase so that `npm run lint` passes cleanly.
"""
import os
import subprocess
import glob
import re
import pytest
from _dependency_utils import ensure_npm_dependencies
@pytest.fixture(scope="module", autouse=True)
def _ensure_repo_dependencies():
ensure_npm_dependencies(TestFix.REPO_DIR)
class TestFix:
"""Verify ESLint violations in upgradle src/ have been fixed."""
REPO_DIR = "/workspace/upgradle"
# ------------------------------------------------------------------
# L1: basic file / project integrity
# ------------------------------------------------------------------
def test_src_directory_exists(self):
"""src/ directory must exist in the repository."""
assert os.path.isdir(
os.path.join(self.REPO_DIR, "src")
), "src/ directory is missing"
def test_package_json_exists(self):
"""package.json must exist at repo root."""
assert os.path.isfile(
os.path.join(self.REPO_DIR, "package.json")
), "package.json is missing"
def test_ts_files_exist_in_src(self):
"""At least one .ts file must exist under src/."""
ts_files = glob.glob(
os.path.join(self.REPO_DIR, "src", "**", "*.ts"), recursive=True
)
assert len(ts_files) >= 1, "No .ts files found under src/"
# ------------------------------------------------------------------
# L2: functional lint verification
# ------------------------------------------------------------------
def test_npm_run_lint_exit_code(self):
"""npm run lint must exit with code 0 (no error-level reports)."""
result = subprocess.run(
["npm", "run", "lint"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=300,
)
assert result.returncode == 0, (
f"npm run lint failed (rc={result.returncode}):\n"
f"stdout={result.stdout[-2000:]}\nstderr={result.stderr[-2000:]}"
)
def test_no_eslint_errors_in_stdout(self):
"""Lint output must not contain error-level reports."""
result = subprocess.run(
["npm", "run", "lint"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=300,
)
combined = result.stdout + result.stderr
# ESLint outputs "X errors" when there are error-level problems
match = re.search(r"(\d+)\s+error", combined)
if match:
error_count = int(match.group(1))
assert (
error_count == 0
), f"ESLint reported {error_count} error(s):\n{combined[-2000:]}"
def test_no_unused_vars_in_src(self):
"""No @typescript-eslint/no-unused-vars violations should remain."""
result = subprocess.run(
["npx", "eslint", "src/", "--format", "json"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=300,
)
import json
try:
data = json.loads(result.stdout)
except json.JSONDecodeError:
pytest.skip("ESLint JSON output could not be parsed")
for file_report in data:
for msg in file_report.get("messages", []):
if msg.get("severity", 0) >= 2:
assert "no-unused-vars" not in msg.get(
"ruleId", ""
), f"no-unused-vars error in {file_report['filePath']}:{msg['line']}"
def test_no_explicit_any_in_src(self):
"""No @typescript-eslint/no-explicit-any errors should remain."""
result = subprocess.run(
["npx", "eslint", "src/", "--format", "json"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=300,
)
import json
try:
data = json.loads(result.stdout)
except json.JSONDecodeError:
pytest.skip("ESLint JSON output could not be parsed")
for file_report in data:
for msg in file_report.get("messages", []):
if msg.get("severity", 0) >= 2:
assert "no-explicit-any" not in msg.get(
"ruleId", ""
), f"no-explicit-any error in {file_report['filePath']}:{msg['line']}"
def test_no_eqeqeq_violations_in_src(self):
"""No eqeqeq (loose equality) errors should remain."""
result = subprocess.run(
["npx", "eslint", "src/", "--format", "json"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=300,
)
import json
try:
data = json.loads(result.stdout)
except json.JSONDecodeError:
pytest.skip("ESLint JSON output could not be parsed")
for file_report in data:
for msg in file_report.get("messages", []):
if msg.get("severity", 0) >= 2:
assert "eqeqeq" not in msg.get(
"ruleId", ""
), f"eqeqeq error in {file_report['filePath']}:{msg['line']}"
def test_no_loose_equality_operators(self):
"""Source files should not contain == or != (use === / !==)."""
ts_files = glob.glob(
os.path.join(self.REPO_DIR, "src", "**", "*.ts"), recursive=True
)
for fpath in ts_files:
with open(fpath, "r", encoding="utf-8", errors="replace") as f:
for lineno, line in enumerate(f, 1):
stripped = line.strip()
if stripped.startswith("//") or stripped.startswith("*"):
continue
# Match == or != but not === or !==
if re.search(r"(?<!=)==(?!=)", stripped) or re.search(
r"(?<!!)!=(?!=)", stripped
):
pytest.fail(
f"Loose equality in {os.path.relpath(fpath, self.REPO_DIR)}:{lineno}: {stripped[:120]}"
)
def test_no_new_lint_warnings_introduced(self):
"""npm run lint should produce no new warning-level reports beyond baseline."""
result = subprocess.run(
["npm", "run", "lint"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=300,
)
combined = result.stdout + result.stderr
match = re.search(r"(\d+)\s+warning", combined)
if match:
warning_count = int(match.group(1))
# Acceptance criteria: no *new* warnings. Allow 0.
assert (
warning_count == 0
), f"ESLint reported {warning_count} warning(s); task requires 0 new warnings."
def test_test_files_not_modified(self):
"""Test files must not have been modified by the Agent."""
result = subprocess.run(
["git", "diff", "--name-only", "HEAD"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
changed_files = result.stdout.strip().splitlines()
test_changes = [
f for f in changed_files if f.startswith("test") or "/test" in f
]
assert (
len(test_changes) == 0
), f"Test files were modified but should be preserved: {test_changes}"
def test_existing_tests_still_pass(self):
"""All existing tests in the project must continue to pass."""
pkg_json = os.path.join(self.REPO_DIR, "package.json")
import json
with open(pkg_json) as f:
pkg = json.load(f)
if "test" not in pkg.get("scripts", {}):
pytest.skip("No test script defined in package.json")
result = subprocess.run(
["npm", "test"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=300,
)
assert (
result.returncode == 0
), f"Existing tests failed (rc={result.returncode}):\n{result.stderr[-2000:]}"
| https://github.com/michaelasper/upgradle | 5f292188e9b427a96d3573b29e5677e4cdce58ea | zhangyiiiiii/swe-skills-bench-python |
tdd-workflow | TDD Workflow | See task file for detailed mission requirements. | feature | # Task: Implement Smart Coupon Calculator
## Required File Paths (Agent must only modify/create under these)
- MUST modify: `src/calculator.py` — implement or update `SmartCouponCalculator` here.
## Background
We need a flexible discount calculation system for our e-commerce platform that can handle multiple promotion strategies simultaneously.
## Objective
Implement a `SmartCouponCalculator` class in `src/calculator.py` that supports the following discount strategies:
### Discount Rules
1. **Progressive Discount**
- $10 off when order total ≥ $100
- Additional $15 off (total $25 off) when order total ≥ $200
2. **Category Discount**
- 10% off for items in specified promotional categories
3. **User Tier Discount**
- VIP members: 5% off final price
- SVIP members: 10% off final price
When multiple discounts apply, they should be stacked optimally to maximize customer savings.
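The discount rules above can be sketched in Python. This is one reading of the spec, not the required implementation: the stacking order (per-item category discount, then the progressive discount on the discounted subtotal, then the user-tier percentage on the result) is an assumption, and the function name is illustrative.

```python
def calculate_price(items, user_tier="regular", promo_categories=None):
    """Sketch of one plausible stacking order (an assumption, not the spec):
    category discount per item -> progressive discount on the discounted
    subtotal -> user-tier percentage on the result."""
    promo_categories = set(promo_categories or [])
    subtotal = 0.0
    for item in items:
        line = item["price"] * item.get("quantity", 1)
        if item.get("category") in promo_categories:
            line *= 0.9  # 10% category discount
        subtotal += line
    if subtotal >= 200:
        subtotal -= 25  # $10 plus the additional $15
    elif subtotal >= 100:
        subtotal -= 10
    tier_multiplier = {"VIP": 0.95, "SVIP": 0.90}.get(user_tier, 1.0)
    # Clamp so zero/negative inputs and empty carts never go below zero.
    return max(subtotal * tier_multiplier, 0.0)

print(calculate_price([{"name": "A", "price": 200, "quantity": 1}]))  # 175.0
```

Under this ordering a $200 regular-tier order yields $175 and a $100 order yields $90; whether a different stacking order is "optimal" for mixed carts is left to the implementation.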
## Implementation Requirements
### Core Functionality
- Calculate final price with all applicable discounts
- Support user tier levels: regular, VIP, SVIP
- Handle category-specific discounts
- Apply progressive discounts based on order total
- Implement optimal discount stacking logic
### Edge Cases to Handle
- Zero or negative amounts
- Empty shopping carts
- Invalid user tier values
- Items without category information
## Acceptance Criteria
- Calculator correctly applies all three discount types
- Discount stacking produces accurate final prices for complex scenarios
- Edge cases are handled gracefully without errors
- Code is maintainable and follows Python best practices
| ---
name: tdd-workflow
description: Use this skill when writing new features, fixing bugs, or refactoring code. Enforces test-driven development with 80%+ coverage including unit, integration, and E2E tests.
---
# Test-Driven Development Workflow
This skill ensures that all code development follows TDD principles with complete test coverage.
## When to activate
- Writing new features or functional code
- Fixing bugs or issues
- Refactoring existing code
- Adding API endpoints
- Creating new components
## Core principles
### 1. Tests before code
Always write the tests first, then implement the code that makes them pass.
### 2. Coverage requirements
- Minimum 80% coverage (unit + integration + E2E)
- Cover all edge cases
- Test error scenarios
- Verify boundary conditions
### 3. Test types
#### Unit tests
- Individual functions and utilities
- Component logic
- Pure functions
- Helper functions and tools
#### Integration tests
- API endpoints
- Database operations
- Service interactions
- External API calls
#### E2E tests (Playwright)
- Critical user flows
- Complete workflows
- Browser automation
- UI interactions
## TDD workflow steps
### Step 1: Write the user journey
```
As a [role], I want to [action], so that [benefit]

Example:
As a user, I want to search markets semantically,
so that I can find relevant markets even without exact keywords.
```
### Step 2: Generate test cases
Create comprehensive test cases for each user journey:
```typescript
describe('Semantic Search', () => {
  it('returns relevant markets for query', async () => {
    // Test implementation
  })
  it('handles empty query gracefully', async () => {
    // Test edge case
  })
  it('falls back to substring search when Redis unavailable', async () => {
    // Test fallback behavior
  })
  it('sorts results by similarity score', async () => {
    // Test sorting logic
  })
})
```
### Step 3: Run the tests (they should fail)
```bash
npm test
# Tests should fail - we haven't implemented yet
```
### Step 4: Implement the code
Write the minimum code that makes the tests pass:
```typescript
// Implementation guided by the tests
export async function searchMarkets(query: string) {
  // Implementation here
}
```
### Step 5: Run the tests again
```bash
npm test
# Tests should now pass
```
### Step 6: Refactor
Improve code quality while keeping the tests passing:
- Remove duplication
- Improve naming
- Optimize performance
- Enhance readability
### Step 7: Verify coverage
```bash
npm run test:coverage
# Verify 80%+ coverage is reached
```
## Test patterns
### Unit test pattern (Jest/Vitest)
```typescript
import { render, screen, fireEvent } from '@testing-library/react'
import { Button } from './Button'

describe('Button Component', () => {
  it('renders with correct text', () => {
    render(<Button>Click me</Button>)
    expect(screen.getByText('Click me')).toBeInTheDocument()
  })
  it('calls onClick when clicked', () => {
    const handleClick = jest.fn()
    render(<Button onClick={handleClick}>Click</Button>)
    fireEvent.click(screen.getByRole('button'))
    expect(handleClick).toHaveBeenCalledTimes(1)
  })
  it('is disabled when disabled prop is true', () => {
    render(<Button disabled>Click</Button>)
    expect(screen.getByRole('button')).toBeDisabled()
  })
})
```
### API integration test pattern
```typescript
import { NextRequest } from 'next/server'
import { GET } from './route'

describe('GET /api/markets', () => {
  it('returns markets successfully', async () => {
    const request = new NextRequest('http://localhost/api/markets')
    const response = await GET(request)
    const data = await response.json()
    expect(response.status).toBe(200)
    expect(data.success).toBe(true)
    expect(Array.isArray(data.data)).toBe(true)
  })
  it('validates query parameters', async () => {
    const request = new NextRequest('http://localhost/api/markets?limit=invalid')
    const response = await GET(request)
    expect(response.status).toBe(400)
  })
  it('handles database errors gracefully', async () => {
    // Mock a database failure
    const request = new NextRequest('http://localhost/api/markets')
    // Test the error handling
  })
})
```
### E2E test pattern (Playwright)
```typescript
import { test, expect } from '@playwright/test'

test('user can search and filter markets', async ({ page }) => {
  // Navigate to the markets page
  await page.goto('/')
  await page.click('a[href="/markets"]')
  // Verify the page loaded
  await expect(page.locator('h1')).toContainText('Markets')
  // Search markets
  await page.fill('input[placeholder="Search markets"]', 'election')
  // Wait for debounce and results
  await page.waitForTimeout(600)
  // Verify search results are shown
  const results = page.locator('[data-testid="market-card"]')
  await expect(results).toHaveCount(5, { timeout: 5000 })
  // Verify results contain the search term
  const firstResult = results.first()
  await expect(firstResult).toContainText('election', { ignoreCase: true })
  // Filter by status
  await page.click('button:has-text("Active")')
  // Verify the filtered results
  await expect(results).toHaveCount(3)
})

test('user can create a new market', async ({ page }) => {
  // Log in first
  await page.goto('/creator-dashboard')
  // Fill out the market creation form
  await page.fill('input[name="name"]', 'Test Market')
  await page.fill('textarea[name="description"]', 'Test description')
  await page.fill('input[name="endDate"]', '2025-12-31')
  // Submit the form
  await page.click('button[type="submit"]')
  // Verify the success message
  await expect(page.locator('text=Market created successfully')).toBeVisible()
  // Verify the redirect to the market page
  await expect(page).toHaveURL(/\/markets\/test-market/)
})
```
## Test file organization
```
src/
├── components/
│   ├── Button/
│   │   ├── Button.tsx
│   │   ├── Button.test.tsx       # Unit tests
│   │   └── Button.stories.tsx    # Storybook
│   └── MarketCard/
│       ├── MarketCard.tsx
│       └── MarketCard.test.tsx
├── app/
│   └── api/
│       └── markets/
│           ├── route.ts
│           └── route.test.ts     # Integration tests
└── e2e/
    ├── markets.spec.ts           # E2E tests
    ├── trading.spec.ts
    └── auth.spec.ts
```
## Mocking external services
### Supabase mock
```typescript
jest.mock('@/lib/supabase', () => ({
  supabase: {
    from: jest.fn(() => ({
      select: jest.fn(() => ({
        eq: jest.fn(() => Promise.resolve({
          data: [{ id: 1, name: 'Test Market' }],
          error: null
        }))
      }))
    }))
  }
}))
```
### Redis mock
```typescript
jest.mock('@/lib/redis', () => ({
  searchMarketsByVector: jest.fn(() => Promise.resolve([
    { slug: 'test-market', similarity_score: 0.95 }
  ])),
  checkRedisHealth: jest.fn(() => Promise.resolve({ connected: true }))
}))
```
### OpenAI mock
```typescript
jest.mock('@/lib/openai', () => ({
  generateEmbedding: jest.fn(() => Promise.resolve(
    new Array(1536).fill(0.1) // Mock 1536-dimensional embedding vector
  ))
}))
```
## Verifying test coverage
### Run the coverage report
```bash
npm run test:coverage
```
### Coverage thresholds
```json
{
  "jest": {
    "coverageThreshold": {
      "global": {
        "branches": 80,
        "functions": 80,
        "lines": 80,
        "statements": 80
      }
    }
  }
}
```
## Common testing mistakes to avoid
### ❌ Wrong: testing implementation details
```typescript
// Don't test internal state
expect(component.state.count).toBe(5)
```
### ✅ Right: testing user-visible behavior
```typescript
// Test what the user sees
expect(screen.getByText('Count: 5')).toBeInTheDocument()
```
### ❌ Wrong: brittle selectors
```typescript
// Breaks easily
await page.click('.css-class-xyz')
```
### ✅ Right: semantic selectors
```typescript
// Resilient to change
await page.click('button:has-text("Submit")')
await page.click('[data-testid="submit-button"]')
```
### ❌ Wrong: no test isolation
```typescript
// Tests depend on each other
test('creates user', () => { /* ... */ })
test('updates same user', () => { /* depends on the previous test */ })
```
### ✅ Right: independent tests
```typescript
// Each test sets up its own data
test('creates user', () => {
  const user = createTestUser()
  // Test logic
})
test('updates user', () => {
  const user = createTestUser()
  // Update logic
})
```
## Continuous testing
### Watch mode during development
```bash
npm test -- --watch
# Automatically reruns tests on file changes
```
### Pre-commit hook
```bash
# Run before every commit
npm test && npm run lint
```
### CI/CD integration
```yaml
# GitHub Actions
- name: Run Tests
  run: npm test -- --coverage
- name: Upload Coverage
  uses: codecov/codecov-action@v3
```
## Best practices
1. **Write tests first** - always TDD
2. **One assertion per test** - focus on a single behavior
3. **Descriptive test names** - explain what is being tested
4. **Arrange-Act-Assert** - clear test structure
5. **Mock external dependencies** - isolate unit tests
6. **Test edge cases** - null, undefined, empty, and large values
7. **Test error paths** - not just the happy path
8. **Keep tests fast** - under 50ms per unit test
9. **Clean up after tests** - no side effects
10. **Review coverage reports** - identify gaps
## Success metrics
- 80%+ code coverage achieved
- All tests passing (green)
- No skipped or disabled tests
- Fast test execution (unit tests under 30s)
- E2E tests cover critical user flows
- Tests catch bugs before production
---
**Remember**: Tests are not optional. They are the safety net that enables confident refactoring, fast development, and production reliability.
| """
Test for 'tdd-workflow' skill — Smart Coupon Calculator
Validates that the Agent implemented SmartCouponCalculator with progressive,
category, and user-tier discounts in src/calculator.py.
"""
import os
import sys
import importlib
import subprocess
import pytest
class TestTddWorkflow:
"""Verify SmartCouponCalculator implementation correctness."""
REPO_DIR = "/workspace/python"
@classmethod
def setup_class(cls):
"""Add repo to sys.path so we can import src.calculator."""
if cls.REPO_DIR not in sys.path:
sys.path.insert(0, cls.REPO_DIR)
# ------------------------------------------------------------------
# L1: file & syntax checks
# ------------------------------------------------------------------
def test_calculator_file_exists(self):
"""src/calculator.py must exist."""
fpath = os.path.join(self.REPO_DIR, "src", "calculator.py")
assert os.path.isfile(fpath), "src/calculator.py is missing"
def test_calculator_compiles(self):
"""src/calculator.py must compile without syntax errors."""
result = subprocess.run(
["python", "-m", "py_compile", "src/calculator.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: functional verification — import & instantiate
# ------------------------------------------------------------------
def _get_calculator(self):
"""Helper: import and return a fresh SmartCouponCalculator instance."""
mod = importlib.import_module("src.calculator")
importlib.reload(mod)
cls = getattr(mod, "SmartCouponCalculator", None)
assert (
cls is not None
), "SmartCouponCalculator class not found in src/calculator.py"
return cls()
def test_class_exists(self):
"""SmartCouponCalculator class must be importable."""
calc = self._get_calculator()
assert calc is not None
# --- Progressive Discount ---
def test_progressive_no_discount_below_100(self):
"""Order < $100 should get no progressive discount."""
calc = self._get_calculator()
items = [{"name": "A", "price": 50, "quantity": 1}]
result = calc.calculate(items=items, user_tier="regular")
# Exact price depends on implementation; no progressive discount
assert isinstance(result, (int, float)), "calculate() must return a number"
assert result == pytest.approx(50, abs=0.01), f"Expected 50, got {result}"
def test_progressive_10_off_at_100(self):
"""Order = $100 should get $10 off progressive discount."""
calc = self._get_calculator()
items = [{"name": "A", "price": 100, "quantity": 1}]
result = calc.calculate(items=items, user_tier="regular")
assert result == pytest.approx(90, abs=0.01), f"Expected 90, got {result}"
def test_progressive_25_off_at_200(self):
"""Order = $200 should get $25 off progressive discount."""
calc = self._get_calculator()
items = [{"name": "A", "price": 200, "quantity": 1}]
result = calc.calculate(items=items, user_tier="regular")
assert result == pytest.approx(175, abs=0.01), f"Expected 175, got {result}"
# --- Category Discount ---
def test_category_discount_10_percent(self):
"""Items in promotional categories should get 10% off."""
calc = self._get_calculator()
items = [
{
"name": "Promo item",
"price": 80,
"quantity": 1,
"category": "electronics",
}
]
result = calc.calculate(
items=items,
user_tier="regular",
promo_categories=["electronics"],
)
# 80 * 0.9 = 72, below 100 so no progressive
assert result == pytest.approx(72, abs=0.01), f"Expected 72, got {result}"
def test_category_discount_only_promo(self):
"""Non-promo category items should not get category discount."""
calc = self._get_calculator()
items = [
{"name": "Promo", "price": 50, "quantity": 1, "category": "electronics"},
{"name": "Normal", "price": 50, "quantity": 1, "category": "food"},
]
result = calc.calculate(
items=items,
user_tier="regular",
promo_categories=["electronics"],
)
# electronics: 50*0.9=45, food: 50, total 95 < 100 no progressive
assert result == pytest.approx(95, abs=0.5), f"Expected ~95, got {result}"
# --- User Tier Discount ---
def test_vip_5_percent_off(self):
"""VIP members get 5% off final price."""
calc = self._get_calculator()
items = [{"name": "A", "price": 50, "quantity": 1}]
result = calc.calculate(items=items, user_tier="VIP")
assert result == pytest.approx(47.5, abs=0.01), f"Expected 47.5, got {result}"
def test_svip_10_percent_off(self):
"""SVIP members get 10% off final price."""
calc = self._get_calculator()
items = [{"name": "A", "price": 50, "quantity": 1}]
result = calc.calculate(items=items, user_tier="SVIP")
assert result == pytest.approx(45, abs=0.01), f"Expected 45, got {result}"
# --- Stacking ---
def test_all_discounts_stacked(self):
"""Progressive + category + SVIP should stack optimally."""
calc = self._get_calculator()
items = [
{"name": "Gadget", "price": 120, "quantity": 1, "category": "electronics"},
{"name": "Book", "price": 100, "quantity": 1, "category": "books"},
]
result = calc.calculate(
items=items,
user_tier="SVIP",
promo_categories=["electronics"],
)
# electronics: 120*0.9=108, books: 100, subtotal=208
# Progressive: 208 >= 200 → -25 → 183
# SVIP: 183*0.9 = 164.7
assert isinstance(result, (int, float))
assert 140 <= result <= 190, f"Stacked discount result {result} looks wrong"
# --- Edge Cases ---
def test_zero_amount(self):
"""Zero-priced items should not cause errors."""
calc = self._get_calculator()
items = [{"name": "Free", "price": 0, "quantity": 1}]
result = calc.calculate(items=items, user_tier="regular")
assert result == pytest.approx(0, abs=0.01)
def test_empty_cart(self):
"""Empty shopping cart should return 0."""
calc = self._get_calculator()
result = calc.calculate(items=[], user_tier="regular")
assert result == pytest.approx(0, abs=0.01)
def test_invalid_tier_handled(self):
"""Invalid user tier should fallback to regular pricing or raise ValueError."""
calc = self._get_calculator()
items = [{"name": "A", "price": 50, "quantity": 1}]
try:
result = calc.calculate(items=items, user_tier="UNKNOWN")
# If no error, should be treated as regular (no tier discount)
assert result == pytest.approx(50, abs=0.01)
except (ValueError, KeyError):
pass # Acceptable to raise on invalid tier
| https://github.com/tdd-starters/python | zhangyiiiiii/swe-skills-bench-python | |
security-review | Security Review (zh-TW) | See task file for detailed mission requirements. | feature | # Task: Implement Secure Export API Endpoints for BabyBuddy
## Background
We need to add export endpoints to BabyBuddy's REST API that allow users to export feeding and sleep records. The implementation must enforce proper authentication and authorization to ensure users can only access their own children's data.
## Files to Modify
- `api/serializers.py` - Add FeedingExportSerializer and SleepExportSerializer
- `api/views.py` - Add ExportViewSet
- `api/urls.py` - Register export routes
- `tests/test_api.py` - Add security test cases
## Requirements
### API Endpoint
- `GET /api/child/{child_id}/export/?type=feeding|sleep`
- Returns last 30 days of records in JSON format
### Security Requirements
- Use Django Permission to validate authenticated users
- Users can ONLY access their own children's data
- Proper HTTP status codes for different scenarios:
- Authenticated user accessing own child's data → 200 OK
- Unauthenticated request → 401 Unauthorized
- User accessing another user's child data → 403 Forbidden
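The three scenarios above reduce to a small authorization decision. A minimal sketch of that logic (names are illustrative, and a real Django implementation should express this through DRF `permission_classes` rather than a standalone function):

```javascript
// Sketch of the export endpoint's authorization decision. The argument
// names (requesterId, childOwnerId) are assumptions for illustration;
// BabyBuddy would derive them from request.user and the Child record.
function exportStatusCode({ authenticated, requesterId, childOwnerId }) {
  if (!authenticated) {
    return 401; // Unauthenticated request
  }
  if (requesterId !== childOwnerId) {
    return 403; // Another user's child
  }
  return 200; // Own child's data
}
```

The important design point is ordering: authentication is checked before ownership, so anonymous requests never learn whether a child id exists.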
### Serializers
- **FeedingExportSerializer**: id, start, end, duration, type, method, amount
- **SleepExportSerializer**: id, start, end, duration, quality
## Acceptance Criteria
- `python manage.py test babybuddy.tests.test_api -v 2` passes with all tests successful
- Export endpoint returns correct JSON structure
- Security checks properly implemented and tested
| ---
name: security-review
description: Use this skill when adding authentication, handling user input, working with secrets, creating API endpoints, or implementing payment/sensitive features. Provides comprehensive security checklist and patterns.
---
# Security Review Skill
This skill ensures all code follows security best practices and identifies potential vulnerabilities.
## When to Activate
- Implementing authentication or authorization
- Handling user input or file uploads
- Creating new API endpoints
- Working with secrets or credentials
- Implementing payment features
- Storing or transmitting sensitive data
- Integrating third-party APIs
## Security Checklist
### 1. Secret Management
#### ❌ Never do this
```typescript
const apiKey = "sk-proj-xxxxx" // hard-coded secret
const dbPassword = "password123" // in source code
```
#### ✅ Always do this
```typescript
const apiKey = process.env.OPENAI_API_KEY
const dbUrl = process.env.DATABASE_URL
// Verify the secret exists
if (!apiKey) {
throw new Error('OPENAI_API_KEY not configured')
}
```
#### Verification Steps
- [ ] No hard-coded API keys, tokens, or passwords
- [ ] All secrets in environment variables
- [ ] `.env.local` listed in .gitignore
- [ ] No secrets in git history
- [ ] Production secrets stored on the hosting platform (Vercel, Railway)
### 2. Input Validation
#### Always validate user input
```typescript
import { z } from 'zod'
// Define the validation schema
const CreateUserSchema = z.object({
email: z.string().email(),
name: z.string().min(1).max(100),
age: z.number().int().min(0).max(150)
})
// Validate before processing
export async function createUser(input: unknown) {
try {
const validated = CreateUserSchema.parse(input)
return await db.users.create(validated)
} catch (error) {
if (error instanceof z.ZodError) {
return { success: false, errors: error.errors }
}
throw error
}
}
```
#### File Upload Validation
```typescript
function validateFileUpload(file: File) {
// Size check (max 5MB)
const maxSize = 5 * 1024 * 1024
if (file.size > maxSize) {
throw new Error('File too large (max 5MB)')
}
// Type check
const allowedTypes = ['image/jpeg', 'image/png', 'image/gif']
if (!allowedTypes.includes(file.type)) {
throw new Error('Invalid file type')
}
// Extension check
const allowedExtensions = ['.jpg', '.jpeg', '.png', '.gif']
const extension = file.name.toLowerCase().match(/\.[^.]+$/)?.[0]
if (!extension || !allowedExtensions.includes(extension)) {
throw new Error('Invalid file extension')
}
return true
}
```
#### Verification Steps
- [ ] All user input validated with a schema
- [ ] File uploads restricted (size, type, extension)
- [ ] User input never used directly in queries
- [ ] Whitelist validation (not blacklist)
- [ ] Error messages do not leak sensitive information
### 3. SQL Injection Prevention
#### ❌ Never concatenate SQL
```typescript
// Dangerous - SQL injection vulnerability
const query = `SELECT * FROM users WHERE email = '${userEmail}'`
await db.query(query)
```
#### ✅ Always use parameterized queries
```typescript
// Safe - parameterized query
const { data } = await supabase
.from('users')
.select('*')
.eq('email', userEmail)
// Or with raw SQL
await db.query(
'SELECT * FROM users WHERE email = $1',
[userEmail]
)
```
#### Verification Steps
- [ ] All database queries use parameterized queries
- [ ] No string concatenation in SQL
- [ ] ORM/query builder used correctly
- [ ] Supabase queries properly sanitized
### 4. Authentication & Authorization
#### JWT Token Handling
```typescript
// ❌ Wrong: localStorage (vulnerable to XSS)
localStorage.setItem('token', token)
// ✅ Correct: httpOnly cookies
res.setHeader('Set-Cookie',
`token=${token}; HttpOnly; Secure; SameSite=Strict; Max-Age=3600`)
```
#### Authorization Checks
```typescript
export async function deleteUser(userId: string, requesterId: string) {
// Always verify authorization first
const requester = await db.users.findUnique({
where: { id: requesterId }
})
if (requester.role !== 'admin') {
return NextResponse.json(
{ error: 'Unauthorized' },
{ status: 403 }
)
}
// Proceed with deletion
await db.users.delete({ where: { id: userId } })
}
```
#### Row Level Security (Supabase)
```sql
-- Enable RLS on all tables
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
-- Users can only view their own data
CREATE POLICY "Users view own data"
ON users FOR SELECT
USING (auth.uid() = id);
-- Users can only update their own data
CREATE POLICY "Users update own data"
ON users FOR UPDATE
USING (auth.uid() = id);
```
#### Verification Steps
- [ ] Tokens stored in httpOnly cookies (not localStorage)
- [ ] Authorization checks before sensitive operations
- [ ] Row Level Security enabled in Supabase
- [ ] Role-based access control implemented
- [ ] Session management is secure
### 5. XSS Prevention
#### Sanitize HTML
```typescript
import DOMPurify from 'isomorphic-dompurify'
// Always sanitize user-supplied HTML
function renderUserContent(html: string) {
const clean = DOMPurify.sanitize(html, {
ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'p'],
ALLOWED_ATTR: []
})
return <div dangerouslySetInnerHTML={{ __html: clean }} />
}
```
#### Content Security Policy
```typescript
// next.config.js
const securityHeaders = [
{
key: 'Content-Security-Policy',
value: `
default-src 'self';
script-src 'self' 'unsafe-eval' 'unsafe-inline';
style-src 'self' 'unsafe-inline';
img-src 'self' data: https:;
font-src 'self';
connect-src 'self' https://api.example.com;
`.replace(/\s{2,}/g, ' ').trim()
}
]
```
#### Verification Steps
- [ ] User-supplied HTML sanitized
- [ ] CSP headers configured
- [ ] No unvalidated dynamic content rendering
- [ ] React's built-in XSS protection used
### 6. CSRF Protection
#### CSRF Tokens
```typescript
import { csrf } from '@/lib/csrf'
export async function POST(request: Request) {
const token = request.headers.get('X-CSRF-Token')
if (!csrf.verify(token)) {
return NextResponse.json(
{ error: 'Invalid CSRF token' },
{ status: 403 }
)
}
// Process the request
}
```
#### SameSite Cookies
```typescript
res.setHeader('Set-Cookie',
`session=${sessionId}; HttpOnly; Secure; SameSite=Strict`)
```
#### Verification Steps
- [ ] CSRF tokens on state-changing operations
- [ ] SameSite=Strict set on all cookies
- [ ] Double-submit cookie pattern implemented
### 7. Rate Limiting
#### API Rate Limiting
```typescript
import rateLimit from 'express-rate-limit'
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // 100 requests per window
message: 'Too many requests'
})
// Apply to routes
app.use('/api/', limiter)
```
#### Expensive Operations
```typescript
// Aggressive rate limiting for search
const searchLimiter = rateLimit({
windowMs: 60 * 1000, // 1 minute
max: 10, // 10 requests per minute
message: 'Too many search requests'
})
app.use('/api/search', searchLimiter)
```
#### Verification Steps
- [ ] Rate limits on all API endpoints
- [ ] Stricter limits on expensive operations
- [ ] IP-based rate limiting
- [ ] User-based rate limiting (authenticated)
### 8. Sensitive Data Exposure
#### Logging
```typescript
// ❌ Wrong: logging sensitive data
console.log('User login:', { email, password })
console.log('Payment:', { cardNumber, cvv })
// ✅ Correct: mask sensitive data
console.log('User login:', { email, userId })
console.log('Payment:', { last4: card.last4, userId })
```
#### Error Messages
```typescript
// ❌ Wrong: exposing internal details
catch (error) {
return NextResponse.json(
{ error: error.message, stack: error.stack },
{ status: 500 }
)
}
// ✅ Correct: generic error message
catch (error) {
console.error('Internal error:', error)
return NextResponse.json(
{ error: 'An error occurred. Please try again.' },
{ status: 500 }
)
}
```
#### Verification Steps
- [ ] No passwords, tokens, or secrets in logs
- [ ] Users receive generic error messages
- [ ] Detailed errors only in server logs
- [ ] No stack traces exposed to users
### 9. Blockchain Security (Solana)
#### Wallet Verification
```typescript
import nacl from 'tweetnacl'
import { PublicKey } from '@solana/web3.js'
// Verify an ed25519 signature with tweetnacl
// (@solana/web3.js does not export a `verify` function)
async function verifyWalletOwnership(
publicKey: string,
signature: string,
message: string
) {
try {
return nacl.sign.detached.verify(
Buffer.from(message),
Buffer.from(signature, 'base64'),
new PublicKey(publicKey).toBytes()
)
} catch (error) {
return false
}
}
```
#### Transaction Verification
```typescript
async function verifyTransaction(transaction: Transaction) {
// Verify the recipient
if (transaction.to !== expectedRecipient) {
throw new Error('Invalid recipient')
}
// Verify the amount
if (transaction.amount > maxAmount) {
throw new Error('Amount exceeds limit')
}
// Verify the user has sufficient balance
const balance = await getBalance(transaction.from)
if (balance < transaction.amount) {
throw new Error('Insufficient balance')
}
return true
}
```
#### Verification Steps
- [ ] Wallet signatures verified
- [ ] Transaction details validated
- [ ] Balance checked before transactions
- [ ] No blind transaction signing
### 10. Dependency Security
#### Regular Updates
```bash
# Check for vulnerabilities
npm audit
# Auto-fix what can be fixed
npm audit fix
# Update dependencies
npm update
# Check for outdated packages
npm outdated
```
#### Lock Files
```bash
# Always commit lock files
git add package-lock.json
# Use in CI/CD for reproducible builds
npm ci # instead of npm install
```
#### Verification Steps
- [ ] Dependencies kept up to date
- [ ] No known vulnerabilities (npm audit clean)
- [ ] Lock files committed
- [ ] Dependabot enabled on GitHub
- [ ] Regular security updates
## Security Testing
### Automated Security Tests
```typescript
// Test authentication
test('requires authentication', async () => {
const response = await fetch('/api/protected')
expect(response.status).toBe(401)
})
// Test authorization
test('requires admin role', async () => {
const response = await fetch('/api/admin', {
headers: { Authorization: `Bearer ${userToken}` }
})
expect(response.status).toBe(403)
})
// Test input validation
test('rejects invalid input', async () => {
const response = await fetch('/api/users', {
method: 'POST',
body: JSON.stringify({ email: 'not-an-email' })
})
expect(response.status).toBe(400)
})
// Test rate limiting
test('enforces rate limits', async () => {
const requests = Array(101).fill(null).map(() =>
fetch('/api/endpoint')
)
const responses = await Promise.all(requests)
const tooManyRequests = responses.filter(r => r.status === 429)
expect(tooManyRequests.length).toBeGreaterThan(0)
})
```
## Pre-Deployment Security Checklist
Before any production deployment:
- [ ] **Secrets**: no hard-coded secrets; all in environment variables
- [ ] **Input validation**: all user input validated
- [ ] **SQL injection**: all queries parameterized
- [ ] **XSS**: user content sanitized
- [ ] **CSRF**: protection enabled
- [ ] **Authentication**: correct token handling
- [ ] **Authorization**: role checks in place
- [ ] **Rate limiting**: enabled on all endpoints
- [ ] **HTTPS**: enforced in production
- [ ] **Security headers**: CSP, X-Frame-Options configured
- [ ] **Error handling**: no sensitive data in errors
- [ ] **Logging**: no sensitive data logged
- [ ] **Dependencies**: up to date, no vulnerabilities
- [ ] **Row Level Security**: enabled in Supabase
- [ ] **CORS**: configured correctly
- [ ] **File uploads**: validated (size, type)
- [ ] **Wallet signatures**: verified (if blockchain)
## Resources
- [OWASP Top 10](https://owasp.org/www-project-top-ten/)
- [Next.js Security](https://nextjs.org/docs/security)
- [Supabase Security](https://supabase.com/docs/guides/auth)
- [Web Security Academy](https://portswigger.net/web-security)
---
**Remember**: Security is not optional. A single vulnerability can compromise the entire platform. When in doubt, err on the side of caution.
| """
Test for 'security-review' skill — Secure Export API for BabyBuddy
Validates that the Agent implemented authenticated, authorized export endpoints
with proper serializers, views, URLs, and security checks.
"""
import os
import ast
import subprocess
import pytest
from _dependency_utils import ensure_python_dependencies
@pytest.fixture(scope="module", autouse=True)
def _ensure_repo_dependencies():
ensure_python_dependencies(TestSecurityReview.REPO_DIR)
class TestSecurityReview:
"""Verify secure data export endpoint implementation for BabyBuddy."""
REPO_DIR = "/workspace/babybuddy"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_serializers_file_exists(self):
"""api/serializers.py must exist."""
fpath = os.path.join(self.REPO_DIR, "api", "serializers.py")
assert os.path.isfile(fpath), "api/serializers.py not found"
def test_views_file_exists(self):
"""api/views.py must exist."""
fpath = os.path.join(self.REPO_DIR, "api", "views.py")
assert os.path.isfile(fpath), "api/views.py not found"
def test_urls_file_exists(self):
"""api/urls.py must exist."""
fpath = os.path.join(self.REPO_DIR, "api", "urls.py")
assert os.path.isfile(fpath), "api/urls.py not found"
# ------------------------------------------------------------------
# L2: functional verification
# ------------------------------------------------------------------
def test_feeding_export_serializer_defined(self):
"""FeedingExportSerializer must be defined in api/serializers.py."""
fpath = os.path.join(self.REPO_DIR, "api", "serializers.py")
with open(fpath, "r", encoding="utf-8") as f:
source = f.read()
tree = ast.parse(source)
class_names = [n.name for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]
assert (
"FeedingExportSerializer" in class_names
), f"FeedingExportSerializer not found; classes: {class_names}"
def test_sleep_export_serializer_defined(self):
"""SleepExportSerializer must be defined in api/serializers.py."""
fpath = os.path.join(self.REPO_DIR, "api", "serializers.py")
with open(fpath, "r", encoding="utf-8") as f:
source = f.read()
tree = ast.parse(source)
class_names = [n.name for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]
assert (
"SleepExportSerializer" in class_names
), f"SleepExportSerializer not found; classes: {class_names}"
def test_feeding_serializer_fields(self):
"""FeedingExportSerializer must include required fields."""
fpath = os.path.join(self.REPO_DIR, "api", "serializers.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
required = ["id", "start", "end", "duration", "type", "method", "amount"]
for field in required:
assert field in content, f"Field '{field}' not found in serializers.py"
def test_sleep_serializer_fields(self):
"""SleepExportSerializer must include required fields."""
fpath = os.path.join(self.REPO_DIR, "api", "serializers.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
required = ["id", "start", "end", "duration"]
for field in required:
assert field in content, f"Field '{field}' not found in serializers.py"
def test_export_viewset_defined(self):
"""ExportViewSet (or similar) must be defined in api/views.py."""
fpath = os.path.join(self.REPO_DIR, "api", "views.py")
with open(fpath, "r", encoding="utf-8") as f:
source = f.read()
tree = ast.parse(source)
class_names = [n.name for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]
export_views = [c for c in class_names if "export" in c.lower()]
assert (
len(export_views) >= 1
), f"No export-related ViewSet found; classes: {class_names}"
def test_authentication_enforced(self):
"""Views must enforce authentication (IsAuthenticated or similar)."""
fpath = os.path.join(self.REPO_DIR, "api", "views.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
auth_patterns = [
"IsAuthenticated",
"permission_classes",
"authentication_classes",
]
found = any(p in content for p in auth_patterns)
assert found, "No authentication enforcement found in views.py"
def test_export_route_registered(self):
"""Export route must be registered in api/urls.py."""
fpath = os.path.join(self.REPO_DIR, "api", "urls.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
assert "export" in content.lower(), "No export route registered in urls.py"
def test_django_system_check(self):
"""python manage.py check should pass without errors."""
result = subprocess.run(
["python", "manage.py", "check"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert result.returncode == 0, f"Django check failed:\n{result.stderr}"
def test_child_ownership_validation(self):
"""Views must validate child ownership (403 for other user's child)."""
fpath = os.path.join(self.REPO_DIR, "api", "views.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
ownership_patterns = [
"child",
"403",
"Forbidden",
"get_object_or_404",
"request.user",
"filter",
"permission",
]
found_count = sum(1 for p in ownership_patterns if p in content)
assert found_count >= 3, (
f"Insufficient ownership validation logic in views.py "
f"(matched {found_count}/6 patterns)"
)
def test_api_tests_exist(self):
"""Test file for the export API must exist."""
candidates = [
os.path.join(self.REPO_DIR, "tests", "test_api.py"),
os.path.join(self.REPO_DIR, "api", "tests.py"),
os.path.join(self.REPO_DIR, "babybuddy", "tests", "test_api.py"),
]
found = any(os.path.isfile(c) for c in candidates)
assert found, f"No API test file found among {candidates}"
| https://github.com/babybuddy/babybuddy | zhangyiiiiii/swe-skills-bench-python | |
springboot-tdd | Spring Boot TDD | See task file for detailed mission requirements. | feature | # Task: Add Pet Weight Tracking Feature to PetClinic
## Background
We need to add a weight tracking feature to the Spring PetClinic application. Pet owners should be able to record and view their pets' weight history over time.
## Files to Create/Modify
- `src/main/java/org/springframework/samples/petclinic/owner/WeightRecord.java` - Entity class
- `src/main/java/org/springframework/samples/petclinic/owner/WeightRecordRepository.java` - Data access
- `src/main/java/org/springframework/samples/petclinic/owner/OwnerController.java` - REST endpoints
- `src/main/resources/db/h2/` - DDL for weight_record table
## Requirements
### Entity (WeightRecord.java)
- `id`: Long (Primary Key)
- `petId`: Long (Foreign Key to Pet)
- `weightKg`: Double (Required, positive value)
- `recordDate`: LocalDate
### Repository
- Extend `JpaRepository<WeightRecord, Long>`
- Method: `findByPetIdOrderByRecordDateDesc(Long petId)`
### Controller Endpoints
- `POST /owners/{ownerId}/pets/{petId}/weight` - Record new weight
- `GET /owners/{ownerId}/pets/{petId}/weight/history` - Get weight history
### Database
- Create DDL in `src/main/resources/db/h2/`
## Expected Functionality
1. Successfully record pet weight → returns 201 Created
2. Reject invalid petId → returns 404 Not Found
3. Reject missing weightKg field → returns 400 Bad Request
4. Weight history returns list ordered by date (newest first)
## Acceptance Criteria
- Application compiles without errors: `./mvnw compile`
- All CRUD operations work correctly
- Endpoints handle edge cases appropriately (invalid input, missing data)
| ---
name: springboot-tdd
description: Test-driven development for Spring Boot using JUnit 5, Mockito, MockMvc, Testcontainers, and JaCoCo. Use when adding features, fixing bugs, or refactoring.
---
# Spring Boot TDD Workflow
TDD guidance for Spring Boot services with 80%+ coverage (unit + integration).
## When to Use
- New features or endpoints
- Bug fixes or refactors
- Adding data access logic or security rules
## Workflow
1) Write tests first (they should fail)
2) Implement minimal code to pass
3) Refactor with tests green
4) Enforce coverage (JaCoCo)
## Unit Tests (JUnit 5 + Mockito)
```java
@ExtendWith(MockitoExtension.class)
class MarketServiceTest {
@Mock MarketRepository repo;
@InjectMocks MarketService service;
@Test
void createsMarket() {
CreateMarketRequest req = new CreateMarketRequest("name", "desc", Instant.now(), List.of("cat"));
when(repo.save(any())).thenAnswer(inv -> inv.getArgument(0));
Market result = service.create(req);
assertThat(result.name()).isEqualTo("name");
verify(repo).save(any());
}
}
```
Patterns:
- Arrange-Act-Assert
- Avoid partial mocks; prefer explicit stubbing
- Use `@ParameterizedTest` for variants
## Web Layer Tests (MockMvc)
```java
@WebMvcTest(MarketController.class)
class MarketControllerTest {
@Autowired MockMvc mockMvc;
@MockBean MarketService marketService;
@Test
void returnsMarkets() throws Exception {
when(marketService.list(any())).thenReturn(Page.empty());
mockMvc.perform(get("/api/markets"))
.andExpect(status().isOk())
.andExpect(jsonPath("$.content").isArray());
}
}
```
## Integration Tests (SpringBootTest)
```java
@SpringBootTest
@AutoConfigureMockMvc
@ActiveProfiles("test")
class MarketIntegrationTest {
@Autowired MockMvc mockMvc;
@Test
void createsMarket() throws Exception {
mockMvc.perform(post("/api/markets")
.contentType(MediaType.APPLICATION_JSON)
.content("""
{"name":"Test","description":"Desc","endDate":"2030-01-01T00:00:00Z","categories":["general"]}
"""))
.andExpect(status().isCreated());
}
}
```
## Persistence Tests (DataJpaTest)
```java
@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@Import(TestContainersConfig.class)
class MarketRepositoryTest {
@Autowired MarketRepository repo;
@Test
void savesAndFinds() {
MarketEntity entity = new MarketEntity();
entity.setName("Test");
repo.save(entity);
Optional<MarketEntity> found = repo.findByName("Test");
assertThat(found).isPresent();
}
}
```
## Testcontainers
- Use reusable containers for Postgres/Redis to mirror production
- Wire via `@DynamicPropertySource` to inject JDBC URLs into Spring context
## Coverage (JaCoCo)
Maven snippet:
```xml
<plugin>
<groupId>org.jacoco</groupId>
<artifactId>jacoco-maven-plugin</artifactId>
<version>0.8.14</version>
<executions>
<execution>
<goals><goal>prepare-agent</goal></goals>
</execution>
<execution>
<id>report</id>
<phase>verify</phase>
<goals><goal>report</goal></goals>
</execution>
</executions>
</plugin>
```
## Assertions
- Prefer AssertJ (`assertThat`) for readability
- For JSON responses, use `jsonPath`
- For exceptions: `assertThatThrownBy(...)`
## Test Data Builders
```java
class MarketBuilder {
private String name = "Test";
MarketBuilder withName(String name) { this.name = name; return this; }
Market build() { return new Market(null, name, MarketStatus.ACTIVE); }
}
```
## CI Commands
- Maven: `mvn -T 4 test` or `mvn verify`
- Gradle: `./gradlew test jacocoTestReport`
**Remember**: Keep tests fast, isolated, and deterministic. Test behavior, not implementation details.
| """
Test for 'springboot-tdd' skill — Spring Boot TDD Workflow
Validates that the Agent added REST endpoints with TDD approach in the
Spring PetClinic application: controller, service, model, and tests.
"""
import os
import subprocess
import pytest
class TestSpringbootTdd:
"""Verify Spring Boot TDD implementation in PetClinic."""
REPO_DIR = "/workspace/spring-petclinic"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_controller_exists(self):
"""A new controller Java file must exist."""
src_dir = os.path.join(self.REPO_DIR, "src", "main", "java")
found = []
for root, dirs, files in os.walk(src_dir):
for f in files:
if f.endswith("Controller.java") and "Visit" in f:
found.append(os.path.join(root, f))
assert len(found) >= 1, "No Visit*Controller.java found"
def test_service_exists(self):
"""A service class for the feature must exist."""
src_dir = os.path.join(self.REPO_DIR, "src", "main", "java")
found = []
for root, dirs, files in os.walk(src_dir):
for f in files:
if f.endswith("Service.java") and "Visit" in f:
found.append(os.path.join(root, f))
assert len(found) >= 1, "No Visit*Service.java found"
def test_test_file_exists(self):
"""Test class for the controller must exist."""
test_dir = os.path.join(self.REPO_DIR, "src", "test", "java")
found = []
for root, dirs, files in os.walk(test_dir):
for f in files:
if "Visit" in f and f.endswith("Test.java"):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No Visit*Test.java found"
# ------------------------------------------------------------------
# L2: compilation & test execution
# ------------------------------------------------------------------
def test_maven_compile(self):
"""./mvnw compile must succeed."""
result = subprocess.run(
["./mvnw", "compile", "-q", "-B"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=600,
)
assert (
result.returncode == 0
), f"Maven compile failed:\n{result.stdout[-2000:]}\n{result.stderr[-1000:]}"
def test_maven_tests_pass(self):
"""./mvnw test must pass."""
result = subprocess.run(
["./mvnw", "test", "-q", "-B"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=600,
)
assert (
result.returncode == 0
), f"Maven tests failed:\n{result.stdout[-2000:]}\n{result.stderr[-1000:]}"
def test_controller_has_rest_annotations(self):
"""Controller must use Spring REST annotations."""
src_dir = os.path.join(self.REPO_DIR, "src", "main", "java")
for root, dirs, files in os.walk(src_dir):
for f in files:
if f.endswith("Controller.java") and "Visit" in f:
fpath = os.path.join(root, f)
with open(fpath, "r") as fh:
content = fh.read()
rest_annotations = [
"@RestController",
"@Controller",
"@GetMapping",
"@PostMapping",
"@RequestMapping",
]
found = any(a in content for a in rest_annotations)
assert found, f"{f} missing REST annotations"
return
pytest.fail("Controller file not found for annotation check")
def test_service_has_transactional(self):
"""Service should use @Transactional or @Service."""
src_dir = os.path.join(self.REPO_DIR, "src", "main", "java")
for root, dirs, files in os.walk(src_dir):
for f in files:
if f.endswith("Service.java") and "Visit" in f:
fpath = os.path.join(root, f)
with open(fpath, "r") as fh:
content = fh.read()
annotations = ["@Service", "@Transactional", "@Component"]
found = any(a in content for a in annotations)
assert found, f"{f} missing Spring service annotations"
return
def test_test_uses_spring_testing(self):
"""Test class must use Spring test annotations."""
test_dir = os.path.join(self.REPO_DIR, "src", "test", "java")
for root, dirs, files in os.walk(test_dir):
for f in files:
if "Visit" in f and f.endswith("Test.java"):
fpath = os.path.join(root, f)
with open(fpath, "r") as fh:
content = fh.read()
annotations = [
"@SpringBootTest",
"@WebMvcTest",
"@MockBean",
"@DataJpaTest",
"@AutoConfigureMockMvc",
"@Test",
]
found = sum(1 for a in annotations if a in content)
assert found >= 2, f"{f} needs Spring test annotations"
return
def test_controller_has_validation(self):
"""Controller should validate inputs with @Valid or similar."""
src_dir = os.path.join(self.REPO_DIR, "src", "main", "java")
for root, dirs, files in os.walk(src_dir):
for f in files:
if f.endswith("Controller.java") and "Visit" in f:
fpath = os.path.join(root, f)
with open(fpath, "r") as fh:
content = fh.read()
validation = [
"@Valid",
"@NotNull",
"@NotBlank",
"@RequestBody",
"BindingResult",
]
found = any(v in content for v in validation)
assert found, f"{f} missing input validation"
return
def test_at_least_5_test_methods(self):
"""Test class must have at least 5 @Test methods."""
test_dir = os.path.join(self.REPO_DIR, "src", "test", "java")
for root, dirs, files in os.walk(test_dir):
for f in files:
if "Visit" in f and f.endswith("Test.java"):
fpath = os.path.join(root, f)
with open(fpath, "r") as fh:
content = fh.read()
test_count = content.count("@Test")
assert (
test_count >= 5
), f"{f} has only {test_count} @Test methods, need >= 5"
return
| https://github.com/spring-projects/spring-petclinic | zhangyiiiiii/swe-skills-bench-jvm | |
add-admin-api-endpoint | Ghost Admin API Endpoint Creator | See task file for detailed mission requirements. | feature | # Task: Create audit_logs Admin API Endpoint for Ghost CMS
## Background
We need to add an `audit_logs` resource endpoint to the Ghost Admin API, allowing administrators to query recent user operation records for security and compliance purposes.
## Files to Create/Modify
* `ghost/core/core/server/api/endpoints/audit-logs.js` - API endpoint implementation
* `ghost/core/core/server/models/audit-log.js` - Data model
* `ghost/core/core/server/web/api/endpoints/admin/routes.js` - Register endpoint
* `ghost/core/test/e2e-api/admin/audit-logs.test.js` - Test cases
## Requirements
### Model (audit-log.js)
* `id`: ObjectId (Primary Key)
* `userId`: ObjectId (Reference to User)
* `action`: String (e.g., "post.created", "user.login")
* `context`: JSON (Additional metadata)
* `createdAt`: DateTime
### API Endpoints
* `GET /ghost/api/admin/audit_logs/` - Browse with pagination (limit/page)
* `GET /ghost/api/admin/audit_logs/:id` - Read single record
### Implementation (audit-logs.js)
* **browse** : Support limit and page pagination parameters
* **read** : Query single record by id
* Proper permission checking (admin only)
## Expected Functionality
1. Authenticated owner/admin users receive 200 OK with audit_logs array in response body
2. Unauthenticated requests return 401 Unauthorized
3. Pagination parameters (limit, page) work correctly
## Acceptance Criteria
* API endpoints respond with correct status codes
* Response body contains `audit_logs` field with proper structure
* Permission checking works (admin-only access)
* Pagination functions as specified
| ---
name: Add Admin API Endpoint
description: Add a new endpoint or endpoints to Ghost's Admin API at `ghost/api/admin/**`.
---
# Create Admin API Endpoint
## Instructions
1. If creating an endpoint for an entirely new resource, create a new endpoint file in `ghost/core/core/server/api/endpoints/`. Otherwise, locate the existing endpoint file in the same directory.
2. The endpoint file should create a controller object using the JSDoc type from (@tryghost/api-framework).Controller, including at minimum a `docName` and a single endpoint definition, i.e. `browse`.
3. Add routes for each endpoint to `ghost/core/core/server/web/api/endpoints/admin/routes.js`.
4. Add basic `e2e-api` tests for the endpoint in `ghost/core/test/e2e-api/admin` to ensure the new endpoints function as expected.
5. Run the tests and iterate until they pass: `cd ghost/core && yarn test:single test/e2e-api/admin/{test-file-name}`.
## Reference
For a detailed reference on Ghost's API framework and how to create API controllers, see [reference.md](reference.md). | """
Test for 'add-admin-api-endpoint' skill — Ghost Admin API Endpoint
Validates that the Agent added a new audit_logs Admin API endpoint in Ghost with
model, endpoint handler, route registration, and tests.
"""
import os
import re
import subprocess
import pytest
from _dependency_utils import ensure_npm_dependencies
# @pytest.fixture(scope="module", autouse=True)
# def _ensure_repo_dependencies():
# ensure_npm_dependencies(TestAddAdminApiEndpoint.REPO_DIR)
class TestAddAdminApiEndpoint:
"""Verify Ghost Admin API audit_logs endpoint implementation."""
REPO_DIR = "/workspace/Ghost"
# ------------------------------------------------------------------
# Helpers
# ------------------------------------------------------------------
def _read(self, *parts):
fpath = os.path.join(self.REPO_DIR, *parts)
assert os.path.isfile(fpath), f"Required file not found: {fpath}"
with open(fpath, "r", errors="ignore") as fh:
return fh.read()
# ------------------------------------------------------------------
# L1: Model field & schema validation
# ------------------------------------------------------------------
def test_model_defines_all_required_fields(self):
"""AuditLog model must define ALL five schema fields: id, userId, action, context, createdAt."""
content = self._read(
"ghost", "core", "core", "server", "models", "audit-log.js"
)
required_fields = ["userId", "action", "context", "createdAt"]
missing = [f for f in required_fields if f not in content]
assert not missing, f"audit-log.js is missing required schema fields: {missing}"
def test_model_userId_is_objectid_type(self):
"""userId field in model must be declared as ObjectId (foreign key reference to User)."""
content = self._read(
"ghost", "core", "core", "server", "models", "audit-log.js"
)
# Typical Ghost/Bookshelf pattern: type: 'string' with length 24, or ObjectId comment,
# or a foreign-key relationship to users table.
objectid_patterns = [
r"ObjectId",
r"userId.*user",
r"user.*userId",
r"references.*users",
r"foreign.*key",
]
matched = any(re.search(p, content, re.IGNORECASE) for p in objectid_patterns)
# At minimum userId must appear as a schema key or assignment (e.g. "userId:")
assert matched or re.search(
r"userId\s*[=:,]", content
), "userId does not appear to be declared as an ObjectId / user reference in audit-log.js"
def test_model_action_is_string_type(self):
"""action field must be declared as a string type in the schema."""
content = self._read(
"ghost", "core", "core", "server", "models", "audit-log.js"
)
# Look for action appearing alongside string type hints or schema definition
assert re.search(r"action", content), "action field missing from model"
# Ensure it is not only used as a variable in logic — it should appear in a schema block
assert re.search(
r"['\"]action['\"]|action\s*:", content
), "action does not appear to be declared as a schema property in audit-log.js"
def test_model_context_supports_json(self):
"""context field must support JSON/object storage (not a plain scalar type)."""
content = self._read(
"ghost", "core", "core", "server", "models", "audit-log.js"
)
json_patterns = [
r"JSON",
r"json",
r"jsonb",
r"context.*object",
r"object.*context",
r"serialize",
r"parse",
]
assert (
any(re.search(p, content) for p in json_patterns) or "context" in content
), "context field does not appear to support JSON storage in audit-log.js"
def test_model_extends_ghost_base_model(self):
"""Model must extend Ghost's base model (ghostBookshelf or similar backbone/bookshelf pattern)."""
content = self._read(
"ghost", "core", "core", "server", "models", "audit-log.js"
)
base_patterns = [
r"ghostBookshelf",
r"bookshelf",
r"Model\.extend",
r"extend\(",
r"GhostModel",
]
assert any(
re.search(p, content, re.IGNORECASE) for p in base_patterns
), "audit-log.js does not appear to extend Ghost's base model (ghostBookshelf/bookshelf)"
def test_model_exports_audit_log(self):
"""Model must export the AuditLog class/object so it can be require()'d."""
content = self._read(
"ghost", "core", "core", "server", "models", "audit-log.js"
)
assert re.search(
r"module\.exports|exports\.", content
), "audit-log.js does not export anything via module.exports"
assert re.search(
r"[Aa]udit[_-]?[Ll]og", content
), "audit-log.js exports do not reference AuditLog"
# ------------------------------------------------------------------
# L2: Endpoint handler structure & pagination logic
# ------------------------------------------------------------------
def test_endpoint_exports_browse_and_read(self):
"""audit-logs.js must export both browse and read handler functions."""
content = self._read(
"ghost", "core", "core", "server", "api", "endpoints", "audit-logs.js"
)
assert "browse" in content, "audit-logs.js missing 'browse' handler export"
assert "read" in content, "audit-logs.js missing 'read' handler export"
# Both must appear in an exports/module.exports context
assert re.search(
r"module\.exports|exports\.", content
), "audit-logs.js does not use module.exports"
def test_endpoint_browse_supports_limit_and_page(self):
"""browse handler must declare support for BOTH limit AND page pagination parameters."""
content = self._read(
"ghost", "core", "core", "server", "api", "endpoints", "audit-logs.js"
)
assert (
"limit" in content
), "audit-logs.js browse handler missing 'limit' pagination param"
assert (
"page" in content
), "audit-logs.js browse handler missing 'page' pagination param"
def test_endpoint_read_accepts_id_param(self):
"""read handler must accept an id parameter to fetch a single audit log record."""
content = self._read(
"ghost", "core", "core", "server", "api", "endpoints", "audit-logs.js"
)
id_patterns = [r"\bid\b", r"options\.id", r"data\.id", r"params\.id"]
assert any(
re.search(p, content) for p in id_patterns
), "read handler in audit-logs.js does not appear to consume an 'id' parameter"
def test_endpoint_response_wraps_in_audit_logs_key(self):
"""Response must wrap records under the 'audit_logs' key (Ghost API convention)."""
content = self._read(
"ghost", "core", "core", "server", "api", "endpoints", "audit-logs.js"
)
assert re.search(
r"audit_logs|auditLogs", content
), "audit-logs.js does not wrap its response data in an 'audit_logs' key"
def test_endpoint_browse_calls_model_fetch(self):
"""browse handler must call a model method to retrieve records (findPage, findAll, fetchAll, etc.)."""
content = self._read(
"ghost", "core", "core", "server", "api", "endpoints", "audit-logs.js"
)
fetch_patterns = [
r"findPage",
r"findAll",
r"fetchAll",
r"\.fetch\b",
r"\.findOne",
r"getFilteredCollection",
]
assert any(
re.search(p, content) for p in fetch_patterns
), "browse handler does not appear to call any model fetch method (findPage/findAll/fetchAll)"
# ------------------------------------------------------------------
# L3: Permission / authentication enforcement
# ------------------------------------------------------------------
def test_endpoint_declares_permissions(self):
"""Endpoint must declare admin-only permissions (Ghost uses permissions objects or docName)."""
content = self._read(
"ghost", "core", "core", "server", "api", "endpoints", "audit-logs.js"
)
perm_patterns = [
r"permissions",
r"docName",
r"canThis",
r"isAuthenticated",
r"authorize",
r"owner",
r"administrator",
]
assert any(
re.search(p, content, re.IGNORECASE) for p in perm_patterns
), "No permission/auth declaration found in audit-logs.js — endpoint must be admin-only"
def test_endpoint_permission_targets_audit_log_resource(self):
"""The permission check must reference the audit_log or audit-log resource (not a generic wildcard)."""
content = self._read(
"ghost", "core", "core", "server", "api", "endpoints", "audit-logs.js"
)
resource_patterns = [r"audit[_\-]log", r"auditLog", r"audit_log"]
assert any(
re.search(p, content, re.IGNORECASE) for p in resource_patterns
), "Permission check in audit-logs.js does not appear to reference the audit_log resource"
# ------------------------------------------------------------------
# L4: Route registration structure
# ------------------------------------------------------------------
def test_routes_maps_get_method_for_browse(self):
"""admin/routes.js must register a GET route for the audit_logs collection endpoint."""
content = self._read(
"ghost",
"core",
"core",
"server",
"web",
"api",
"endpoints",
"admin",
"routes.js",
)
# Must have a GET (or router.get) associated with audit_logs path
get_audit_pattern = re.search(
r"(get|GET).*audit|audit.*(get|GET)", content, re.IGNORECASE | re.DOTALL
)
# Or a resource/router definition that lists audit_logs as a route
resource_pattern = re.search(r"audit_logs|audit-logs", content, re.IGNORECASE)
assert (
resource_pattern
), "admin/routes.js does not register any route containing 'audit_logs' or 'audit-logs'"
assert get_audit_pattern or re.search(
r"router\.(get|use)", content, re.IGNORECASE
), "admin/routes.js does not use a GET handler alongside the audit_logs route"
def test_routes_registers_single_record_route(self):
"""admin/routes.js must register both the collection route and the /:id single-record route."""
content = self._read(
"ghost",
"core",
"core",
"server",
"web",
"api",
"endpoints",
"admin",
"routes.js",
)
# Single-record GET route (:id parameter or similar)
id_route_pattern = re.search(r":id|\/\:|\/:id", content)
# OR at minimum two separate mentions of audit in the route block
audit_mentions = len(re.findall(r"audit", content, re.IGNORECASE))
assert id_route_pattern or audit_mentions >= 2, (
"admin/routes.js appears to be missing a /:id single-record route for audit_logs "
"(need GET /audit_logs/:id in addition to GET /audit_logs/)"
)
def test_routes_references_endpoint_handler(self):
"""admin/routes.js must reference the audit-logs endpoint handler module."""
content = self._read(
"ghost",
"core",
"core",
"server",
"web",
"api",
"endpoints",
"admin",
"routes.js",
)
require_pattern = re.search(
r"require.*audit|audit.*require|audit.*endpoint|endpoint.*audit",
content,
re.IGNORECASE,
)
import_pattern = re.search(
r"import.*audit|audit.*import", content, re.IGNORECASE
)
# May also reference via a router binding without explicit require if it is auto-loaded
direct_ref = re.search(
r"auditLogs|audit_logs|audit-logs", content, re.IGNORECASE
)
assert (
require_pattern or import_pattern or direct_ref
), "admin/routes.js does not appear to reference the audit-logs endpoint handler"
# ------------------------------------------------------------------
# L5: E2E test file coverage
# ------------------------------------------------------------------
def test_e2e_tests_cover_browse_endpoint(self):
"""E2E test file must contain tests for the list/browse (GET /audit_logs/) endpoint."""
content = self._read(
"ghost", "core", "test", "e2e-api", "admin", "audit-logs.test.js"
)
browse_patterns = [
r"/audit_logs/\b",
r"audit[_-]logs.*get",
r"get.*audit[_-]logs",
r"browse",
]
assert any(
re.search(p, content, re.IGNORECASE) for p in browse_patterns
), "E2E test file does not appear to test the browse (GET /audit_logs/) endpoint"
def test_e2e_tests_cover_read_by_id(self):
"""E2E test file must contain a test for the single-record (GET /audit_logs/:id) endpoint."""
content = self._read(
"ghost", "core", "test", "e2e-api", "admin", "audit-logs.test.js"
)
read_patterns = [
r"audit_logs/\$\{",
r"audit_logs/.*id",
r"/:id",
r"\bread\b",
r"single",
]
assert any(
re.search(p, content, re.IGNORECASE) for p in read_patterns
), "E2E test file does not appear to test the single-record (GET /audit_logs/:id) endpoint"
def test_e2e_tests_assert_200_on_authenticated_request(self):
"""E2E test must assert HTTP 200 for authenticated owner/admin requests."""
content = self._read(
"ghost", "core", "test", "e2e-api", "admin", "audit-logs.test.js"
)
assert re.search(
r"200", content
), "E2E test file does not assert HTTP 200 for authenticated requests"
def test_e2e_tests_assert_401_for_unauthenticated(self):
"""E2E test must assert HTTP 401 for unauthenticated requests."""
content = self._read(
"ghost", "core", "test", "e2e-api", "admin", "audit-logs.test.js"
)
assert re.search(
r"401", content
), "E2E test file does not assert HTTP 401 for unauthenticated requests"
def test_e2e_tests_validate_response_structure(self):
"""E2E test must inspect the response body for the audit_logs field."""
content = self._read(
"ghost", "core", "test", "e2e-api", "admin", "audit-logs.test.js"
)
assert re.search(
r"audit_logs|auditLogs", content
), "E2E test does not validate that response body contains the 'audit_logs' field"
def test_e2e_tests_cover_pagination(self):
"""E2E test must exercise the pagination parameters (limit and/or page)."""
content = self._read(
"ghost", "core", "test", "e2e-api", "admin", "audit-logs.test.js"
)
pagination_patterns = [r"limit", r"page", r"pagination", r"per_page"]
assert any(
re.search(p, content, re.IGNORECASE) for p in pagination_patterns
), "E2E test file does not exercise pagination (limit/page) parameters"
# ------------------------------------------------------------------
# L6: Node.js syntax sanity checks
# ------------------------------------------------------------------
def test_model_has_no_syntax_errors(self):
"""Node.js must be able to parse the AuditLog model without SyntaxErrors."""
model_path = os.path.join(
self.REPO_DIR,
"ghost",
"core",
"core",
"server",
"models",
"audit-log.js",
)
result = subprocess.run(
["node", "--check", model_path],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert (
result.returncode == 0
), f"Syntax error detected in audit-log.js:\n{result.stderr}"
def test_endpoint_has_no_syntax_errors(self):
"""Node.js must be able to parse the audit-logs endpoint without SyntaxErrors."""
endpoint_path = os.path.join(
self.REPO_DIR,
"ghost",
"core",
"core",
"server",
"api",
"endpoints",
"audit-logs.js",
)
result = subprocess.run(
["node", "--check", endpoint_path],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert (
result.returncode == 0
), f"Syntax error detected in audit-logs.js:\n{result.stderr}"
def test_e2e_test_file_has_no_syntax_errors(self):
"""Node.js must be able to parse the E2E test file without SyntaxErrors."""
test_path = os.path.join(
self.REPO_DIR,
"ghost",
"core",
"test",
"e2e-api",
"admin",
"audit-logs.test.js",
)
result = subprocess.run(
["node", "--check", test_path],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert (
result.returncode == 0
), f"Syntax error detected in audit-logs.test.js:\n{result.stderr}"
| https://github.com/TryGhost/Ghost | zhangyiiiiii/swe-skills-bench-python | |
mcp-builder | MCP Server Builder | See task file for detailed mission requirements. | feature | # Task: Build MCP Server for Markdown Knowledge Base with SQLite
## Background
We need to create a hybrid MCP (Model Context Protocol) server using TypeScript and `@modelcontextprotocol/sdk` that connects a local Markdown knowledge base with a SQLite metadata database.
## Files to Create/Modify
- `src/markdown-sqlite/index.ts` - Main server implementation
- `src/markdown-sqlite/package.json` - Package configuration
- `src/markdown-sqlite/tests/index.test.ts` - Unit tests
## Requirements
### Tools to Implement
**1. index_markdown(dir_path: string)**
- Scan all `.md` files in specified directory
- Extract: file path, first-level heading, tags (from YAML front-matter)
- Write to SQLite table `documents`
**2. search_documents(query: string)**
- Use SQLite FTS5 full-text search
- Return matching document summaries: id, title, snippet
**3. read_document(doc_id: number)**
- Return complete Markdown content of specified document
### Package Configuration
- TypeScript compilation with `@modelcontextprotocol/sdk`
- `"build"` script for compilation
- `"test"` script for running tests
### SQLite Schema
```sql
CREATE VIRTUAL TABLE documents USING fts5(
path, title, tags, content
);
```
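The schema above can be exercised directly with Python's built-in `sqlite3` module. This is a minimal sketch of the table, not the server implementation, and it assumes the bundled SQLite was compiled with the FTS5 extension (true for most CPython builds); the file path and content strings are illustrative only:

```python
import sqlite3

# Create the documents virtual table in memory and index one fake record
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE VIRTUAL TABLE documents USING fts5(path, title, tags, content)"
)
conn.execute(
    "INSERT INTO documents VALUES (?, ?, ?, ?)",
    ("notes/mcp.md", "MCP Overview", "protocol,llm",
     "MCP connects models to external tools."),
)
conn.commit()

# MATCH runs the FTS5 query; snippet() highlights the hit in column 3 (content)
rows = conn.execute(
    "SELECT rowid, title, snippet(documents, 3, '[', ']', '...', 8) "
    "FROM documents WHERE documents MATCH ?",
    ("tools",),
).fetchall()
print(rows[0][:2])  # (1, 'MCP Overview')
```

The `rowid` that FTS5 assigns on insert is what `read_document(doc_id)` would key on, and `snippet()` provides the highlighted excerpt that `search_documents` is expected to return.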
### Expected Functionality
- `index_markdown` successfully indexes all markdown files in directory
- `search_documents` returns relevant results matching the query
- `read_document` returns complete and correct markdown content
- Graceful error handling for non-existent file paths
## Acceptance Criteria
- `cd src/markdown-sqlite && npm run build` compiles without errors
- All three MCP tools work as specified
- Error cases are handled appropriately
| ---
name: mcp-builder
description: Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).
license: Complete terms in LICENSE.txt
---
# MCP Server Development Guide
## Overview
Create MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. The quality of an MCP server is measured by how well it enables LLMs to accomplish real-world tasks.
---
# Process
## 🚀 High-Level Workflow
Creating a high-quality MCP server involves four main phases:
### Phase 1: Deep Research and Planning
#### 1.1 Understand Modern MCP Design
**API Coverage vs. Workflow Tools:**
Balance comprehensive API endpoint coverage with specialized workflow tools. Workflow tools can be more convenient for specific tasks, while comprehensive coverage gives agents flexibility to compose operations. Performance varies by client—some clients benefit from code execution that combines basic tools, while others work better with higher-level workflows. When uncertain, prioritize comprehensive API coverage.
**Tool Naming and Discoverability:**
Clear, descriptive tool names help agents find the right tools quickly. Use consistent prefixes (e.g., `github_create_issue`, `github_list_repos`) and action-oriented naming.
**Context Management:**
Agents benefit from concise tool descriptions and the ability to filter/paginate results. Design tools that return focused, relevant data. Some clients support code execution which can help agents filter and process data efficiently.
**Actionable Error Messages:**
Error messages should guide agents toward solutions with specific suggestions and next steps.
#### 1.2 Study MCP Protocol Documentation
**Navigate the MCP specification:**
Start with the sitemap to find relevant pages: `https://modelcontextprotocol.io/sitemap.xml`
Then fetch specific pages with `.md` suffix for markdown format (e.g., `https://modelcontextprotocol.io/specification/draft.md`).
Key pages to review:
- Specification overview and architecture
- Transport mechanisms (streamable HTTP, stdio)
- Tool, resource, and prompt definitions
#### 1.3 Study Framework Documentation
**Recommended stack:**
- **Language**: TypeScript (high-quality SDK support and good compatibility across many execution environments, e.g. MCPB; AI models are also good at generating TypeScript code, benefiting from its broad usage, static typing, and good linting tools)
- **Transport**: Streamable HTTP for remote servers, using stateless JSON (simpler to scale and maintain, as opposed to stateful sessions and streaming responses). stdio for local servers.
**Load framework documentation:**
- **MCP Best Practices**: [📋 View Best Practices](./reference/mcp_best_practices.md) - Core guidelines
**For TypeScript (recommended):**
- **TypeScript SDK**: Use WebFetch to load `https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md`
- [⚡ TypeScript Guide](./reference/node_mcp_server.md) - TypeScript patterns and examples
**For Python:**
- **Python SDK**: Use WebFetch to load `https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md`
- [🐍 Python Guide](./reference/python_mcp_server.md) - Python patterns and examples
#### 1.4 Plan Your Implementation
**Understand the API:**
Review the service's API documentation to identify key endpoints, authentication requirements, and data models. Use web search and WebFetch as needed.
**Tool Selection:**
Prioritize comprehensive API coverage. List endpoints to implement, starting with the most common operations.
---
### Phase 2: Implementation
#### 2.1 Set Up Project Structure
See language-specific guides for project setup:
- [⚡ TypeScript Guide](./reference/node_mcp_server.md) - Project structure, package.json, tsconfig.json
- [🐍 Python Guide](./reference/python_mcp_server.md) - Module organization, dependencies
#### 2.2 Implement Core Infrastructure
Create shared utilities:
- API client with authentication
- Error handling helpers
- Response formatting (JSON/Markdown)
- Pagination support
#### 2.3 Implement Tools
For each tool:
**Input Schema:**
- Use Zod (TypeScript) or Pydantic (Python)
- Include constraints and clear descriptions
- Add examples in field descriptions
**Output Schema:**
- Define `outputSchema` where possible for structured data
- Use `structuredContent` in tool responses (TypeScript SDK feature)
- Helps clients understand and process tool outputs
**Tool Description:**
- Concise summary of functionality
- Parameter descriptions
- Return type schema
**Implementation:**
- Async/await for I/O operations
- Proper error handling with actionable messages
- Support pagination where applicable
- Return both text content and structured data when using modern SDKs
**Annotations:**
- `readOnlyHint`: true/false
- `destructiveHint`: true/false
- `idempotentHint`: true/false
- `openWorldHint`: true/false
---
### Phase 3: Review and Test
#### 3.1 Code Quality
Review for:
- No duplicated code (DRY principle)
- Consistent error handling
- Full type coverage
- Clear tool descriptions
#### 3.2 Build and Test
**TypeScript:**
- Run `npm run build` to verify compilation
- Test with MCP Inspector: `npx @modelcontextprotocol/inspector`
**Python:**
- Verify syntax: `python -m py_compile your_server.py`
- Test with MCP Inspector
See language-specific guides for detailed testing approaches and quality checklists.
---
### Phase 4: Create Evaluations
After implementing your MCP server, create comprehensive evaluations to test its effectiveness.
**Load [✅ Evaluation Guide](./reference/evaluation.md) for complete evaluation guidelines.**
#### 4.1 Understand Evaluation Purpose
Use evaluations to test whether LLMs can effectively use your MCP server to answer realistic, complex questions.
#### 4.2 Create 10 Evaluation Questions
To create effective evaluations, follow the process outlined in the evaluation guide:
1. **Tool Inspection**: List available tools and understand their capabilities
2. **Content Exploration**: Use READ-ONLY operations to explore available data
3. **Question Generation**: Create 10 complex, realistic questions
4. **Answer Verification**: Solve each question yourself to verify answers
#### 4.3 Evaluation Requirements
Ensure each question is:
- **Independent**: Not dependent on other questions
- **Read-only**: Only non-destructive operations required
- **Complex**: Requiring multiple tool calls and deep exploration
- **Realistic**: Based on real use cases humans would care about
- **Verifiable**: Single, clear answer that can be verified by string comparison
- **Stable**: Answer won't change over time
#### 4.4 Output Format
Create an XML file with this structure:
```xml
<evaluation>
<qa_pair>
<question>Find discussions about AI model launches with animal codenames. One model needed a specific safety designation that uses the format ASL-X. What number X was being determined for the model named after a spotted wild cat?</question>
<answer>3</answer>
</qa_pair>
<!-- More qa_pairs... -->
</evaluation>
```
---
# Reference Files
## 📚 Documentation Library
Load these resources as needed during development:
### Core MCP Documentation (Load First)
- **MCP Protocol**: Start with sitemap at `https://modelcontextprotocol.io/sitemap.xml`, then fetch specific pages with `.md` suffix
- [📋 MCP Best Practices](./reference/mcp_best_practices.md) - Universal MCP guidelines including:
- Server and tool naming conventions
- Response format guidelines (JSON vs Markdown)
- Pagination best practices
- Transport selection (streamable HTTP vs stdio)
- Security and error handling standards
### SDK Documentation (Load During Phase 1/2)
- **Python SDK**: Fetch from `https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md`
- **TypeScript SDK**: Fetch from `https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md`
### Language-Specific Implementation Guides (Load During Phase 2)
- [🐍 Python Implementation Guide](./reference/python_mcp_server.md) - Complete Python/FastMCP guide with:
- Server initialization patterns
- Pydantic model examples
- Tool registration with `@mcp.tool`
- Complete working examples
- Quality checklist
- [⚡ TypeScript Implementation Guide](./reference/node_mcp_server.md) - Complete TypeScript guide with:
- Project structure
- Zod schema patterns
- Tool registration with `server.registerTool`
- Complete working examples
- Quality checklist
### Evaluation Guide (Load During Phase 4)
- [✅ Evaluation Guide](./reference/evaluation.md) - Complete evaluation creation guide with:
- Question creation guidelines
- Answer verification strategies
- XML format specifications
- Example questions and answers
- Running an evaluation with the provided scripts
| """
Test for 'mcp-builder' skill — MCP Server Builder
Validates that the Agent created a new MCP (Model Context Protocol) server
implementation with TypeScript source, build config, and tests.
"""
import os
import subprocess
import pytest
from _dependency_utils import ensure_npm_dependencies
@pytest.fixture(scope="module", autouse=True)
def _ensure_repo_dependencies():
ensure_npm_dependencies(TestMcpBuilder.REPO_DIR)
class TestMcpBuilder:
"""Verify MCP server implementation."""
REPO_DIR = "/workspace/servers"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_src_directory_exists(self):
"""New server source directory must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".ts") and "index" in f.lower():
found.append(os.path.join(root, f))
assert len(found) >= 1, "No TypeScript index.ts found"
def test_package_json_exists(self):
"""package.json must exist for the server."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
if "package.json" in files:
fpath = os.path.join(root, "package.json")
with open(fpath, "r") as f:
content = f.read()
if "mcp" in content.lower() or "server" in content.lower():
found = True
break
assert found, "No package.json for MCP server found"
def test_tsconfig_exists(self):
"""tsconfig.json must exist."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
if "tsconfig.json" in files:
found = True
break
assert found, "tsconfig.json not found"
# ------------------------------------------------------------------
# L2: content & build validation
# ------------------------------------------------------------------
def _find_ts_files(self):
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".ts") and "node_modules" not in root:
found.append(os.path.join(root, f))
return found
def _read_all_ts(self):
content = ""
for fpath in self._find_ts_files():
try:
with open(fpath, "r", errors="ignore") as f:
content += f.read() + "\n"
except OSError:
pass
return content
def test_mcp_protocol_implementation(self):
"""Source must implement MCP protocol concepts."""
content = self._read_all_ts()
mcp_patterns = [
"Server",
"Tool",
"Resource",
"Prompt",
"handler",
"schema",
"jsonrpc",
]
found = sum(1 for p in mcp_patterns if p in content)
assert found >= 3, f"Only {found} MCP protocol concepts found"
def test_tool_definitions(self):
"""Server must define at least one tool."""
content = self._read_all_ts()
tool_patterns = [
"tool",
"Tool",
"tools",
"listTools",
"callTool",
"inputSchema",
]
found = sum(1 for p in tool_patterns if p in content)
assert found >= 2, "Insufficient tool definitions"
def test_error_handling(self):
"""Server must implement error handling."""
content = self._read_all_ts()
error_patterns = ["catch", "Error", "throw", "try", "McpError", "ErrorCode"]
found = sum(1 for p in error_patterns if p in content)
assert found >= 2, "Insufficient error handling"
def test_npm_build(self):
"""npm run build must succeed (find the right package dir)."""
# Find package.json with build script
for root, dirs, files in os.walk(self.REPO_DIR):
if "package.json" in files and "node_modules" not in root:
import json
pkg_path = os.path.join(root, "package.json")
with open(pkg_path, "r") as f:
pkg = json.load(f)
if "build" in pkg.get("scripts", {}):
result = subprocess.run(
["npm", "run", "build"],
cwd=root,
capture_output=True,
text=True,
timeout=300,
)
assert (
result.returncode == 0
), f"npm build failed in {root}:\n{result.stderr[-1000:]}"
return
pytest.skip("No package.json with build script found")
def test_input_validation(self):
"""Tools must validate input schemas."""
content = self._read_all_ts()
validation_patterns = [
"schema",
"zod",
"Zod",
"validate",
"inputSchema",
"z.object",
"z.string",
]
found = any(p in content for p in validation_patterns)
assert found, "No input validation/schema found"
def test_transport_handling(self):
"""Server must handle transport (stdio or HTTP)."""
content = self._read_all_ts()
transport_patterns = [
"stdio",
"StdioServerTransport",
"SSEServerTransport",
"StreamableHTTPServerTransport",
"transport",
"stdin",
"stdout",
]
found = any(p in content for p in transport_patterns)
assert found, "No transport handling found"
def test_exports_or_main(self):
"""Package must have main/exports in package.json."""
for root, dirs, files in os.walk(self.REPO_DIR):
if "package.json" in files and "node_modules" not in root:
import json
with open(os.path.join(root, "package.json"), "r") as f:
pkg = json.load(f)
if pkg.get("main") or pkg.get("bin") or pkg.get("exports"):
return
pytest.fail("No main/bin/exports field in package.json")
| https://github.com/modelcontextprotocol/servers | zhangyiiiiii/swe-skills-bench-python | |
python-resilience | Python Resilience Patterns | See task file for detailed mission requirements. | feature | # Task: Implement Resilient Transport Layer for httpx
## Background
We need to add a resilient transport module to the httpx library that provides automatic retry and circuit breaker capabilities directly within the httpx transport layer.
## Files to Create/Modify
- `httpx/_transports/resilient.py` - Resilient transport implementation (new)
## Requirements
### ResilientTransport Class
Implement a `ResilientTransport` class in `httpx/_transports/resilient.py` that wraps an existing `httpx.BaseTransport` and adds:
**Retry Logic:**
- Maximum 3 retry attempts on transient failures
- Exponential backoff between retries: 1s → 2s → 4s
- Retry only on: HTTP 5xx responses, `ConnectError`, `TimeoutException`
- Do NOT retry on: HTTP 4xx responses (client errors)
- Configurable timeout settings per request
**Circuit Breaker:**
- Three states: `CLOSED`, `OPEN`, `HALF_OPEN`
- Transition to `OPEN` after 5 consecutive failures
- 30-second cooldown before transitioning to `HALF_OPEN`
- Single success in `HALF_OPEN` restores to `CLOSED`
- Raise custom `CircuitOpenError` when circuit is open
### Expected Functionality
- Retry logic exhausts attempts and raises the final exception appropriately
- Circuit breaker opens when the failure threshold is reached
- Circuit transitions from `HALF_OPEN` to `CLOSED` on a successful request
- 4xx client errors are not retried (only 5xx and connection errors)
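The circuit-breaker state machine described above can be sketched independently of httpx. This is a minimal illustration, not the required API: the class name `CircuitBreaker`, the `State` enum, and the injectable `clock` parameter are all assumptions for testability.

```python
import time
from enum import Enum


class State(Enum):
    CLOSED = "CLOSED"
    OPEN = "OPEN"
    HALF_OPEN = "HALF_OPEN"


class CircuitBreaker:
    """Minimal breaker: 5 consecutive failures open the circuit;
    after a 30s cooldown a single trial request is allowed."""

    def __init__(self, threshold=5, cooldown=30.0, clock=time.monotonic):
        self.threshold = threshold
        self.cooldown = cooldown
        self.clock = clock  # injectable for deterministic tests
        self.state = State.CLOSED
        self.failures = 0
        self.opened_at = 0.0

    def allow_request(self) -> bool:
        if self.state is State.OPEN:
            if self.clock() - self.opened_at >= self.cooldown:
                self.state = State.HALF_OPEN  # permit one trial call
                return True
            return False
        return True

    def record_success(self) -> None:
        self.state = State.CLOSED
        self.failures = 0

    def record_failure(self) -> None:
        self.failures += 1
        if self.state is State.HALF_OPEN or self.failures >= self.threshold:
            self.state = State.OPEN
            self.opened_at = self.clock()
```

A transport wrapper would call `allow_request()` before sending (raising `CircuitOpenError` when it returns `False`) and report outcomes via `record_success`/`record_failure`.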
## Acceptance Criteria
- `httpx/_transports/resilient.py` compiles without syntax errors
- `ResilientTransport` correctly implements retry and circuit breaker behavior
- Error handling covers all specified scenarios
| ---
name: python-resilience
description: Python resilience patterns including automatic retries, exponential backoff, timeouts, and fault-tolerant decorators. Use when adding retry logic, implementing timeouts, building fault-tolerant services, or handling transient failures.
---
# Python Resilience Patterns
Build fault-tolerant Python applications that gracefully handle transient failures, network issues, and service outages. Resilience patterns keep systems running when dependencies are unreliable.
## When to Use This Skill
- Adding retry logic to external service calls
- Implementing timeouts for network operations
- Building fault-tolerant microservices
- Handling rate limiting and backpressure
- Creating infrastructure decorators
- Designing circuit breakers
## Core Concepts
### 1. Transient vs Permanent Failures
Retry transient errors (network timeouts, temporary service issues). Don't retry permanent errors (invalid credentials, bad requests).
### 2. Exponential Backoff
Increase wait time between retries to avoid overwhelming recovering services.
### 3. Jitter
Add randomness to backoff to prevent thundering herd when many clients retry simultaneously.
### 4. Bounded Retries
Cap both attempt count and total duration to prevent infinite retry loops.
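Backoff, jitter, and bounding combine into a simple delay schedule. A sketch of the common cap-and-randomize ("full jitter") recipe, with illustrative names:

```python
import random


def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0,
                  jitter: bool = True) -> float:
    """Delay before retry `attempt` (0-based): base * 2**attempt,
    capped at `cap`, optionally randomized over [0, delay]."""
    delay = min(cap, base * (2 ** attempt))
    return random.uniform(0, delay) if jitter else delay
```

Without jitter the schedule is 1s, 2s, 4s, 8s, 16s, then capped at 30s; with jitter each client picks a uniform point under that envelope, which breaks up synchronized retry storms.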
## Quick Start
```python
import httpx
from tenacity import retry, stop_after_attempt, wait_exponential_jitter
@retry(
stop=stop_after_attempt(3),
wait=wait_exponential_jitter(initial=1, max=10),
)
def call_external_service(request: dict) -> dict:
return httpx.post("https://api.example.com", json=request).json()
```
## Fundamental Patterns
### Pattern 1: Basic Retry with Tenacity
Use the `tenacity` library for production-grade retry logic. For simpler cases, a lightweight hand-rolled retry loop is often enough.
```python
from tenacity import (
retry,
stop_after_attempt,
stop_after_delay,
wait_exponential_jitter,
retry_if_exception_type,
)
import httpx
TRANSIENT_ERRORS = (ConnectionError, TimeoutError, OSError)
@retry(
retry=retry_if_exception_type(TRANSIENT_ERRORS),
stop=stop_after_attempt(5) | stop_after_delay(60),
wait=wait_exponential_jitter(initial=1, max=30),
)
def fetch_data(url: str) -> dict:
"""Fetch data with automatic retry on transient failures."""
response = httpx.get(url, timeout=30)
response.raise_for_status()
return response.json()
```
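Where tenacity is unavailable, the same behavior can be hand-rolled with the standard library. This is a sketch; `retry_call` and its parameters (including the injectable `sleep`) are illustrative, not a real API:

```python
import time


def retry_call(func, *, attempts=3, base_delay=1.0,
               retry_on=(ConnectionError, TimeoutError, OSError),
               sleep=time.sleep):
    """Call func(), retrying on transient errors with exponential
    backoff (base_delay, 2x, 4x, ...); re-raise the final error."""
    for attempt in range(attempts):
        try:
            return func()
        except retry_on:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

Passing a fake `sleep` makes the loop testable without real delays.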
### Pattern 2: Retry Only Appropriate Errors
Whitelist specific transient exceptions. Never retry:
- `ValueError`, `TypeError` - These are bugs, not transient issues
- `AuthenticationError` - Invalid credentials won't become valid
- HTTP 4xx errors (except 429) - Client errors are permanent
```python
from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential_jitter,
)
import httpx
# Define what's retryable
RETRYABLE_EXCEPTIONS = (
ConnectionError,
TimeoutError,
httpx.ConnectTimeout,
httpx.ReadTimeout,
)
@retry(
retry=retry_if_exception_type(RETRYABLE_EXCEPTIONS),
stop=stop_after_attempt(3),
wait=wait_exponential_jitter(initial=1, max=10),
)
def resilient_api_call(endpoint: str) -> dict:
"""Make API call with retry on network issues."""
return httpx.get(endpoint, timeout=10).json()
```
### Pattern 3: HTTP Status Code Retries
Retry specific HTTP status codes that indicate transient issues. Note that if every attempt returns a retryable status, tenacity raises `RetryError` rather than returning the response; pass `retry_error_callback` to return the last response instead.
```python
from tenacity import (
    retry,
    retry_if_result,
    stop_after_attempt,
    wait_exponential_jitter,
)
import httpx
RETRY_STATUS_CODES = {429, 502, 503, 504}
def should_retry_response(response: httpx.Response) -> bool:
"""Check if response indicates a retryable error."""
return response.status_code in RETRY_STATUS_CODES
@retry(
retry=retry_if_result(should_retry_response),
stop=stop_after_attempt(3),
wait=wait_exponential_jitter(initial=1, max=10),
)
def http_request(method: str, url: str, **kwargs) -> httpx.Response:
"""Make HTTP request with retry on transient status codes."""
return httpx.request(method, url, timeout=30, **kwargs)
```
### Pattern 4: Combined Exception and Status Retry
Handle both network exceptions and HTTP status codes.
```python
from tenacity import (
retry,
retry_if_exception_type,
retry_if_result,
stop_after_attempt,
wait_exponential_jitter,
before_sleep_log,
)
import logging
import httpx
logger = logging.getLogger(__name__)
TRANSIENT_EXCEPTIONS = (
ConnectionError,
TimeoutError,
httpx.ConnectError,
httpx.ReadTimeout,
)
RETRY_STATUS_CODES = {429, 500, 502, 503, 504}
def is_retryable_response(response: httpx.Response) -> bool:
return response.status_code in RETRY_STATUS_CODES
@retry(
retry=(
retry_if_exception_type(TRANSIENT_EXCEPTIONS) |
retry_if_result(is_retryable_response)
),
stop=stop_after_attempt(5),
wait=wait_exponential_jitter(initial=1, max=30),
before_sleep=before_sleep_log(logger, logging.WARNING),
)
def robust_http_call(
method: str,
url: str,
**kwargs,
) -> httpx.Response:
"""HTTP call with comprehensive retry handling."""
return httpx.request(method, url, timeout=30, **kwargs)
```
## Advanced Patterns
### Pattern 5: Logging Retry Attempts
Track retry behavior for debugging and alerting.
```python
from tenacity import retry, stop_after_attempt, wait_exponential
import structlog
logger = structlog.get_logger()
def log_retry_attempt(retry_state):
"""Log detailed retry information."""
exception = retry_state.outcome.exception()
logger.warning(
"Retrying operation",
attempt=retry_state.attempt_number,
exception_type=type(exception).__name__,
exception_message=str(exception),
next_wait_seconds=retry_state.next_action.sleep if retry_state.next_action else None,
)
@retry(
stop=stop_after_attempt(3),
wait=wait_exponential(multiplier=1, max=10),
before_sleep=log_retry_attempt,
)
def call_with_logging(request: dict) -> dict:
"""External call with retry logging."""
...
```
### Pattern 6: Timeout Decorator
Create reusable timeout decorators for consistent timeout handling.
```python
import asyncio
from functools import wraps
from typing import TypeVar, Callable
import httpx
T = TypeVar("T")
def with_timeout(seconds: float):
"""Decorator to add timeout to async functions."""
def decorator(func: Callable[..., T]) -> Callable[..., T]:
@wraps(func)
async def wrapper(*args, **kwargs) -> T:
return await asyncio.wait_for(
func(*args, **kwargs),
timeout=seconds,
)
return wrapper
return decorator
@with_timeout(30)
async def fetch_with_timeout(url: str) -> dict:
"""Fetch URL with 30 second timeout."""
async with httpx.AsyncClient() as client:
response = await client.get(url)
return response.json()
```
### Pattern 7: Cross-Cutting Concerns via Decorators
Stack decorators to separate infrastructure from business logic.
```python
from functools import wraps
from typing import TypeVar, Callable
from tenacity import retry, stop_after_attempt, wait_exponential_jitter
import structlog
logger = structlog.get_logger()
T = TypeVar("T")
def traced(name: str | None = None):
"""Add tracing to function calls."""
def decorator(func: Callable[..., T]) -> Callable[..., T]:
span_name = name or func.__name__
@wraps(func)
async def wrapper(*args, **kwargs) -> T:
logger.info("Operation started", operation=span_name)
try:
result = await func(*args, **kwargs)
logger.info("Operation completed", operation=span_name)
return result
except Exception as e:
logger.error("Operation failed", operation=span_name, error=str(e))
raise
return wrapper
return decorator
# Stack multiple concerns
@traced("fetch_user_data")
@with_timeout(30)
@retry(stop=stop_after_attempt(3), wait=wait_exponential_jitter())
async def fetch_user_data(user_id: str) -> dict:
"""Fetch user with tracing, timeout, and retry."""
...
```
### Pattern 8: Dependency Injection for Testability
Pass infrastructure components through constructors for easy testing.
```python
from dataclasses import dataclass
from typing import Protocol
import time
class Logger(Protocol):
def info(self, msg: str, **kwargs) -> None: ...
def error(self, msg: str, **kwargs) -> None: ...
class MetricsClient(Protocol):
def increment(self, metric: str, tags: dict | None = None) -> None: ...
def timing(self, metric: str, value: float) -> None: ...
@dataclass
class UserService:
"""Service with injected infrastructure."""
repository: UserRepository
logger: Logger
metrics: MetricsClient
async def get_user(self, user_id: str) -> User:
self.logger.info("Fetching user", user_id=user_id)
start = time.perf_counter()
try:
user = await self.repository.get(user_id)
self.metrics.increment("user.fetch.success")
return user
except Exception as e:
self.metrics.increment("user.fetch.error")
self.logger.error("Failed to fetch user", user_id=user_id, error=str(e))
raise
finally:
elapsed = time.perf_counter() - start
self.metrics.timing("user.fetch.duration", elapsed)
# Easy to test with fakes
service = UserService(
repository=FakeRepository(),
logger=FakeLogger(),
metrics=FakeMetrics(),
)
```
### Pattern 9: Fail-Safe Defaults
Degrade gracefully when non-critical operations fail.
```python
from functools import wraps
from typing import TypeVar
from collections.abc import Callable
import structlog
logger = structlog.get_logger()
T = TypeVar("T")
def fail_safe(default: T, log_failure: bool = True):
"""Return default value on failure instead of raising."""
def decorator(func: Callable[..., T]) -> Callable[..., T]:
@wraps(func)
async def wrapper(*args, **kwargs) -> T:
try:
return await func(*args, **kwargs)
except Exception as e:
if log_failure:
logger.warning(
"Operation failed, using default",
function=func.__name__,
error=str(e),
)
return default
return wrapper
return decorator
@fail_safe(default=[])
async def get_recommendations(user_id: str) -> list[str]:
"""Get recommendations, return empty list on failure."""
...
```
## Best Practices Summary
1. **Retry only transient errors** - Don't retry bugs or authentication failures
2. **Use exponential backoff** - Give services time to recover
3. **Add jitter** - Prevent thundering herd from synchronized retries
4. **Cap total duration** - `stop_after_attempt(5) | stop_after_delay(60)`
5. **Log every retry** - Silent retries hide systemic problems
6. **Use decorators** - Keep retry logic separate from business logic
7. **Inject dependencies** - Make infrastructure testable
8. **Set timeouts everywhere** - Every network call needs a timeout
9. **Fail gracefully** - Return cached/default values for non-critical paths
10. **Monitor retry rates** - High retry rates indicate underlying issues
| """
Test for 'python-resilience' skill — Resilient Transport Layer for httpx
Validates that the Agent implemented ResilientTransport with retry and
circuit-breaker logic in httpx/_transports/resilient.py.
"""
import os
import sys
import ast
import subprocess
import importlib
import pytest
class TestPythonResilience:
"""Verify resilient transport implementation for httpx."""
REPO_DIR = "/workspace/httpx"
@classmethod
def setup_class(cls):
if cls.REPO_DIR not in sys.path:
sys.path.insert(0, cls.REPO_DIR)
# ------------------------------------------------------------------
# L1: file & syntax
# ------------------------------------------------------------------
def test_resilient_module_exists(self):
"""httpx/_transports/resilient.py must exist."""
fpath = os.path.join(self.REPO_DIR, "httpx", "_transports", "resilient.py")
assert os.path.isfile(fpath), "resilient.py not found"
def test_resilient_compiles(self):
"""resilient.py must compile without syntax errors."""
result = subprocess.run(
["python", "-m", "py_compile", "httpx/_transports/resilient.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: structural & functional verification
# ------------------------------------------------------------------
def _load_source(self):
fpath = os.path.join(self.REPO_DIR, "httpx", "_transports", "resilient.py")
with open(fpath, "r", encoding="utf-8") as f:
return f.read()
def _parse_classes(self):
source = self._load_source()
tree = ast.parse(source)
return {n.name: n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)}
def test_resilient_transport_class_exists(self):
"""ResilientTransport class must be defined."""
classes = self._parse_classes()
assert (
"ResilientTransport" in classes
), f"ResilientTransport not found; classes: {list(classes.keys())}"
def test_circuit_open_error_defined(self):
"""CircuitOpenError exception class must be defined."""
classes = self._parse_classes()
assert (
"CircuitOpenError" in classes
), f"CircuitOpenError not found; classes: {list(classes.keys())}"
def test_retry_max_attempts_configured(self):
"""Retry logic must define maximum 3 attempts."""
source = self._load_source()
assert "3" in source, "No mention of 3 retry attempts in source"
# Verify there's a retry-related constant or parameter
retry_keywords = ["max_retries", "max_attempts", "retry", "retries"]
assert any(
kw in source.lower() for kw in retry_keywords
), "No retry configuration found in source"
def test_exponential_backoff_defined(self):
"""Exponential backoff (1s, 2s, 4s or similar) must be implemented."""
source = self._load_source()
backoff_indicators = ["backoff", "exponential", "sleep", "**", "pow"]
found = sum(1 for ind in backoff_indicators if ind in source.lower())
assert found >= 1, "No exponential backoff logic found"
def test_circuit_breaker_states(self):
"""Circuit breaker must define CLOSED, OPEN, HALF_OPEN states."""
source = self._load_source()
for state in ["CLOSED", "OPEN", "HALF_OPEN"]:
assert state in source, f"Circuit breaker state '{state}' not found"
def test_circuit_breaker_failure_threshold(self):
"""Circuit should open after 5 consecutive failures."""
source = self._load_source()
assert "5" in source, "Failure threshold of 5 not found in source"
threshold_keywords = [
"threshold",
"failure_count",
"consecutive",
"max_failures",
]
assert any(
kw in source.lower() for kw in threshold_keywords
), "No failure threshold configuration found"
def test_circuit_breaker_cooldown(self):
"""30-second cooldown before HALF_OPEN transition."""
source = self._load_source()
assert "30" in source, "30-second cooldown not found in source"
def test_no_retry_on_4xx(self):
"""4xx errors must NOT be retried — only 5xx and connection errors."""
source = self._load_source()
# Source should distinguish between 4xx (client error) and 5xx (server error)
status_patterns = [
"status_code",
"response.status",
"5xx",
"500",
">=500",
"> 499",
]
found = any(p in source.lower() for p in status_patterns)
assert found, "No HTTP status code handling found for retry logic"
def test_import_resilient_transport(self):
"""ResilientTransport should be importable at runtime."""
result = subprocess.run(
[
"python",
"-c",
"from httpx._transports.resilient import ResilientTransport; print('OK')",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Import failed:\n{result.stderr}"
assert "OK" in result.stdout
def test_import_circuit_open_error(self):
"""CircuitOpenError should be importable."""
result = subprocess.run(
[
"python",
"-c",
"from httpx._transports.resilient import CircuitOpenError; print('OK')",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Import failed:\n{result.stderr}"
assert "OK" in result.stdout
| https://github.com/encode/httpx | zhangyiiiiii/swe-skills-bench-python | |
xlsx | Excel & Spreadsheet Automation | See task file for detailed mission requirements. | feature | # Task: Implement Sales Report Generation Engine for openpyxl
## Background
We need to add a report generation engine to the openpyxl library that can produce Excel reports with automated summary formulas, conditional formatting, and trend charts.
## Files to Create/Modify
- `openpyxl/utils/report_engine.py` - Report generation engine (new)
## Requirements
### report_engine.py
Implement a `generate_sales_report(data: List[Dict], output_path: str) -> None` function that:
**Sheet1 - Raw Data with Summary:**
- Write input data (list of dicts with month, product, amount) to cells
- Insert `SUM` and `AVERAGE` formulas in a summary row at the bottom
**Sheet2 - Conditional Formatting:**
- Apply conditional formatting to the 'amount' column
- Red background (`PatternFill` with `fgColor=FF0000`) for month-over-month decline > 10%
**Sheet3 - Trend Chart:**
- `LineChart` showing monthly sales trend
- Proper axis labels and title
### Additional Examples (in the same module or separate helper):
- `BarChart` for category comparison
- `PieChart` for distribution
- Combined chart with secondary axis (optional)
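The >10% month-over-month decline rule can be isolated as a pure helper before wiring it to `PatternFill`. The function name and signature here are illustrative, not part of the required interface:

```python
def declining_rows(amounts: list[float], threshold: float = 0.10) -> list[int]:
    """Indices of months whose amount fell by more than `threshold`
    relative to the previous month."""
    flagged = []
    for i in range(1, len(amounts)):
        prev = amounts[i - 1]
        if prev > 0 and (prev - amounts[i]) / prev > threshold:
            flagged.append(i)
    return flagged
```

Keeping the rule pure makes the formatting step a simple loop over the flagged row indices.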
## Expected Functionality
- Generated `.xlsx` files are valid (`load_workbook` succeeds)
- Summary formulas compute correctly
- Conditional formatting rules apply to correct cells
- Charts render with correct data ranges
## Acceptance Criteria
- `openpyxl/utils/report_engine.py` compiles without syntax errors
- Generated Excel files are valid and contain all three sheets
- Formulas, conditional formatting, and charts are properly configured
| ---
name: xlsx
description: "Use this skill any time a spreadsheet file is the primary input or output. This means any task where the user wants to: open, read, edit, or fix an existing .xlsx, .xlsm, .csv, or .tsv file (e.g., adding columns, computing formulas, formatting, charting, cleaning messy data); create a new spreadsheet from scratch or from other data sources; or convert between tabular file formats. Trigger especially when the user references a spreadsheet file by name or path — even casually (like \"the xlsx in my downloads\") — and wants something done to it or produced from it. Also trigger for cleaning or restructuring messy tabular data files (malformed rows, misplaced headers, junk data) into proper spreadsheets. The deliverable must be a spreadsheet file. Do NOT trigger when the primary deliverable is a Word document, HTML report, standalone Python script, database pipeline, or Google Sheets API integration, even if tabular data is involved."
license: Proprietary. LICENSE.txt has complete terms
---
# Requirements for Outputs
## All Excel files
### Professional Font
- Use a consistent, professional font (e.g., Arial, Times New Roman) for all deliverables unless otherwise instructed by the user
### Zero Formula Errors
- Every Excel model MUST be delivered with ZERO formula errors (#REF!, #DIV/0!, #VALUE!, #N/A, #NAME?)
### Preserve Existing Templates (when updating templates)
- Study and EXACTLY match existing format, style, and conventions when modifying files
- Never impose standardized formatting on files with established patterns
- Existing template conventions ALWAYS override these guidelines
## Financial models
### Color Coding Standards
Unless otherwise stated by the user or existing template
#### Industry-Standard Color Conventions
- **Blue text (RGB: 0,0,255)**: Hardcoded inputs, and numbers users will change for scenarios
- **Black text (RGB: 0,0,0)**: ALL formulas and calculations
- **Green text (RGB: 0,128,0)**: Links pulling from other worksheets within same workbook
- **Red text (RGB: 255,0,0)**: External links to other files
- **Yellow background (RGB: 255,255,0)**: Key assumptions needing attention or cells that need to be updated
### Number Formatting Standards
#### Required Format Rules
- **Years**: Format as text strings (e.g., "2024" not "2,024")
- **Currency**: Use $#,##0 format; ALWAYS specify units in headers ("Revenue ($mm)")
- **Zeros**: Use number formatting to make all zeros "-", including percentages (e.g., "$#,##0;($#,##0);-")
- **Percentages**: Default to 0.0% format (one decimal)
- **Multiples**: Format as 0.0x for valuation multiples (EV/EBITDA, P/E)
- **Negative numbers**: Use parentheses (123) not minus -123
### Formula Construction Rules
#### Assumptions Placement
- Place ALL assumptions (growth rates, margins, multiples, etc.) in separate assumption cells
- Use cell references instead of hardcoded values in formulas
- Example: Use =B5*(1+$B$6) instead of =B5*1.05
#### Formula Error Prevention
- Verify all cell references are correct
- Check for off-by-one errors in ranges
- Ensure consistent formulas across all projection periods
- Test with edge cases (zero values, negative numbers)
- Verify no unintended circular references
#### Documentation Requirements for Hardcodes
- Add a cell comment, or a note in the adjacent cell (if at the end of a table). Format: "Source: [System/Document], [Date], [Specific Reference], [URL if applicable]"
- Examples:
- "Source: Company 10-K, FY2024, Page 45, Revenue Note, [SEC EDGAR URL]"
- "Source: Company 10-Q, Q2 2025, Exhibit 99.1, [SEC EDGAR URL]"
- "Source: Bloomberg Terminal, 8/15/2025, AAPL US Equity"
- "Source: FactSet, 8/20/2025, Consensus Estimates Screen"
# XLSX creation, editing, and analysis
## Overview
A user may ask you to create, edit, or analyze the contents of an .xlsx file. You have different tools and workflows available for different tasks.
## Important Requirements
**LibreOffice Required for Formula Recalculation**: You can assume LibreOffice is installed for recalculating formula values using the `scripts/recalc.py` script. The script automatically configures LibreOffice on first run, including in sandboxed environments where Unix sockets are restricted (handled by `scripts/office/soffice.py`)
## Reading and analyzing data
### Data analysis with pandas
For data analysis, visualization, and basic operations, use **pandas** which provides powerful data manipulation capabilities:
```python
import pandas as pd
# Read Excel
df = pd.read_excel('file.xlsx') # Default: first sheet
all_sheets = pd.read_excel('file.xlsx', sheet_name=None) # All sheets as dict
# Analyze
df.head() # Preview data
df.info() # Column info
df.describe() # Statistics
# Write Excel
df.to_excel('output.xlsx', index=False)
```
## Excel File Workflows
## CRITICAL: Use Formulas, Not Hardcoded Values
**Always use Excel formulas instead of calculating values in Python and hardcoding them.** This ensures the spreadsheet remains dynamic and updateable.
### ❌ WRONG - Hardcoding Calculated Values
```python
# Bad: Calculating in Python and hardcoding result
total = df['Sales'].sum()
sheet['B10'] = total # Hardcodes 5000
# Bad: Computing growth rate in Python
growth = (df.iloc[-1]['Revenue'] - df.iloc[0]['Revenue']) / df.iloc[0]['Revenue']
sheet['C5'] = growth # Hardcodes 0.15
# Bad: Python calculation for average
avg = sum(values) / len(values)
sheet['D20'] = avg # Hardcodes 42.5
```
### ✅ CORRECT - Using Excel Formulas
```python
# Good: Let Excel calculate the sum
sheet['B10'] = '=SUM(B2:B9)'
# Good: Growth rate as Excel formula
sheet['C5'] = '=(C4-C2)/C2'
# Good: Average using Excel function
sheet['D20'] = '=AVERAGE(D2:D19)'
```
This applies to ALL calculations - totals, percentages, ratios, differences, etc. The spreadsheet should be able to recalculate when source data changes.
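One way to keep formulas dynamic is to build the formula strings from the data's shape rather than hardcoding ranges. `summary_formula` is a hypothetical helper, not part of openpyxl:

```python
def summary_formula(func: str, col: str, first_row: int, last_row: int) -> str:
    """Build an Excel formula string like '=SUM(B2:B9)'."""
    return f"={func}({col}{first_row}:{col}{last_row})"
```

A summary row can then be written as, for example, `sheet[f"B{n+1}"] = summary_formula("SUM", "B", 2, n)`, so the range always tracks the number of data rows.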
## Common Workflow
1. **Choose tool**: pandas for data, openpyxl for formulas/formatting
2. **Create/Load**: Create new workbook or load existing file
3. **Modify**: Add/edit data, formulas, and formatting
4. **Save**: Write to file
5. **Recalculate formulas (MANDATORY IF USING FORMULAS)**: Use the scripts/recalc.py script
```bash
python scripts/recalc.py output.xlsx
```
6. **Verify and fix any errors**:
- The script returns JSON with error details
- If `status` is `errors_found`, check `error_summary` for specific error types and locations
- Fix the identified errors and recalculate again
- Common errors to fix:
- `#REF!`: Invalid cell references
- `#DIV/0!`: Division by zero
- `#VALUE!`: Wrong data type in formula
- `#NAME?`: Unrecognized formula name
### Creating new Excel files
```python
# Using openpyxl for formulas and formatting
from openpyxl import Workbook
from openpyxl.styles import Font, PatternFill, Alignment
wb = Workbook()
sheet = wb.active
# Add data
sheet['A1'] = 'Hello'
sheet['B1'] = 'World'
sheet.append(['Row', 'of', 'data'])
# Add formula
sheet['B2'] = '=SUM(A1:A10)'
# Formatting
sheet['A1'].font = Font(bold=True, color='FF0000')
sheet['A1'].fill = PatternFill('solid', start_color='FFFF00')
sheet['A1'].alignment = Alignment(horizontal='center')
# Column width
sheet.column_dimensions['A'].width = 20
wb.save('output.xlsx')
```
### Editing existing Excel files
```python
# Using openpyxl to preserve formulas and formatting
from openpyxl import load_workbook
# Load existing file
wb = load_workbook('existing.xlsx')
sheet = wb.active # or wb['SheetName'] for specific sheet
# Working with multiple sheets
for sheet_name in wb.sheetnames:
sheet = wb[sheet_name]
print(f"Sheet: {sheet_name}")
# Modify cells
sheet['A1'] = 'New Value'
sheet.insert_rows(2) # Insert row at position 2
sheet.delete_cols(3) # Delete column 3
# Add new sheet
new_sheet = wb.create_sheet('NewSheet')
new_sheet['A1'] = 'Data'
wb.save('modified.xlsx')
```
## Recalculating formulas
Excel files created or modified by openpyxl contain formulas as strings but not calculated values. Use the provided `scripts/recalc.py` script to recalculate formulas:
```bash
python scripts/recalc.py <excel_file> [timeout_seconds]
```
Example:
```bash
python scripts/recalc.py output.xlsx 30
```
The script:
- Automatically sets up LibreOffice macro on first run
- Recalculates all formulas in all sheets
- Scans ALL cells for Excel errors (#REF!, #DIV/0!, etc.)
- Returns JSON with detailed error locations and counts
- Works on both Linux and macOS
## Formula Verification Checklist
Quick checks to ensure formulas work correctly:
### Essential Verification
- [ ] **Test 2-3 sample references**: Verify they pull correct values before building full model
- [ ] **Column mapping**: Confirm Excel columns match (e.g., column 64 = BL, not BK)
- [ ] **Row offset**: Remember Excel rows are 1-indexed (DataFrame row 5 = Excel row 6)
### Common Pitfalls
- [ ] **NaN handling**: Check for null values with `pd.notna()`
- [ ] **Far-right columns**: FY data often in columns 50+
- [ ] **Multiple matches**: Search all occurrences, not just first
- [ ] **Division by zero**: Check denominators before using `/` in formulas (#DIV/0!)
- [ ] **Wrong references**: Verify all cell references point to intended cells (#REF!)
- [ ] **Cross-sheet references**: Use correct format (Sheet1!A1) for linking sheets
### Formula Testing Strategy
- [ ] **Start small**: Test formulas on 2-3 cells before applying broadly
- [ ] **Verify dependencies**: Check all cells referenced in formulas exist
- [ ] **Test edge cases**: Include zero, negative, and very large values
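Column-mapping mistakes like the BL/BK one above are easy to check with a converter. openpyxl ships `openpyxl.utils.get_column_letter`; a standard-library equivalent looks like:

```python
def column_letter(index: int) -> str:
    """1-based column index -> Excel letters (1 -> 'A', 64 -> 'BL')."""
    letters = ""
    while index > 0:
        index, rem = divmod(index - 1, 26)  # shift to 0-based base-26
        letters = chr(ord("A") + rem) + letters
    return letters
```

Spot-checking a few indices against this before building formulas catches off-by-one column references early.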
### Interpreting scripts/recalc.py Output
The script returns JSON with error details:
```json
{
"status": "success", // or "errors_found"
"total_errors": 0, // Total error count
"total_formulas": 42, // Number of formulas in file
"error_summary": { // Only present if errors found
"#REF!": {
"count": 2,
"locations": ["Sheet1!B5", "Sheet1!C10"]
}
}
}
```
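Given the JSON shape above, a driver script can gate on the result like this. The helper name is illustrative, and only the fields shown in the example are assumed:

```python
import json


def summarize_recalc(raw: str) -> list[str]:
    """Return one human-readable line per error type; empty if clean."""
    report = json.loads(raw)
    if report["status"] == "success":
        return []
    return [
        f"{err}: {info['count']} at {', '.join(info['locations'])}"
        for err, info in report.get("error_summary", {}).items()
    ]
```

An empty return means the file is safe to deliver; otherwise each line points at the cells to fix before recalculating again.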
## Best Practices
### Library Selection
- **pandas**: Best for data analysis, bulk operations, and simple data export
- **openpyxl**: Best for complex formatting, formulas, and Excel-specific features
### Working with openpyxl
- Cell indices are 1-based (row=1, column=1 refers to cell A1)
- Use `data_only=True` to read calculated values: `load_workbook('file.xlsx', data_only=True)`
- **Warning**: If opened with `data_only=True` and saved, formulas are replaced with values and permanently lost
- For large files: Use `read_only=True` for reading or `write_only=True` for writing
- Formulas are preserved but not evaluated - use scripts/recalc.py to update values
### Working with pandas
- Specify data types to avoid inference issues: `pd.read_excel('file.xlsx', dtype={'id': str})`
- For large files, read specific columns: `pd.read_excel('file.xlsx', usecols=['A', 'C', 'E'])`
- Handle dates properly: `pd.read_excel('file.xlsx', parse_dates=['date_column'])`
## Code Style Guidelines
**IMPORTANT**: When generating Python code for Excel operations:
- Write minimal, concise Python code without unnecessary comments
- Avoid verbose variable names and redundant operations
- Avoid unnecessary print statements
**For Excel files themselves**:
- Add comments to cells with complex formulas or important assumptions
- Document data sources for hardcoded values
- Include notes for key calculations and model sections | """
Test for 'xlsx' skill — Excel & Spreadsheet Automation
Validates that the Agent implemented generate_sales_report() in
openpyxl/utils/report_engine.py with summary formulas, conditional formatting,
and trend charts.
"""
import os
import sys
import ast
import subprocess
import tempfile
import pytest
class TestXlsx:
"""Verify report_engine.py implementation for openpyxl."""
REPO_DIR = "/workspace/openpyxl"
@classmethod
def setup_class(cls):
if cls.REPO_DIR not in sys.path:
sys.path.insert(0, cls.REPO_DIR)
# ------------------------------------------------------------------
# L1: file & syntax
# ------------------------------------------------------------------
def test_report_engine_exists(self):
"""openpyxl/utils/report_engine.py must exist."""
fpath = os.path.join(self.REPO_DIR, "openpyxl", "utils", "report_engine.py")
        assert os.path.isfile(fpath), "report_engine.py not found"

    def test_report_engine_compiles(self):
        """report_engine.py must compile without syntax errors."""
        result = subprocess.run(
            ["python", "-m", "py_compile", "openpyxl/utils/report_engine.py"],
            cwd=self.REPO_DIR,
            capture_output=True,
            text=True,
            timeout=30,
        )
        assert result.returncode == 0, f"Syntax error:\n{result.stderr}"

    # ------------------------------------------------------------------
    # L2: structural verification via AST
    # ------------------------------------------------------------------
    def test_generate_sales_report_function_exists(self):
        """generate_sales_report function must be defined."""
        fpath = os.path.join(self.REPO_DIR, "openpyxl", "utils", "report_engine.py")
        with open(fpath, "r", encoding="utf-8") as f:
            tree = ast.parse(f.read())
        func_names = [
            n.name
            for n in ast.walk(tree)
            if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
        ]
        assert (
            "generate_sales_report" in func_names
        ), f"generate_sales_report not found; functions: {func_names}"

    # ------------------------------------------------------------------
    # L2: runtime verification — generate and validate xlsx
    # ------------------------------------------------------------------
    def _generate_report(self, tmp_path):
        """Helper: call generate_sales_report and return the completed subprocess."""
        script = f"""
import sys
sys.path.insert(0, '{self.REPO_DIR}')
from openpyxl.utils.report_engine import generate_sales_report
data = [
    {{"month": "Jan", "product": "Widget", "amount": 1200}},
    {{"month": "Feb", "product": "Widget", "amount": 1100}},
    {{"month": "Mar", "product": "Widget", "amount": 1300}},
    {{"month": "Apr", "product": "Widget", "amount": 900}},
    {{"month": "May", "product": "Gadget", "amount": 1500}},
    {{"month": "Jun", "product": "Gadget", "amount": 1400}},
]
output = '{tmp_path}'
generate_sales_report(data, output)
print("DONE")
"""
        result = subprocess.run(
            ["python", "-c", script],
            cwd=self.REPO_DIR,
            capture_output=True,
            text=True,
            timeout=120,
        )
        return result

    def test_generate_report_runs(self):
        """generate_sales_report must execute without errors."""
        with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
            tmp_path = tmp.name
        try:
            result = self._generate_report(tmp_path)
            assert result.returncode == 0, f"Report generation failed:\n{result.stderr}"
            assert "DONE" in result.stdout
        finally:
            if os.path.exists(tmp_path):
                os.unlink(tmp_path)

    def test_generated_file_is_valid_xlsx(self):
        """Generated file must be loadable by openpyxl."""
        with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
            tmp_path = tmp.name
        try:
            gen = self._generate_report(tmp_path)
            if gen.returncode != 0:
                pytest.skip(f"Report generation failed: {gen.stderr[:500]}")
            from openpyxl import load_workbook

            wb = load_workbook(tmp_path)
            assert wb is not None
            wb.close()
        finally:
            if os.path.exists(tmp_path):
                os.unlink(tmp_path)

    def test_report_has_three_sheets(self):
        """Generated workbook must contain at least 3 sheets."""
        with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
            tmp_path = tmp.name
        try:
            gen = self._generate_report(tmp_path)
            if gen.returncode != 0:
                pytest.skip(f"Report generation failed: {gen.stderr[:500]}")
            from openpyxl import load_workbook

            wb = load_workbook(tmp_path)
            assert (
                len(wb.sheetnames) >= 3
            ), f"Expected >= 3 sheets, got {len(wb.sheetnames)}: {wb.sheetnames}"
            wb.close()
        finally:
            if os.path.exists(tmp_path):
                os.unlink(tmp_path)

    def test_sheet1_has_summary_formulas(self):
        """Sheet1 must contain SUM and/or AVERAGE formulas."""
        with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
            tmp_path = tmp.name
        try:
            gen = self._generate_report(tmp_path)
            if gen.returncode != 0:
                pytest.skip(f"Report generation failed: {gen.stderr[:500]}")
            from openpyxl import load_workbook

            wb = load_workbook(tmp_path)
            ws = wb.worksheets[0]
            formulas_found = []
            for row in ws.iter_rows():
                for cell in row:
                    val = cell.value
                    if isinstance(val, str) and val.startswith("="):
                        formulas_found.append(val)
            wb.close()
            has_sum = any("SUM" in f.upper() for f in formulas_found)
            has_avg = any("AVERAGE" in f.upper() for f in formulas_found)
            assert (
                has_sum or has_avg
            ), f"No SUM/AVERAGE formulas found in Sheet1. Formulas: {formulas_found}"
        finally:
            if os.path.exists(tmp_path):
                os.unlink(tmp_path)

    def test_sheet2_has_conditional_formatting(self):
        """Sheet2 must have conditional formatting rules."""
        with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
            tmp_path = tmp.name
        try:
            gen = self._generate_report(tmp_path)
            if gen.returncode != 0:
                pytest.skip(f"Report generation failed: {gen.stderr[:500]}")
            from openpyxl import load_workbook

            wb = load_workbook(tmp_path)
            ws = wb.worksheets[1]
            cf_rules = ws.conditional_formatting
            assert (
                len(list(cf_rules)) >= 1
            ), "No conditional formatting rules found on Sheet2"
            wb.close()
        finally:
            if os.path.exists(tmp_path):
                os.unlink(tmp_path)

    def test_sheet3_has_chart(self):
        """Sheet3 must contain at least one chart."""
        with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
            tmp_path = tmp.name
        try:
            gen = self._generate_report(tmp_path)
            if gen.returncode != 0:
                pytest.skip(f"Report generation failed: {gen.stderr[:500]}")
            from openpyxl import load_workbook

            wb = load_workbook(tmp_path)
            ws = wb.worksheets[2]
            assert len(ws._charts) >= 1, "No chart found on Sheet3"
            wb.close()
        finally:
            if os.path.exists(tmp_path):
                os.unlink(tmp_path)

    def test_chart_is_line_chart(self):
        """Sheet3 chart should be a LineChart for trend visualization."""
        with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
            tmp_path = tmp.name
        try:
            gen = self._generate_report(tmp_path)
            if gen.returncode != 0:
                pytest.skip(f"Report generation failed: {gen.stderr[:500]}")
            from openpyxl import load_workbook
            from openpyxl.chart import LineChart

            wb = load_workbook(tmp_path)
            ws = wb.worksheets[2]
            line_charts = [c for c in ws._charts if isinstance(c, LineChart)]
            assert (
                len(line_charts) >= 1
            ), f"Expected a LineChart on Sheet3; chart types: {[type(c).__name__ for c in ws._charts]}"
            wb.close()
        finally:
            if os.path.exists(tmp_path):
                os.unlink(tmp_path)

    def test_sheet1_contains_data_rows(self):
        """Sheet1 must contain the input data rows."""
        with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
            tmp_path = tmp.name
        try:
            gen = self._generate_report(tmp_path)
            if gen.returncode != 0:
                pytest.skip(f"Report generation failed: {gen.stderr[:500]}")
            from openpyxl import load_workbook

            wb = load_workbook(tmp_path)
            ws = wb.worksheets[0]
            # Should have at least header + 6 data rows + summary rows
            row_count = ws.max_row
            assert (
                row_count >= 7
            ), f"Expected at least 7 rows (header+data+summary), got {row_count}"
            wb.close()
        finally:
            if os.path.exists(tmp_path):
                os.unlink(tmp_path)
| https://github.com/ericgazoni/openpyxl | zhangyiiiiii/swe-skills-bench-python | |
turborepo | Turborepo Monorepo Build System | See task file for detailed mission requirements. | feature | "# Task: Create Turborepo Monorepo Example with Cache Demonstration\n\n## Background\n\nWe need a co(...TRUNCATED) | "---\nname: turborepo\ndescription: |\n Turborepo monorepo build system guidance. Triggers on: turb(...TRUNCATED) | "\"\"\"\nTest for 'turborepo' skill — Turborepo Monorepo Configuration\nValidates that the Agent s(...TRUNCATED) | https://github.com/vercel/turbo | zhangyiiiiii/swe-skills-bench-python |
Dataset Summary
SWE-Skills-Bench is a benchmark dataset for evaluating whether injected skill documents — structured packages of procedural knowledge — measurably improve LLM agent performance on real-world software engineering tasks.
The dataset contains 49 skills spanning 565 task instances across six software engineering domains (Deployment & DevOps, Analytics & Monitoring, API Development, Data Science & ML, Security & Testing, and Developer Tools). Each skill is grounded in an authentic GitHub repository at a fixed commit, paired with a curated skill document and a deterministic pytest test suite that encodes the task's acceptance criteria.
The dataset is designed to answer: Does giving an agent a skill document actually help? The primary evaluation metric is pytest pass rate, measured under two conditions — with and without skill injection — to compute a pass-rate delta (ΔP) per skill.
The dataset was released as part of SWE-Skills-Bench: Do Agent Skills Actually Help in Real-World Software Engineering?
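The pass-rate delta described above can be sketched in a few lines. This is an illustrative computation, not code from the benchmark's own harness; the function names and the sample results are invented for demonstration.

```python
# Sketch: computing the per-skill pass-rate delta (ΔP) from paired runs.
# Each list holds one boolean per task instance: did its pytest suite pass?

def pass_rate(results: list) -> float:
    """Fraction of task instances whose pytest suite passed."""
    return sum(results) / len(results) if results else 0.0

def pass_rate_delta(with_skill: list, without_skill: list) -> float:
    """ΔP = P(skill document injected) - P(no skill document)."""
    return pass_rate(with_skill) - pass_rate(without_skill)

# Hypothetical skill with four task instances:
delta = pass_rate_delta(
    with_skill=[True, True, True, False],       # 3/4 pass with the skill
    without_skill=[True, False, False, False],  # 1/4 pass without it
)
print(f"ΔP = {delta:+.2f}")  # ΔP = +0.50
```

A positive ΔP indicates the injected skill document helped on that skill; a ΔP near zero suggests the agent did not benefit from it.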
Dataset Structure
An example of a SWE-Skills-Bench datum is as follows:
- skill_id: (str) - Unique skill identifier, e.g. "fix", "tdd-workflow".
- name: (str) - Human-readable task name.
- description: (str) - One-line description of the task.
- type: (str) - Task category, e.g. "repair", "feature", "fix".
- task_prompt: (str) - Full task prompt passed to the agent (Markdown).
- skill_document: (str) - Curated skill document injected as agent context (Markdown).
- test_code: (str) - Pytest test suite used to evaluate the agent's output.
- repo_url: (str) - Target GitHub repository URL.
- repo_commit: (str) - Fixed commit hash for reproducibility.
- docker_image: (str) - Pre-configured Docker image for the evaluation environment.
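As a rough illustration of this schema, a datum can be modeled as a plain dictionary and checked for completeness. Every field value below is made up for demonstration and does not come from the real dataset.

```python
# Illustrative datum carrying the ten documented fields (values invented).
datum = {
    "skill_id": "tdd-workflow",
    "name": "Example Task Name",
    "description": "One-line description of the task.",
    "type": "repair",
    "task_prompt": "# Task: ...",
    "skill_document": "---\nname: tdd-workflow\ndescription: ...\n---",
    "test_code": "def test_placeholder():\n    assert True",
    "repo_url": "https://github.com/example/repo",
    "repo_commit": "deadbeef0000000000000000000000000000dead",
    "docker_image": "example/image:latest",
}

REQUIRED_FIELDS = {
    "skill_id", "name", "description", "type", "task_prompt",
    "skill_document", "test_code", "repo_url", "repo_commit", "docker_image",
}

# Structural check: a well-formed row carries every documented field.
missing = REQUIRED_FIELDS - datum.keys()
assert not missing, f"datum is missing fields: {missing}"
```

A check like this is handy when filtering rows by `type` or `skill_id` before running an evaluation.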
Supported Tasks
SWE-Skills-Bench proposes a paired evaluation task: given a task prompt (with or without an injected skill document), an agent must complete a software engineering task on a real codebase. Correctness is verified by running the associated pytest test suite inside a Docker container.
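The verification step can be sketched as follows. The actual benchmark runs each suite with pytest inside the datum's `docker_image` at the pinned `repo_commit`; this simplified stand-in just executes a test script directly and records pass/fail from the exit code.

```python
import os
import subprocess
import sys
import tempfile

def run_test_suite(test_code: str) -> bool:
    """Write a datum's test_code to disk, execute it, and report success.

    Simplification: the real harness invokes pytest in a Docker container;
    here we run the script with the current interpreter for illustration.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
        tmp.write(test_code)
        path = tmp.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=60
        )
        return result.returncode == 0  # zero exit code means the suite passed
    finally:
        os.unlink(path)

print(run_test_suite("assert 1 + 1 == 2"))  # True
print(run_test_suite("assert False"))       # False
```

Running this once with the skill document injected and once without, over all instances of a skill, yields the two pass rates whose difference is ΔP.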