# Code Contributor Guide
This guide is for developers who plan to work directly on the Spry source code. It covers the project structure, development commands, core architecture, and best practices for creating extensions.
## Architecture Context

Spry's pipeline is divided into three fundamental layers that define the execution flow. Understanding this is crucial: parsing happens first, execution happens last.
- Markdown AST Pipeline (`remark/*`): The core engine for parsing Markdown, enriching the Abstract Syntax Tree (AST) with metadata, and extracting tasks.
- Execution Engines (`runbook/orchestrate`, `task/execute`, `sqlpage/playbook`): Manage control flow and execution state, and coordinate tasks.
- Application Layer (`runbook/cli.ts`, `sqlpage/cli.ts`, `task/cli.ts`): Handles the command-line interface, argument parsing, and orchestration of the engines.
```
┌─────────────────────────────────────────────────────────────────┐
│                       APPLICATION LAYER                         │
├─────────────────────────────────────────────────────────────────┤
│ lib/runbook/cli.ts   │  lib/sqlpage/cli.ts  │  lib/task/cli.ts  │
└──────────┬────────────────┬─────────────────┬───────────────────┘
           │                │                 │
           ▼                ▼                 ▼
┌──────────────────────────────────────────────────────────────────────┐
│                          EXECUTION ENGINES                           │
├──────────────────────────────────────────────────────────────────────┤
│ lib/runbook/orchestrate  │  lib/task/execute  │  lib/sqlpage/playbook │
└──────────┬──────────────────────┬────────────────────────────────────┘
           │                      │
           ▼                      ▼
┌──────────────────────────────────────────────────────────────────┐
│                      MARKDOWN AST PIPELINE                       │
├──────────────────────────────────────────────────────────────────┤
│ lib/remark/plugin/*   (code-frontmatter, doc-schema, etc)        │
│ lib/remark/graph/*    (dependency tracking, analysis)            │
│ lib/remark/mdastctl/* (AST loading and manipulation)             │
└──────────────────────────────────────────────────────────────────┘
```

## Project Structure

Spry's core logic is organized within the `lib/` directory:
```
spry/
├── lib/                 # Core library source
│   ├── markdown/        # Document model and notebook structures (cql, playbook, etc.)
│   ├── reflect/         # Runtime reflection, provenance, and dependency tracking
│   ├── remark/          # Markdown AST processing (plugins, graph analysis, mdast utilities)
│   ├── runbook/         # Shell task orchestration and CLI runner
│   ├── sqlpage/         # SQLPage content generation and CLI
│   ├── task/            # Task cell definition and execution engine
│   └── universal/       # Shared utilities (CLI helpers, file I/O, general code tools)
├── support/             # Examples, complex fixtures (assurance), RFCs, and helper scripts
├── deno.jsonc           # Deno configuration
└── import_map.json      # Import map for remote usage
```

## Development Workflow

This section outlines the standard steps for contributing code.
### Creating a Branch

Always create a new branch for your work:

```bash
git checkout -b feature/your-feature-name
# or
git checkout -b fix/your-bug-fix
```

Use descriptive branch names:

- `feature/add-python-support`
- `fix/sql-parsing-error`
- `docs/improve-quickstart`
### Making Changes

- Write clean, maintainable code
  - Follow TypeScript best practices (prefer `const`, strict typing).
  - Use meaningful variable and function names.
  - Add JSDoc comments for complex logic and public APIs.
- Update documentation
  - Update relevant `.md` files in `docs/`.
  - Add examples if introducing new features.
- Commit your changes

  ```bash
  git add .
  git commit -m "feat: add Python execution support"
  ```
### Local Development Workflow / Running Checks

It is critical to run checks frequently while developing.
| Check | Command | Purpose |
|---|---|---|
| Tests | `deno test --parallel --allow-all` | Run all unit and integration tests. |
| Watch Mode | `deno test --watch --allow-all` | Recommended for continuous development. |
| Formatting | `deno fmt` | Fix code formatting based on project standards. |
| Linting | `deno lint` | Static analysis to catch structural issues. |
| Type Checking | `deno task ts-check` | Verify strict TypeScript compliance. |
## Commit Message Guidelines

We follow the Conventional Commits specification for clear, standardized commit history.
| Prefix | Description | Example |
|---|---|---|
| `feat:` | A new feature | `feat: add PostgreSQL connection pooling` |
| `fix:` | A bug fix | `fix: resolve SQL injection vulnerability` |
| `docs:` | Documentation-only changes | `docs: update installation guide for Windows` |
| `style:` | Code style changes (formatting, missing semicolons, etc.) | `style: fix missing semicolon in tsconfig` |
| `refactor:` | Code refactoring without changing functionality | `refactor: simplify task dependency resolution` |
| `perf:` | Performance improvements | `perf: optimize AST traversal speed` |
| `test:` | Adding or updating tests | `test: add coverage for partials feature` |
| `chore:` | Maintenance tasks (build, configs, etc.) | `chore: update Deno version requirement` |
## Submitting Pull Requests

### Before Submitting

- Sync with upstream:

  ```bash
  git fetch upstream
  git rebase upstream/main
  ```

- Run all checks (tests, format, lint, type check).
### Creating the Pull Request

- Push your branch:

  ```bash
  git push origin feature/your-feature-name
  ```

- Open a pull request on GitHub.
  - Use a clear, descriptive title.
  - Reference related issues (e.g., "Fixes #123").
  - Describe what changed and why.
### PR Template

We use this template to ensure all necessary information is provided for review:

```md
## Description

Brief description of changes

## Related Issues

Fixes #123

## Type of Change

- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update

## Testing

How has this been tested? (e.g., unit tests, manual reproduction of bug, new fixture)

## Checklist

- [ ] Tests pass locally
- [ ] Code follows project style
- [ ] Documentation updated
- [ ] Commit messages follow conventions
```

## Code Style Guidelines
Section titled “Code Style Guidelines”TypeScript
Section titled “TypeScript”- Use TypeScript for all source code.
- Enable strict mode.
- Prefer
constoverlet, avoidvar. - Use descriptive type names.
- Document public APIs with JSDoc comments:
```typescript
/**
 * Executes a Markdown code block as a task
 * @param cell - The code block to execute
 * @param context - Execution context
 * @returns Promise resolving to execution result
 */
export async function executeCell(
  cell: CodeCell,
  context: ExecutionContext,
): Promise<ExecutionResult> {
  // Implementation
}
```

### Markdown
- Use ATX-style headers (`#` syntax).
- Add blank lines around code blocks.
- Use fenced code blocks with language identifiers (e.g., `bash`, `typescript`).
- Use reference-style links for readability.
## Testing Principles

Writing good tests is fundamental to Spry's stability.
- Write tests for all new features.
- Use descriptive test names (e.g., `Deno.test("executeCell: handles SQL queries correctly")`).
- Follow the Arrange-Act-Assert pattern.
- Test edge cases and error conditions.
```typescript
Deno.test("executeCell: handles SQL queries correctly", async () => {
  // Arrange
  const cell = createSQLCell("SELECT 1");
  const context = createTestContext();

  // Act
  const result = await executeCell(cell, context);

  // Assert
  assertEquals(result.success, true);
  assertEquals(result.output, "1");
});
```

## Advanced Contributions & Extensions
Spry is highly extensible via custom modules.
### Adding a New Remark Plugin

Remark plugins transform or enrich the Markdown AST.
- Create your plugin file in `lib/remark/plugin/node/` (for code blocks) or `lib/remark/plugin/doc/` (for document-level metadata).
- Use the `unified` and `unist-util-visit` utilities to traverse the tree.
- Attach Data: Use `safeNodeDataFactory` to attach new, type-safe data to nodes.
```typescript
import type { Plugin } from "unified";
import { z } from "@zod/zod";
import { safeNodeDataFactory } from "../../mdast/safe-data.ts";

const myDataSchema = z.object({ value: z.string() });
export type MyData = z.infer<typeof myDataSchema>;
export const MY_KEY = "myData" as const;

export const myPlugin: Plugin = (options) => {
  return (tree) => {
    // Transformer logic here
  };
};

export default myPlugin;
```

### Extending Task Execution (New Cell Types)
If you need Spry to recognize a new executable cell type, create a Task Directive Inspector (TDI).
- Create a `TaskDirectiveInspector` in `lib/task/`.
- Your inspector checks the cell's language, flags, and attributes.
- Register your inspector with the `TaskDirectives.use()` chain in the execution engine.
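As a rough sketch of that flow — note the `TaskDirectiveInspector` interface and `TaskDirectives` class below are simplified, hypothetical stand-ins for illustration, not Spry's real signatures in `lib/task/`:

```typescript
// Hypothetical, simplified shapes for illustration only; consult
// lib/task/ for Spry's actual TaskDirectiveInspector contract.
interface CodeCellLike {
  language: string;
  attributes: Record<string, string>;
}

interface TaskDirectiveInspector {
  // Return a directive name when the cell is executable, undefined otherwise.
  inspect(cell: CodeCellLike): string | undefined;
}

class TaskDirectives {
  private inspectors: TaskDirectiveInspector[] = [];

  // Chainable registration, mirroring the .use() pattern described above.
  use(inspector: TaskDirectiveInspector): this {
    this.inspectors.push(inspector);
    return this;
  }

  // First inspector that recognizes the cell wins.
  resolve(cell: CodeCellLike): string | undefined {
    for (const inspector of this.inspectors) {
      const directive = inspector.inspect(cell);
      if (directive !== undefined) return directive;
    }
    return undefined;
  }
}

// Example inspector: recognize `python` cells carrying a `task` attribute.
const pythonInspector: TaskDirectiveInspector = {
  inspect: (cell) =>
    cell.language === "python" && cell.attributes["task"] !== undefined
      ? "python-task"
      : undefined,
};

const directives = new TaskDirectives().use(pythonInspector);
```

The chain makes the dispatch order explicit: inspectors are consulted in registration order, so more specific inspectors should be registered first.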
### Custom Event Handlers

The execution engines use an Event Bus to communicate state changes. You can listen to these events to add custom logging or side effects.
```typescript
// React to task execution events
const tasksBus = eventBus<TaskExecEventMap>();

tasksBus.on("task:start", ({ task, ctx }) => {
  console.log(`Starting: ${task.taskId()}`);
});

tasksBus.on("task:complete", ({ task, result }) => {
  console.log(`Completed: ${task.taskId()}`);
});
```

## Technical Best Practices
### Plugin Design Principles

- Single Responsibility — Each plugin should do one thing well.
- Idempotent — Running a plugin multiple times must be safe and produce the same result.
- Type-Safe Data — Always use `safeNodeDataFactory` for AST data validation.
- No Side Effects — Avoid file I/O or network calls within plugins; emit data for later processing by the execution layer.
### Error Handling

- Validation Errors — Use `registerIssue` to track recoverable parsing errors.
- Schema Errors — Use the validation built into `safeNodeDataFactory` to catch invalid data structures.
- Fatal Errors — Only `throw` for truly unrecoverable situations.
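To illustrate the recoverable-versus-fatal split — the `registerIssue` helper below is a hypothetical local stand-in for Spry's issue-tracking API, not its real signature:

```typescript
// Hypothetical issue collector standing in for Spry's registerIssue API.
type Issue = { message: string; recoverable: boolean };
const issues: Issue[] = [];
function registerIssue(message: string): void {
  issues.push({ message, recoverable: true });
}

// Parse a fenced-code-block header like "ts {task}" into its language.
function parseCellHeader(header: string): { lang: string } | undefined {
  // Validation error: recoverable, so record it and keep processing the doc.
  if (header.trim() === "") {
    registerIssue("empty cell header; skipping cell");
    return undefined;
  }
  // Fatal error: truly unrecoverable input is the only reason to throw.
  if (header.includes("\0")) {
    throw new Error("corrupt input: NUL byte in cell header");
  }
  const [lang] = header.trim().split(/\s+/);
  return { lang: lang ?? "" };
}
```

The key design point: a bad cell should not abort the whole run; it becomes an issue the caller can report, while genuinely corrupt input still fails fast.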
### Performance

- Early Return — Check the node type (`node.type === "code"`) immediately when traversing the AST to skip unnecessary work.
- Avoid Reprocessing — Check if your data already exists on `node.data` before running an expensive computation.
- Batch Operations — Collect data during a single tree visit, then process it afterward, instead of repeatedly visiting the tree.
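A minimal sketch combining all three habits, using a hand-rolled walker in place of `unist-util-visit` so the example stays self-contained (the node shape is simplified for illustration):

```typescript
// Simplified mdast-like node for illustration.
interface Node {
  type: string;
  value?: string;
  data?: Record<string, unknown>;
  children?: Node[];
}

let expensiveCalls = 0;
function expensiveAnalysis(code: string): number {
  expensiveCalls++;
  return code.length; // stand-in for real analysis
}

function annotateCodeNodes(tree: Node): void {
  const codeNodes: Node[] = [];

  // Single tree visit: collect matching nodes (batch), process afterward.
  const walk = (node: Node): void => {
    if (node.type === "code") codeNodes.push(node); // early filter on type
    node.children?.forEach(walk);
  };
  walk(tree);

  for (const node of codeNodes) {
    // Avoid reprocessing: skip nodes that already carry our data.
    if (node.data?.["analysis"] !== undefined) continue;
    node.data = { ...node.data, analysis: expensiveAnalysis(node.value ?? "") };
  }
}
```

Running `annotateCodeNodes` twice over the same tree performs the expensive work only once, which is also exactly what the idempotency principle above demands of plugins.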