✨(schema): Add YAML schema deparser for LLM context optimization #3751
Conversation
…d compatibility
- Add new SchemaDeparser and OperationDeparser types returning Result<string, Error>
- Mark existing implementations as LegacySchemaDeparser and LegacyOperationDeparser
- Add @deprecated decorators and TODO comments for future migration
- Maintain backward compatibility with existing PostgreSQL deparsers

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
Add yamlSchemaDeparser to convert Schema objects to compact YAML format. This is optimized for LLM context by removing null values to reduce token usage.
- Implement yamlSchemaDeparser with neverthrow error handling
- Add null value removal to minimize output size
- Add comprehensive tests using _unsafeUnwrap for test assertions
- Use yaml package for reliable YAML serialization

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>

Replace convertSchemaToText with yamlSchemaDeparser for better LLM context optimization through compact YAML format.
- Update all agent nodes to use yamlSchemaDeparser
- Remove deprecated convertSchemaToText utility
- Fix import order for biome lint compliance

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
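The commits above outline the new deparser. As a minimal sketch of what such a function could look like — assuming the `yaml` package's `stringify`, neverthrow's `Result.fromThrowable`, and the `Schema`/`SchemaDeparser` names used in this PR; the import paths and helper body are illustrative, not the actual source:

```ts
import { Result } from 'neverthrow'
import { stringify } from 'yaml'
import type { Schema } from '../../schema'
import type { SchemaDeparser } from '../type'

// Recursively drop null-valued entries so the YAML stays compact for LLM prompts.
const removeNullValues = (value: unknown): unknown => {
  if (Array.isArray(value)) {
    // Filter null items before recursing (see the array-handling note in the review below)
    return value.filter((item) => item !== null).map(removeNullValues)
  }
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value)
        .filter(([, v]) => v !== null)
        .map(([k, v]) => [k, removeNullValues(v)]),
    )
  }
  return value
}

// Wrap yaml.stringify so serialization failures surface as Result<string, Error>.
export const yamlSchemaDeparser: SchemaDeparser = (schema: Schema) =>
  Result.fromThrowable(
    () => stringify(removeNullValues(schema)),
    (error) => (error instanceof Error ? error : new Error(String(error))),
  )()
```

The PR itself pulls `fromThrowable` from an internal neverthrow wrapper package; the standard `Result.fromThrowable` is used here only to keep the sketch self-contained.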
Unexpected error on running flow
Updates to Preview Branch (yaml-parser) ↗︎
Tasks are run on every commit but only new migration files are pushed.
View logs for this Workflow Run ↗︎.
Walkthrough

Replaces convertSchemaToText with yamlSchemaDeparser across agent nodes, adopting Result-based error handling and combined config/schema resolution. Removes the old utility. Introduces a YAML schema deparser in the schema package, updates deparser type signatures (new Result-returning types and Legacy variants), adjusts PostgreSQL deparsers to legacy types, adds tests, and exports.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor Flow as Workflow Node
  participant Cfg as getConfigurable(config)
  participant Yaml as yamlSchemaDeparser(schemaData)
  participant Res as Result.combine(...)
  participant Agent as Domain Agent
  Flow->>Cfg: Build configurable (Result)
  Flow->>Yaml: Deparse schema to YAML (Result)
  Flow->>Res: Combine [configurable, schemaYAML]
  alt combine is Err
    Res-->>Flow: Error
    Flow-->>Flow: Throw WorkflowTerminationError
  else combine is Ok
    Res-->>Flow: [configurable, schemaText]
    Flow->>Agent: Invoke with config + schemaText
    Agent-->>Flow: Result (success/error)
  end
```
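As a concrete sketch of the node flow above — `getConfigurable` and `WorkflowTerminationError` are names taken from the review context, but their signatures and the node body here are assumptions, not the PR's actual code:

```ts
import { Result } from 'neverthrow'
import { yamlSchemaDeparser, type Schema } from '@liam-hq/schema'
// Agent-package helpers referenced in the review; paths and signatures assumed.
import { getConfigurable } from '../utils/getConfigurable'
import { WorkflowTerminationError } from '../utils/errorHandling'

type NodeState = { schemaData: Schema }

export const exampleNode = async (state: NodeState, config: unknown) => {
  // Resolve the runtime configuration and the YAML schema text as one Result.
  const combined = Result.combine([
    getConfigurable(config),
    yamlSchemaDeparser(state.schemaData),
  ])

  if (combined.isErr()) {
    // Either failure terminates the workflow node.
    throw new WorkflowTerminationError(combined.error, 'exampleNode')
  }

  const [configurable, schemaText] = combined.value
  // ...invoke the downstream agent with configurable + schemaText...
  return { configurable, schemaText }
}
```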
Estimated code review effort
🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
Poem
Pre-merge checks and finishing touches
✅ Passed checks (3 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Check changeset necessity
Status: REQUIRED
Reason:

Changeset (copy & paste):

```md
---
"@liam-hq/schema": minor
---

- ✨ Add YAML schema deparser optimized for LLM context
- Introduce `yamlSchemaDeparser(schema) -> Result<string, Error>` that serializes Schema to compact YAML (removes nulls)
- Export new neverthrow-based deparser types: `SchemaDeparser`, `OperationDeparser`
- Keep backward-compatible aliases as `LegacySchemaDeparser`, `LegacyOperationDeparser`
- Add `yaml` dependency and export `yamlSchemaDeparser` from package entrypoint
```
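For orientation, the type split the changeset describes might look roughly like the following. The new signatures follow the Result<string, Error> contract stated above; the legacy return shape (a value string plus collected errors) and the import paths are assumptions based on the review notes:

```ts
import type { Result } from 'neverthrow'
// Schema / Operation are the existing types exported by @liam-hq/schema;
// the relative paths here are illustrative.
import type { Schema } from '../schema'
import type { Operation } from '../operation'

// New neverthrow-based signatures introduced in this PR
export type SchemaDeparser = (schema: Schema) => Result<string, Error>
export type OperationDeparser = (operation: Operation) => Result<string, Error>

// Aliases kept for the existing PostgreSQL deparsers during migration
// (assumed return shape: generated SQL plus any collected errors)
export type LegacySchemaDeparser = (schema: Schema) => {
  value: string
  errors: Error[]
}
export type LegacyOperationDeparser = (operation: Operation) => {
  value: string
  errors: Error[]
}
```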
Actionable comments posted: 0
🧹 Nitpick comments (1)
frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts (1)
7-15: Filter null entries out of arrays as well

The current helper drops null object properties but preserves null items inside arrays, so any `Array<... | null>` fields still leak `null` into the YAML and undercut the "remove null values" goal. Please filter them out before mapping.

```diff
-  if (Array.isArray(obj)) {
-    return obj.map(removeNullValues)
-  }
+  if (Array.isArray(obj)) {
+    return obj.filter((item) => item !== null).map(removeNullValues)
+  }
```
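To see why the filter matters: `yaml`'s `stringify` happily emits null items for an `Array<... | null>` field unless they are removed first. A small illustrative check (the field name is hypothetical):

```ts
import { stringify } from 'yaml'

// A field typed as Array<string | null>, e.g. optional notes on a table
const withNulls = { comments: ['primary table', null] }

// Mapping alone keeps the null item, so the YAML still contains "- null"
console.log(stringify(withNulls))

// Filtering first yields the compact output the deparser is aiming for
console.log(
  stringify({ comments: withNulls.comments.filter((c) => c !== null) }),
)
```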
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (13)
- frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts (2 hunks)
- frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts (2 hunks)
- frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts (2 hunks)
- frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/validateSchemaRequirementsNode.ts (2 hunks)
- frontend/internal-packages/agent/src/utils/convertSchemaToText.ts (0 hunks)
- frontend/packages/schema/package.json (1 hunks)
- frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts (2 hunks)
- frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts (2 hunks)
- frontend/packages/schema/src/deparser/type.ts (1 hunks)
- frontend/packages/schema/src/deparser/yaml/index.ts (1 hunks)
- frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts (1 hunks)
- frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts (1 hunks)
- frontend/packages/schema/src/index.ts (1 hunks)
💤 Files with no reviewable changes (1)
- frontend/internal-packages/agent/src/utils/convertSchemaToText.ts
🧰 Additional context used
📓 Path-based instructions (5)
frontend/packages/**
📄 CodeRabbit inference engine (AGENTS.md)
Shared libraries and tools live under frontend/packages
Files:
frontend/packages/schema/package.json
frontend/packages/schema/src/deparser/yaml/index.ts
frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts
frontend/packages/schema/src/index.ts
frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts
frontend/packages/schema/src/deparser/type.ts
frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts
frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts
**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Name utility files in camelCase (e.g., mergeSchema.ts)
Files:
frontend/packages/schema/src/deparser/yaml/index.ts
frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts
frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts
frontend/packages/schema/src/index.ts
frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/validateSchemaRequirementsNode.ts
frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts
frontend/packages/schema/src/deparser/type.ts
frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts
frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts
frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts
frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use TypeScript/TSX across the codebase
**/*.{ts,tsx}: Use runtime type validation with valibot for external data validation
Prefer early returns for readability
Write simple, direct code without backward compatibility shims; update all call sites together
Use const-assigned arrow functions instead of function declarations for small utilities (e.g., const toggle = () => {})
Follow existing import patterns and tsconfig path aliases
Files:
frontend/packages/schema/src/deparser/yaml/index.ts
frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts
frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts
frontend/packages/schema/src/index.ts
frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/validateSchemaRequirementsNode.ts
frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts
frontend/packages/schema/src/deparser/type.ts
frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts
frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts
frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts
frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts
frontend/internal-packages/**
📄 CodeRabbit inference engine (AGENTS.md)
Infra and tooling (e2e, configs, storybook, agent) live under frontend/internal-packages
Files:
frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts
frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/validateSchemaRequirementsNode.ts
frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts
frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts
**/*.test.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Write unit tests with filenames ending in .test.ts or .test.tsx colocated near source
Files:
frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts
🧠 Learnings (2)
📚 Learning: 2025-06-26T05:50:27.762Z
Learnt from: MH4GF
PR: liam-hq/liam#2225
File: frontend/packages/db-structure/src/deparser/postgresql/schemaDeparser.ts:14-14
Timestamp: 2025-06-26T05:50:27.762Z
Learning: In the PostgreSQL schema deparser (frontend/packages/db-structure/src/deparser/postgresql/schemaDeparser.ts), type assertions like `as Table[]` and `as Index[]` are acceptable when used with Object.values() because the schema structure is guaranteed to be correct upstream, making the assertions safe.
Applied to files:
frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts
📚 Learning: 2025-09-04T09:48:12.223Z
Learnt from: FunamaYukina
PR: liam-hq/liam#3331
File: frontend/internal-packages/agent/src/workflow/nodes/validateInitialSchemaNode.ts:94-103
Timestamp: 2025-09-04T09:48:12.223Z
Learning: The postgresqlSchemaDeparser function from liam-hq/schema always returns a string type in the value property (even empty strings for empty schemas), and executeQuery properly handles empty strings. Therefore, using !ddlStatements is sufficient to check for empty/invalid DDL without needing to explicitly check for empty arrays.
Applied to files:
frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts
🧬 Code graph analysis (7)
frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts (2)
- frontend/packages/schema/src/index.ts (2)
  - postgresqlOperationDeparser (2-2)
  - LegacyOperationDeparser (6-6)
- frontend/packages/schema/src/deparser/type.ts (1)
  - LegacyOperationDeparser (25-27)

frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts (2)
- frontend/internal-packages/agent/src/utils/getConfigurable.ts (1)
  - getConfigurable (10-31)
- frontend/internal-packages/agent/src/utils/errorHandling.ts (1)
  - WorkflowTerminationError (5-15)

frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts (1)
- frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts (1)
  - yamlSchemaDeparser (20-35)

frontend/packages/schema/src/deparser/type.ts (1)
- frontend/packages/schema/src/index.ts (6)
  - LegacySchemaDeparser (7-7)
  - Schema (62-62)
  - LegacyOperationDeparser (6-6)
  - Operation (38-38)
  - SchemaDeparser (9-9)
  - OperationDeparser (8-8)

frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts (3)
- frontend/internal-packages/agent/src/utils/getConfigurable.ts (1)
  - getConfigurable (10-31)
- frontend/internal-packages/agent/src/utils/errorHandling.ts (1)
  - WorkflowTerminationError (5-15)
- frontend/internal-packages/agent/src/pm-agent/invokePmAnalysisAgent.ts (1)
  - invokePmAnalysisAgent (44-83)

frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts (1)
- frontend/packages/schema/src/deparser/type.ts (1)
  - LegacySchemaDeparser (19-19)

frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts (2)
- frontend/packages/schema/src/deparser/yaml/index.ts (1)
  - yamlSchemaDeparser (1-1)
- frontend/internal-packages/neverthrow/src/index.ts (1)
  - fromThrowable (11-16)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: frontend-lint
- GitHub Check: frontend-ci
- GitHub Check: Supabase Preview
- GitHub Check: security-review
- GitHub Check: agent-deep-modeling
🔇 Additional comments (7)
frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts (1)

17-24: LGTM! Clear deprecation path for legacy type.
The deprecation notice clearly indicates the planned migration to the new Result-based type system. The type change from `SchemaDeparser` to `LegacySchemaDeparser` is consistent with the broader deparser type refactoring.

frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts (1)

39-43: LGTM! Error handling is appropriate for this context.
The Result-based error handling pattern is correctly implemented. The error is thrown directly, which is suitable for this test case generation subgraph context.

frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/validateSchemaRequirementsNode.ts (1)

50-54: LGTM! Consistent error handling within QA agent.
The implementation follows the same error handling pattern as `generateTestcaseNode.ts`, which is appropriate for the QA agent's test case generation flow.

frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts (1)

18-26: LGTM! Elegant use of Result.combine for multiple operations.
The combined error handling pattern using `Result.combine` cleanly coordinates both configuration retrieval and schema deparsing. The destructuring of `repositories` and `schemaText` is clear and type-safe.

frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts (1)

17-29: LGTM! Consistent Result.combine pattern.
The implementation follows the same Result.combine pattern as `designSchemaNode.ts`, appropriately wrapping errors in `WorkflowTerminationError` for the workflow context.

frontend/packages/schema/src/index.ts (1)

5-11: LGTM! Public API expanded appropriately.
The exports correctly include both legacy types (for backward compatibility during migration) and the new `yamlSchemaDeparser`, maintaining a clear migration path without breaking existing code.

frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts (1)

608-615: LGTM! Consistent deprecation strategy.
The deprecation notice and type migration from `OperationDeparser` to `LegacyOperationDeparser` follows the same pattern as `schemaDeparser.ts`, maintaining consistency across the deparser module.
🤖 Agent Deep Modeling Execution
Processing time: 7m 5s

Command Output

RUN v3.2.4 /home/runner/work/liam/liam/frontend/internal-packages/agent
(node:7681) ExperimentalWarning: WASI is an experimental feature and might change at any time
(node:7681) Warning: The 'import-in-the-middle' hook has already been initialized (repeated throughout the run)
✅ [INFO] 2025-10-10T11:25:03.668Z

🧠 Thinking (pm)... Acting as the PM Agent for "design a simple user management system": turn the ambiguous request into a Business Requirements Document against the empty schema, covering the user lifecycle (registration, email verification), login, password reset, role management, and authentication tokens. Organize SQL-oriented test cases (INSERT/UPDATE/DELETE/SELECT, positive and negative) under categories such as User Registration, Authentication, Profile Management, Roles & Permissions, Account Status & Security, Audit & Soft Delete, Data Validation & Constraints, and Reporting & Search (~40 cases), then save them to the artifact and confirm completion.

⏺ (pm): Purpose: Save the analyzed requirements and SQL-oriented test cases for the simple user management system; Inputs: goal summary + categorized test cases.
🔧 saveRequirementsToArtifactTool()
⎿ unknown: Requirements saved successfully to artifact

🧠 Thinking (db)... The first schemaDesignTool call failed because a required enum was missing, so nothing was created; add the enums first, keep the generated column "text GENERATED ALWAYS AS (lower(email)) STORED", change the status default to 'pending'::user_status, and re-apply the operations for users, roles, and the related tables.
🔧 schemaDesignTool() (×3)

⏺ (qa): Generated and saved SQL test cases via repeated saveTestcase() calls, including:
- Data Validation & Constraints: reject null mandatory fields (email, password_hash, status)
- Update a user's password_hash, set password_changed_at to the current timestamp, and log the change in audit_logs
- Soft-delete a user: set deleted_at, is_deleted, status 'deleted'::user_status, refresh updated_at, and insert a 'user.soft_delete' audit_logs entry
- Create a new user with a unique email, store the hashed password, and insert a 'user.create' audit_logs entry
- Change email to a new unique address: update email and email_normalized (lower(new_email)), reset failed-login counters, refresh updated_at
- Authentication blocked for a locked user or after too many failed attempts (status = 'locked'::user_status or failed_login_attempts >= 5)

⎿ unknown: 24/44 test cases passed, 20 failed
❌ Test Case: Create a new user with unique email stores hashed password — Error:
Pull Request Overview
This PR adds a YAML schema deparser to optimize LLM context usage by converting Schema objects to compact YAML format. The new implementation removes null values to minimize token usage while providing better structured output than the previous text-based approach.
- Adds `yamlSchemaDeparser` function using neverthrow error handling patterns
- Migrates all agent nodes from `convertSchemaToText` to `yamlSchemaDeparser`
- Updates type definitions with legacy annotations for backward compatibility
Reviewed Changes
Copilot reviewed 13 out of 14 changed files in this pull request and generated no comments.
Show a summary per file
File | Description |
---|---|
frontend/packages/schema/src/index.ts | Exports new YAML deparser and updated type definitions |
frontend/packages/schema/src/deparser/yaml/schemaDeparser.ts | Implements YAML schema deparser with null value removal |
frontend/packages/schema/src/deparser/yaml/schemaDeparser.test.ts | Comprehensive test suite for YAML deparser functionality |
frontend/packages/schema/src/deparser/yaml/index.ts | Export barrel for YAML deparser |
frontend/packages/schema/src/deparser/type.ts | Updates type definitions with legacy support and neverthrow integration |
frontend/packages/schema/src/deparser/postgresql/schemaDeparser.ts | Marks PostgreSQL deparser as legacy with deprecation notice |
frontend/packages/schema/src/deparser/postgresql/operationDeparser.ts | Marks PostgreSQL operation deparser as legacy with deprecation notice |
frontend/packages/schema/package.json | Adds yaml dependency |
frontend/internal-packages/agent/src/utils/convertSchemaToText.ts | Removes deprecated schema-to-text conversion utility |
frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/validateSchemaRequirementsNode.ts | Migrates to yamlSchemaDeparser with error handling |
frontend/internal-packages/agent/src/qa-agent/testcaseGeneration/generateTestcaseNode.ts | Migrates to yamlSchemaDeparser with error handling |
frontend/internal-packages/agent/src/pm-agent/nodes/analyzeRequirementsNode.ts | Migrates to yamlSchemaDeparser using Result.combine |
frontend/internal-packages/agent/src/db-agent/nodes/designSchemaNode.ts | Migrates to yamlSchemaDeparser using Result.combine |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
Tip: Customize your code reviews with copilot-instructions.md. Create the file or learn how to get started.
Issue
Why is this change needed?
This PR adds a new YAML schema deparser optimized for LLM context usage. The deparser converts Schema objects to compact YAML format by removing null values, significantly reducing token usage when passing schema information to LLMs.
Changes Made

Schema Package (`@liam-hq/schema`):
- Add `yamlSchemaDeparser` function that converts Schema objects to YAML format
- Add comprehensive tests using `_unsafeUnwrap` for test assertions
- Use the `yaml` package for reliable YAML serialization
- Handle serialization errors with neverthrow's `fromThrowable`

Agent Package (`@liam-hq/agent`):
- Update all agent nodes to use `yamlSchemaDeparser` instead of `convertSchemaToText`
- Remove the deprecated `convertSchemaToText` utility

Benefits
Example Output
Before (convertSchemaToText):
After (yamlSchemaDeparser):
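The original before/after snippets are not reproduced above; as a stand-in, a hypothetical call shows the kind of compact output intended. The field layout in the comment is illustrative, not the package's exact Schema shape:

```ts
import { yamlSchemaDeparser, type Schema } from '@liam-hq/schema'

declare const schema: Schema // e.g. a single `users` table with id/email columns

const result = yamlSchemaDeparser(schema)
if (result.isOk()) {
  // Roughly, with null-valued fields (unset defaults, comments, ...) dropped:
  // tables:
  //   users:
  //     name: users
  //     columns:
  //       id:
  //         name: id
  //         type: uuid
  //         notNull: true
  //       email:
  //         name: email
  //         type: text
  //         notNull: true
  console.log(result.value)
}
// The PR's tests assert on result._unsafeUnwrap() instead of branching on isOk().
```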
Summary by CodeRabbit