Summary
The Node.js client has quickly grown into a broad wrapper over hlquery’s collection, document, search, auth, and operational APIs, but the project still appears to lack a real test harness. This creates a high-priority reliability gap: protocol-level regressions in request construction, response parsing, and ingestion helpers can ship unnoticed and undermine trust in a high-performance search client.
Context
The recent README expansion documents a much wider surface area than a minimal first commit: `Client`, `Request`, `Response`, `Collections`, `Documents`, `Search`, auth/config/validation utilities, and newer helpers such as PDF/CSV ingestion. It also still describes `tests/` as future work, which suggests the client is adding features faster than it is adding verification.
For a RocksDB-backed search wrapper, correctness at the client/server boundary is critical. Bugs in auth headers, query serialization, legacy route compatibility, bulk import/export payloads, or response normalization will present as search or indexing failures even when the server is behaving correctly. That is especially risky now that the client is exposing convenience abstractions and automatic normalization behavior.
Proposed Implementation
- Add a real automated test suite under `tests/` focused on API contracts rather than only isolated unit behavior.
- Cover core transport logic in `Request` and `Response`, including bearer vs `X-API-Key` auth, query/body serialization, invalid JSON handling, and non-2xx error mapping.
- Add integration tests for the highest-risk workflows in `Search` and `Documents`, especially canonical collection search, legacy search compatibility, bulk import/export, and PDF/CSV helper normalization.
- Add validator tests for malformed collection names, invalid document payloads, pagination edges, and unsupported search parameter combinations.
- Run the suite in CI against a lightweight local hlquery instance so the client is validated against real server behavior, not only mocks.
Impact
This would address the highest current engineering risk in the codebase: an expanding public API without a safety net. It would prevent subtle regressions at the exact boundary where failures are hardest to diagnose, improve confidence in future feature work, and make the Node client substantially more credible for production use in performance-sensitive search workloads.