Whether the CSRF token is available
Whether the bridge is ready (port discovered)
The discovered LS port
Cancel a running cascade invocation.
Create a new cascade and optionally send a message. Fully headless — no UI panel opened, no conversation switched.
cascadeId or null on failure
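The headless contract above can be sketched as follows; `createCascade`, the `SendCascadeMessage` method name, and the response shape are assumptions, not the verified LS API (only `StartCascade` is named elsewhere in these docs):

```typescript
// Sketch only: method names and payload shapes are assumptions, not the
// confirmed LS API. Shows the headless create-then-optionally-send flow.
type RpcFn = (method: string, payload: object) => Promise<any>;

async function createCascade(
  rpc: RpcFn,
  message?: string,
): Promise<string | null> {
  try {
    // Assumed response shape: { cascadeId: string }.
    const res = await rpc("StartCascade", {});
    const cascadeId: string | undefined = res?.cascadeId;
    if (!cascadeId) return null;
    if (message !== undefined) {
      // Assumed method name for the optional first message.
      await rpc("SendCascadeMessage", { cascadeId, message });
    }
    return cascadeId; // no UI panel opened, no conversation switched
  } catch {
    return null; // null on failure, per the contract above
  }
}
```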
Switch the UI to show a specific cascade conversation.
Get details of a specific conversation.
Get trajectory descriptions (lighter than full trajectories). Returns { trajectories: [...] }.
Get user status (tier, models, etc.)
Discover the Language Server port and CSRF token. Must be called before other methods.
Discovery chain:
Get all cascade trajectories (conversation list).
Make a raw RPC call to any LS method.
RPC method name (e.g. 'StartCascade')
JSON payload
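A raw call boils down to one POST per RPC method. The sketch below only builds the request pieces; the one-endpoint-per-method path layout and the CSRF header name are assumptions, not verified against the LS:

```typescript
// Sketch: the URL path scheme and CSRF header name are assumptions.
interface RpcRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildRpcRequest(
  port: number,
  csrfToken: string,
  method: string, // RPC method name, e.g. "StartCascade"
  payload: object, // JSON payload
  useHttps = false, // extension_server uses HTTP by default
): RpcRequest {
  const scheme = useHttps ? "https" : "http";
  return {
    // Assumed path layout: one POST endpoint per RPC method.
    url: `${scheme}://127.0.0.1:${port}/${method}`,
    headers: {
      "Content-Type": "application/json",
      // Assumed header name for the CSRF token.
      "X-Csrf-Token": csrfToken,
    },
    body: JSON.stringify(payload),
  };
}
```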
Send a message to an existing cascade.
true if sent successfully
Manually set the LS connection parameters.
Use this when auto-discovery fails (e.g., a non-standard install,
or when you've discovered the port/token through other means such as lsof).
LS port number
CSRF token from LS process CLI args
Whether to use HTTPS (default: false, extension_server uses HTTP)
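If you go the `lsof`/`ps` route, the port and token can be pulled from the LS process command line. A minimal sketch; the flag names below are assumptions, not verified against the actual LS CLI:

```typescript
// Sketch: flag names (--extension_server_port, --csrf_token) are
// assumptions about the LS command line, not verified.
function parseLsArgs(
  cmdline: string,
): { port: number; csrfToken: string } | null {
  const portMatch = cmdline.match(/--extension_server_port[= ](\d+)/);
  const tokenMatch = cmdline.match(/--csrf_token[= ]([\w-]+)/);
  if (!portMatch || !tokenMatch) return null;
  return { port: Number(portMatch[1]), csrfToken: tokenMatch[1] };
}
```

The parsed values would then be passed to the manual-connection method described above.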
Star (pin) or unstar a conversation.
This sets the starred field in ConversationAnnotations.
Conversation ID
true to star, false to unstar
Set a custom title for a conversation.
This sets the title field in ConversationAnnotations.
When set, this title should be displayed instead of the
auto-generated summary from the LLM.
Conversation ID
Custom title to set
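Both starring and titling are single-field writes to ConversationAnnotations, so they reduce to thin wrappers. Sketch only; `setAnnotations` here is a hypothetical stand-in for the real annotation write:

```typescript
// Sketch: setAnnotations stands in for whatever call actually writes
// ConversationAnnotations; the field names come from the docs above.
type Annotations = { starred?: boolean; title?: string };
type SetAnnotations = (conversationId: string, fields: Annotations) => void;

function starConversation(
  set: SetAnnotations,
  id: string,
  starred: boolean,
): void {
  set(id, { starred }); // sets the starred field
}

function setConversationTitle(
  set: SetAnnotations,
  id: string,
  title: string,
): void {
  set(id, { title }); // displayed instead of the auto-generated LLM summary
}
```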
Native conversation annotations (verified from jetski_cortex.proto).
ConversationAnnotations protobuf fields:
Conversation ID
Partial annotation fields to set
If true, merge with existing annotations (default: true)
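The merge flag's semantics can be pinned down with a small pure function: with merge on (the default), partial fields overlay the existing annotations; with it off, they replace them wholesale. The field names are illustrative:

```typescript
// Sketch of the merge flag's semantics; field names are illustrative.
interface ConversationAnnotations {
  starred?: boolean;
  title?: string;
}

function applyAnnotations(
  existing: ConversationAnnotations,
  partial: Partial<ConversationAnnotations>,
  merge = true, // default: true, per the docs above
): ConversationAnnotations {
  // merge=true keeps untouched fields; merge=false drops them.
  return merge ? { ...existing, ...partial } : { ...partial };
}
```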
Direct bridge to the Language Server via ConnectRPC.
Discovers the LS port and CSRF token from the LS process CLI args, then makes authenticated HTTP(S) POST calls to the LS endpoints (HTTP by default, matching extension_server).
Example
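A hypothetical usage sketch; `LsBridge` and its members are stand-ins for the real bridge class, and the stubbed `setConnection` skips actual discovery:

```typescript
// Hypothetical end-to-end flow; class and method names are assumptions
// standing in for the real bridge documented above.
class LsBridge {
  private port: number | null = null;
  private csrfToken: string | null = null;

  // Stand-in for the manual-connection fallback; the real bridge would
  // first try auto-discovery from the LS process CLI args.
  setConnection(port: number, csrfToken: string): void {
    this.port = port;
    this.csrfToken = csrfToken;
  }

  // Whether the bridge is ready (port discovered).
  get ready(): boolean {
    return this.port !== null && this.csrfToken !== null;
  }
}

const bridge = new LsBridge();
bridge.setConnection(4242, "example-token"); // manual fallback path
console.log(bridge.ready); // true
```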