Describe the bug
The OpenAI Custom Model node works correctly with most providers that expose an OpenAI-compatible Chat Completions API.
However, it fails when used with Gemini models exposed through OpenAI-compatible gateways (for example LLM routers that proxy Vertex AI / Gemini through an OpenAI format API).
The failure occurs during tool calling (function calling) flows and is related to the thought_signature field that the provider returns, and expects back, in tool call payloads.
To Reproduce
- Create a chatflow using the OpenAI Custom Model node.
- Configure the node with an OpenAI-compatible endpoint that routes requests to a Gemini model (for example, Vertex AI through an LLM router/proxy).
- Enable tool calling and connect any tool (Retriever Tool, Calculator, etc.).
- Ask a prompt that requires tool usage.
- Observe the execution after the tool call.
Optional:
- Enable streaming and repeat the same test.
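To make the failure mode concrete, here is a minimal sketch of what goes wrong on the follow-up turn. The field names besides thought_signature follow the standard OpenAI Chat Completions tool-call schema; the id, function name, and signature value are hypothetical. A rebuild that picks only the OpenAI-native fields silently drops the Gemini metadata:

```typescript
// Hypothetical tool-call shape as a Gemini-backed OpenAI-compatible
// gateway might return it. Standard OpenAI fields plus provider metadata.
interface ToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
  thought_signature?: string; // Gemini-specific; must be echoed back
}

// Example assistant tool call from the provider (illustrative values):
const assistantToolCall: ToolCall = {
  id: "call_1",
  type: "function",
  function: { name: "calculator", arguments: '{"expression":"2+2"}' },
  thought_signature: "sig-abc123",
};

// A naive OpenAI-native rebuild of the assistant message for the
// follow-up request keeps only the standard fields:
function naiveRebuild(tc: ToolCall) {
  return { id: tc.id, type: tc.type, function: tc.function };
}

const followUp = naiveRebuild(assistantToolCall);
// thought_signature is gone from the follow-up payload, so the
// provider rejects the request after the first tool execution.
console.log("thought_signature" in followUp); // false
```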
Expected behavior
The OpenAI Custom Model node should work with any provider exposing a valid OpenAI-compatible Chat Completions API, including Gemini-based routed endpoints.
Expected results:
Tool calling should execute successfully.
After tool execution, the assistant should continue normally.
Streaming should render tokens progressively.
No blank responses or stuck generations.
Provider-specific metadata should not break execution.
Screenshots
Flow
No response
Use Method
Docker
Flowise Version
3.1.2
Operating System
Windows
Browser
Chrome
Additional context
Tested with Gemini models exposed through OpenAI-compatible routers/proxies that internally use Vertex AI.
The same flow works correctly with standard OpenAI models, so the issue seems specific to Gemini compatibility.
Tool calling is triggered normally, but after the first tool execution the provider rejects the follow-up request because required metadata (thought_signature) is not preserved.
This suggests Flowise currently handles tool roundtrips using OpenAI-native assumptions, which may not be sufficient for other OpenAI-compatible providers.
Potentially related areas:
OpenAI Custom Model node
Tool call serialization
Assistant/tool follow-up message formatting
Streaming + tool execution compatibility
Preservation of provider-specific fields in tool calls