OpenAI Custom Model node fails with Gemini/OpenAI-compatible providers due to thought_signature tool-call incompatibility #6275

@limbergarcia

Description

Describe the bug

The OpenAI Custom Model node works correctly with most providers that expose an OpenAI-compatible Chat Completions API.

However, it fails when used with Gemini models exposed through OpenAI-compatible gateways (for example, LLM routers that proxy Vertex AI / Gemini through an OpenAI-format API).

The failure happens during tool calling / function calling flows and is related to the thought_signature field returned or expected in tool call payloads.

To Reproduce

  1. Create a chatflow using the OpenAI Custom Model node.
  2. Configure the node with an OpenAI-compatible endpoint that routes requests to a Gemini model (for example, Vertex AI through an LLM router / proxy).
  3. Enable tool calling and connect any tool (Retriever Tool, Calculator, etc.).
  4. Ask a prompt that requires tool usage.
  5. Observe the execution after the tool call.

Optional:
  6. Enable streaming and repeat the same test.
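The roundtrip in the steps above can be sketched offline as the message payloads involved. Everything here is illustrative, not taken from the report: the tool, the call id, and especially the extra_content field name and shape are assumptions about what a Gemini-backed gateway may attach to a tool call.

```python
# Hypothetical tool definition for the reproduction (placeholder schema).
tools = [{
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "Evaluate a math expression",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

# Step 4-5: the model answers with a tool call. A Gemini-backed gateway may
# attach provider-specific metadata to it (field name/shape is an assumption).
assistant_turn = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "calculator", "arguments": '{"expression": "2+2"}'},
        # Hypothetical provider-specific field; exact key varies by gateway.
        "extra_content": {"google": {"thought_signature": "opaque-token"}},
    }],
}

# The follow-up request must echo the assistant turn back verbatim, then
# append the tool result. Dropping the extra field at this point is what
# appears to make the provider reject the request.
followup_messages = [
    {"role": "user", "content": "What is 2+2?"},
    assistant_turn,
    {"role": "tool", "tool_call_id": "call_1", "content": "4"},
]
```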

Expected behavior

The OpenAI Custom Model node should work with any provider exposing a valid OpenAI-compatible Chat Completions API, including Gemini-based routed endpoints.

Expected results:

Tool calling should execute successfully.
After tool execution, the assistant should continue normally.
Streaming should render tokens progressively.
No blank responses or stuck generations.
Provider-specific metadata should not break execution.

Screenshots


Flow

No response

Use Method

Docker

Flowise Version

3.1.2

Operating System

Windows

Browser

Chrome

Additional context

Tested with Gemini models exposed through OpenAI-compatible routers/proxies that internally use Vertex AI.

The same flow works correctly with standard OpenAI models, so the issue seems specific to Gemini compatibility.

Tool calling is triggered normally, but after the first tool execution the provider rejects the follow-up request because required metadata (thought_signature) is not preserved.

This suggests Flowise currently handles tool roundtrips using OpenAI-native assumptions, which may not be sufficient for other OpenAI-compatible providers.

Potentially related areas:

OpenAI Custom Model node
Tool call serialization
Assistant/tool follow-up message formatting
Streaming + tool execution compatibility
Preservation of provider-specific fields in tool calls
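One way to see the suspected failure mode in the last area above: if the client rebuilds the assistant tool-call message from a fixed OpenAI-native schema instead of echoing it back, any provider-specific field is silently dropped. A minimal sketch, with illustrative field names rather than Flowise's actual internals:

```python
import copy

def rebuild_openai_native(tool_call: dict) -> dict:
    """Lossy rebuild that keeps only the OpenAI-native fields."""
    return {
        "id": tool_call["id"],
        "type": tool_call["type"],
        "function": dict(tool_call["function"]),
    }

def rebuild_preserving(tool_call: dict) -> dict:
    """Pass-through rebuild that keeps unknown provider fields intact."""
    return copy.deepcopy(tool_call)

# Hypothetical tool call as a Gemini gateway might return it.
gateway_tool_call = {
    "id": "call_1",
    "type": "function",
    "function": {"name": "search", "arguments": "{}"},
    "thought_signature": "opaque-token",  # assumed provider-specific field
}

lossy = rebuild_openai_native(gateway_tool_call)   # metadata lost
safe = rebuild_preserving(gateway_tool_call)       # metadata preserved
```

Preserving unrecognized fields on the roundtrip, rather than reconstructing messages from a fixed schema, would be one provider-agnostic direction for a fix.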

Metadata

Assignees

No one assigned

    Labels

    bug (Something isn't working)

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests