13 changes: 8 additions & 5 deletions README.md

The output of the model can be influenced by several optional settings
available in generate_content's config parameter. For example, increasing
`max_output_tokens` is essential for longer model responses. To make a model
more deterministic, lowering the `temperature` parameter reduces randomness,
with values near 0 minimizing variability. Capabilities and parameter defaults
for each model are shown in the
[Vertex AI docs](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-flash)
and [Gemini API docs](https://ai.google.dev/gemini-api/docs/models)
respectively. Note that all API methods accept both Pydantic types and
dictionaries; the types are available from `google.genai.types`. In this example,
we use `GenerateContentConfig` to specify the desired model behavior.

```python
from google import genai
from google.genai import types

# Illustrative sketch: the model name, prompt, and parameter values below are
# placeholders, not a prescribed configuration.
client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Write a short story about a robot.",
    config=types.GenerateContentConfig(
        max_output_tokens=1024,  # raise the cap to allow a longer response
        temperature=0.1,         # near 0 for more deterministic output
    ),
)
print(response.text)
```