Conversation
Summary of Changes (Gemini Code Assist bot): This pull request improves the scalability of SnowflakeIO by introducing a BoundedSource, which allows for more efficient parallel processing of staged CSV files. Additionally, it addresses a race condition in the Flink runner's source split enumeration, ensuring that subtasks correctly wait for splits to become available.
Assigning reviewers: R: @ahmedabu98 for label java.
The PR bot will only process comments in the main thread (not review comments).
Hello @Abacn, it seems you have implemented a different, more thread-safe way of doing the fix. I suppose I should rebase and get rid of mine, keeping only the Snowflake part?
Sorry, I missed that it had already been fixed here. Yeah, please go ahead.
…ed files generated by Snowflake COPY.
@Abacn rebased and tested again to confirm it fixes the original Snowflake issue.
        getSnowflakeServices(),
        getQuotationMark())))
    .apply(Reshuffle.viaRandomKey())
    .apply(FileIO.matchAll())
Dataflow can do it because FileIO.matchAll() has a ReShuffle present by default:
beam/sdks/java/core/src/main/java/org/apache/beam/sdk/io/FileIO.java
Lines 734 to 735 in 6dd599c
This introduces a fusion break and the downstream can be parallelized.
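To illustrate the idea behind that fusion break: Reshuffle.viaRandomKey() conceptually pairs each element with a random key and redistributes elements across workers by that key, which forces the runner to materialize and re-bundle the data. The following is a hedged, self-contained sketch of that redistribution using only the standard library; it is not Beam internals, and the class and method names are hypothetical.

```java
import java.util.*;

// Hypothetical illustration (not Beam code): Reshuffle.viaRandomKey()
// conceptually assigns each element a random key, then redistributes
// elements across workers by hashing that key. The re-bundling after
// the shuffle is what breaks fusion with the producing step.
public class RandomKeyRedistribution {

    // Assign each element a random key and bucket it by key hash,
    // simulating a redistribution across `parallelism` workers.
    public static Map<Integer, List<String>> redistribute(
            List<String> elements, int parallelism, long seed) {
        Random random = new Random(seed);
        Map<Integer, List<String>> workers = new HashMap<>();
        for (String element : elements) {
            int key = random.nextInt();
            int worker = Math.floorMod(Integer.hashCode(key), parallelism);
            workers.computeIfAbsent(worker, w -> new ArrayList<>()).add(element);
        }
        return workers;
    }

    public static void main(String[] args) {
        // Ten staged CSV files, all produced on a single worker.
        List<String> files = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            files.add("stage/part_" + i + ".csv.gz");
        }
        Map<Integer, List<String>> workers = redistribute(files, 4, 42L);
        System.out.println("Files spread over " + workers.size() + " of 4 workers");
    }
}
```

The comment above suggests the bug is that Flink does not honor the re-bundling step, so the redistribution effectively happens within the single producing worker.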
It sounds like a Flink runner bug that significantly impacted performance. Preferably, a fix should be done in the Flink runner to bring back what Reshuffle is intended to do (a fusion break). Have you tried not setting useDataStreamForBatch (only available for Flink 1.x)?
This PR essentially rewrites the SnowflakeIO bounded read and would need a closer eye (I could help with generic Java, but I have less experience with the Snowflake connector).
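For context on what a bounded-read rewrite involves: a Beam BoundedSource is typically split up front into sub-sources that the runner can hand to different workers. The following is a minimal, hypothetical sketch of splitting a list of staged files into such groups; it is not the PR's actual code, and the class and method names are invented for illustration.

```java
import java.util.*;

// Hypothetical sketch (not the PR's actual code): a BoundedSource
// implements split(...) so the runner can assign each sub-source to a
// different worker. Here, staged file paths are dealt round-robin into
// `desiredSplits` groups, one group per sub-source.
public class FileSplitter {

    public static List<List<String>> split(List<String> stagedFiles, int desiredSplits) {
        List<List<String>> splits = new ArrayList<>();
        for (int i = 0; i < desiredSplits; i++) {
            splits.add(new ArrayList<>());
        }
        for (int i = 0; i < stagedFiles.size(); i++) {
            splits.get(i % desiredSplits).add(stagedFiles.get(i));
        }
        // Drop empty groups so the runner never schedules a no-op reader.
        splits.removeIf(List::isEmpty);
        return splits;
    }

    public static void main(String[] args) {
        List<String> files = Arrays.asList(
                "part_0.csv.gz", "part_1.csv.gz", "part_2.csv.gz",
                "part_3.csv.gz", "part_4.csv.gz");
        System.out.println(split(files, 3));
    }
}
```

With a source like this, each group of gzipped CSV files can be read by a separate worker regardless of how few workers produced the file list.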
We had issues without useDataStreamForBatch. I'll try again on the updated version without it, just to be sure. Also, we have not tried Flink 2 yet because its support is quite recent.
Before the change, it seems Flink does apply the reshuffle, but since it had reduced the number of workers to 1, it was reshuffling within a single worker, which changes nothing because the input (the list of files) is tiny.
The SnowflakeIO read has several steps, among them a COPY that outputs partitioned gzipped CSV files in a directory. While steps 1 and 2 are done by one worker, steps 3 and 4 can be parallelized.

It appears that Google Dataflow is able to do that (using work stealing?), but Apache Flink (with --useDataStreamForBatch=true) propagates the parallelism of steps 1 and 2 to steps 3 and 4, leading to very long processing times when it could be fully scalable.

This change creates a SnowflakeBoundedSource instead of a simple DoFn to execute the COPY and then read the splits. When doing that, a bug appears: a race between the appearance of those splits and their reading. It is solved by a change in LazyFlinkSourceSplitEnumerator to make subtasks wait for the splits to be ready. I tested that it still works on Google Dataflow.

GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.
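The race described in the PR body (subtasks requesting splits before the COPY has produced any) can be illustrated with a minimal standard-library sketch. This is hypothetical code, not the actual LazyFlinkSourceSplitEnumerator; the idea is only that an early request is parked rather than answered with "no more splits".

```java
import java.util.*;

// Hypothetical sketch of the race fix described above: if a subtask asks
// for a split before any splits exist, the enumerator parks the request
// instead of treating the empty split list as "done"; when splits arrive,
// parked requests are served first.
public class LazySplitEnumeratorSketch {

    private final Deque<String> splits = new ArrayDeque<>();
    private final Deque<Integer> waitingSubtasks = new ArrayDeque<>();
    private final Map<Integer, String> assignments = new HashMap<>();

    // Called when a subtask requests work. An empty split list no longer
    // ends the subtask; it waits for splits to become available.
    public synchronized void handleSplitRequest(int subtaskId) {
        if (splits.isEmpty()) {
            waitingSubtasks.add(subtaskId); // park until splits are ready
        } else {
            assignments.put(subtaskId, splits.poll());
        }
    }

    // Called once the COPY has finished and the staged files are known.
    public synchronized void addSplits(List<String> newSplits) {
        splits.addAll(newSplits);
        while (!waitingSubtasks.isEmpty() && !splits.isEmpty()) {
            assignments.put(waitingSubtasks.poll(), splits.poll());
        }
    }

    public synchronized String assignmentFor(int subtaskId) {
        return assignments.get(subtaskId);
    }
}
```

Usage: a subtask that races ahead of split discovery simply ends up parked, and is handed the first split once addSplits runs.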