Add GBK charset decoding kernel for StringDecode #4431
thirtiseven wants to merge 4 commits into NVIDIA:main
Conversation
Add a GPU kernel that decodes GBK-encoded binary data to UTF-8 strings, matching Java's GBK CharsetDecoder behavior for use with Spark's StringDecode expression.

- Lookup table generated from Java's GBK CharsetDecoder (REPORT mode), with generator script included for reproducibility
- Two-pass kernel using cudf::strings::detail::make_strings_children
- Invalid byte sequences replaced with U+FFFD
- Second byte 0x7F excluded from pair consumption (matches Java)
- Second byte 0xFF consumed as pair but mapped to U+FFFD (matches Java)
- Java unit test with full 24066 byte-pair end-to-end verification

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
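The decode rules listed above (ASCII passthrough, pair consumption, the 0x7F and 0xFF special cases) can be sketched as a host-side Java simulation. This is an illustrative sketch, not the CUDA kernel: the class and method names are hypothetical, `lookup` stands in for the generated 24066-entry table by querying Java's GBK decoder for the isolated pair, the output is a Java String rather than UTF-8 bytes, and rare malformed-sequence edge cases may deviate from Java's streaming decoder.

```java
import java.nio.charset.Charset;

// Illustrative host-side sketch of the kernel's decode walk (not the CUDA
// code). lookup() stands in for the generated table by asking Java's GBK
// decoder for the isolated pair.
public class GbkWalkSketch {
  private static final Charset GBK = Charset.forName("GBK");

  static int lookup(int first, int second) {
    String s = new String(new byte[]{(byte) first, (byte) second}, GBK);
    return (s.length() == 1) ? s.charAt(0) : 0xFFFD;
  }

  static String decode(byte[] in) {
    StringBuilder out = new StringBuilder();
    int i = 0;
    while (i < in.length) {
      int b = in[i] & 0xFF;
      if (b <= 0x7F) {                 // ASCII: single byte
        out.append((char) b);
        i += 1;
      } else if (b >= 0x81 && b <= 0xFE && i + 1 < in.length) {
        int second = in[i + 1] & 0xFF;
        if (second == 0x7F || second < 0x40) {
          // 0x7F (and bytes below 0x40) are not consumed as part of a pair:
          // emit U+FFFD for the lead and re-examine the next byte alone.
          out.append('\uFFFD');
          i += 1;
        } else if (second == 0xFF) {
          // 0xFF is consumed as a pair but always maps to U+FFFD.
          out.append('\uFFFD');
          i += 2;
        } else {
          out.append((char) lookup(b, second));
          i += 2;
        }
      } else {
        // Invalid lead byte or truncated trailing pair.
        out.append('\uFFFD');
        i += 1;
      }
    }
    return out.toString();
  }

  public static void main(String[] args) {
    System.out.println(decode(new byte[]{0x41, (byte) 0xB0, (byte) 0xA1}));
  }
}
```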
Greptile Summary

This PR implements a two-pass CUDA kernel that decodes GBK-encoded binary data to UTF-8 strings.

Confidence Score: 5/5. PR is safe to merge; all prior review concerns are resolved and the only remaining finding is a P2 test-coverage gap. All P0/P1 concerns from prior rounds (byte 0x80 mapping, static singleton lifecycle, Java-side validation, REPORT vs REPLACE mode) have been addressed or confirmed as non-issues on the target JDK. The single remaining gap — no explicit test for {valid_lead, 0xFF} second byte — is P2 and does not block merge.

Important Files Changed: src/test/java/com/nvidia/spark/rapids/jni/CharsetDecodeTest.java — missing second=0xFF test case in testAllGbkDoubleBytePairs
Sequence Diagram

sequenceDiagram
participant Java as CharsetDecode.decode()
participant JNI as CharsetDecodeJni.cpp
participant CPP as charset_decode.cu
participant Once as get_gbk_table (once_flag)
participant GPU as gbk_decode_fn kernel
Java->>JNI: decodeNative(nativeColumnView, charset=GBK)
JNI->>CPP: decode_charset(input, GBK, stream, mr)
CPP->>Once: get_gbk_table(stream)
Once-->>CPP: device ptr to 24066-entry table (copied once)
CPP->>GPU: make_strings_children pass 1 (d_chars=null, compute sizes)
GPU->>GPU: per-row: walk bytes, lookup table, accumulate output_size
GPU-->>CPP: d_sizes[]
CPP->>GPU: make_strings_children pass 2 (d_chars=allocated, write bytes)
GPU->>GPU: per-row: walk bytes, lookup table, write UTF-8
GPU-->>CPP: output chars buffer
CPP->>CPP: make_strings_column(offsets, chars, null_count, copy_bitmask)
CPP-->>JNI: unique_ptr<cudf::column>
JNI-->>Java: ColumnVector (UTF-8 strings)
Reviews (3): Last reviewed commit: "address comments"
if (byte <= 0x7F) {
  // ASCII: single byte, maps directly to UTF-8
  if (out_ptr) { *out_ptr++ = static_cast<char>(byte); }
  output_size += 1;
  i += 1;
} else if (byte >= GBK_FIRST_BYTE_MIN && byte <= GBK_FIRST_BYTE_MAX && (i + 1) < len) {
Byte 0x80 not mapped to U+20AC (€) — mismatch with Java GBK
Java's "GBK" charset in both Oracle JDK and OpenJDK is implemented as CP936/Windows-936, which maps the single byte 0x80 to U+20AC (Euro sign, €). The current kernel treats 0x80 as an invalid lead byte (it falls outside [GBK_FIRST_BYTE_MIN, GBK_FIRST_BYTE_MAX] = [0x81, 0xFE]) and emits U+FFFD instead.
This causes a behavioral divergence from Spark's CPU-side StringDecode("GBK") on any JVM where GBK maps 0x80 → € (i.e., practically all modern JVMs). The testByte0x80 unit test acknowledges this JDK dependency — the test itself will fail on any JVM that maps 0x80 → € because the GPU always returns FFFD.
To match Java's GBK decoder, byte 0x80 should be decoded as U+20AC:
if (byte == 0x80) {
  // CP936/Windows-936 extension: 0x80 → U+20AC (EURO SIGN)
  int utf8_len = codepoint_to_utf8(0x20AC, out_ptr);
  if (out_ptr) { out_ptr += utf8_len; }
  output_size += utf8_len;
  i += 1;
} else if (byte <= 0x7F) {
  // ASCII ...
We verified this on the actual build JDK (OpenJDK 1.8.0_482): new String(new byte[]{(byte)0x80}, "GBK") returns U+FFFD, not U+20AC. We also tried adding a 0x80 → U+20AC special case in the kernel — the integration test (GPU vs CPU comparison with random binary data) failed immediately because the CPU side produces FFFD.
The testByte0x80 test uses decodeGbkJava() which dynamically queries the JDK behavior as ground truth, so it adapts to whichever JDK is in use. The claim that "practically all modern JVMs" map 0x80 to € does not hold for this JDK.
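The JDK dependence discussed in this thread can be checked directly on whatever JVM a build uses. A minimal probe (hypothetical class name) that prints the code point the local GBK decoder produces for a standalone 0x80:

```java
import java.nio.charset.Charset;

// Probe the local JVM's GBK decoder for standalone byte 0x80. The result is
// JDK-dependent: U+20AC on CP936-style decoders, U+FFFD on others (e.g. the
// OpenJDK 1.8.0 build referenced in this thread).
public class GbkByte0x80Probe {
  public static void main(String[] args) {
    String s = new String(new byte[]{(byte) 0x80}, Charset.forName("GBK"));
    System.out.printf("0x80 -> U+%04X%n", (int) s.charAt(0));
  }
}
```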
static std::once_flag gbk_table_init_flag;
static std::unique_ptr<rmm::device_uvector<uint16_t>> gbk_table_device;

uint16_t const* get_gbk_table(rmm::cuda_stream_view stream)
{
  // synchronize() ensures the memcpy completes before any kernel on any stream
  // can use the table pointer returned below.
  std::call_once(gbk_table_init_flag, [stream]() {
    gbk_table_device = std::make_unique<rmm::device_uvector<uint16_t>>(GBK_TABLE_SIZE, stream);
    cudf::detail::cuda_memcpy_async(
      cudf::device_span<uint16_t>(gbk_table_device->data(), GBK_TABLE_SIZE),
      cudf::host_span<uint16_t const>(gbk_to_unicode_host, GBK_TABLE_SIZE),
      stream);
    stream.synchronize();
  });
  return gbk_table_device->data();
}
Static device table singleton — dangling pointer risk on CUDA/RMM re-init

gbk_table_device and gbk_table_init_flag are file-scope statics, and the std::once_flag is never reset. If CUDA or RMM is torn down and re-initialized within the same process (a common pattern in unit-test suites and JVM-embedded workloads), the sequence is:

- First run: the table is allocated with the stream from the first call.
- The RMM allocator is reset / CUDA context is destroyed (test teardown). gbk_table_device's internal device pointer is now dangling.
- Next call to decode_gbk: call_once sees the flag already set, skips initialization, and returns the dangling pointer.
- The GPU kernel reads from freed device memory → silent corruption or crash.

Additionally, at process exit, the static destructor of gbk_table_device issues a cudaFree after CUDA is shut down, which can surface as a CUDA error in test logs.
Consider managing this table through the existing cudf resource lifecycle, or at minimum validating the device pointer on each call:

// Option: use a function-local static that ties lifetime to RMM resource
// Or check before returning:
CUDF_EXPECTS(gbk_table_device != nullptr, "GBK table not initialized");
Acknowledged. This is a known pattern in the project — other static singletons like the bloom filter hash seed table (bloom_filter.cu) follow the same approach. In spark-rapids-jni, the CUDA context and RMM allocator are not torn down and re-initialized within an executor process lifetime.
If this becomes a project-wide concern, it should be addressed holistically rather than in this PR alone.
public static ColumnVector decode(ColumnView cv, int charset) {
  return new ColumnVector(decodeNative(cv.getNativeView(), charset));
}
Raw int charset parameter — no validation before JNI call

decode(ColumnView cv, int charset) accepts any arbitrary integer. In CharsetDecodeJni.cpp, this is directly cast to charset_type via static_cast<spark_rapids_jni::charset_type>(charset). An invalid value (e.g., 99) will reach the switch statement and hit default: CUDF_FAIL(...), but only after an unchecked cast from an integer to an enum.
Consider validating the input before passing to native code, or using an enum on the Java side:

public static ColumnVector decode(ColumnView cv, int charset) {
  if (charset != GBK) {
    throw new IllegalArgumentException("Unsupported charset: " + charset);
  }
  return new ColumnVector(decodeNative(cv.getNativeView(), charset));
}

Alternatively, exposing an enum-typed overload would make misuse a compile-time error.
Fair point. The C++ side already catches invalid values via CUDF_FAIL in the default switch branch, so there is no undefined behavior. The only caller is the GPU plugin which always passes CharsetDecode.GBK. Will add a Java-side validation as defense in depth.
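One possible shape for the enum-typed overload suggested above. This is a hypothetical sketch: the enum name and the numeric id are illustrative placeholders, not the actual CharsetDecode constants.

```java
// Hypothetical enum-typed charset selector; the numeric id is a placeholder
// that would mirror the existing CharsetDecode.GBK integer constant.
public enum DecodeCharset {
  GBK(0);  // id value is illustrative, not the real constant

  private final int nativeId;

  DecodeCharset(int nativeId) { this.nativeId = nativeId; }

  public int nativeId() { return nativeId; }
}
```

Usage would be decode(cv, DecodeCharset.GBK.nativeId()): an unsupported charset cannot be expressed at the call site, so misuse becomes a compile-time error rather than a runtime CUDF_FAIL.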
.onMalformedInput(CodingErrorAction.REPORT)
.onUnmappableCharacter(CodingErrorAction.REPORT);

System.out.println("// GBK to Unicode lookup table.");
System.out.println("// Generated by GenerateGbkTable.java using Java's GBK CharsetDecoder.");
System.out.println("// Index: (first_byte - 0x81) * 191 + (second_byte - 0x40)");
System.out.println("// Entries: 126 * 191 = 24066");

for (int first = 0x81; first <= 0xFE; first++) {
  StringBuilder sb = new StringBuilder();
  for (int second = 0x40; second <= 0xFE; second++) {
    byte[] bytes = {(byte) first, (byte) second};
    int cp;
    try {
      decoder.reset();
      CharBuffer cb = decoder.decode(ByteBuffer.wrap(bytes));
      cp = (cb.length() == 1) ? cb.charAt(0) : 0xFFFD;
    } catch (Exception e) {
      cp = 0xFFFD;
    }
    if (second > 0x40) {
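The indexing scheme stated in the generated header can be sanity-checked off-GPU. A minimal check of the formula and bounds quoted in the generator's comments (class name is illustrative):

```java
// Sanity check of the table indexing documented by the generator:
// index = (first_byte - 0x81) * 191 + (second_byte - 0x40),
// lead bytes 0x81..0xFE (126 values), second bytes 0x40..0xFE (191 values).
public class GbkIndexCheck {
  static int index(int first, int second) {
    return (first - 0x81) * 191 + (second - 0x40);
  }

  public static void main(String[] args) {
    System.out.println(index(0x81, 0x40));  // first entry: 0
    System.out.println(index(0xFE, 0xFE));  // last entry: 24065
    System.out.println(126 * 191);          // table size: 24066
  }
}
```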
Generator uses REPORT mode; test ground truth uses REPLACE mode

The generator uses CodingErrorAction.REPORT (throws MalformedInputException/UnmappableCharacterException, caught and mapped to 0xFFFD), while CharsetDecodeTest.decodeGbkJava uses CodingErrorAction.REPLACE. For standard GBK pairs these produce identical tables, but the generator does not handle standalone 0x80 at all (it only iterates first from 0x81 to 0xFE). This is fine for the double-byte table itself, but documents a discrepancy with the test's ground-truth function worth noting:

- Generator: 0x80 not iterated → not in table.
- Java ground truth (decodeGbkJava): 0x80 decoded with REPLACE mode → returns € on most JVMs.

The generator comment should note that 0x80 → U+20AC needs separate handling in the kernel, since it is outside the generated table range.
Good observation on the mode difference. We investigated this thoroughly — the behavioral difference between REPORT and REPLACE modes is limited to:

- Second byte 0x7F: REPLACE mode splits (lead, 0x7F) into two characters (FFFD + DEL), while REPORT mode on the isolated 2-byte buffer throws. The kernel excludes 0x7F from valid second bytes (second != 0x7F) to match REPLACE behavior.
- Second byte 0xFF: REPLACE mode consumes (lead, 0xFF) as a pair (advance 2, emit FFFD), while the table range only covers 0x40-0xFE. The kernel handles this with a special case (second > GBK_SECOND_BYTE_MAX → UNICODE_REPLACEMENT, advance 2).

We validated this by running a Java simulation of the kernel logic against Java REPLACE-mode streaming decode on 100k random byte sequences with zero mismatches.

Regarding 0x80: as noted in the reply above, this JDK (OpenJDK 1.8.0) returns FFFD for standalone 0x80, not €. The test dynamically adapts to the JDK behavior.
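The two REPLACE-mode edge cases described in this reply can be reproduced directly on the JVM, since new String(bytes, charset) decodes with REPLACE. A small check (class name is illustrative):

```java
import java.nio.charset.Charset;

// Reproduce the two REPLACE-mode edge cases discussed above.
public class GbkReplaceEdgeCases {
  public static void main(String[] args) {
    Charset gbk = Charset.forName("GBK");
    // (lead, 0x7F): 0x7F is not consumed as a trailing byte, so the lead is
    // replaced and 0x7F decodes on its own: two chars (U+FFFD, DEL).
    String a = new String(new byte[]{(byte) 0x81, 0x7F}, gbk);
    // (lead, 0xFF): consumed as one malformed pair: one char (U+FFFD).
    String b = new String(new byte[]{(byte) 0x81, (byte) 0xFF}, gbk);
    System.out.println(a.length() + " " + b.length());  // expected: 2 1
  }
}
```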
Signed-off-by: Haoyang Li <haoyangl@nvidia.com>