Read: fetch file_schema directly from pyarrow_to_schema #597
Merged
HonahX merged 8 commits into apache:main on Apr 13, 2024
Conversation
kevinjqliu (Contributor) approved these changes on Apr 11, 2024
LGTM!
I want to summarize my understanding, based on the comment from #584.
When reading the parquet files, we use the projected version of the parquet file's schema; the Arrow table that is created is then cast to the Iceberg schema. This mapping is based on field_id:
iceberg-python/pyiceberg/io/pyarrow.py
Line 1026 in 5039b5d
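As a rough sketch of that id-based mapping (a hypothetical helper, not pyiceberg's actual code): PyArrow surfaces the parquet field id in each field's metadata under the b"PARQUET:field_id" key, and id-based projection resolves columns through those ids rather than by name.

```python
import pyarrow as pa

PARQUET_FIELD_ID_KEY = b"PARQUET:field_id"

def index_by_field_id(schema: pa.Schema) -> dict[int, int]:
    """Hypothetical helper: map Iceberg field id -> column position.

    PyArrow exposes the parquet field id in each field's metadata, which is
    what lets the reader project and cast by id instead of by column name.
    """
    ids: dict[int, int] = {}
    for pos, field in enumerate(schema):
        metadata = field.metadata or {}
        if PARQUET_FIELD_ID_KEY in metadata:
            ids[int(metadata[PARQUET_FIELD_ID_KEY])] = pos
    return ids
```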
Fokko approved these changes on Apr 12, 2024
```python
physical_schema = fragment.physical_schema
schema_raw = None
if metadata := physical_schema.metadata:
    schema_raw = metadata.get(ICEBERG_SCHEMA)
```
Contributor
My initial intent was that it would probably be faster to deserialize the schema rather than run the visitor, but this shows it is not worth the additional complexity :)
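For context, a hedged reconstruction of the pre-#597 fallback that the snippet above fed into (resolve_file_schema is an illustrative name, and the deserialize branch assumes pyiceberg's pydantic-based Schema; this is not the exact code):

```python
from typing import Optional

import pyarrow as pa
from pyiceberg.io.pyarrow import pyarrow_to_schema
from pyiceberg.schema import Schema

def resolve_file_schema(physical_schema: pa.Schema, schema_raw: Optional[str]) -> Schema:
    """Hedged reconstruction of the pre-#597 logic discussed here: prefer the
    JSON schema embedded in the parquet metadata, otherwise run the visitor."""
    if schema_raw is not None:
        # Deserialize the Iceberg schema stored by the writer
        # (the "faster" path this comment refers to).
        return Schema.model_validate_json(schema_raw)
    # Fall back to visiting the file's physical schema.
    return pyarrow_to_schema(physical_schema)
```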
Contributor
Author
Thanks @kevinjqliu and @Fokko for reviewing! Thanks @kevinjqliu for the integration test!
HonahX added a commit to HonahX/iceberg-python that referenced this pull request on Apr 13, 2024
Fokko pushed a commit that referenced this pull request on Apr 14, 2024
#584 (comment)
I think we do correctly project by IDs. The real problem is the way that we sanitize the column names.
In #83, we added sanitization of the file_schema in _task_to_table, under the assumption that the column names in the parquet file follow the Avro naming spec. However, I think the "sanitization" should be more general here: it should just ensure that the final file_project_schema contains the same column names as the parquet file's schema.
The names in file_schema can differ from the actual column names in the parquet file because we first try to load the file schema from the JSON string stored in the parquet file metadata: link. Parquet files written by Iceberg Java contain this metadata JSON string. The JSON string represents the Iceberg table schema at the time the file was written, and therefore contains un-sanitized column names.
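To make the mismatch concrete, here is a toy sanitizer in the spirit of the Avro naming rule (illustrative only, not #83's actual implementation, which escapes offending characters differently):

```python
import re

def sanitize_avro_name(name: str) -> str:
    """Toy Avro-style name sanitization, for illustration only:
    Avro names must match [A-Za-z_][A-Za-z0-9_]*, so offending
    characters are replaced (here simply with '_')."""
    sanitized = re.sub(r"[^A-Za-z0-9_]", "_", name)
    if not re.match(r"[A-Za-z_]", sanitized):
        sanitized = "_" + sanitized
    return sanitized

# A parquet file written by Iceberg Java may legally contain this column, but
# sanitizing the schema loaded from the embedded JSON renames it, so the
# projected schema no longer matches the file's actual columns:
assert sanitize_avro_name("my-col.1") == "my_col_1"
```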
Since we always need to run a visitor to sanitize/ensure that the column names match, how about we just get the file_schema directly from the pyarrow physical schema? This way, we can ensure that the column names match, and thus do not need to sanitize them later.
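In code, the proposal amounts to roughly the following (pyarrow_to_schema is pyiceberg's real visitor; the wrapper name is illustrative):

```python
import pyarrow.dataset as ds
from pyiceberg.io.pyarrow import pyarrow_to_schema

def file_schema_from_fragment(fragment: ds.Fragment):
    """Sketch of the approach this PR proposes: derive file_schema from the
    file's own physical schema via the visitor, so column names match the
    parquet file by construction and need no later sanitization."""
    return pyarrow_to_schema(fragment.physical_schema)
```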
I have verified that this change fixes both the sanitization issue in #83 and the issue here. Given that we want to align the writing behavior with the Java implementation, we should also proceed with #590.
Borrowed the integration test from #590