
174 points | andy99
g-mork (No.43603642)
When did vulnerability reports get so vague? Looks like a classic serialization bug

https://github.com/apache/parquet-java/compare/apache-parque...

amluto (No.43603809)
Better link: https://github.com/apache/parquet-java/pull/3169

If by “classic” you mean “using a language-dependent deserialization mechanism that is wildly unsafe”, I suppose. The surprising part is that Parquet is a fairly modern format with a real schema that is nominally language-independent. How on Earth did Java class names end up in the file format? Why is the parser willing to parse them at all? At most (at least by default), the parser should treat them as predefined strings that have semantics completely independent of any actual Java class.
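
To make the objection concrete: a minimal sketch of the general anti-pattern, with entirely hypothetical names (this is not parquet-java’s actual code). The unsafe path loads whatever class the file names; the safe path treats the value as an opaque, predefined string with fixed semantics.

    // Hypothetical illustration of the anti-pattern, not parquet-java's code.
    import java.util.Map;

    public class UnsafeMetadataBinding {

        // Anti-pattern: trust a class name found in file metadata and load it.
        // Any class on the classpath with side effects in its constructor or
        // static initializer becomes reachable from a crafted file.
        static Object bindUnsafely(Map<String, String> fileMetadata) throws Exception {
            String className = fileMetadata.get("logical-type-class"); // attacker-controlled
            Class<?> clazz = Class.forName(className);                 // arbitrary class load
            return clazz.getDeclaredConstructor().newInstance();       // arbitrary instantiation
        }

        // What "predefined strings with independent semantics" would look like:
        // the value selects among meanings the reader already knows about, and
        // nothing is ever resolved to a Java class.
        static String bindSafely(Map<String, String> fileMetadata) {
            String logicalType = fileMetadata.get("logical-type-class");
            if (logicalType == null) return "UNKNOWN";
            switch (logicalType) {
                case "decimal":   return "DECIMAL";
                case "timestamp": return "TIMESTAMP_MILLIS";
                default:          return "UNKNOWN"; // ignore; never load anything
            }
        }
    }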

bri3d (No.43603943)
This seems to come from parquet-avro, which apparently embeds Avro in Parquet files and, in the course of doing so, does silly Java reflection gymnastics. I don’t think “normal” Parquet is affected.
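
One way to see how much Avro is involved for a given file is to dump the footer’s key-value metadata, where parquet-avro stores the writer’s Avro schema as a JSON string. A small sketch using parquet-hadoop; the "parquet.avro.schema" / "avro.schema" key names are the ones parquet-avro has historically used, but verify against your version.

    // Dump a Parquet file's footer key-value metadata (parquet-hadoop APIs).
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.parquet.hadoop.ParquetFileReader;
    import org.apache.parquet.hadoop.util.HadoopInputFile;

    public class DumpFooterMetadata {
        public static void main(String[] args) throws Exception {
            try (ParquetFileReader reader = ParquetFileReader.open(
                    HadoopInputFile.fromPath(new Path(args[0]), new Configuration()))) {
                Map<String, String> kv =
                    reader.getFooter().getFileMetaData().getKeyValueMetaData();
                kv.forEach((k, v) -> System.out.println(k + " = " + v));
                // If "parquet.avro.schema" (or the older "avro.schema") is present,
                // that JSON string is what the Avro-flavored read path interprets.
            }
        }
    }
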
amluto (No.43604120)
The documentation for all of this is atrocious.

But if avro-in-parquet is a weird optional feature, it should be off by default! Parquet’s metadata is primarily in Thrift, not Avro, and it seems to me that no Avro should be involved in decoding Parquet files unless explicitly requested.
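
For comparison, this is roughly what “explicitly requesting” the Avro path looks like: the caller opts into parquet-avro and gets Avro GenericRecords back as the in-memory representation. A minimal sketch assuming a recent parquet-java; check the builder API against the version in use.

    // Read a Parquet file through the parquet-avro binding (explicit opt-in).
    import org.apache.avro.generic.GenericRecord;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.parquet.avro.AvroParquetReader;
    import org.apache.parquet.hadoop.ParquetReader;
    import org.apache.parquet.hadoop.util.HadoopInputFile;

    public class ReadViaAvroBinding {
        public static void main(String[] args) throws Exception {
            try (ParquetReader<GenericRecord> reader =
                    AvroParquetReader.<GenericRecord>builder(
                            HadoopInputFile.fromPath(new Path(args[0]), new Configuration()))
                        .build()) {
                for (GenericRecord record = reader.read(); record != null; record = reader.read()) {
                    System.out.println(record); // Avro's GenericRecord is the in-memory shape here
                }
            }
        }
    }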

bri3d (No.43605060)
To the sibling comment’s point, I suppose it’s not weird in the Java ecosystem. The parquet-java project has a design where it deserializes Parquet fields into Java representations grabbed from _other_ projects, rather than either having some kind of canonical self-representation in memory or acting as just an abstract codec. So one of the most common things to do is apparently to use the “Avro”-flavored serdes to get generic records in memory (note that the actual Avro serialization format is not involved in doing that; parquet-java just uses the classes from Avro as the in-memory representations and deserializes Parquet into them).

The whole approach seems a bit goofy; I’d expect the library to work as some kind of abstracted codec interface (requiring the in-memory representations to host Parquet, rather than the other way around - like how pandas hosts fastparquet in Python land) or provide a canonical object representation. Instead, it’s this in-between where it has a grab bag of converters that transform Parquet to and from random object types pulled from elsewhere in the Java ecosystem.
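
A purely hypothetical sketch of the “abstract codec” shape described above (none of these types exist in parquet-java): the caller owns the in-memory representation and supplies the mapping, instead of the library reaching for classes from other projects.

    // Hypothetical codec-style API; illustrative only.
    import java.util.List;

    interface ParquetCodec {
        // Decode one row group into caller-defined rows via a caller-supplied binder.
        <T> List<T> readRowGroup(int index, RowBinder<T> binder);
    }

    interface RowBinder<T> {
        // Invoked once per row with plain, format-level values (no class loading,
        // no reflection); the caller decides what object to build.
        T bind(ColumnValues values);
    }

    interface ColumnValues {
        long getLong(String column);
        String getString(String column);
        // ...other primitive accessors keyed by schema, not by Java class names
    }
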
amluto (No.43605376)
I’d still like to see a clear explanation of where one can stick a Java class name in a Parquet file such that it ends up interpreted by the Avro codec. And I’m curious why it was fixed by making a list of allowed class names instead of disabling the entire mechanism.
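
For what it’s worth, an allow-list mitigation of the kind described generally looks something like the sketch below. This is a generic illustration, not the actual parquet-java patch: class names coming out of file metadata are checked against trusted package prefixes before any reflective lookup happens.

    // Generic allow-list check before reflective class resolution (illustrative).
    import java.util.List;

    public class TrustedClassResolver {
        private final List<String> trustedPrefixes;

        public TrustedClassResolver(List<String> trustedPrefixes) {
            this.trustedPrefixes = trustedPrefixes;
        }

        public Class<?> resolve(String classNameFromFile) throws ClassNotFoundException {
            boolean trusted = trustedPrefixes.stream()
                    .anyMatch(classNameFromFile::startsWith);
            if (!trusted) {
                throw new SecurityException(
                    "Refusing to load untrusted class from file metadata: " + classNameFromFile);
            }
            return Class.forName(classNameFromFile);
        }
    }

    // Usage: new TrustedClassResolver(List.of("java.lang.", "org.apache.avro."))
    //            .resolve(nameFoundInSchema);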