What’s your take on parquet?

I’m still reading into it. Why is it so closely tied to Apache? Does only Apache push it? Meaning, if Apache dropped it, would there be no interest from others to push it further?

It’s published under the Apache License 2.0, which is a permissive license. Is there a drawback to the license?

Do you use it? When?

I assume that for sharing small data, CSV is sufficient. I also assume CSV is more accessible than Parquet.

  • The Hobbyist@lemmy.zip · 2 months ago

    In the deep learning community, I know of someone using Parquet for datasets and their annotations. It lets you select which data you want to retrieve from the dataset and stream only that, nothing else. It’s quite effective when you have many different annotations for different use cases and only want to pull the ones your application needs.
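    Roughly, with pyarrow (file and column names here are just made up for the example):

      # Read only two annotation columns from a large Parquet file;
      # the other columns never get pulled off disk.
      import pyarrow.parquet as pq

      table = pq.read_table(
          "annotations.parquet",          # hypothetical dataset file
          columns=["image_id", "bbox"],   # only these columns are loaded
      )
      df = table.to_pandas()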

      • ma343@beehaw.org · 2 months ago

        GraphQL is a protocol for interacting with a remote system; Parquet is about having a local file that you can index and retrieve data from more efficiently. It’s especially useful when the data has a fairly well-defined structure but may be large enough that you can’t, or don’t want to, bring it all into memory. They’re similar concepts, but different applications.
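        For the “too big for memory” case, something like this works (pyarrow again; the file name and batch size are arbitrary):

          # Stream the file one record batch at a time instead of loading it all.
          import pyarrow.parquet as pq

          pf = pq.ParquetFile("big_dataset.parquet")       # hypothetical file
          total_rows = 0
          for batch in pf.iter_batches(batch_size=100_000):
              total_rows += batch.num_rows                 # stand-in for real per-batch work
          print(total_rows)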

        • sa@mastodontech.de · 2 months ago

          @djnattyp
          exactly. And Parquet is optimized for parallel processing, making it ideal for big data frameworks because the data gets distributed to the nodes. There’s no need for Parquet as long as you can do the computation on a local machine. And for the ones complaining about CSV: most data that makes it into a Parquet file comes from… CSV files 😊
          @demesisx
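          And that conversion is usually a one-liner, e.g. with pandas (file names are just examples; needs pyarrow or fastparquet installed):

            # Typical CSV -> Parquet conversion.
            import pandas as pd

            df = pd.read_csv("measurements.csv")
            df.to_parquet("measurements.parquet", compression="snappy")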