File Formats

CSV/TSV

Pros

  • Tabular, row-based storage.
  • Human-readable and easy to edit manually.
  • Simple schema.
  • Easy to implement and parse.

Cons

  • No standard way to represent binary data.
  • No complex data types.
  • Large in size.
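A minimal sketch of reading and writing CSV with Python's standard library (the column names and values here are illustrative). Note that everything comes back as a string, which is the "no complex data types" con in practice:

```python
import csv
import io

# Write rows to an in-memory buffer; a real file object works the same way.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["id", "name", "score"])  # header row -- the only "schema" CSV has
writer.writerow([1, "alice", 9.5])
writer.writerow([2, "bob", 7.0])

# Read it back: every value is a string, so types must be restored manually.
buf.seek(0)
rows = list(csv.reader(buf))
print(rows[1])  # ['1', 'alice', '9.5']
```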

JSON

Pros

  • Supports hierarchical structure.
  • Supported by most programming languages.
  • Widely used on the web.

Cons

  • Larger size because field names are repeated in every record.
  • Not very splittable.
  • Lacks indexing.
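Both points can be seen with Python's standard `json` module (the record layout below is illustrative): JSON expresses nesting that CSV cannot, but the field names repeat in every record, which is where the size overhead comes from:

```python
import json

# Nested structure -- something flat CSV cannot express directly.
records = [
    {"id": 1, "name": "alice", "address": {"city": "Oslo", "zip": "0150"}},
    {"id": 2, "name": "bob", "address": {"city": "Bergen", "zip": "5003"}},
]

text = json.dumps(records)

# Field names ("id", "name", "address", ...) appear once per record,
# so the overhead grows linearly with the number of records.
assert text.count('"name"') == len(records)

# Round-tripping preserves types (int stays int), unlike CSV.
back = json.loads(text)
assert back[0]["id"] == 1
assert back[0]["address"]["city"] == "Oslo"
```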

Parquet

Parquet is a columnar storage file format optimized for use with Apache Hadoop and related big data processing frameworks. Twitter and Cloudera developed it to provide a compact and efficient way of storing large, flat datasets.

Best suited for WORM (Write Once, Read Many) workloads.

The key features of Parquet are:

  1. Columnar Storage: Parquet is optimized for columnar storage, unlike row-based files like CSV or TSV. This allows it to efficiently compress and encode data, which makes it a good fit for storing data frames.
  2. Schema Evolution: Parquet supports complex nested data structures, and the schema can be modified over time. This provides much flexibility when dealing with data that may evolve.
  3. Compression and Encoding: Parquet allows for highly efficient compression and encoding schemes. This is because columnar storage makes better compression and encoding schemes possible, which can lead to significant storage savings.
  4. Language Agnostic: Parquet is built from the ground up for use in many languages. Official libraries are available for reading and writing Parquet files in many languages, including Java, C++, Python, and more.
  5. Integration: Parquet is designed to integrate well with various big data frameworks. It has deep support in Apache Hadoop, Apache Spark, and Apache Hive and works well with other data processing frameworks.

In short, Parquet is a powerful tool in the big data ecosystem due to its efficiency, flexibility, and compatibility with a wide range of tools and languages.
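The compression advantage of columnar storage (points 1 and 3 above) can be illustrated with the standard library alone. This is not Parquet itself, just a sketch of why grouping a column's values together compresses better than interleaving them row by row; the toy table and separators are assumptions for the demo:

```python
import zlib

# Toy table: a unique id column and a low-cardinality category column.
ids = list(range(5000))
cats = [("NO", "SE", "DK")[i % 3] for i in ids]

# Row layout: values from different columns interleave, breaking up repetition.
row_data = "\n".join(f"{i},{c}" for i, c in zip(ids, cats)).encode()

# Columnar layout: each column's values are stored contiguously, so the
# repetitive category column collapses to almost nothing under compression.
col_data = (",".join(map(str, ids)) + "\n" + ",".join(cats)).encode()

row_size = len(zlib.compress(row_data))
col_size = len(zlib.compress(col_data))
print(row_size, col_size)  # columnar comes out smaller
```

Real Parquet goes further than this sketch: it applies per-column encodings such as dictionary and run-length encoding before general-purpose compression.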

Difference between CSV and Parquet

Aspect | CSV (Comma-Separated Values) | Parquet
------ | ---------------------------- | -------
Data Format | Text-based, plain text | Columnar, binary format
Compression | Usually uncompressed (or lightly compressed) | Highly compressed
Schema | None (schema-less) | Strong schema enforcement
Read/Write Efficiency | Row-based; less efficient for column operations | Column-based; efficient for analytics
File Size | Generally larger | Typically smaller due to compression
Storage | More storage space required | Less storage space required
Data Access | Good for sequential access | Efficient for accessing specific columns
Example Size (1 GB) | Around 1 GB or more, depending on compression | Around 200-300 MB (due to compression)
Use Cases | Simple data exchange, compatibility | Big data analytics, data warehousing
Support for Data Types | Limited to text and numbers | Rich data types (int, float, string, etc.)
Processing Speed | Slower for large datasets, particularly for queries on specific columns | Faster, especially for column-based queries
Tool Compatibility | Supported by most tools, databases, and programming languages | Supported by big data tools like Apache Spark, Hadoop, etc.

Parquet Compression

  • Snappy (default)
  • Gzip

Snappy

  • Low CPU utilization
  • Lower compression ratio
  • Splittable
  • Use case: hot layer (frequently accessed data)
  • Good fit when compute is the bottleneck

GZip

  • High CPU utilization
  • Higher compression ratio
  • Splittable
  • Use case: cold layer (rarely accessed data)
  • Good fit when storage is the bottleneck
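Python's standard library has no Snappy codec, but the same speed-versus-ratio trade-off can be sketched with zlib's compression levels: level 1 plays the fast, lighter-compression role (Snappy-like profile) and level 9 the slow, denser role (Gzip-like profile). The sample data is illustrative:

```python
import zlib

# Repetitive sample data, loosely resembling what a column chunk might hold.
data = b"event=login,status=ok;" * 10000

fast = zlib.compress(data, 1)   # low CPU cost, lower compression ratio
dense = zlib.compress(data, 9)  # high CPU cost, higher compression ratio

print(len(fast), len(dense))
assert len(dense) <= len(fast)  # spending more CPU buys a smaller output
assert zlib.decompress(fast) == zlib.decompress(dense) == data
```

The choice mirrors the hot/cold layer split above: pay less CPU on data you read constantly, pay more CPU to shrink data you mostly archive.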