For massive data, whether batch or stream processing, Walmart considered Apache Spark the perfect choice!
- Spark Streaming reads data from Kafka and writes it to Cassandra.
- Every six hours, Spark SQL aggregates the data in Cassandra and stores the results in Parquet format.
- For data visualization, Spark SQL reads the Parquet files back and serves them to Tableau.
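The streaming leg of this pipeline can be sketched with plain Python standing in for the real components: fixed-size micro-batches mimic Spark Streaming's batching of a Kafka topic, and a dict of lists stands in for a Cassandra table partitioned by key. The event shape and names here are illustrative assumptions, not details from the article.

```python
from collections import defaultdict
from itertools import islice

# Hypothetical event shape: (store_id, timestamp, amount).
# These field names are illustrative, not from the original article.
events = [
    ("store_1", 1, 10.0),
    ("store_2", 1, 5.0),
    ("store_1", 2, 7.5),
    ("store_2", 2, 2.5),
]

def micro_batches(stream, batch_size):
    """Yield fixed-size micro-batches, mimicking how Spark Streaming
    consumes a Kafka topic in small time-sliced batches."""
    it = iter(stream)
    while batch := list(islice(it, batch_size)):
        yield batch

# A plain dict of lists stands in for a Cassandra table
# partitioned by store_id.
cassandra_table = defaultdict(list)

for batch in micro_batches(events, 2):
    # In the real pipeline each batch would be processed and
    # saved to Cassandra via the Spark-Cassandra connector.
    for store_id, ts, amount in batch:
        cassandra_table[store_id].append((ts, amount))

print(dict(cassandra_table))
```

In production this step would use Spark's Kafka source and the Cassandra connector; the sketch only shows the batching-and-upsert shape of the write path.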
Data processing had to be carried out at two places in the pipeline. First, during write, where we have to stream data from Kafka, process it, and save it to Cassandra. Second, while generating business reports, where we have to read the complete Cassandra table, join it with other data sources, and aggregate it across multiple columns.
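The second requirement — read the full table, join it with another source, and aggregate across multiple columns — can be sketched in plain Python as a stand-in for the Spark SQL equivalent (`df.join(...).groupBy(...).agg(...)`). The row fields and the store-metadata join used here are assumptions for illustration, not details from the article.

```python
from collections import defaultdict

# Rows as read back from the (simulated) Cassandra table;
# the column names are illustrative assumptions.
sales = [
    {"store_id": "store_1", "category": "food", "amount": 10.0},
    {"store_id": "store_1", "category": "toys", "amount": 7.5},
    {"store_id": "store_2", "category": "food", "amount": 5.0},
]

# A second data source to join against, e.g. store metadata.
stores = {"store_1": "TX", "store_2": "CA"}

# Join on store_id, then aggregate across multiple columns
# (state, category) -- the shape of a Spark SQL groupBy/sum.
totals = defaultdict(float)
for row in sales:
    state = stores[row["store_id"]]
    totals[(state, row["category"])] += row["amount"]

print(dict(totals))
```

In the real pipeline the aggregated result would then be written out in Parquet format for Tableau to consume.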
For both requirements, Apache Spark was a perfect choice, because Spark achieves high performance on both batch and streaming data using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine.