InfoQ AI/ML
Uber Launches IngestionNext: Streaming-First Data Lake Cuts Latency and Compute by 25%
Uber has launched IngestionNext, a streaming-first data lake platform built on Apache Kafka, Apache Flink, and Apache Hudi. The system reduces data latency from hours to minutes and cuts compute resource usage by 25%. It ingests thousands of datasets globally, enabling faster analytics, experimentation, and machine learning workflows across Uber's operations.
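The core idea behind a streaming-first lake of this kind is to replace large periodic batch loads with continuous micro-batch upserts into the table store. The sketch below illustrates that pattern only in miniature: an in-memory queue stands in for a Kafka topic and a plain dict stands in for a Hudi table with record-key upserts. None of the names here come from Uber's actual IngestionNext internals, which the article does not detail.

```python
from collections import deque

def make_topic(events):
    """Stand-in for a Kafka topic: a FIFO queue of (key, payload) events."""
    return deque(events)

def upsert_batch(table, batch):
    """Hudi-style upsert, simplified: the latest record per key wins."""
    for key, payload in batch:
        table[key] = payload
    return table

def stream_ingest(topic, table, batch_size=2):
    """Drain the topic in small micro-batches, committing each one,
    so records become queryable minutes (not hours) after arrival."""
    commits = 0
    while topic:
        batch = [topic.popleft() for _ in range(min(batch_size, len(topic)))]
        upsert_batch(table, batch)
        commits += 1
    return commits

# Hypothetical ride events; the third is a late update to trip-1.
topic = make_topic([
    ("trip-1", {"fare": 12}),
    ("trip-2", {"fare": 8}),
    ("trip-1", {"fare": 15}),
])
table = {}
commits = stream_ingest(topic, table)
print(commits, table["trip-1"]["fare"])  # 2 commits; trip-1 upserted to fare 15
```

In a real deployment, the Flink job would do the consuming and transformation, and Hudi would handle the upsert, indexing, and commit timeline; the point of the sketch is only the micro-batch upsert loop that makes low-latency ingestion possible.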