
Motivation, Needs

Regunath B edited this page Nov 18, 2020 · 3 revisions

Flipkart had many use cases for a distributed KV store, but our efforts and technology choices there were fairly fragmented. Over the years, we had custom built solutions around existing systems like Redis, for e.g.:

  • Slightly better data durability via snapshots, backup/restore
  • Replica placement and replication for scaling reads, avoiding hotspots
  • Consensus-based replication - mostly proof-of-concept work we had done using Raft consensus

Data durability was a big concern in these solutions, and all of them fell short even for single DC deployments. We identified RocksDB, the flash-optimized embedded KV storage library from Facebook. Facebook has built ZippyDB on RocksDB, and there are implementations of Redis that use RocksDB as the storage layer - e.g., ledisdb.

This talk on ZippyDB provided sound ideas for building our own KV store (with replica placement and tunable consistency for reads) and "verticals" on top of it, like a Queue. Some of these verticals are in fact the data structures we all love Redis for, but often regret are not backed by a durable, distributed data store.

We chose Golang for the runtime. The choice was fairly straightforward: it is easy to embed RocksDB, deployment is simple, and etcd offers a ready-made Raft consensus implementation - the one we intended to use.

Flipkart needs on dkv

This draft document captures Flipkart's needs on dkv. It also contains a basic analysis of available systems that meet some of the identified requirements, while identifying choices for dkv: Draft - dkv Requirements