@@ -106,6 +106,115 @@ def cached_function():
- Connection pooling built-in
- Supports large values (up to Redis limits)

### FileBackend

Store cache on the local filesystem with automatic LRU eviction:

```python
from cachekit.backends.file import FileBackend
from cachekit.backends.file.config import FileBackendConfig
from cachekit import cache

# Use default configuration
config = FileBackendConfig()
backend = FileBackend(config)

@cache(backend=backend)
def cached_function():
    return expensive_computation()
```

**Configuration via environment variables**:

```bash
# Directory for cache files
export CACHEKIT_FILE_CACHE_DIR="/var/cache/myapp"

# Size limits
export CACHEKIT_FILE_MAX_SIZE_MB=1024        # Default: 1024 MB
export CACHEKIT_FILE_MAX_VALUE_MB=100        # Default: 100 MB (max single value)
export CACHEKIT_FILE_MAX_ENTRY_COUNT=10000   # Default: 10,000 entries

# Lock configuration
export CACHEKIT_FILE_LOCK_TIMEOUT_SECONDS=5.0  # Default: 5.0 seconds

# File permissions (octal, owner-only by default for security)
export CACHEKIT_FILE_PERMISSIONS=0o600       # Default: 0o600 (owner read/write)
export CACHEKIT_FILE_DIR_PERMISSIONS=0o700   # Default: 0o700 (owner rwx)
```

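As a quick sanity check, the sketch below sets one variable and inspects the resulting config. It assumes `FileBackendConfig` reads the `CACHEKIT_FILE_*` variables when it is constructed, which the variable names above suggest but this excerpt does not confirm; if your version needs explicit keyword arguments instead, use the Python configuration shown in the next section.

```python
import os

from cachekit.backends.file import FileBackend
from cachekit.backends.file.config import FileBackendConfig

# Assumption: FileBackendConfig picks up CACHEKIT_FILE_* variables at construction time.
os.environ["CACHEKIT_FILE_CACHE_DIR"] = "/tmp/myapp_cache"
os.environ["CACHEKIT_FILE_MAX_SIZE_MB"] = "256"

config = FileBackendConfig()
print(config.cache_dir)    # expected: /tmp/myapp_cache
print(config.max_size_mb)  # expected: 256

backend = FileBackend(config)
```
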
**Configuration via Python**:

```python
import tempfile
from pathlib import Path
from cachekit.backends.file import FileBackend
from cachekit.backends.file.config import FileBackendConfig

# Custom configuration
config = FileBackendConfig(
    cache_dir=Path(tempfile.gettempdir()) / "myapp_cache",
    max_size_mb=2048,
    max_value_mb=200,
    max_entry_count=50000,
    lock_timeout_seconds=10.0,
    permissions=0o600,
    dir_permissions=0o700,
)

backend = FileBackend(config)
```

**When to use**:
- Single-process applications (scripts, CLI tools, development)
- Local development and testing
- Systems where Redis is unavailable
- Low-traffic applications with modest cache sizes
- Temporary caching needs

**When NOT to use**:
- Multi-process web servers (gunicorn, uWSGI) - use Redis instead
- Distributed systems - use Redis or HTTP backend
- High-concurrency scenarios - file locking overhead becomes limiting
- Applications requiring sub-1ms latency - use L1-only cache

**Characteristics**:
- Latency: p50: 100-500μs, p99: 1-5ms
- Throughput: 1000+ operations/second (single-threaded)
- LRU eviction: Triggered at 90%, evicts to 70% capacity (see the sketch after this list)
- TTL support: Yes (automatic expiration checking)
- Cross-process: No (single-process only)
- Platform support: Full on Linux/macOS, limited on Windows (no O_NOFOLLOW)

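The 90%/70% thresholds can be pictured with a small, self-contained sketch of size-based LRU bookkeeping. This only illustrates the policy described above, not FileBackend's actual implementation; the entry layout and helper names (`put`, `touch`) are invented for the example.

```python
from collections import OrderedDict

MAX_BYTES = 1024 * 1024 * 1024   # stand-in for max_size_mb (1 GiB here)
HIGH_WATER = 0.90 * MAX_BYTES    # eviction is triggered at 90% of capacity
LOW_WATER = 0.70 * MAX_BYTES     # eviction stops once usage is back at 70%

entries = OrderedDict()          # key -> entry size in bytes, most recently used last
total_bytes = 0

def touch(key):
    """Mark an existing key as most recently used."""
    entries.move_to_end(key)

def put(key, size):
    """Store an entry, then evict least-recently-used entries past the high-water mark."""
    global total_bytes
    if key in entries:
        total_bytes -= entries.pop(key)
    entries[key] = size
    total_bytes += size
    if total_bytes >= HIGH_WATER:
        while entries and total_bytes > LOW_WATER:
            _, evicted = entries.popitem(last=False)  # drop the least recently used
            total_bytes -= evicted
```
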
**Limitations and Security Notes**:

1. **Single-process only**: FileBackend's file locking does not guard against concurrent access from multiple processes. Do NOT use it with multi-process WSGI servers.

2. **File permissions**: Default permissions (0o600) restrict access to cache files to the owning user. Changing these permissions is a security risk and generates a warning.

3. **Platform differences**: Windows does not support the O_NOFOLLOW flag used to prevent symlink attacks. FileBackend still works on Windows, but with slightly reduced symlink protection.

4. **Wall-clock TTL**: Expiration times rely on system time, so changes to the system clock (NTP corrections, manual adjustments) may affect TTL accuracy (see the sketch after this list).

5. **Disk space**: FileBackend evicts least-recently-used entries when it reaches 90% of capacity. Ensure sufficient disk space beyond `max_size_mb` for temporary writes.

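To make the wall-clock caveat in item 4 concrete, here is a generic sketch of absolute-timestamp expiry. It is not FileBackend's internal code; it only shows why a clock adjustment changes how long an entry appears to live.

```python
import time

def expiry_for(ttl_seconds):
    """Record an absolute expiry timestamp based on the current system clock."""
    return time.time() + ttl_seconds

def is_expired(expires_at):
    """An entry expires once the wall clock passes its absolute expiry time."""
    return time.time() >= expires_at

expires_at = expiry_for(ttl_seconds=60.0)
# If NTP or a manual adjustment moves the clock forward by 30s here, the entry
# appears to expire roughly 30s early; moving the clock back delays expiry.
print(is_expired(expires_at))
```
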
**Performance characteristics**:

```
Sequential operations (single-threaded):
- Write (set): p50: 120μs, p99: 800μs
- Read (get): p50: 90μs, p99: 600μs
- Delete: p50: 70μs, p99: 400μs

Concurrent operations (10 threads):
- Throughput: ~887 ops/sec
- Latency p99: ~30μs per operation

Large values (1MB):
- Write p99: ~15μs per operation
- Read p99: ~13μs per operation
```

### HTTPBackend

Store cache in HTTP API endpoints:
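The HTTP backend follows the same pattern as the FileBackend above, but its exact import path and settings are not shown in this excerpt. The sketch below assumes an `HTTPBackend`/`HTTPBackendConfig` pair and a `base_url` field purely by analogy; treat every name in it as an assumption to check against the actual API.

```python
from cachekit import cache
# Assumed import path and config class, modeled on the FileBackend section above;
# the real module layout and field names may differ.
from cachekit.backends.http import HTTPBackend
from cachekit.backends.http.config import HTTPBackendConfig

# Hypothetical setting: the actual configuration fields are not shown in this excerpt.
config = HTTPBackendConfig(base_url="https://cache.example.com/api")
backend = HTTPBackend(config)

@cache(backend=backend)
def cached_function():
    return sum(i * i for i in range(10_000))  # stand-in for an expensive computation
```
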
@@ -338,18 +447,27 @@ REDIS_URL=redis://localhost:6379/0
| Backend | Latency | Use Case | Notes |
|---------|---------|----------|-------|
| **L1 (In-Memory)** | ~50ns | Repeated calls in same process | Process-local only |
| **File** | 100μs-5ms | Single-process local caching | Development, scripts, CLI tools |
| **Redis** | 1-7ms | Shared cache across pods | Production default |
| **HTTP API** | 10-100ms | Cloud services, multi-region | Network dependent |
| **DynamoDB** | 100-500ms | Serverless, low-traffic | High availability |
| **Memcached** | 1-5ms | Alternative to Redis | No persistence |

### When to Use Each Backend

**Use FileBackend when**:
- You're building single-process applications (scripts, CLI tools)
- You're in development and don't have Redis available
- You need local caching without network overhead
- You have modest cache sizes (< 10GB)
- Your application runs on a single machine

**Use RedisBackend when**:
- You need sub-10ms latency with shared cache
- Cache is shared across multiple processes
- You need persistence options
- You're building a typical web application
- You require multi-process or distributed caching

**Use HTTPBackend when**:
- You're using a cloud cache service
@@ -364,9 +482,10 @@ REDIS_URL=redis://localhost:6379/0
- You need automatic TTL management

**Use L1-only when**:
- You're in development with single-process code
- You have a single-process application
- You don't need cross-process cache sharing
- You need the lowest possible latency (nanoseconds) - see the sketch below

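A minimal sketch of the L1-only setup, assuming that simply not configuring a backend leaves only the in-process L1 cache in use; this excerpt does not show the exact mechanism, so adapt if the library requires an explicit L1 backend or flag instead.

```python
from cachekit import cache

# Assumption: with no backend configured, results live only in the in-process
# L1 cache (process-local, roughly nanosecond lookups, no cross-process sharing).
@cache()
def cached_function():
    return sum(i * i for i in range(10_000))  # stand-in for an expensive computation
```
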
### Testing Your Backend
