Practical interview questions

Scenario-style prompts with sample answer outlines. The focus is on how you would design and reason in real codebases.

Question 5

Performance with large datasets

Your database grows to thousands of records and performance degrades. How do you diagnose and optimize reads/writes?

Answer outline

Profile before changing the schema: measure slow queries/fetches, write latency, and memory spikes on realistic datasets. Pinpoint the exact hotspot (predicate, sort, relationship traversal, batch save frequency) rather than guessing.
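
One concrete way to attribute latency, sketched here for Core Data (the same idea applies to any store): wrap the suspect fetch in an os_signpost interval so it shows up in Instruments as a named lane. The `Note` entity and its attributes are hypothetical stand-ins.

```swift
import CoreData
import os

// Intervals recorded by this signposter appear in Instruments' os_signpost
// instrument, so you can see exactly which fetch is slow on a realistic
// dataset. Requires iOS 15 / macOS 12 for OSSignposter.
let signposter = OSSignposter(subsystem: "com.example.app", category: "Persistence")

func fetchRecentNotes(in context: NSManagedObjectContext) throws -> [Note] {
    let state = signposter.beginInterval("fetchRecentNotes")
    defer { signposter.endInterval("fetchRecentNotes", state) }

    // Hypothetical Note entity with isArchived / modifiedAt attributes.
    let request = NSFetchRequest<Note>(entityName: "Note")
    request.predicate = NSPredicate(format: "isArchived == NO")
    request.sortDescriptors = [NSSortDescriptor(key: "modifiedAt", ascending: false)]
    return try context.fetch(request)
}
```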

Optimize reads with indexes on commonly filtered/sorted attributes, pagination or batch faulting, and narrower fetch payloads; add precomputed fields only where profiling proves their value.
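
A read-path sketch under the same assumptions (hypothetical `Note` entity, with a model-level index on the filtered and sorted attributes): a fetch limit caps each page, an offset walks through pages, and a batch size lets Core Data fault rows in lazily instead of hydrating everything up front.

```swift
import CoreData

func fetchNotePage(in context: NSManagedObjectContext,
                   page: Int,
                   pageSize: Int = 50) throws -> [Note] {
    let request = NSFetchRequest<Note>(entityName: "Note")
    request.predicate = NSPredicate(format: "isArchived == NO")
    request.sortDescriptors = [NSSortDescriptor(key: "modifiedAt", ascending: false)]
    request.fetchLimit = pageSize          // cap the page size
    request.fetchOffset = page * pageSize  // simple offset pagination
    request.fetchBatchSize = pageSize      // fault rows in lazily as they are accessed
    return try context.fetch(request)
}
```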

Optimize writes by batching updates/inserts, reducing save frequency in hot loops, and moving heavy transforms off the main thread before persistence.
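
A write-path sketch, again assuming the hypothetical `Note` entity: NSBatchInsertRequest on a background context writes directly to the store, skipping per-object change tracking, and keeps the work off the main thread.

```swift
import CoreData

// `rows` is the pre-transformed payload keyed by attribute name; do any
// heavy parsing or transformation before this call, off the main thread.
func bulkInsertNotes(_ rows: [[String: Any]],
                     container: NSPersistentContainer) {
    container.performBackgroundTask { context in
        do {
            let insert = NSBatchInsertRequest(entityName: "Note", objects: rows)
            try context.execute(insert)
        } catch {
            // Surface or log the failure in a real codebase.
            print("Bulk insert failed: \(error)")
        }
    }
}
```

For incremental writes that must go through a context (for example, to fire merge notifications), the analogous move is saving once every few hundred objects instead of once per object.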

Principles

  • Use realistic fixtures and compare before/after with the same workload; a benchmark sketch follows this list.
  • Fix top hotspots first; don’t micro-optimize cold paths.
  • Prefer simple schema + indexes before complex denormalization.
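
As a before/after harness, an XCTest measure block over a seeded fixture store gives repeatable numbers for the same workload. `makeSeededContainer` is a hypothetical helper that loads a fixture store with a realistic record count.

```swift
import XCTest
import CoreData

final class FetchPerformanceTests: XCTestCase {
    func testRecentNotesFetchPerformance() throws {
        // Hypothetical helper: builds a container seeded with 50k records.
        let container = try makeSeededContainer(recordCount: 50_000)
        let context = container.viewContext

        // Run the identical fetch before and after a schema/index change
        // and compare the reported averages.
        measure {
            let request = NSFetchRequest<Note>(entityName: "Note")
            request.predicate = NSPredicate(format: "isArchived == NO")
            request.sortDescriptors = [NSSortDescriptor(key: "modifiedAt",
                                                        ascending: false)]
            request.fetchLimit = 50
            _ = try? context.fetch(request)
        }
    }
}
```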