Practical interview questions

Scenario-style prompts with sample answer outlines. The focus is on how you would design and reason in real codebases.

Question 3

Handling migrations in production

You need to evolve your data model after shipping. How do you handle migrations safely without data loss or breaking existing users?

Follow-ups

  • Lightweight vs heavyweight migration?
  • How do you recover from a failed migration?

Answer outline

Treat schema migrations as release-critical: version schemas, test upgrades from real historical stores, and never assume all users are on the latest schema.

Choose migration mode based on complexity. Lightweight migration handles compatible changes — adding optional fields, renaming with mapping hints — automatically. Custom migration is needed when transforms are structural: splitting entities, merging fields, or computing derived values.

Always plan for failure: back up the store before migrating, make steps idempotent so retries are safe, add telemetry to catch problems during rollout, and fall back to a graceful UI state if the migration can't complete on first launch.
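The failure plan above can be sketched concretely. A minimal illustration in Python over SQLite (a stand-in for whatever store your platform uses; the `users` table, `email` column, and backup path are invented for the example): back up first, guard the step so re-running is safe, and restore the backup on error.

```python
import shutil
import sqlite3

def migrate_with_backup(db_path: str) -> None:
    """Run one guarded migration step with a pre-migration backup."""
    # 1. Copy the store aside so a failed migration can be rolled back.
    backup_path = db_path + ".pre-migration"
    shutil.copyfile(db_path, backup_path)

    conn = sqlite3.connect(db_path)
    try:
        # 2. Idempotent step: check before applying, so a crashed or
        #    retried migration never fails on "column already exists".
        cols = {row[1] for row in conn.execute("PRAGMA table_info(users)")}
        if "email" not in cols:
            conn.execute("ALTER TABLE users ADD COLUMN email TEXT")
        conn.commit()
        conn.close()
    except Exception:
        # 3. Recovery path: restore the backup, then surface the error
        #    so telemetry can record the failed rollout.
        conn.close()
        shutil.copyfile(backup_path, db_path)
        raise
```

Because the step checks before applying, running it twice is harmless, which is exactly what makes retry-after-crash safe.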

Principles

  • Test from real old stores — users may be upgrading from several versions back, not just the last release.
  • Prefer incremental schema steps — one migration per release is easier to test and roll back than a large rewrite.
  • Have a recovery path: idempotent steps, fallback UI, and a pre-migration backup where feasible.
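These principles suggest a version-stamped runner: one small step per release, applied in order, so a user jumping several versions forward replays each intermediate step. A sketch assuming SQLite's `PRAGMA user_version` as the version stamp (the `MIGRATIONS` mapping and `notes` schema are invented for illustration):

```python
import sqlite3

# One small migration per release, keyed by the version it produces.
MIGRATIONS = {
    2: "ALTER TABLE notes ADD COLUMN created_at TEXT",
    3: "ALTER TABLE notes ADD COLUMN archived INTEGER DEFAULT 0",
}

def migrate(conn: sqlite3.Connection, target: int) -> None:
    """Replay every step between the store's version and the target."""
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for version in range(current + 1, target + 1):
        conn.execute(MIGRATIONS[version])                  # apply the step
        conn.execute(f"PRAGMA user_version = {version}")   # then stamp it
        conn.commit()
```

Stamping after each step means an interrupted upgrade resumes from the last completed version rather than starting over.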

A step-by-step checklist to apply before shipping a schema change:

Migration checklist
1) Add schema version N+1
2) Define mapping/transforms
3) Test upgrade from N-2, N-1, N
4) Ship with telemetry + fallback
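Checklist step 3 is the one most often skipped, so it is worth showing what "test upgrade from N-2, N-1, N" looks like in practice: rebuild a fixture store at each historical version, run the upgrade, and assert invariants afterward. An illustrative Python/SQLite sketch; `notes`, `MIGRATIONS`, and `LATEST` are invented for the example.

```python
import sqlite3

LATEST = 3
MIGRATIONS = {
    2: "ALTER TABLE notes ADD COLUMN created_at TEXT",
    3: "ALTER TABLE notes ADD COLUMN archived INTEGER DEFAULT 0",
}

def make_fixture(version: int) -> sqlite3.Connection:
    """Recreate a store as it existed at an old schema version."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
    for v in range(2, version + 1):
        conn.execute(MIGRATIONS[v])
    conn.execute(f"PRAGMA user_version = {version}")
    conn.execute("INSERT INTO notes (body) VALUES ('old data')")
    conn.commit()
    return conn

def upgrade(conn: sqlite3.Connection) -> None:
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for v in range(current + 1, LATEST + 1):
        conn.execute(MIGRATIONS[v])
        conn.execute(f"PRAGMA user_version = {v}")
    conn.commit()

# Exercise the upgrade from every supported starting version, not just N-1.
for start in (1, 2, 3):
    conn = make_fixture(start)
    upgrade(conn)
    assert conn.execute("PRAGMA user_version").fetchone()[0] == LATEST
    assert conn.execute("SELECT count(*) FROM notes").fetchone()[0] == 1
```

In a real codebase the fixtures would be actual store files checked in from past releases, not rebuilt in code, so they capture quirks your re-creation logic might miss.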

Follow-up angles

  • For very large stores, consider background/progressive migration patterns to avoid long startup blocks.
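One way to sketch the progressive pattern: chunk the transform and report whether work remains, so the caller can schedule the next chunk off the critical launch path instead of blocking startup. A Python/SQLite sketch with an invented `body_length` backfill:

```python
import sqlite3

BATCH = 500  # rows per chunk; tune so each chunk stays within a startup budget

def migrate_batch(conn: sqlite3.Connection) -> bool:
    """Backfill a derived column for one batch of rows.

    Returns True while work remains, so the caller can reschedule
    itself (e.g. on a background queue) instead of blocking launch.
    """
    rows = conn.execute(
        "SELECT id, body FROM notes WHERE body_length IS NULL LIMIT ?",
        (BATCH,),
    ).fetchall()
    if not rows:
        return False
    conn.executemany(
        "UPDATE notes SET body_length = ? WHERE id = ?",
        [(len(body), rid) for rid, body in rows],
    )
    conn.commit()
    return True
```

Keying each batch on "not yet migrated" rows makes the loop naturally resumable: if the app is killed mid-backfill, the next launch simply picks up the remaining rows.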