
Practical interview questions

Scenario-style prompts with sample answer outlines. The focus is on how you would design and reason in real codebases.

Question 2

Designing a scalable data model

You’re designing a local database for a feature with relationships (e.g. users, posts, comments). How do you model it to balance performance, flexibility, and future changes?

Follow-ups

  • When do you denormalize?

Answer outline

Model around what the app actually loads and mutates, not a textbook entity relationship diagram (ERD) — performance, flexibility, and migration safety all follow from that.

Start normalized and add indexes on hot filters and sorts. Treat denormalization as a deliberate choice with explicit update rules, not a default.
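As a sketch of what "explicit update rules" means in practice, here is a hypothetical denormalized comment counter kept on the post row and updated only inside the same transaction as the comment insert (SQLite via Python's stdlib; table and column names are illustrative, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE post (
        id INTEGER PRIMARY KEY,
        body TEXT NOT NULL,
        commentCount INTEGER NOT NULL DEFAULT 0  -- denormalized counter
    );
    CREATE TABLE comment (
        id INTEGER PRIMARY KEY,
        postId INTEGER NOT NULL REFERENCES post(id),
        text TEXT NOT NULL
    );
""")

def add_comment(conn, post_id, text):
    # The update rule: the counter changes only here, in the same
    # transaction as the comment insert, so the two can never drift apart.
    with conn:
        conn.execute("INSERT INTO comment (postId, text) VALUES (?, ?)",
                     (post_id, text))
        conn.execute("UPDATE post SET commentCount = commentCount + 1 "
                     "WHERE id = ?", (post_id,))

conn.execute("INSERT INTO post (id, body) VALUES (1, 'hello')")
add_comment(conn, 1, "first!")
add_comment(conn, 1, "second")
count = conn.execute("SELECT commentCount FROM post WHERE id = 1").fetchone()[0]
print(count)  # 2
```

The counter pays one extra write per comment to make the post-list screen a single-table read; if every write path goes through `add_comment`, the redundancy stays consistent.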

Principles

  • Model for the screens you have, not an abstract ERD — design around the queries the app actually runs.
  • Give every major entity a stable ID so merges, deduplication, and updates stay reliable.
  • Normalize first — one place to update each fact; denormalize only when profiling proves a hot read needs it.
  • Plan migrations early — schemas that leave room for optional additions are far cheaper to evolve than rigid ones.
  • Keep write ownership clear — undefined write paths are the fastest route to corruption.
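The migration principle can be illustrated with an additive, nullable column: existing rows and old read paths tolerate it without a rewrite (a sketch in SQLite via Python's stdlib; names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# v1 schema, already shipped and holding data.
conn.execute("CREATE TABLE post (id INTEGER PRIMARY KEY, body TEXT NOT NULL)")
conn.execute("INSERT INTO post (body) VALUES ('existing row')")

# v2 migration: an optional column is an additive, backward-compatible
# change -- existing rows simply read NULL until a write path fills it in.
conn.execute("ALTER TABLE post ADD COLUMN mediaPath TEXT")

row = conn.execute("SELECT body, mediaPath FROM post").fetchone()
print(row)  # ('existing row', None)
```

Keeping new columns optional is what makes this kind of in-place migration cheap; a NOT NULL addition would force a backfill step first.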

A minimal normalized schema for a social feed, with indexes on the most common sort keys:

Entity sketch
User(id, name)
Post(id, authorId, createdAt, body, mediaPath?)
Comment(id, postId, authorId, createdAt, text)

// Index examples:
// Post(createdAt), Comment(postId, createdAt)
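The sketch above maps directly onto SQLite; a minimal runnable version with both indexes, where the column types and index names are my assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE post (
        id INTEGER PRIMARY KEY,
        authorId INTEGER NOT NULL REFERENCES user(id),
        createdAt INTEGER NOT NULL,          -- e.g. epoch millis
        body TEXT NOT NULL,
        mediaPath TEXT                       -- optional, hence nullable
    );
    CREATE TABLE comment (
        id INTEGER PRIMARY KEY,
        postId INTEGER NOT NULL REFERENCES post(id),
        authorId INTEGER NOT NULL REFERENCES user(id),
        createdAt INTEGER NOT NULL,
        text TEXT NOT NULL
    );
    -- Indexes on the hot sort/filter paths from the sketch:
    CREATE INDEX idx_post_createdAt ON post(createdAt);
    CREATE INDEX idx_comment_post_createdAt ON comment(postId, createdAt);
""")

# The feed query the schema is designed around: newest posts first.
# The query plan should mention idx_post_createdAt, confirming the sort
# is served by the index rather than a full scan plus sort.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM post ORDER BY createdAt DESC"
).fetchall()
print(plan)
```

The composite `comment(postId, createdAt)` index serves the other hot path, "comments for one post, oldest first", with a single index range scan.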