
Practical interview questions

Scenario-style prompts with sample answer outlines. The focus is on how you would design and reason in real codebases.

Question 3

Actor design in a real networking + cache stack

You're building a networking layer with caching. Where would you introduce actors, and how would you structure them to avoid contention or bottlenecks?

Follow-ups

  • When can actors become a performance problem?

Answer outline

Use an actor to protect the cache state, not the whole networking layer. URLSession already handles async network waiting. The part that can race is your own shared bookkeeping, like updating the cache map or tracking duplicate in-flight requests.

Split responsibilities: e.g. an actor that owns only the cache map and policies (TTL, maximum entries), while network calls stay as plain async functions or in a non-actor client that awaits URLSession. That way a slow download suspends outside the cache actor instead of blocking cache reads.

Contention shows up when every read/write funnels through a single hot actor, especially if you do heavy work (JSON decode, image decode) inside the actor. Mitigate with smaller actors and nonisolated helpers.

Principles

  • One serial executor per actor. Keep critical sections short, move long I/O outside the actor, and treat every await inside it as a reentrancy point.
  • Actor reentrancy: after await inside the cache actor, another task may have mutated state. Re-check before making assumptions.
Cache actor + async fetch outside (I/O suspends off the cache's serial executor)
actor HTTPCache {
    private var entries: [URL: (data: Data, expiry: Date)] = [:]

    func cachedResponse(for url: URL) -> Data? {
        guard let e = entries[url], e.expiry > Date() else { return nil }
        return e.data
    }

    func store(_ data: Data, for url: URL, ttl: TimeInterval) {
        entries[url] = (data, Date().addingTimeInterval(ttl))
    }
}

struct APIClient {
    let session: URLSession
    let cache: HTTPCache

    func data(from url: URL) async throws -> Data {
        if let hit = await cache.cachedResponse(for: url) { return hit }
        let (data, _) = try await session.data(from: url)
        await cache.store(data, for: url, ttl: 300)
        return data
    }
}
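The outline above mentions tracking duplicate in-flight requests as shared bookkeeping worth protecting. A minimal sketch of that bookkeeping as its own small actor, which also illustrates the reentrancy principle: the actor stores Task handles, so the slow download runs outside its serial executor, and a second caller arriving while the first is suspended joins the existing task. The names here (RequestDeduplicator, the injected fetch closure) are illustrative, not from any library:

```swift
import Foundation

// Sketch: an actor whose only job is in-flight bookkeeping.
// Downloads run in unstructured Tasks outside the actor's executor,
// so awaiting a slow fetch never blocks the actor itself.
actor RequestDeduplicator {
    private var inFlight: [URL: Task<Data, Error>] = [:]

    func data(for url: URL,
              fetch: @escaping @Sendable (URL) async throws -> Data) async throws -> Data {
        // If a fetch for this URL is already in flight (possibly started by a
        // caller that entered while another was suspended), join it.
        if let existing = inFlight[url] {
            return try await existing.value
        }
        let task = Task { try await fetch(url) }
        inFlight[url] = task
        defer { inFlight[url] = nil }   // clean up once the await resumes
        return try await task.value
    }
}
```

Injecting the fetch closure keeps the actor free of URLSession details and makes the dedup behavior easy to exercise in tests.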
When the actor becomes the bottleneck
// Slow: image decode inside the actor serializes all cache users.
// (The await itself suspends fine; it's the decode that holds the executor.)
actor BadImageCache {
    func image(for url: URL) async throws -> UIImage {
        let data = try await download(url)   // placeholder for a network fetch
        guard let image = UIImage(data: data) else {
            throw URLError(.cannotDecodeContentData)
        }
        return image   // expensive decode runs on the actor's executor
    }
}

// Better: actor stores Data or file URLs; decode after leaving the actor,
// using async let or a task group owned by the caller for structured concurrency.
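One way to realize the "better" shape, as a sketch assuming UIKit; the DataCache actor and the free image(for:cache:) function are illustrative names, not part of the outline's API. The actor owns only raw Data, and the UIImage decode happens on the caller's task after the actor call returns:

```swift
import UIKit

// Sketch: the actor owns only raw Data; decoding happens outside it.
actor DataCache {
    private var store: [URL: Data] = [:]
    func data(for url: URL) -> Data? { store[url] }
    func set(_ data: Data, for url: URL) { store[url] = data }
}

// Decode on the caller's task, off the cache actor's serial executor.
func image(for url: URL, cache: DataCache) async throws -> UIImage? {
    if let cached = await cache.data(for: url) {
        return UIImage(data: cached)
    }
    let (data, _) = try await URLSession.shared.data(from: url)
    await cache.set(data, for: url)
    return UIImage(data: data)   // expensive work, but not on the actor
}
```

The actor's methods stay synchronous and cheap, so its serial executor is only ever held for dictionary lookups, never for decoding.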

Follow-up angles

  • Disk cache (FileManager, SQLite): often a second actor or a dedicated serial queue so the network → memory path stays snappy; watch for same-file races across processes.
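A minimal sketch of the second-actor idea for disk, assuming a simple file-per-URL layout; DiskCache and its naming scheme are illustrative (note that hashValue is not stable across launches, so a real implementation would use a cryptographic hash of the URL instead):

```swift
import Foundation

// Sketch: disk I/O isolated on its own actor so it never contends
// with the in-memory cache actor's executor.
actor DiskCache {
    private let directory: URL

    init(directory: URL) { self.directory = directory }

    private func fileURL(for url: URL) -> URL {
        // Illustrative naming only; hashValue changes between launches.
        directory.appendingPathComponent(String(url.absoluteString.hashValue))
    }

    func read(for url: URL) -> Data? {
        try? Data(contentsOf: fileURL(for: url))
    }

    func write(_ data: Data, for url: URL) {
        try? data.write(to: fileURL(for: url), options: .atomic)
    }
}
```

Because DiskCache is a separate actor, a slow file write queues behind other disk work only, while memory-cache reads proceed on their own executor. The .atomic write option helps within a process, but it does not solve the cross-process same-file races noted above.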