
Practical interview questions

Scenario-style prompts with sample answer outlines. Focus is on how you would design and reason in real codebases.

Question 3

Image loading and rendering performance

Your screen shows lots of remote images and scrolling gets slow. How would you optimize loading, decoding, caching, and rendering?

Follow-ups

  • Why does decoding matter?
  • Resize before display? Memory vs disk cache?

Answer outline

Optimize across four areas — each has a distinct cause and fix:

  1. Decoding — UIImage decodes large bitmaps on first draw, costing RAM and CPU. Always downsample to display pixel size using ImageIO — never decode full resolution into a small view.
  2. Loading — async fetch with cancellation on cell reuse. Prefetch the next rows when the API allows.
  3. Caching — two layers: memory (NSCache of decoded bitmaps) for fast reuse, disk (URLCache or a library) for cold start and scroll-back. Cap both — unbounded caches cause OOM.
  4. Rendering — avoid scaling huge images in draw; prefer pre-sized assets. cornerRadius and masks can trigger offscreen passes — simplify or precompose.
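The loading step above can be sketched with UIKit's prefetching protocol. This is a sketch only — it assumes the `ThumbnailPipeline` actor shown later in this answer, a single-section feed, and a 64-point thumbnail size; the property names are illustrative.

```swift
import UIKit

final class FeedViewController: UIViewController, UICollectionViewDataSourcePrefetching {
    let pipeline = ThumbnailPipeline()
    var urls: [URL] = []
    private var prefetchTasks: [IndexPath: Task<Void, Never>] = [:]

    func collectionView(_ collectionView: UICollectionView,
                        prefetchItemsAt indexPaths: [IndexPath]) {
        for indexPath in indexPaths {
            prefetchTasks[indexPath] = Task {
                // Warms the memory cache; the decoded image is discarded here
                // and re-fetched instantly when the cell is configured.
                _ = try? await pipeline.loadImage(
                    url: urls[indexPath.item],
                    pointSize: CGSize(width: 64, height: 64),
                    scale: view.traitCollection.displayScale
                )
            }
        }
    }

    func collectionView(_ collectionView: UICollectionView,
                        cancelPrefetchingForItemsAt indexPaths: [IndexPath]) {
        for indexPath in indexPaths {
            prefetchTasks.removeValue(forKey: indexPath)?.cancel()
        }
    }
}
```

Set `collectionView.prefetchDataSource` to this controller; production code would also clear finished tasks from the dictionary rather than letting entries accumulate.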

Principles

  • Bytes in ≠ pixels shown — match decode size to on-screen dimensions × scale.
  • Cancel in-flight loads on cell reuse; identity-check after await.
  • Disk cache is cheaper than RAM pressure — cap memory caches and monitor growth in large feeds.
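Capping the memory layer can look like the following; the 64 MB limit and the bytes-per-pixel cost formula are illustrative choices, not fixed numbers.

```swift
import UIKit

let memoryCache = NSCache<NSURL, UIImage>()
// NSCache evicts by total cost once this limit is exceeded.
memoryCache.totalCostLimit = 64 * 1024 * 1024   // ~64 MB of decoded bitmaps

// Rough byte cost of a decoded bitmap: pixel width × pixel height × 4 (RGBA).
func cost(of image: UIImage) -> Int {
    guard let cg = image.cgImage else { return 0 }
    return cg.width * cg.height * 4
}

// Store with an explicit cost so eviction tracks real memory use:
// memoryCache.setObject(image, forKey: url as NSURL, cost: cost(of: image))
```

Without a cost per entry, `totalCostLimit` has nothing to count against, so `setObject(_:forKey:cost:)` is the variant worth using here.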

An actor serializes cache access and keeps the pipeline injectable for tests. Network fetching and CPU decode stay off the main actor so that only the image assignment runs on main. Identity-check after await in cells.

Fetch → downsample → memory cache → MainActor
import ImageIO
import UIKit

/// Holds `URLSession` + RAM cache; `actor` isolation serializes cache access and keeps the API easy to inject in tests.
actor ThumbnailPipeline {
    private let memoryCache = NSCache<NSURL, UIImage>()
    private let session: URLSession

    init(session: URLSession = .shared) {
        self.session = session
    }

    /// Fetch → downsample decode → memory cache (all before returning to caller).
    func loadImage(
        url: URL,
        pointSize: CGSize,
        scale: CGFloat
    ) async throws -> UIImage {
        let key = url as NSURL
        if let cached = memoryCache.object(forKey: key) { return cached }

        let (data, _) = try await session.data(from: url)
        let decoded = try await decodeDownsampled(
            data: data,
            maxPixelSize: max(pointSize.width, pointSize.height) * scale
        )
        memoryCache.setObject(decoded, forKey: key)
        return decoded
    }

    private func decodeDownsampled(data: Data, maxPixelSize: CGFloat) async throws -> UIImage {
        // `Task.detached` so the CPU-bound decode runs on the global executor
        // instead of inheriting this actor's serial isolation (which would make
        // concurrent cache lookups queue behind every decode).
        try await Task.detached(priority: .userInitiated) {
            let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
            guard let source = CGImageSourceCreateWithData(data as CFData, sourceOptions) else {
                throw URLError(.cannotDecodeContentData)
            }
            let downsample: [CFString: Any] = [
                kCGImageSourceCreateThumbnailFromImageAlways: true,
                kCGImageSourceThumbnailMaxPixelSize: Int(maxPixelSize),
                kCGImageSourceCreateThumbnailWithTransform: true,
                // Force the decode now, on this background task, not at first draw.
                kCGImageSourceShouldCacheImmediately: true,
            ]
            guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsample as CFDictionary) else {
                throw URLError(.cannotDecodeContentData)
            }
            return UIImage(cgImage: cgImage)
        }.value
    }
}

// let thumbnails = ThumbnailPipeline()  // inject / store one instance
// Task {
//   let img = try await thumbnails.loadImage(url: u, pointSize: thumb.bounds.size, scale: traitCollection.displayScale)
//   guard u == self.boundURL else { return }
//   await MainActor.run { self.thumb.image = img }
// }
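In a UIKit cell, the same identity check pairs with explicit cancellation in prepareForReuse. A sketch, assuming the `ThumbnailPipeline` actor above; the cell's property names are illustrative.

```swift
import UIKit

final class ThumbCell: UICollectionViewCell {
    let thumb = UIImageView()
    private var boundURL: URL?
    private var loadTask: Task<Void, Never>?

    func configure(url: URL, pipeline: ThumbnailPipeline) {
        boundURL = url
        loadTask = Task { [weak self] in
            guard let self,
                  let img = try? await pipeline.loadImage(
                      url: url,
                      pointSize: CGSize(width: 64, height: 64),
                      scale: self.traitCollection.displayScale
                  ),
                  self.boundURL == url  // identity check after await
            else { return }
            self.thumb.image = img
        }
    }

    override func prepareForReuse() {
        super.prepareForReuse()
        loadTask?.cancel()   // cancel the in-flight load on reuse
        boundURL = nil
        thumb.image = nil
    }
}
```

Cancellation stops wasted network and decode work; the identity check catches the race where a stale load finishes after the cell has been rebound.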

AsyncImage loads a URL asynchronously with no boilerplate, but it does not replace a custom downsample pipeline — you have no control over decode size, and HTTP caching follows URLCache, not your in-memory bitmap cache. Fine for simple icons; use ThumbnailPipeline or a dedicated library for heavy feeds of large assets.

SwiftUI — `AsyncImage`
import SwiftUI

AsyncImage(url: url, scale: UIScreen.main.scale) { phase in
    switch phase {
    case .empty:
        ProgressView()
    case .success(let image):
        image
            .resizable()
            .scaledToFill()
    case .failure:
        Image(systemName: "photo")
    @unknown default:
        EmptyView()
    }
}
.frame(width: 64, height: 64)
.clipped()

Inject one ThumbnailPipeline (via environment or init) and use .task(id:) so SwiftUI cancels the previous load when the url changes — same identity idea as UIKit cell reuse.

SwiftUI — same actor + `.task`
import SwiftUI

struct FeedThumb: View {
    let url: URL
    let pipeline: ThumbnailPipeline
    @Environment(\.displayScale) private var displayScale
    @State private var image: UIImage?

    var body: some View {
        Group {
            if let image {
                Image(uiImage: image)
                    .resizable()
                    .scaledToFill()
            } else {
                Color.secondary.opacity(0.2)
            }
        }
        .frame(width: 64, height: 64)
        .clipped()
        .task(id: url) {
            image = try? await pipeline.loadImage(
                url: url,
                pointSize: CGSize(width: 64, height: 64),
                scale: displayScale
            )
        }
    }
}

Follow-up angles

  • HEIF/JPEG decode isn’t free — worst case is width × height × 4 bytes of RGBA in memory.
  • AsyncImage is a convenience; for heavy feeds you still need a pipeline or library that controls decode size and maintains a decoded RAM cache.
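The first bullet is easy to quantify; the figures below are straightforward arithmetic comparing a 12 MP source against a 128-pixel thumbnail.

```swift
// 12 MP photo decoded to RGBA: 4032 × 3024 × 4 bytes ≈ 46.5 MiB resident.
let fullBytes = 4032 * 3024 * 4   // 48_771_072 bytes
// Same image downsampled to a 128 × 128 px thumbnail: 64 KiB.
let thumbBytes = 128 * 128 * 4    // 65_536 bytes
// Downsampling before display cuts decode memory by ~99.9% in this case.
```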