By

Keith Williams

July 12, 2025

Finding and fixing O(n²) performance bottlenecks: Lessons from Cal.com's algorithm optimizations

A deep dive into how we detected, analyzed, and optimized poorly performing algorithms in a real production codebase.

Introduction

Performance bottlenecks in scheduling systems can mean the difference between a responsive user experience and users leaving your platform out of frustration. At Cal.com, we recently uncovered and optimized several critical O(n²) algorithms that were causing slowdowns as our user base and data volumes increased.

In this article, I will share four real algorithm optimizations from Cal.com’s codebase. You will learn how we detected O(n²) complexity, which optimization strategies worked best, and how we put automated checks in place to prevent future regressions.

The problem: How innocent code becomes O(n²)

As scheduling platforms grow, it is surprisingly easy for innocent-looking nested loops to become silent performance killers. Consider this real example from our codebase:

// O(n²): Every available slot is checked against every busy period
availableTimeSlots.map(slot => {
  return busy.some(busyTime => checkOverlap(slot, busyTime));
});

For 100 slots and 50 busy periods, that is 5,000 overlap checks. At larger scales—500 slots and 200 busy periods—it explodes to 100,000 operations. That is a twenty-fold increase in computational load for only a five-fold increase in data.

Case study 1: Slot conflict checking optimization

The original problem

The checkForConflicts function used nested iteration to check if booking slots conflicted with busy periods. This quickly became a performance issue with large schedules.

Original implementation: O(n²) nested loops

// Called once per candidate slot, so the total work is slots × busy periods
function checkForConflicts({ busy, time, eventLength }) {
  return busy.some(busyTime => {
    const slotStart = time.valueOf();
    const slotEnd = slotStart + eventLength * 60 * 1000;
    const busyStart = dayjs.utc(busyTime.start).valueOf();
    const busyEnd = dayjs.utc(busyTime.end).valueOf();
    return !(busyEnd <= slotStart || busyStart >= slotEnd);
  });
}

Optimized implementation: O(n log n) with sorting and early exit

function checkForConflicts({ busy, time, eventLength }) {
  const slotStart = time.valueOf();
  const slotEnd = slotStart + eventLength * 60 * 1000;

  const sortedBusyTimes = busy
    .map(busyTime => ({
      start: dayjs.utc(busyTime.start).valueOf(),
      end: dayjs.utc(busyTime.end).valueOf()
    }))
    .sort((a, b) => a.start - b.start);

  for (const busyTime of sortedBusyTimes) {
    if (busyTime.start >= slotEnd) {
      break;
    }
    if (busyTime.end <= slotStart) {
      continue;
    }
    return true;
  }
  return false;
}

What changed:
We sorted busy times by their start times, which allowed us to skip non-overlapping periods and exit early when no conflicts were possible. The result was a dramatic drop in the number of unnecessary comparisons.

Impact:
The complexity dropped from O(available slots × busy periods) to O(n log n), and most checks were skipped entirely in typical cases.
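
In practice the win compounds when the sort is hoisted out of the per-slot loop, so the O(n log n) cost is paid once and each slot check is a short, early-exiting scan. Here is a minimal sketch of that shape, using plain epoch-millisecond timestamps instead of dayjs (the helper names are illustrative, not Cal.com's):

```javascript
// Sketch: sort busy periods once, then check every slot against the
// sorted list with early exit. The loop body mirrors checkForConflicts.
function sortBusyTimes(busy) {
  return busy
    .map(b => ({ start: b.start, end: b.end }))
    .sort((a, b) => a.start - b.start);
}

function hasConflict(sortedBusyTimes, slotStart, eventLengthMinutes) {
  const slotEnd = slotStart + eventLengthMinutes * 60 * 1000;
  for (const busyTime of sortedBusyTimes) {
    if (busyTime.start >= slotEnd) break;    // sorted: nothing later can overlap
    if (busyTime.end <= slotStart) continue; // ends before the slot starts
    return true;                             // genuine overlap
  }
  return false;
}

// Usage: one sort, many slot checks
const MINUTE = 60 * 1000;
const sorted = sortBusyTimes([
  { start: 30 * MINUTE, end: 60 * MINUTE },
  { start: 0, end: 15 * MINUTE },
]);
const conflicts = [0, 15, 45, 90].map(m => hasConflict(sorted, m * MINUTE, 15));
// conflicts → [true, false, true, false]
```
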

Case study 2: Date range intersection using the two-pointer technique

The original problem

The intersect function previously used nested loops to find overlaps across multiple users’ availability.

Original implementation: O(n²) nested mapping

commonAvailability.forEach(commonRange => {
  userRanges.forEach(userRange => {
    const intersection = getIntersection(commonRange, userRange);
    if (intersection !== null) {
      intersectedRanges.push(intersection);
    }
  });
});

Optimized implementation: O(n log n) with two-pointer traversal

// Pre-sort each user's ranges once, caching timestamp values
const sortedRanges = ranges.map(userRanges =>
  userRanges
    .map(r => ({
      ...r,
      startValue: r.start.valueOf(),
      endValue: r.end.valueOf()
    }))
    .sort((a, b) => a.startValue - b.startValue)
);

// Two pointers then walk the current common availability and one user's
// sorted ranges in a single pass:
let commonIndex = 0;
let userIndex = 0;

while (commonIndex < commonAvailability.length && userIndex < userRanges.length) {
  const commonRange = commonAvailability[commonIndex];
  const userRange = userRanges[userIndex];

  const intersectStartValue = Math.max(commonRange.startValue, userRange.startValue);
  const intersectEndValue = Math.min(commonRange.endValue, userRange.endValue);

  if (intersectStartValue < intersectEndValue) {
    intersectedRanges.push(createIntersection(commonRange, userRange));
  }

  if (commonRange.endValue <= userRange.endValue) {
    commonIndex++;
  } else {
    userIndex++;
  }
}

What changed:
By sorting and using two pointers to walk through the ranges, we eliminated the need for nested iteration. This reduced the complexity from quadratic to nearly linear for most practical inputs.
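
The excerpt above is tied to the surrounding intersect function, so for clarity here is the core two-pointer walk as a standalone sketch over two lists of { start, end } timestamps, each sorted by start (names are illustrative, not the production API):

```javascript
// Sketch: intersect two lists of ranges, each sorted by start.
// Ranges are { start, end } in epoch milliseconds.
function intersectSortedRanges(a, b) {
  const result = [];
  let i = 0;
  let j = 0;
  while (i < a.length && j < b.length) {
    const start = Math.max(a[i].start, b[j].start);
    const end = Math.min(a[i].end, b[j].end);
    if (start < end) result.push({ start, end }); // non-empty overlap
    // Advance whichever range ends first; it cannot overlap anything later
    if (a[i].end <= b[j].end) i++;
    else j++;
  }
  return result;
}

intersectSortedRanges(
  [{ start: 0, end: 10 }, { start: 20, end: 30 }],
  [{ start: 5, end: 25 }]
);
// → [{ start: 5, end: 10 }, { start: 20, end: 25 }]
```
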

Case study 3: Interval tree for redundant range filtering

The original problem

The filterRedundantDateRanges function checked every range against every other to see if one contained the other. For large schedules, this became painfully slow.

Original implementation: O(n²) containment checking

return dateRanges.filter((range, index) => {
  return !dateRanges.some((otherRange, otherIndex) => {
    if (index === otherIndex) return false;
    return isCompletelyContained(range, otherRange);
  });
});

Optimized implementation: O(n log n) with an interval tree

export function filterRedundantDateRanges(dateRanges) {
  const sortedRanges = [...dateRanges].sort((a, b) => a.start.valueOf() - b.start.valueOf());
  const intervalNodes = createIntervalNodes(
    sortedRanges,
    (range) => range.start.valueOf(),
    (range) => range.end.valueOf()
  );
  const intervalTree = new IntervalTree(intervalNodes);
  const searchAlgorithm = new ContainmentSearchAlgorithm(intervalTree);

  return sortedRanges.filter((range, index) => {
    const containingIntervals = searchAlgorithm.findContainingIntervals(
      range.start.valueOf(),
      range.end.valueOf(),
      index
    );
    return containingIntervals.length === 0;
  });
}

What changed:
Instead of brute-force comparison, we used an interval tree structure to allow fast O(log n) containment queries. This is a classic approach in computational geometry that works wonders for scheduling applications.
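
The interval tree classes above come from Cal.com's own helpers. For the specific question "is this range completely contained in another?", a simpler sorted sweep reaches the same O(n log n) bound: sort by start ascending (ties broken by end descending) and track the maximum end seen so far. This is a sketch of an equivalent technique, not the code shipped in the repo:

```javascript
// Sketch: drop ranges completely contained in another range.
// After sorting by start ascending (end descending on ties), a range is
// covered by some earlier range exactly when the running max end reaches
// its own end. Exact duplicates keep one copy.
function filterContainedRanges(ranges) {
  const sorted = [...ranges].sort(
    (a, b) => a.start - b.start || b.end - a.end
  );
  const kept = [];
  let maxEnd = -Infinity;
  for (const range of sorted) {
    if (range.end <= maxEnd) continue; // contained in an earlier range
    kept.push(range);
    maxEnd = range.end;
  }
  return kept;
}

filterContainedRanges([
  { start: 0, end: 100 },
  { start: 10, end: 50 },  // inside [0, 100] → dropped
  { start: 90, end: 120 }, // extends past 100 → kept
]);
// → [{ start: 0, end: 100 }, { start: 90, end: 120 }]
```

The tree earns its keep when you also need other query types (overlap counts, stabbing queries); for pure containment filtering the sweep is equivalent and simpler.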

Performance results: Before and after

Our benchmarks showed significant improvements:

  • Slot conflict checking: Reduced from O(n²) to O(n log n), about ten times faster for 100 slots and 50 busy periods.

  • Date range intersection: Dropped from O(n²) to O(n log n), allowing hundreds of date ranges to be processed in milliseconds.

  • Redundant range filtering: Reduced from O(n²) to O(n log n), with up to a fifty-fold speedup for thousands of date ranges.

  • General range intersection: Across many patterns, we saw improvements of twenty-five times or more.


Easy techniques for detecting O(n²) algorithms

Recognize problem patterns

Watch for:

  • Nested array iterations, especially one .map, .forEach, or .filter inside another.

  • Array methods like .some, .find, or .filter inside loops or array callbacks.

  • Chained filters or nested mapping over large lists.

Test scalability explicitly

Write performance tests that run your function on data of increasing size and log execution times. If time grows quadratically with data size, you have an O(n²) issue.
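
A deterministic variant of this test (less noisy than wall-clock timing) is to count comparisons directly and assert how the count grows when the input doubles; quadrupling is the signature of O(n²). A sketch with entirely synthetic names and data:

```javascript
// Sketch: instrument the naive nested check with a comparison counter,
// then verify the count quadruples when the input doubles.
function countNaiveChecks(slots, busyPeriods) {
  let comparisons = 0;
  slots.forEach(slot => {
    busyPeriods.some(busy => {
      comparisons++;
      return busy.start < slot.end && busy.end > slot.start;
    });
  });
  return comparisons;
}

function makeSlots(n) {
  return Array.from({ length: n }, (_, i) => ({ start: i * 10, end: i * 10 + 5 }));
}

function makeBusy(n) {
  // Placed far after every slot so no check matches (worst case: no early exit)
  return Array.from({ length: n }, (_, i) => ({ start: 1e9 + i * 10, end: 1e9 + i * 10 + 5 }));
}

const small = countNaiveChecks(makeSlots(100), makeBusy(100)); // 10,000 checks
const large = countNaiveChecks(makeSlots(200), makeBusy(200)); // 40,000 checks
// large / small === 4: doubling the data quadrupled the work
```
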

Automate static analysis

Add lint rules or static analysis that:

  • Flags functions with nested loops or nested array methods.

  • Identifies recursion without memoization.

  • Detects functions with multiple nested .some, .find, or .filter calls.

Proven strategies to replace O(n²) code

  • Use binary search for lookups in sorted arrays.

  • Use two-pointer techniques for merging or intersecting sorted sequences.

  • Use tree-based data structures for interval queries or hierarchical data.
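
As a concrete instance of the first strategy: once busy periods are merged into non-overlapping ranges and sorted by start, a conflict check collapses to a single O(log n) binary search. A hedged sketch (names illustrative, timestamps in epoch milliseconds):

```javascript
// Sketch: with merged, non-overlapping busy periods sorted by start,
// only the latest period starting before slotEnd can overlap the slot.
function lastStartingBefore(sortedBusy, limit) {
  // Index of the last period with start < limit, or -1 if none
  let lo = 0, hi = sortedBusy.length - 1, found = -1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (sortedBusy[mid].start < limit) {
      found = mid;
      lo = mid + 1;
    } else {
      hi = mid - 1;
    }
  }
  return found;
}

function hasConflictBinary(sortedBusy, slotStart, slotEnd) {
  const i = lastStartingBefore(sortedBusy, slotEnd);
  return i !== -1 && sortedBusy[i].end > slotStart;
}

const busy = [
  { start: 0, end: 10 },
  { start: 20, end: 30 },
  { start: 40, end: 50 },
];
hasConflictBinary(busy, 12, 18); // → false: falls in the gap
hasConflictBinary(busy, 25, 35); // → true: overlaps [20, 30)
```

The non-overlap precondition matters: with overlapping busy periods the predecessor check alone is not sufficient, which is why merging first (or using a tree) is part of the technique.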

Automating performance checks

Add performance benchmarks to your CI pipeline that run critical algorithms on realistic data. Compare execution times on every pull request and block merges that regress. Add custom lint rules to prevent obvious O(n²) patterns from ever reaching production. Implement runtime monitoring for critical paths to catch new regressions before users notice.

Key takeaways

  • Prevention is always better than cure. Automated checks will catch O(n²) algorithms before they cause trouble.

  • Always profile before optimizing. Use tests to find actual bottlenecks.

  • Choose the right data structures for your needs. Sorted arrays, two-pointer approaches, and trees are invaluable.

  • Test with realistic, large-scale data. Small tests can hide big problems.

  • Monitor in production. Automated alerts can catch regressions early.

Conclusion

The optimizations we made at Cal.com show that significant performance gains are possible by applying proven algorithmic techniques. Combine code review, smart data structures, and automated monitoring to keep your application fast as it scales.

Performance should be built into every stage of development, from code reviews that flag suspicious patterns to CI pipelines that run scalability tests and runtime monitoring that detects regressions.

Today’s fast algorithm can easily become tomorrow’s bottleneck as your data grows. Stay vigilant, test at scale, and always be ready to optimize.