Miro and Figma are online collaborative canvas tools that became very popular during the pandemic. Instead of using sticky notes on a physical wall, you can add virtual sticky notes (and an array of other things) to a virtual canvas. This lets teams collaborate virtually in ways that feel familiar from the physical world.
I previously wrote an article showing how to create a Figma/Miro clone in React and TypeScript. The code in that article was designed to be as easy to understand as possible, and in this article, we're going to optimize it. The code used DndKit for dragging and dropping, and D3 Zoom for panning and zooming. There were four components (App, Canvas, Draggable and Addable), and about 250 lines of code. You do not need to read the original article to understand this one.
Standard optimizations such as useCallback and memo made dragging about twice as fast, but made no difference to panning and zooming. More creative and intensive optimizations made it about ten times as fast in most cases.
You can see the optimized code on GitHub, and there is a live demo on GitHub Pages to test out the speed with 100,000 cards.
How to Measure Performance in React Apps
There are three common ways to measure performance in React apps:

- The React Dev Tools profiler
- The Chrome Dev Tools profiler, especially using custom tracks
- The JavaScript performance API
These tools are all great, but none of them is quite the right fit in this case. In most codebases, the time spent executing JavaScript (both our code and the React framework's) is the primary issue. However, after all your code has run and React has updated the DOM, the browser still has a lot of work to do: recalculating styles, laying out elements, painting, and compositing.
In this case, the browser's layout and rendering time was significant, and it is not accounted for by the React profiling. You can capture it with custom tracks in the Chrome Dev Tools profiler, but they are cumbersome to use.
For us, the JavaScript performance API was the best option: it gives results close to what the user actually experiences, and it is relatively easy to use.
First, we make a call to performance.mark in the event handler that starts the action, with a string to describe the time point. For example, when starting a zoom or pan operation:

```tsx
zoomBehavior.on("start", () => {
  performance.mark('zoomingOrPanningStart');
});
```
Then, in a useEffect hook, we call performance.mark again, and call performance.measure to calculate the time between the two points:
```tsx
useEffect(() => {
  performance.mark('zoomingOrPanningEnd');
  performance.measure('zoomingOrPanning', 'zoomingOrPanningStart', 'zoomingOrPanningEnd');
});
```
The React docs state that useEffect usually fires after the browser has painted the updated screen, which is exactly what we want.
This isn't perfect: the numbers vary with the machine's specifications and whatever else the machine is doing at the time, but it was good enough to verify which optimizations worked best. It is possible to go further if you need to, for example by using Cypress to automate and profile scenarios (potentially running them many times to get a good mean), or by using BrowserStack to test on a variety of devices.
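To collect the results, you can read the recorded measures back with performance.getEntriesByName in the dev tools console, or register a PerformanceObserver to log each measure as it completes. A minimal sketch (the logging format here is my own, not from the original code):

```tsx
// Log every performance.measure result as it is recorded.
// Register this once at app startup, for example in index.tsx.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name}: ${entry.duration.toFixed(1)}ms`);
  }
});
observer.observe({ entryTypes: ['measure'] });
```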
How to Investigate the Performance
Most of the investigation involved using the React Dev Tools profiler to record profiles of user interactions. Each profile shows how many commits there were and how long each one took, which quickly reveals whether there are too many commits.
Each commit displays a flame chart showing which components rendered and why. This makes it much easier to find ways to avoid the re-rendering, and to check that memoization strategies are working as expected. There are some caveats though. The profiler often says 'The parent component rendered', which is misleading default text shown when it doesn't know what happened (the real cause is often a change in a parent context). It also says things like 'hook 9 changed', which makes it time-consuming to work out exactly which hook changed.
The flame chart also shows how long each component took to render. This helps target problem components that we need to focus on.
How to Optimize Panning and Zooming the Canvas
The original Canvas element used the CSS transform translate3d(x, y, k) to pan and zoom the canvas. This works, but it doesn't scale child elements, so whenever the zoom changes, every card on the canvas has to be re-rendered with a new CSS transform for the new zoom level (scale(${canvasTransform.k})).
```tsx
<div
  ...
  className="canvas"
  style={{
    transform: `translate3d(${transform.x}px, ${transform.y}px, ${transform.k}px)`,
    ...
  }}>
  ...
</div>
```

```tsx
<div
  className="card"
  style={{
    ...
    transform: `scale(${canvasTransform.k})`,
  }}>
  ...
</div>
```
I changed this to use translateX(x) translateY(y) scale(k), which has the same effect, but does scale child elements. This way, when the zoom changes, none of the cards are re-rendered (the style of the card component no longer uses canvasTransform.k).
```tsx
<div
  ...
  className="canvas"
  style={{
    transform: `translateX(${transform.x}px) translateY(${transform.y}px) scale(${transform.k})`,
    ...
  }}>
  ...
</div>
```

```tsx
<div
  className="card"
  ...
>
  ...
</div>
```
The Canvas still needed to re-render whenever the pan or zoom changed. It is possible to prevent this with useRef, updating the CSS transform through direct JavaScript DOM manipulation in the d3-zoom event handler. This doesn't make a significant improvement to the performance though, and it is a definite hack, so the trade-off is not worthwhile.
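For reference, a rough sketch of what that hack might look like (illustrative only, and assuming the d3-zoom behavior is attached to the canvas div itself):

```tsx
import { useEffect, useRef, type ReactNode } from 'react';
import { select } from 'd3-selection';
import { zoom } from 'd3-zoom';

// The zoom handler writes the transform straight to the DOM node,
// bypassing React state, so the Canvas never re-renders during pan/zoom.
function Canvas({ children }: { children: ReactNode }) {
  const ref = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const el = ref.current;
    if (!el) return;
    const zoomBehavior = zoom<HTMLDivElement, unknown>().on('zoom', (event) => {
      const { x, y, k } = event.transform;
      el.style.transform = `translateX(${x}px) translateY(${y}px) scale(${k})`;
    });
    select(el).call(zoomBehavior);
    return () => {
      select(el).on('.zoom', null); // detach the zoom behavior on unmount
    };
  }, []);

  return (
    <div ref={ref} className="canvas">
      {children}
    </div>
  );
}
```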
Both zooming and panning get a bit slower when the canvas is zoomed very far out and there are a lot more cards visible on the screen, simply because the browser has to render them all. It's still workable at 100,000 cards though. There are things you can do about this. An easy option is to limit how far the user can zoom out. This is a functional change, so it may not meet your requirements, but it is easy to do in d3-zoom using scaleExtent:

```tsx
zoom<HTMLDivElement, unknown>().scaleExtent([0.1, 100])
```
Another option is to create a bitmap for very low zoom levels and render that as a single element. This may be difficult, but it means that there will be no change to the functionality.
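As a very rough sketch of the idea, assuming each card has x and y coordinates (the sizes and drawing logic are placeholders):

```tsx
// When zoomed far out, draw every card as a tiny rectangle on a single
// <canvas> element, instead of rendering thousands of real DOM nodes.
function renderCardsToBitmap(
  cards: { x: number; y: number }[],
  canvas: HTMLCanvasElement
) {
  const ctx = canvas.getContext('2d');
  if (!ctx) return;
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  for (const card of cards) {
    // At a very low zoom level, a card is only a few pixels across.
    ctx.fillRect(card.x, card.y, 4, 3);
  }
}
```

The browser then only has one element to lay out and paint, at the cost of re-drawing the bitmap whenever the cards change.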
How to Optimize Dragging Cards Around the Canvas
Starting a drag
The useDraggable hook from DndKit, which subscribes to the DndContext, causes some re-renders when starting a drag operation.
It is possible to improve this by changing the Draggable component to contain just this hook (and the things that use it), and moving everything else into a DraggableInner component wrapped in memo. This works well for reducing the re-renders, in that the DraggableInner almost never gets re-rendered, and it improves the speed of starting a drag operation. However, it was still fairly slow, and all the time was spent under the DndContext. A sketch of the split is below.
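A sketch of what that split might look like (the Card type and the card content are my own placeholders):

```tsx
import { memo } from 'react';
import { useDraggable } from '@dnd-kit/core';

type Card = { id: string; text: string };

// The expensive card content is isolated in a memoized inner component,
// so it does not re-render when the DndContext state changes.
const DraggableInner = memo(function DraggableInner({ card }: { card: Card }) {
  return <>{card.text}</>;
});

// The thin outer component is the only part that uses the useDraggable hook.
function Draggable({ card }: { card: Card }) {
  const { attributes, listeners, setNodeRef, transform } = useDraggable({
    id: card.id,
  });
  return (
    <div
      ref={setNodeRef}
      {...listeners}
      {...attributes}
      style={
        transform
          ? { transform: `translate(${transform.x}px, ${transform.y}px)` }
          : undefined
      }
    >
      <DraggableInner card={card} />
    </div>
  );
}
```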
A better option is to create a new NonDraggable component that looks exactly like the Draggable component, but does not hook up to DndContext. These cards are shown on the Canvas, and have an onMouseEnter event that swaps in the Draggable component for the active card, so that dragging continues to work.
```tsx
const onMouseEnter = useCallback(() => {
  setHoverCard(card);
}, [card, setHoverCard]);
```
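Putting this together, a sketch of the NonDraggable component (the Card type and props are assumptions based on the description above):

```tsx
import { memo, useCallback } from 'react';

type Card = { id: string; text: string };

// Looks exactly like Draggable, but has no connection to DndContext.
// Hovering it asks the Canvas to swap in the real Draggable component.
const NonDraggable = memo(function NonDraggable({
  card,
  setHoverCard,
}: {
  card: Card;
  setHoverCard: (card: Card) => void;
}) {
  const onMouseEnter = useCallback(() => {
    setHoverCard(card);
  }, [card, setHoverCard]);

  return (
    <div className="card" onMouseEnter={onMouseEnter}>
      {card.text}
    </div>
  );
});
```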
This works well, and significantly improves the speed when starting a drag operation, but it was still quite slow with large numbers of cards. Nearly nothing was getting re-rendered, but there is still a time cost to using memo, as React needs to check whether each component's props have changed.
To fix this, we create an AllCards component that contains all the cards on the canvas as NonDraggable components. Because it always renders all the cards, it almost never needs to be re-rendered, so it is wrapped in memo. Instead of each individual card using memo (with the associated time cost), there is now just one component using memo. To keep dragging working, the active Draggable component is rendered on top, obscuring the NonDraggable component beneath it. There is also a Cover component beneath that, so that when the Draggable component is dragged away, the NonDraggable component underneath remains hidden.
Original code, where each card is a Draggable component:

```tsx
<DndContext ...>
  {cards.map((card) => (
    <Draggable card={card} key={card.id} canvasTransform={transform} />
  ))}
</DndContext>
```
Optimized code, where the AllCards component renders all the cards as NonDraggable components, followed by a Cover and a Draggable component for the active card:

```tsx
<AllCards cards={cards} setHoverCard={setHoverCard} />
<DndContext ...>
  <Cover card={hoverCard} />
  <Draggable card={hoverCard} canvasTransform={transform} />
</DndContext>
```
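The Cover component itself can be very simple. A sketch, assuming cards carry their own position (the styling details are placeholders):

```tsx
type Card = { id: string; x: number; y: number };

// A blank card-sized rectangle rendered at the hovered card's position,
// hiding the NonDraggable twin while the Draggable copy is dragged away.
function Cover({ card }: { card: Card | null }) {
  if (!card) return null;
  return (
    <div
      className="card cover"
      style={{ position: 'absolute', left: card.x, top: card.y }}
    />
  );
}
```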
This works very well. With a low number of cards the speed is about the same, but with a high number of cards it's about twenty times faster.
There is now a new potential performance issue with the onMouseEnter event that swaps in the Draggable component for the active card, but this only adds two components to the DOM, and it is very quick even with large numbers of cards.
Finishing a drag
Finishing a drag operation is hard to optimize: the position of the card changes, so that card does need to re-render, which means that the AllCards component has to re-render as well.
You can see the original code below. Even when using memo with the Draggable component, the end-drag operation still takes 2500ms with 100,000 cards, mostly due to the complexity of the Draggable component and its integration with DndKit.
```tsx
<DndContext ...>
  {cards.map((card) => (
    <Draggable card={card} key={card.id} canvasTransform={transform} />
  ))}
</DndContext>
```
However, we now use the NonDraggable components, which all memoize successfully, so only the dragged card is re-rendered. There is still a time cost to the memo comparisons, and this is the slowest part of the solution, but it brings the time down to 500ms with 100,000 cards.
```tsx
const NonDraggable = memo(...);

const AllCards = memo(({ cards, setHoverCard }) => (
  <>
    {cards.map((card) => (
      <NonDraggable card={card} key={card.id} setHoverCard={setHoverCard} />
    ))}
  </>
));
```
Results
The base unoptimized version started to get slow between 1000 and 5000 cards. Standard optimizations improved this to around 10,000 cards, and the more intensive optimizations took it to about 100,000 cards. The trade-off is that the code becomes significantly more complicated, which makes it harder to understand and modify, especially for people new to the codebase.
| Cards | Version | Pan (ms) | Zoom (ms) | Start drag (ms) | End drag (ms) | Card hover (ms) |
| --- | --- | --- | --- | --- | --- | --- |
| 1000 | Base | 3 | 4 | 200 | 50 | - |
| 1000 | Basic optimization | 2 | 3 | 200 | 30 | - |
| 1000 | Intensive optimization | 10 | 10 | 7 | 15 | 2 |
| 5000 | Base | 20 | 150 | 450 | 200 | - |
| 5000 | Basic optimization | 20 | 150 | 200 | 80 | - |
| 5000 | Intensive optimization | 10 | 10 | 25 | 40 | 2 |
| 10,000 | Base | 50 | 300 | 900 | 400 | - |
| 10,000 | Basic optimization | 50 | 300 | 400 | 180 | - |
| 10,000 | Intensive optimization | 25 | 25 | 50 | 50 | 2 |
| 50,000 | Base | 1000 | 1500 | 4000 | 1800 | - |
| 50,000 | Basic optimization | 1000 | 1500 | 1900 | 900 | - |
| 50,000 | Intensive optimization | 150 | 150 | 150 | 250 | 5 |
| 100,000 | Base | - | - | - | - | - |
| 100,000 | Basic optimization | 3000 | 4500 | 5000 | 2500 | - |
| 100,000 | Intensive optimization | 150 | 250 | 300 | 500 | 15 |
Summary
It is unusual to display 100,000 or more items on screen in a standard React app, but in a highly graphical codebase it becomes much more likely.
With these numbers, the browser rendering engine is likely to take a significant amount of time, so it is best to use the performance API to measure performance, instead of the usual React tools.
Standard React optimization strategies do work and improve the situation, but you sometimes need to go further, by finding ways to avoid renders, and even to avoid too many memo comparisons.