Sensor calibration


The calibration screen for 10 Femto Bolt cameras, after completing calibration.

In this setup step, the positions and orientations of all cameras are calibrated with the help of one or more calibration patterns. This allows the program to merge the data of all cameras consistently to create 3D reconstructions.

If only a single camera is used, then this step is not necessary, and it will automatically be skipped during the guided recording setup.

Sensor calibration requires the use of at least one calibration pattern. If you do not have a calibration pattern yet, please follow this link to learn how to create one.


A single calibration pattern is sufficient to perform calibration. However, we recommend using multiple calibration patterns for best results, as this helps reduce calibration effort and improve accuracy.

Camera calibration instructions

A few things should be kept in mind while calibrating cameras.

First, while a calibration image is being taken, all patterns should remain completely static. This may be achieved by mounting the pattern(s) on a tripod or other kind of mount. For example, this photo shows two calibration patterns on a tripod, mounted with adjustable orientation:


A calibration pattern must be visible simultaneously in at least two cameras to advance the calibration. You may imagine that the calibration image, if taken successfully, will then “connect” these cameras.

For a successful calibration, all cameras need to be “connected”. It is okay if they are connected indirectly. For example, if there are three cameras A, B, and C, and both A and B, as well as B and C are connected, then a complete calibration can be computed for the three cameras.
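The "connected" requirement is ordinary graph connectivity: cameras are nodes, and each successful calibration image links the cameras that saw the pattern. As an illustration only (this is a hypothetical sketch, not the program's actual implementation), a connectivity check could look like this in Python:

```python
# Hypothetical sketch: checking whether all cameras are "connected"
# through calibration images. Each image is represented as the set of
# cameras in which the pattern was visible.

def all_connected(cameras, images):
    """cameras: list of camera IDs; images: list of sets of camera IDs
    (the cameras that saw the pattern in one calibration image)."""
    if not cameras:
        return True
    # Flood fill from the first camera over the "connected" relation.
    reached = {cameras[0]}
    changed = True
    while changed:
        changed = False
        for image in images:
            if image & reached and not image <= reached:
                reached |= image
                changed = True
    return reached == set(cameras)

# The example from the text: images connecting A-B and B-C suffice
# to connect all three cameras indirectly.
print(all_connected(["A", "B", "C"], [{"A", "B"}, {"B", "C"}]))  # True
print(all_connected(["A", "B", "C"], [{"A", "B"}]))              # False
```

This also shows why an image visible in only one camera cannot advance the calibration: a one-element set never connects anything new.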

However, it is strongly advisable to take more calibration images than the minimum in order to obtain improved accuracy. Move the calibration pattern(s) to a different position between each take, such that each new image will provide new calibration information.

Not every attempt to take a calibration image will be successful. This can happen for different reasons, such as:

  • The pattern(s) not being visible in at least two cameras.

  • The image being rejected because the pattern(s) moved too fast.

  • The wrong pattern type being selected.

If you are unsure why an image was rejected, please have a look at the calibration log. It will contain one line for each attempt to take a calibration image, which may for example look like this:


After a calibration image is taken successfully, the bottom panel highlights the cameras in which the calibration pattern(s) were detected. It also shows the total number of measurements (“matches”) per camera in the calibration process so far, as well as an accuracy metric (lower is better):


Example of the bottom panel during calibration. The top row shows the live color images, the bottom row shows the live infrared images. The cameras in which calibration patterns were detected in the last calibration image are highlighted with a brighter background.

If the accuracy is not good in the beginning, do not worry. Taking more calibration images should usually lead to improved accuracy.

For best results, it is advisable to take images with the calibration pattern being where you expect the scene content to be later. For example, if you want to film a person, we recommend taking calibration images with the pattern(s) being where the person’s feet, hips, arms, head, and so on will be later.

We recommend continuing to take images with the pattern(s) in different positions until each camera has at least five to ten measurements. Once this is done, the status display at the top of the bottom panel will turn green.

Once you start taking calibration images, the program displays a live 3D point cloud view from all cameras that have already been connected. With two cameras, viewed from the top, this may for example look like this:



Raw point clouds are displayed as preview in this and the following setup steps because they are fast to display and require no configuration. However, they can contain noticeable artifacts such as noise and color fringes. These point clouds are not representative of the final results.

The calibrated camera positions are displayed as yellow pyramids. The positions where the calibration pattern(s) were successfully observed are displayed as colored points, with the color showing the accuracy of each point: Green points are consistent with the calibration, while yellow and red points indicate worse consistency. The latter may indicate that more measurements are required or that the measurement was an outlier, which may be ignored.


If the cameras are moved in any way after calibrating them, even just a tiny bit, for example by pulling on one of their cables, then the calibration must be re-done. Otherwise, the quality of the results will be degraded.

User interface reference


The “New calibration” button erases any existing calibration data and starts a new calibration.

Below this, the type of calibration pattern to use can be chosen. Once a calibration has been started, the pattern type cannot be changed anymore. Thus, if this drop-down widget is grayed out, you must start a new calibration by clicking the corresponding button above before you can change the pattern type.

“Open pattern PDF” opens the selected pattern type in your operating system’s default PDF viewer, showing what the pattern looks like and allowing you to print it.


Below, there are two ways to take calibration images:

  • Clicking “Take calibration image” to take a calibration image individually (recommended if static placement of the pattern(s) is possible).

  • Clicking “Start continuous calibration”. The program will then take calibration images continuously until you click “Stop calibrating”.


Setting up remote control allows you to take calibration images conveniently from your phone or other external device.


In “continuous calibration” mode, the program measures the pattern motion and rejects images if a pattern moves too fast. If you have to do the calibration with a hand-held pattern, this may end up rejecting all of your images unless you hold the pattern very steadily. In this case, consider increasing the “Pattern motion threshold” under “Advanced” to relax this condition.

The pattern motion threshold refers to how much apparent motion of the calibration pattern(s) the program will allow before rejecting a calibration image. This applies to continuous calibration mode only; when taking individual calibration images, the program does not check for pattern motion.

For best calibration results, the pattern(s) should be completely still while a calibration image is taken, but this is not always feasible. A smaller threshold value rejects images more strictly, which may help achieve higher-quality calibration results. However, it can also make it hard to take calibration images with a hand-held pattern. The default value is 5.
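As an illustration of how such a motion check might work (a hypothetical sketch under simplified assumptions, not the program's actual code), one could compare the detected pattern corner positions between consecutive frames and reject the image if any corner moved farther, in pixels, than the threshold:

```python
# Hypothetical sketch of a pattern-motion check like the one described
# above. corners_prev / corners_curr: lists of (x, y) pixel positions
# of the detected pattern corners in two consecutive frames.

def pattern_moved_too_fast(corners_prev, corners_curr, threshold=5.0):
    """Return True if any corner moved more than `threshold` pixels."""
    for (x0, y0), (x1, y1) in zip(corners_prev, corners_curr):
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > threshold:
            return True
    return False

prev = [(100.0, 200.0), (150.0, 200.0)]
curr = [(102.0, 201.0), (151.0, 199.0)]
print(pattern_moved_too_fast(prev, curr))       # False: max motion ~2.2 px
print(pattern_moved_too_fast(prev, curr, 1.0))  # True with a stricter threshold
```

This makes the trade-off concrete: raising the threshold accepts more apparent motion per frame, which helps with hand-held patterns at the cost of potential motion blur in the accepted images.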

“Create calibration recording” is mostly useful for sending sample data to ScannedReality in case you experience issues with calibration. There is no need to use this setting during normal operation.


The calibration log contains one entry for each attempt to take a calibration image. Each entry shows whether the attempt was successful, or gives the reason for why it was not successful. The most recent attempt is shown at the bottom.


Example of the bottom panel during calibration. The top row shows the live color images, the bottom row shows the live infrared images. The cameras in which calibration patterns were detected in the last calibration image are highlighted with a brighter background.

The bottom panel shows live 2D streams of all color and infrared cameras that are being calibrated.

At the top of the panel, the current state of the calibration is shown, indicating the recommended next step.

For each camera, the number of calibration images (“matches”) that the program was able to use is shown. Note that not every valid calibration image is necessarily usable right away: to compute a single, consistent calibration for all cameras, the program requires images that relate the cameras’ positions to each other. An isolated camera that shares no connecting information with the others may therefore remain at zero matches until it gets “connected” to them.

The accuracy measurement at the bottom of each camera’s column is the median distance between 3D points measured by this device’s depth camera and corresponding points reconstructed from pattern matches by the calibration algorithm. This metric indicates how consistent the calibration is with the images of the depth camera. Lower is better.
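The metric can be pictured as follows (a hypothetical sketch under simplified assumptions, not the program's code): given the per-match distances between the depth camera's measured 3D points and the corresponding points reconstructed by the calibration, the reported value is simply their median:

```python
# Hypothetical sketch of the per-camera accuracy metric described
# above: the median of the distances (here assumed to be in
# millimeters) between 3D points measured by the depth camera and the
# corresponding points reconstructed by the calibration algorithm.
from statistics import median

def accuracy_metric(distances_mm):
    """Median point-to-point distance; lower is better."""
    return median(distances_mm)

# Example: five pattern matches for one camera. The outlier (3.5 mm)
# barely affects the result, which is why a median is a robust choice.
print(accuracy_metric([1.2, 0.8, 3.5, 1.0, 0.9]))  # 1.0
```

Using a median rather than a mean means a few outlier measurements, such as the red points mentioned above, do not dominate the reported accuracy.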