Recording Setup

Connecting your sensors

Before starting the guided recording setup, all cameras and microphones that you want to use for recording should be connected to the PC (though it is still possible to change the connected cameras during the next step).

Please see the corresponding instructions for your camera type below:

How to physically arrange the cameras is discussed here.

With all sensors connected, click the button “Guided recording setup” in the middle of the home screen. This brings you to the device configuration as the first setup step.

Device configuration


The device configuration setup with 10 connected Femto Bolt cameras and no configured microphones.

This setup step lets you review the connected devices and configure device settings that will remain fixed once the devices are started.

At the top, the list of detected compatible cameras is shown, along with the settings for the type of camera used. These settings include, for example, the image resolution and the frame rate. The default values should work well for most purposes. For documentation on these settings, which can also be accessed in the settings tab, refer to sensor settings.

It is still possible to change the connected cameras during this setup step. If you use Azure Kinect cameras, “Refresh cameras” must be clicked to poll for changes. If you use Femto Bolt cameras, changes should appear automatically after a couple of seconds.

Below the cameras, the list of configured microphones is shown. If you would like to record audio, you can configure which microphone the program should use here. For documentation on this, see the microphone settings in the settings tab.

Once you are happy with the device configuration, click the “Start sensors” button on the bottom right to proceed. After waiting for the sensors to start up, you should arrive at the sensor configuration step.


For Femto Bolt cameras, startup can take about half a minute or more. This is particularly the case if the configured sync mode of the cameras needs to be changed, since that requires rebooting them. Please be patient.

Sensor configuration


The sensor configuration setup screen for a setup with 10 Femto Bolt cameras, showing the color streams. (Click the image to see it larger.)

In this setup step, the camera settings should be configured to obtain good images.

On the right, live video streams of all cameras are shown. Clicking a stream maximizes it. The type of stream to display (color, depth, or infrared) can be chosen with the “Show camera type” dropdown widget on the left.

Below this, the camera settings can be configured. The available settings depend on the type of cameras that you use. For the purpose of this tutorial, it should be sufficient to change some basic settings: You will likely want to adjust the exposure time, gain, and white balance of the color streams to your lighting conditions.

Notice that the settings are split up into camera driver settings and software settings: Camera driver settings determine how the cameras record images. Software settings determine how the program processes these images. After recording a video, the video’s camera driver settings cannot be changed anymore, but the software settings can still be changed for existing recordings on disk.

For fine-tuning, it is possible to change camera driver settings for each camera individually. This can be helpful, for example, when individual cameras differ in hardware or lighting conditions.

Once you are happy with the settings, click “Next” on the bottom right.


Each setting has an integrated description text that shows when hovering the cursor over the gray question mark next to its name. Descriptions of the camera settings are also available on the documentation pages of the cameras.

Background images


The background image configuration screen for a setup with 10 Femto Bolt cameras, after taking background images.

In this step, you may optionally take background images for the cameras. These images should show the scene being empty, without any foreground object in it.

Background images can help the program distinguish foreground from background and reduce the effect of light bleed around oversaturated image areas.
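To illustrate the idea of foreground/background separation, here is a highly simplified sketch in Python. It is not the program's actual algorithm; the function name, the flat list representation of depth images, and the threshold value are all made up for this example. The principle is simply that pixels clearly closer to the camera than the recorded empty-scene background are treated as foreground.

```python
# A simplified illustration (not the actual algorithm) of how a depth
# background image can help separate foreground from background.
def foreground_mask(depth, background_depth, threshold=0.05):
    """Mark pixels as foreground if they are clearly closer to the camera
    than the recorded empty-scene background. Depths in meters; 0 = invalid."""
    mask = []
    for d, bg in zip(depth, background_depth):
        is_fg = d > 0 and bg > 0 and (bg - d) > threshold
        mask.append(is_fg)
    return mask

# Background at 3.0 m; an object at 1.5 m shows up as foreground,
# while small depth noise (2.98 m) and invalid pixels (0) do not.
print(foreground_mask([1.5, 2.98, 0.0], [3.0, 3.0, 3.0]))  # [True, False, False]
```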

To take background images, ensure that the scene is empty, then click the button on the left. The recorded images will be displayed on the right shortly afterwards. In case you are not happy with the images, click the button again to take a new set of images.

After taking background images, please do not move the cameras anymore, since this would change what the cameras see. If the cameras do get moved, the guided setup can be repeated.

Once you are happy with the result, click the “Next” button on the bottom right to proceed to the next setup step.

Sensor calibration


This setup step is only required if two or more cameras are used. If you use a single camera, then the program will directly skip to floor calibration.


The calibration screen for 10 Femto Bolt cameras, after completing calibration. (Click the image to see it larger.)

In this step, the positions and orientations of all cameras will be calibrated with the help of one or more calibration patterns. This allows the program to merge the data of all cameras consistently to create 3D reconstructions.

This step may look slightly daunting at first, but it is actually not complicated. After you have done it once, it should be quick to complete in the future.


To get started, first click the “New calibration” button on the left. Then, choose the type of calibration pattern that you printed from the drop-down widget below (or, if you have not printed a pattern yet, see here how to do that and then return to this step). Clicking “Open pattern PDF” shows what the selected pattern looks like.


A single calibration pattern is sufficient to perform calibration. However, we recommend using multiple patterns for best results, as this helps reduce calibration effort and improve accuracy.

Now, you are ready to take calibration images. A few things should be kept in mind while doing this.

First, while a calibration image is being taken, all patterns should remain completely static. This may be achieved by mounting the pattern(s) on a tripod or other kind of mount. For example, this photo shows two calibration patterns on a tripod, where the tripod can be adjusted to change the pattern height and orientation:


A calibration pattern must be visible simultaneously in at least two cameras to advance the calibration. You may imagine that the calibration image, if taken successfully, will then “connect” these cameras.

For a successful calibration, all cameras need to be “connected”. It is okay if they are connected indirectly. For example, if there are three cameras A, B, and C, and both A and B, as well as B and C are connected, then a complete calibration can be computed for the three cameras.
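This connectivity requirement can be sketched as a small graph problem. The following Python snippet is purely illustrative (the program does not expose such a function): it uses union-find to check whether the camera pairs observed so far link every camera, directly or indirectly.

```python
# Sketch (not the program's actual code): checking whether pairwise
# pattern observations "connect" all cameras, using union-find.
def all_cameras_connected(num_cameras, observed_pairs):
    """Return True if the pairs link every camera, directly or indirectly."""
    parent = list(range(num_cameras))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for a, b in observed_pairs:
        parent[find(a)] = find(b)  # merge the two connected components

    roots = {find(i) for i in range(num_cameras)}
    return len(roots) == 1

# Cameras A=0, B=1, C=2: connecting A-B and B-C suffices, even without A-C.
print(all_cameras_connected(3, [(0, 1), (1, 2)]))  # True
print(all_cameras_connected(3, [(0, 1)]))          # False: C is isolated
```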

However, it is strongly advisable to take more calibration images than the minimum in order to obtain improved accuracy. Move the calibration pattern(s) to a different position between takes, so that each new image provides new calibration information.

With this in mind, let us start taking calibration images. There are two ways to do this:

  • Clicking “Take calibration image” takes a single calibration image. We generally recommend this if static placement of the pattern(s) is possible.

  • Clicking “Start continuous calibration” makes the program take calibration images continuously until you click “Stop calibrating”.


Setting up remote control allows you to take calibration images conveniently from your phone or other external device. This can strongly reduce the calibration effort.


In “continuous calibration” mode, the program measures the pattern motion and rejects images if a pattern moves too fast. If you have to do the calibration with a hand-held pattern, this may end up rejecting all of your images unless you hold the pattern very steadily. In this case, consider increasing the “Pattern motion threshold” setting under “Advanced” to relax this condition.
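The rejection rule can be pictured roughly as follows. This is only a sketch under assumptions: the actual units, threshold value, and measurement method of the “Pattern motion threshold” setting are not documented here, so the numbers below are invented for illustration.

```python
# Hypothetical sketch of continuous-mode rejection: discard an image if
# the pattern moved faster than a threshold between consecutive frames.
def accept_image(prev_pos, curr_pos, dt, motion_threshold=0.02):
    """Positions in meters, dt in seconds; threshold in meters/second
    (all values are made up for this illustration)."""
    speed = sum((c - p) ** 2 for p, c in zip(prev_pos, curr_pos)) ** 0.5 / dt
    return speed <= motion_threshold

# A nearly static pattern is accepted; a hand-held, moving one is rejected.
print(accept_image((0.0, 0.0, 0.0), (0.0005, 0.0, 0.0), dt=0.033))  # True
print(accept_image((0.0, 0.0, 0.0), (0.01, 0.0, 0.0), dt=0.033))    # False
```

Raising the threshold, as the note above suggests, simply relaxes this speed limit.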

Not every attempt to take a calibration image will be successful. This can happen for different reasons, such as:

  • The pattern(s) not being visible in at least two cameras.

  • The image being rejected because the pattern(s) move too fast.

  • The wrong pattern type being selected.

If you are unsure why an image was rejected, please have a look at the calibration log. It will contain one line for each attempt to take a calibration image, which may for example look like this:


After a calibration image is taken successfully, the bottom panel highlights the cameras in which the calibration pattern(s) were detected. It also shows the total number of measurements (“matches”) per camera in the calibration process so far, as well as an accuracy metric (lower is better):


Example of the bottom panel during calibration. The top row shows the live color images, the bottom row shows the live infrared images. The cameras in which calibration patterns were detected in the last calibration image are highlighted with a brighter background.

If the accuracy is not good in the beginning, do not worry. Taking more calibration images should usually lead to improved accuracy.

For best results, it is advisable to take images with the calibration pattern placed where you expect the scene content to be later. For example, if you want to film a person, we recommend taking calibration images with the pattern(s) at the positions where the person’s feet, hips, arms, head, and so on will be later.

Once you start taking calibration images, the program will display a live 3D point cloud view from all cameras that have already been connected. With two cameras, viewed from the top, this may for example look like this:



Raw point clouds are displayed as a preview in this and the following setup steps because they are fast to display and require no configuration. However, they can contain noticeable artifacts such as noise and color fringes. These point clouds are not representative of the final results.

The calibrated camera positions are displayed as yellow pyramids. The positions where the calibration pattern(s) were successfully observed are displayed as colored points, with the color showing the accuracy of each point: Green points are consistent with the calibration, while yellow and red points indicate worse consistency. The latter may indicate that more measurements are required or that the measurement was an outlier, which may be ignored.
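The color coding can be thought of as a simple mapping from each point's consistency error to a traffic-light color. The following sketch is illustrative only; the actual error metric and thresholds used by the program are not documented here, so the numbers are made up.

```python
# Illustrative only: mapping a per-point consistency error to the display
# colors described above (thresholds are invented for this example).
def point_color(error_mm, good=5.0, bad=15.0):
    if error_mm <= good:
        return "green"   # consistent with the calibration
    if error_mm <= bad:
        return "yellow"  # somewhat inconsistent
    return "red"         # possible outlier, or more measurements needed

print([point_color(e) for e in (2.0, 8.0, 30.0)])  # ['green', 'yellow', 'red']
```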

The 3D view can be rotated by dragging the cursor with the left mouse button pressed. It can be moved by dragging the cursor with the middle or right mouse button pressed, and can be zoomed with the mouse wheel.

We recommend continuing to take images with the pattern(s) in different positions until each camera has at least five to ten measurements. Once this is the case, the status display at the top of the bottom panel will turn green.

While all of the above may sound complicated at first, obtaining a good calibration is not difficult and you should be able to do this with the knowledge above. If you want to read about the calibration step in more detail, see its documentation page.

Do not worry if the 3D display is not upright yet - this will be calibrated in the next step.


If the cameras are moved in any way after calibrating them, even just a tiny bit for example by pulling on one of their cables, then the calibration must be re-done. Otherwise, the quality of the results will be degraded.

Floor configuration


The floor configuration screen for a setup with two Azure Kinect cameras, after marking a number of points (displayed as white dots) on the floor.

In this step, the floor may be calibrated in order to obtain upright 3D reconstructions with a well-defined floor height.

If your cameras are set up such that the floor is not visible, you may still want to use this setup step to mark a different horizontal surface so that you get upright 3D reconstructions.

In the 3D view, a live point cloud of the scene will be displayed. To calibrate the floor, left-click at least three points on the floor in the 3D view. Each clicked point will be marked with a blinking dot. If you want to remove a marked point again, right-click it.

Once you have marked at least three points, a grid will be displayed which shows where the floor will be calibrated if you accept the calibration. In addition, an arrow in the center of the grid will show which direction is “up”. This direction is automatically determined to point towards the cameras.

You may optionally mark more than three points to increase the accuracy by averaging.
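Conceptually, this is a least-squares plane fit to the marked points, with the plane normal flipped to point towards the cameras. The sketch below is an assumption about the general technique, not the program's code; the function name and the use of a single mean camera position are invented for illustration.

```python
# Illustrative sketch (assumed, not the program's implementation): fitting
# a floor plane to three or more clicked 3D points by least squares, and
# choosing the "up" normal so that it points towards the cameras.
import numpy as np

def fit_floor(points, camera_center):
    """points: (N, 3) array of clicked floor points, N >= 3.
    camera_center: a representative 3D position of the cameras.
    Returns (plane_point, up_normal)."""
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    # The right-singular vector with the smallest singular value is the
    # normal of the best-fit plane through the centered points.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    # Flip the normal so that "up" points towards the cameras.
    if np.dot(normal, np.asarray(camera_center) - centroid) < 0:
        normal = -normal
    return centroid, normal

# Three points on the z = 0 plane; cameras 2 m above the floor.
point, up = fit_floor([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                      camera_center=(0.5, 0.5, 2.0))
print(up)  # approximately (0, 0, 1)
```

Averaging over more than three points, as described above, makes this fit less sensitive to clicking on a slightly noisy point.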

If the floor is not visible in your camera setup, you may mark any other horizontal surface instead. In that case, the automatically determined “up” direction may be incorrect, so you may have to click the “Flip up direction” button to correct it.

Once you are happy with the displayed result, click “Next” to go to the next step. The calibration will then be applied, so the point cloud will be upright in the next step.


The floor calibration, as well as the reconstruction volume configuration below, can still be changed later for existing recordings on disk. If desired, it is thus even possible to skip these steps during setup, although we recommend doing them at this point in order to get correct previews and make sure that everything works as intended.

Reconstruction volume configuration


The reconstruction volume configuration setup screen, for a setup with two Azure Kinect cameras.

Almost done! In this final setup step, the reconstruction volume can be configured.

Only those parts of the scene that are inside the reconstruction volume will be included in the final volumetric videos. Thus, make sure to exclude any background objects that should not be part of the results.

The red box in the 3D view shows the current reconstruction volume. Points outside of the volume are displayed in grayscale by default; this can be configured with the “Crop preview to volume” setting on the bottom left.
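The cropping test behind this can be sketched as a point-in-oriented-box check. This is a hypothetical illustration, not the program's code; it assumes the volume is a box rotated only around the vertical axis (here z), and all names and parameters are made up.

```python
# Hypothetical sketch of the cropping idea: testing whether a point lies
# inside an oriented reconstruction volume (a rotated, translated box).
import math

def point_in_volume(p, center, yaw, size):
    """p, center: (x, y, z); yaw: rotation around the vertical axis in
    radians; size: (sx, sy, sz) edge lengths of the box."""
    # Transform the point into the box's local coordinate frame.
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    c, s = math.cos(-yaw), math.sin(-yaw)
    local = (c * dx - s * dy, s * dx + c * dy, dz)
    # Inside if every local coordinate is within the box half-extents.
    return all(abs(v) <= half_extent
               for v, half_extent in zip(local, (e / 2 for e in size)))

# A unit box centered at (0, 0, 0.5): a nearby point is inside,
# a point 2 m away is outside (and would be shown in grayscale).
print(point_in_volume((0.2, 0.1, 0.5), (0, 0, 0.5), 0.0, (1, 1, 1)))  # True
print(point_in_volume((2.0, 0.0, 0.5), (0, 0, 0.5), 0.0, (1, 1, 1)))  # False
```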

You can move, rotate, and resize the volume by clicking and dragging on the corresponding handles in the 3D view:


You can alternatively use the sliders on the left to change these properties.

There is one setting which can only be changed via a slider: the “height above floor”. This lets you lift the volume above the floor (with positive values) or move it downwards to intersect the floor (with negative values). This is useful to fine-tune the height at which the reconstruction starts, including or excluding the floor as desired. We generally recommend excluding the floor by using a small positive value here.

Note that the position and rotation of the reconstruction volume determine where the final volumetric videos will appear when you later use them, for example, in a game engine plugin:

  • The center point, which is marked by the red cross on the reconstruction volume floor, will be placed at the position of the game engine object that you assign the video to.

  • The direction marked by the red arrow will become the forward direction. This is useful if your videos have a clearly defined forward direction.

Once you are done, a click on the “Next” button brings you to the recording screen.


If there are still unwanted background objects in the final reconstruction volume that you want to exclude, it might be an option to mask them out.


If you would like to disable the other elements in the 3D view to focus on the red box, you can do this by clicking the “View settings” button on the top right and changing the settings there as desired.

The view settings also allow you to change the background color of the 3D view, among other things.

Furthermore, it can be useful to set the “Crop preview to volume” setting in the left sidebar to “Show inside points only” in case there are too many background points cluttering the 3D view.