Microsoft Azure Kinect

ScannedReality Studio supports the Azure Kinect Developer Kit (Azure Kinect DK), or Azure Kinect for short, a depth camera that was produced by Microsoft.

Important

The Azure Kinect is no longer being produced; Microsoft has instead licensed its sensor technology to partners. As the most direct replacement, the Orbbec Femto Bolt features the same depth sensor as the Azure Kinect as well as a similar color sensor.

This documentation page covers Azure Kinect-specific topics and is structured as follows:

The first two sections give an overview of the requirements and constraints to consider when planning to buy Azure Kinect cameras.

The next sections discuss camera synchronization and cable extension, along with the cables required for each task.

Then, assuming that you have obtained one or more Azure Kinect cameras and the necessary cables, the setup steps to prepare for video recording are described. They are split into first-time and repeat setup steps.

The page ends with a reference of the camera settings available for Azure Kinect cameras in ScannedReality Studio, as well as troubleshooting steps in case there are issues with the cameras.


PC requirements

Connection: The Azure Kinect connects to a PC via USB 3. Important: the camera only supports Intel, Texas Instruments (TI), and Renesas USB host controllers; see the official documentation and its note about ASMedia USB host controller incompatibility for details. In this GitHub issue thread, it was reported that Intel chipsets work well, while many others, including Renesas, may lead to frame drops. Based on this, we currently recommend only Intel-based systems for use with Azure Kinect cameras.

Also note that separate USB ports on a PC may share bandwidth internally, which can cause issues when connecting multiple cameras to such ports. If possible, use ports that each have their own controller, such as those on this PCIe extension card.

The USB cable that comes with the camera is about 1.5 meters long, which is generally too short for practical volumetric video recording. See the section on cable extension below for notes on extending the cable length.
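Independently of the controller and cable questions above, it can be useful to verify that a given PC detects all attached cameras at all. The following is a minimal sketch (not part of ScannedReality Studio) that enumerates the attached devices and prints their serial numbers using the Azure Kinect Sensor SDK; it assumes that the SDK and its libk4a library are installed, and the program structure is purely illustrative.

// Minimal sketch: list all Azure Kinect devices detected on this PC and
// print their serial numbers. Assumes the Azure Kinect Sensor SDK (libk4a)
// is installed; link with -lk4a.
#include <k4a/k4a.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
  uint32_t device_count = k4a_device_get_installed_count();
  printf("Detected %u Azure Kinect device(s)\n", device_count);

  for (uint32_t i = 0; i < device_count; ++i) {
    k4a_device_t device = NULL;
    if (K4A_FAILED(k4a_device_open(i, &device))) {
      printf("  Device %u: failed to open (already in use or USB issue?)\n", i);
      continue;
    }

    // Query the required buffer size for the serial number, then read it.
    size_t serial_size = 0;
    k4a_device_get_serialnum(device, NULL, &serial_size);
    char *serial = (char *)malloc(serial_size);
    if (serial != NULL &&
        k4a_device_get_serialnum(device, serial, &serial_size) ==
            K4A_BUFFER_RESULT_SUCCEEDED) {
      printf("  Device %u: serial number %s\n", i, serial);
    }
    free(serial);
    k4a_device_close(device);
  }
  return 0;
}

On Linux this could be compiled with, for example, gcc check_devices.c -lk4a. If a camera is missing from the output, check its USB connection and host controller before investigating anything else.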

Processing: The Azure Kinect requires the GPU of the host computer to process the depth images as they are being recorded. Thus, if you plan on using multiple cameras, make sure that you use a capable GPU. See Microsoft’s page on system requirements.

Disk write speed: Since the native format of the color images recorded by the Azure Kinect is MJPEG, the bandwidth of the data it produces varies not only with its operating mode but also with the scene content.

To give a rough idea of the write bandwidth required in practice, we tested several typical operating modes and report the maximum bandwidths that we observed below. Be aware that these are rough estimates and should be used with a safety margin; a small worked example of turning them into a disk requirement follows the list.

  • Depth: Narrow FOV unbinned, Color: 3840 x 2160 pixels, 30 fps: 90 MB / second

  • Depth: Narrow FOV unbinned, Color: 2560 x 1440 pixels, 30 fps: 55 MB / second

  • Depth: Narrow FOV unbinned, Color: 2048 x 1536 pixels, 30 fps: 54 MB / second
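To turn these per-camera figures into a requirement for the recording PC, multiply by the number of cameras and add a safety margin. The following small worked example illustrates the arithmetic; the camera count and safety factor are assumptions that should be adjusted to your setup.

// Back-of-the-envelope estimate of the required sustained disk write speed.
// The per-camera bandwidth is the rough worst case observed above for
// NFOV unbinned depth with 3840 x 2160 color at 30 fps; the camera count
// and safety factor are assumptions.
#include <stdio.h>

int main(void) {
  const double per_camera_mb_per_s = 90.0;  // rough maximum from the list above
  const int camera_count = 8;               // example: 8 cameras on one PC
  const double safety_factor = 1.5;         // assumed headroom for MJPEG spikes

  double required_mb_per_s = per_camera_mb_per_s * camera_count * safety_factor;
  printf("Required sustained write speed: about %.0f MB / second\n",
         required_mb_per_s);  // 90 * 8 * 1.5 = 1080 MB / second
  return 0;
}

In this example, a drive with a sustained write speed of well over 1 GB / second would be needed, which in practice typically points to an NVMe SSD.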

Other requirements

Power connection: If the camera is connected to a PC with the included USB-C-to-A cable for data, it must additionally be plugged into a power outlet with the included power cable. Note that the USB end of the power cable should not be plugged into a PC's USB port, as a PC port does not deliver sufficient power for all operating modes.

Alternatively, the camera may be powered by a single USB-C-to-C cable (not included) for both power and data. See Microsoft’s documentation for details.

Just like the data cable, the included power cable (about 1.5 meters) is likely too short for practical volumetric video recording. Standard power extension cables can be used to work around this.

Temperature range: Be aware that, according to Microsoft's hardware specifications, the camera must only be operated at temperatures between 10 and 25 degrees Celsius.

Sunlight: Sunlight may affect the camera’s active infrared sensing.

Interference: Cameras whose fields-of-view overlap must be synchronized to avoid interference. This is done with 3.5 mm audio cables, which need to be bought separately. With typical camera operating modes and timings, at most 10 cameras can be tightly synchronized while avoiding interference. See the section on synchronization below.

Synchronization

All Azure Kinect cameras whose fields-of-view overlap must be synchronized to avoid interference of their active sensing.

To synchronize the devices, remove the white plastic cover on their backs to expose the sync in and sync out ports. Then, standard 3.5 mm audio cables (up to a total length of 10 meters) can be used to connect the devices in a daisy-chain or star configuration, using either one of the cameras as master or an external sync trigger; see Microsoft's documentation for details.

There is no need to configure master / subordinate devices in ScannedReality Studio; they are recognized and assigned suitable timing offsets automatically. With typical camera operating modes, theoretically up to 10 cameras can be tightly synchronized while avoiding interference.
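ScannedReality Studio performs this role assignment and offset configuration internally. Purely to illustrate what it corresponds to at the level of the Azure Kinect Sensor SDK, the following sketch shows how a master and its subordinates could be configured; the helper function and the 160-microsecond spacing (following Microsoft's guidance for avoiding depth interference) are illustrative and not the exact values used by the application.

// Illustration only: how the wired sync roles map onto the Azure Kinect
// Sensor SDK. ScannedReality Studio assigns these automatically; the
// 160 us spacing is an example following Microsoft's interference guidance.
#include <k4a/k4a.h>
#include <stdint.h>

// `base` is the operating-mode configuration (depth mode, color resolution,
// frame rate); `index` is the device's position in the daisy chain, with 0
// being the master.
k4a_device_configuration_t with_sync_role(k4a_device_configuration_t base,
                                          int index) {
  if (index == 0) {
    // The first device drives the chain through its sync out port.
    base.wired_sync_mode = K4A_WIRED_SYNC_MODE_MASTER;
    base.subordinate_delay_off_master_usec = 0;
  } else {
    // Subordinates are triggered via sync in and fire their depth captures
    // slightly offset from the master so that the infrared projections of
    // overlapping cameras do not interfere.
    base.wired_sync_mode = K4A_WIRED_SYNC_MODE_SUBORDINATE;
    base.subordinate_delay_off_master_usec = (uint32_t)(index * 160);
  }
  return base;
}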

Note

In practice, the clocks on Azure Kinect cameras can be slightly imprecise, leading to a risk of interference if using more than 8 cameras simultaneously. This issue is fixed in the Orbbec Femto Bolt.

Cable extension

As mentioned above, both the data cable and the power cable included with the camera are about 1.5 meters long, and thus generally too short for volumetric video recording.

There are multiple options for extending the cable length. One option is to buy an active USB-C-to-C cable that delivers both data and power. For example, as of the time of writing, Newnex sells an extension cable that is specifically intended for use with the Azure Kinect. Note that such a cable must be plugged into a USB-C port on the host PC, in contrast to the data cable included with the Azure Kinect, which has a USB-A connector on the PC side.

Another option is to extend the data (USB-C-to-A) and power cables separately. Note that you will likely need an active USB extension cable for a reliable data connection over distances longer than 1.5 meters. For the power connection, standard power extension cables can be used.

First-time setup

This section describes first-time setup steps for using Azure Kinect cameras.

Updating firmware

We recommend keeping the camera firmware updated to the latest version. See Microsoft's instructions on how to update the firmware on Azure Kinect devices.

As of the time of writing, the latest firmware version, 1.6.110080014, can be downloaded from here within the source code repository of the Azure Kinect Sensor SDK.
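If you are unsure whether a device is already on the latest firmware, its current firmware versions can be queried with the Azure Kinect Sensor SDK. The following is a minimal sketch, assuming the SDK and its libk4a library are installed.

// Minimal sketch: print the firmware versions of the first attached device.
// Assumes the Azure Kinect Sensor SDK (libk4a) is installed; link with -lk4a.
#include <k4a/k4a.h>
#include <stdio.h>

int main(void) {
  k4a_device_t device = NULL;
  if (K4A_FAILED(k4a_device_open(K4A_DEVICE_DEFAULT, &device))) {
    printf("Failed to open an Azure Kinect device.\n");
    return 1;
  }

  k4a_hardware_version_t version;
  if (K4A_SUCCEEDED(k4a_device_get_version(device, &version))) {
    printf("RGB camera firmware:   %u.%u.%u\n",
           version.rgb.major, version.rgb.minor, version.rgb.iteration);
    printf("Depth camera firmware: %u.%u.%u\n",
           version.depth.major, version.depth.minor, version.depth.iteration);
    printf("Audio firmware:        %u.%u.%u\n",
           version.audio.major, version.audio.minor, version.audio.iteration);
  }

  k4a_device_close(device);
  return 0;
}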

OS configuration (Linux only)

To use Azure Kinect cameras on Linux without root privileges, udev rules must be added. Microsoft's instructions for this can be followed, for example, by running these commands in a terminal:

# Download the 99-k4a.rules file from the Azure Kinect Sensor SDK repository:
curl -sSL https://github.com/microsoft/Azure-Kinect-Sensor-SDK/raw/develop/scripts/99-k4a.rules > /tmp/99-k4a.rules
# Install it into the system's udev rules directory:
sudo mv /tmp/99-k4a.rules /etc/udev/rules.d/

Furthermore, if you want to use more than one camera on a single PC, you will likely need to increase the USB memory configured in the kernel (the usbcore.usbfs_memory_mb parameter). Please see Microsoft's instructions on how to do this.

Recording setup

This section describes how to plug in your Azure Kinect cameras to prepare for video recording, assuming that you obtained the necessary hardware according to the sections above, and followed the first-time setup steps.

We are going to use a daisy-chain connection for the synchronization cables, with the first camera in the chain acting as the master camera. It does not matter which camera you choose as the master.

The image below shows the cable connection scheme for this setup, assuming that separate power cables are necessary (this is the case if you use USB-C-to-A cables as data cables, such as the cables included with the Azure Kinect).

This scheme is applicable for one master device and up to eight subordinate devices.

../../_images/azure_kinect_daisy_chain.png

Cabling scheme for Azure Kinects synchronized as daisy chain, with power cables.

Plug in the cables as depicted in this image. Each camera will have the following connections:

  • “Sync in” from the previous camera in the daisy chain (if any)

  • “Sync out” to the next camera in the daisy chain (if any)

  • Data connection to the PC

  • Power connection to a power outlet

That’s it! With these connections in place, you may for example start the guided recording setup.
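If you would like to double-check the synchronization cabling independently of ScannedReality Studio, the Azure Kinect Sensor SDK reports whether a device senses a cable in its sync in and sync out jacks. The following is a minimal sketch, assuming libk4a is installed.

// Minimal sketch: report the sync jack state of every attached device.
// In a correctly cabled daisy chain, the master has only "sync out"
// connected, the last subordinate only "sync in", and all devices in
// between have both. Assumes libk4a is installed; link with -lk4a.
#include <k4a/k4a.h>
#include <stdbool.h>
#include <stdio.h>

int main(void) {
  uint32_t count = k4a_device_get_installed_count();
  for (uint32_t i = 0; i < count; ++i) {
    k4a_device_t device = NULL;
    if (K4A_FAILED(k4a_device_open(i, &device))) {
      continue;
    }
    bool sync_in = false, sync_out = false;
    if (K4A_SUCCEEDED(k4a_device_get_sync_jack(device, &sync_in, &sync_out))) {
      printf("Device %u: sync in %s, sync out %s\n", i,
             sync_in ? "connected" : "not connected",
             sync_out ? "connected" : "not connected");
    }
    k4a_device_close(device);
  }
  return 0;
}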

Settings reference

This section describes the settings that are available for Azure Kinect cameras within ScannedReality Studio.

Settings regarding the cameras’ operating modes can be configured both in the settings tab while the sensors are not active, and in the device configuration screen as part of the guided recording setup.

All other settings can be configured in the sensor configuration screen while the cameras are running, either as part of the guided recording setup, or by going there from the recording screen.

Device configuration

../../_images/settings_screen_sensor_settings_azure_kinect.png

The color and depth cameras of the Azure Kinect can be operated in different modes with different image resolutions, fields-of-view, frame rates, and ranges.

For detailed specifications, please see Microsoft's documentation.

We generally recommend using the unbinned narrow field-of-view mode for depth, together with a color resolution of 3840 x 2160 (16:9 format, the largest resolution supporting 30 frames per second). A color resolution of 2048 x 1536 (4:3 format, the resolution with the largest field-of-view that supports 30 frames per second) may be an alternative if you would like to prioritize a large field-of-view over a large image resolution. However, we have observed this mode to produce worse image quality than the other resolutions, showing horizontal stripes across the image.

See here for figures comparing the available fields-of-view. We do not recommend the wide depth field-of-view modes, because the narrow modes match the field-of-view of the color sensor significantly better while also having a larger range.

As a default, ScannedReality Studio uses the unbinned narrow field-of-view depth mode with a color resolution of 3840 x 2160 pixels at 30 fps.
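For reference, this default corresponds to the following mode names in the Azure Kinect Sensor SDK. The sketch below is purely illustrative of the mode selection; the helper function is not part of ScannedReality Studio.

// Illustration of the recommended default operating mode in terms of the
// Azure Kinect Sensor SDK: unbinned narrow field-of-view depth with
// 3840 x 2160 MJPEG color at 30 fps.
#include <k4a/k4a.h>
#include <stdbool.h>

k4a_device_configuration_t make_default_config(void) {
  k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
  config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;      // narrow FOV, unbinned
  config.color_format = K4A_IMAGE_FORMAT_COLOR_MJPG;     // the camera's native color format
  config.color_resolution = K4A_COLOR_RESOLUTION_2160P;  // 3840 x 2160, 16:9
  config.camera_fps = K4A_FRAMES_PER_SECOND_30;
  config.synchronized_images_only = true;  // only deliver captures with both images
  return config;
}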

Sensor configuration

../../_images/azure_kinect_sensor_configuration.png

Notice that the Azure Kinect only supports a limited set of fixed exposure times, which differ depending on the active powerline frequency mode. See here for the full list. The configured value will be rounded up to the next supported value from this list.

Furthermore, keep in mind that, as always, the exposure time is also constrained by the frame rate. For example, at 30 FPS the maximum exposure time is about 33.33 milliseconds (longer exposures would require reducing the frame rate).

The powerline frequency setting should match the mains frequency used in your area (50 Hz or 60 Hz) to avoid flickering under electrical lighting; it can be looked up online.
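For readers familiar with the Azure Kinect Sensor SDK: the exposure time and powerline frequency discussed above correspond to the SDK's color control commands. The following sketch is illustrative only; the values are examples, and ScannedReality Studio may apply its settings differently.

// Illustration of the two driver settings discussed above, expressed as
// Azure Kinect Sensor SDK color controls. The values are examples only.
#include <k4a/k4a.h>

void apply_example_color_controls(k4a_device_t device) {
  // Powerline frequency: the SDK encodes 50 Hz as 1 and 60 Hz as 2.
  k4a_device_set_color_control(device,
                               K4A_COLOR_CONTROL_POWERLINE_FREQUENCY,
                               K4A_COLOR_CONTROL_MODE_MANUAL,
                               2 /* 60 Hz */);

  // Manual exposure time in microseconds. The camera only supports a fixed
  // list of exposure times (which depends on the powerline frequency mode),
  // so the requested value is mapped to a supported one. At 30 fps the
  // exposure cannot exceed roughly 33333 us.
  k4a_device_set_color_control(device,
                               K4A_COLOR_CONTROL_EXPOSURE_TIME_ABSOLUTE,
                               K4A_COLOR_CONTROL_MODE_MANUAL,
                               8000 /* microseconds, example value */);
}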

In addition to the camera driver settings, software color correction settings can be configured below them. These settings are applied by the ScannedReality Studio application on top of the raw images received from the camera driver. They are applied at the time the images are processed, and thus do not affect how the images are written into recordings; for existing recordings, these settings can therefore still be edited later in the recording details screen.

However, using the software settings will reduce the color precision. If possible, it is therefore preferable to achieve the desired effects by improving the lighting during recording.

Troubleshooting

If you experience generic camera issues, see Microsoft’s Azure Kinect troubleshooting page.

If the cameras generally work but you experience a significant number of dropped frames, consider the following possibilities (a small diagnostic sketch follows the list):

  • Is there an incompatibility between the camera and the USB host controller in the PC? See PC requirements. If possible, try other USB host controllers or PCs.

  • Do the USB ports used by the cameras share bandwidth internally? If possible, try other USB ports that do not share bandwidth.

  • Is the GPU of the host PC unable to keep up with processing the depth images of all cameras?

  • Is the CPU of the host PC unable to keep up with receiving the data from USB?
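When investigating dropped frames outside of ScannedReality Studio, it can also help to look at the device timestamps of consecutive captures: at 30 fps they should be roughly 33,333 microseconds apart, so substantially larger gaps indicate dropped frames. The following is a minimal sketch using the Azure Kinect Sensor SDK; the gap threshold and frame count are assumptions that can be adapted as needed.

// Minimal sketch: detect dropped frames on one device by checking the gap
// between consecutive color image device timestamps. At 30 fps, a gap much
// larger than ~33,333 us means that frames were dropped.
#include <k4a/k4a.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
  k4a_device_t device = NULL;
  if (K4A_FAILED(k4a_device_open(K4A_DEVICE_DEFAULT, &device))) {
    printf("Failed to open device.\n");
    return 1;
  }

  k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
  config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
  config.color_format = K4A_IMAGE_FORMAT_COLOR_MJPG;
  config.color_resolution = K4A_COLOR_RESOLUTION_2160P;
  config.camera_fps = K4A_FRAMES_PER_SECOND_30;
  if (K4A_FAILED(k4a_device_start_cameras(device, &config))) {
    printf("Failed to start cameras.\n");
    k4a_device_close(device);
    return 1;
  }

  uint64_t previous_timestamp_usec = 0;
  for (int i = 0; i < 300; ++i) {  // inspect roughly 10 seconds at 30 fps
    k4a_capture_t capture = NULL;
    if (k4a_device_get_capture(device, &capture, 1000) != K4A_WAIT_RESULT_SUCCEEDED) {
      printf("Timed out waiting for a capture.\n");
      continue;
    }
    k4a_image_t color = k4a_capture_get_color_image(capture);
    if (color != NULL) {
      uint64_t timestamp_usec = k4a_image_get_device_timestamp_usec(color);
      if (previous_timestamp_usec != 0 &&
          timestamp_usec - previous_timestamp_usec > 50000) {  // > 1.5 frame periods
        printf("Gap of %llu us before frame %d -> dropped frame(s)\n",
               (unsigned long long)(timestamp_usec - previous_timestamp_usec), i);
      }
      previous_timestamp_usec = timestamp_usec;
      k4a_image_release(color);
    }
    k4a_capture_release(capture);
  }

  k4a_device_stop_cameras(device);
  k4a_device_close(device);
  return 0;
}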