Recording, Processing, Playing

Video recording

After finishing the guided recording setup, the recording screen is shown. Before going on to record our first video, let us briefly have a look at this screen and record a test snapshot.

../_images/record_screen.png

The recording screen with a live point cloud preview being shown. (Click the image to see it larger.)

In the “Recording & Reconstruction” section of the side bar on the right, you can choose the workspace to record the data to. For now, we only have a single workspace, so there is nothing to change there.

Below that, the reconstruction settings can be configured, either by choosing a preset (once a preset has been created) or by adjusting the individual settings directly. Changes can always be reverted by clicking the “Set to defaults” button.

For the purpose of this tutorial, we will leave most of these settings at their defaults. However, there is one setting that we will look at: Click the “Other settings” header to expand that group. Among its entries is the “Transparent back side” setting:

../_images/reconstruction_settings_with_other_expanded.png

If this setting is enabled, any surfaces that face away from all cameras are gradually blended out, becoming transparent. We recommend enabling this if your camera setup does not view the subject from 360 degrees. In principle, surfaces that no camera sees can only be estimated, and these estimates are unlikely to be plausible if the unseen areas are large. It can therefore be preferable to make such areas transparent.
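
To illustrate the idea behind this setting (this is only a conceptual sketch, not ScannedReality Studio’s actual implementation; the function and its parameters are hypothetical), a surface point can be considered back-facing when its normal points away from every camera, and its opacity can be faded out accordingly:

    import numpy as np

    def back_side_alpha(point, normal, camera_positions, fade_width=0.2):
        """Conceptual sketch: fade out a surface point that faces away from all cameras."""
        # Unit directions from the surface point towards each camera center.
        to_cameras = camera_positions - point
        to_cameras = to_cameras / np.linalg.norm(to_cameras, axis=1, keepdims=True)
        # Cosine between the surface normal and each camera direction:
        # positive means the point faces that camera, negative means it faces away.
        facing = to_cameras @ normal
        # Take the most front-facing camera and fade the opacity to zero as the
        # point turns away from all cameras, over a small transition band.
        return float(np.clip((facing.max() + fade_width) / fade_width, 0.0, 1.0))

    # Example: with a single camera in front, a point whose normal points away
    # from it becomes fully transparent, while a point facing it stays opaque.
    cameras = np.array([[0.0, 0.0, 2.0]])
    print(back_side_alpha(np.zeros(3), np.array([0.0, 0.0, -1.0]), cameras))  # 0.0
    print(back_side_alpha(np.zeros(3), np.array([0.0, 0.0, 1.0]), cameras))   # 1.0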

Note

Our Unreal plugin is still being updated to support transparency. Videos currently display as opaque in this plugin even if reconstructed with transparency.

Note

Clicking the ‘+’ button to the right of the setting preset selector creates a new preset from the currently configured settings.

Below the “Recording & Reconstruction” section of the side bar, there is the “Snapshot” section:

../_images/record_screen_snapshot_side_bar.png

Snapshots are static 3D meshes. Since creating a snapshot is quick, it is well-suited for testing the recording setup before recording a video. Thus, we will record a test snapshot now.

Since we are not interested in saving the snapshot, at this point we care neither about the snapshot name nor about saving its raw images.

If you do not have a second person available for testing, the “clock” buttons to the right of the snapshot buttons may be useful: They allow configuring a timer so that the snapshot is taken a given number of seconds after clicking a snapshot button. If a timer is configured, the buttons are highlighted in yellow.

Click “Test snapshot (not saved)” to create, as the button says, a test snapshot that will not be saved to disk. Cheese!

After a couple of seconds, the reconstructed snapshot should appear in the 3D view. For example, with 10 Femto Bolt cameras, it might look like this:

../_images/record_screen_3d_view_snapshot.png

Note

If your PC does not have a sufficient amount of CPU or GPU memory, you might experience issues here: the reconstruction may end with an error message or even crash the program.

In this case, please go to the settings tab and activate the setting “Minimize memory usage” in the “Reconstruction” section if it is not activated yet. This causes the reconstruction process to use less memory, at the expense of reconstruction speed. Then, retry creating a snapshot.

After a crash, there is no need to go through the whole recording setup again to retry, as long as none of the cameras have been moved: After restarting the program, click the “Record” tab on the top and choose “Use previous setup”, which will automatically load the last used recording setup and restart the sensors. At this point, you are ready to retry. In general, the program automatically saves the latest recording setup, and it is possible to return to it in this way.

You might want to inspect the snapshot to see whether the reconstruction settings are fine, or whether something needs to be changed. To make changes to the recording setup, you can return to the individual setup steps by clicking “Calibration & Configuration” at the top of the side bar to expand the corresponding section, and then clicking the button for the setting that you want to change. However, be aware that repeating the sensor calibration step will most likely require you to repeat the later steps as well, as the coordinate system of the setup may change as a result.

Once you are happy with the settings, it is time to record the first video.

../_images/record_screen_video_side_bar.png

You may enter a descriptive name for the video if you want to, but this is not required. The name of a recording may always be changed later.

As with snapshots, the “clock” button may be used to configure a timer for starting the video. It also allows limiting the video length to a fixed maximum number of seconds.

Once you are ready, click on “Start recording”, which will show the recording window:

../_images/recording_popup.png

We recommend keeping the first video to a couple of seconds for quick testing, and then clicking “■ Stop”. While recording, the following information is visible in the recording window:

  • Duration: The recording duration of the video so far (in the format hh:mm:ss) is shown in a large font on the top left.

  • Dropped frames: The number of images that have been dropped while recording so far, i.e., images which were expected but not recorded by the camera, images which could not be transmitted to the PC, or images which could not be saved to disk. If a significant number of frames get dropped, this may indicate issues with the cameras’ USB connections or with the speed of processing the images.

  • Buffered data: The amount of video data, in Megabytes, that has accumulated and is waiting to be written to disk, out of the configured maximum cache size.

  • Size on disk: The size of the recorded data so far, in Gigabytes, out of the free disk space available for the workspace.

  • Estimated remaining: The estimated remaining recording duration until the disk is full. Note that the images of some cameras, such as the Azure Kinect and Femto Bolt, are compressed if they are configured to use the MJPEG format, which means that their size varies with the image content. In that case, this time estimate is only an approximation (a rough version of this calculation is sketched below this list).
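
The values above are related by simple arithmetic. The following sketch uses made-up example numbers (the camera count, frame rate, average frame size, dropped-frame count, and free disk space are all assumptions, not values read from ScannedReality Studio) to show how a dropped-frame rate and the remaining-time estimate can be derived:

    # Back-of-the-envelope calculation for the recording statistics above.
    # All input numbers are hypothetical; with MJPEG-compressed cameras the
    # per-frame size varies with the image content, so this is only a rough estimate.

    num_cameras = 10          # e.g., ten Femto Bolt cameras
    fps = 30                  # frames per second per camera
    avg_frame_size_mb = 0.5   # assumed average size of one camera frame, in MB

    # Dropped-frame rate over one minute of recording.
    expected_frames = num_cameras * fps * 60
    dropped_frames = 12       # value as shown in the recording window
    print(f"Drop rate: {dropped_frames / expected_frames:.3%}")   # ~0.067%; occasional drops are normal

    # Remaining recording time until the disk is full.
    write_rate_mb_per_s = num_cameras * fps * avg_frame_size_mb
    free_disk_gb = 500        # assumed free space in the workspace
    remaining_s = free_disk_gb * 1024 / write_rate_mb_per_s
    print(f"Estimated remaining: {remaining_s / 60:.0f} minutes")  # ~57 minutes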

Ideally, both dropped frames and buffered data should stay at zero, as in the screenshot above. However, it is normal for individual frames to be dropped occasionally. This will hardly be noticeable in the result and is not completely avoidable: frames may fail to transfer, or the operating system may prioritize other (background) tasks over the recording for short periods of time. A small amount of buffered data is normal as well and will not cause any issues.

However, if you experience frequent dropped frames, or if the amount of buffered data consistently increases, then there is an issue that should be addressed. Please see here for troubleshooting.

After recording a video, you may preview it immediately to see how the recording turned out. To obtain the final volumetric video, the recording needs to be processed.

Either of these actions may be started right after recording by stopping the recording with the corresponding button, “■ Stop & open” or “■ Stop & process”, or by clicking “Open / Process last recorded video” in the side bar afterwards. However, for the purpose of this tutorial, we will go via the Manage & Process tab in order to briefly introduce it.

Video previewing and processing

The Manage & Process tab allows you to manage recordings and process them. Go to this screen now by clicking the “Manage & Process” item in the tab bar on the top.

../_images/process_screen_single_recording.png

The Manage & Process tab with a single recording.

On the left side, this screen displays a list of all recordings from all configured workspaces. Currently, there is only the single recording that we just created.

At the top of this screen, there are some options to sort and filter the recordings.

Clicking the recording selects it, showing its details in the side bar on the right:

../_images/process_screen_selected_recording.png

In the side bar, the recording’s name may be edited, the recording may be deleted, and its folder on disk may be opened in a file manager.

Most importantly, it is also possible to open the recording here, either with the corresponding button, or by simply double-clicking the recording in the recording list. Do this now to proceed to the recording details screen:

../_images/recording_details_screen.png

The recording details screen with a point cloud preview being shown. (Click the image to see it larger.)

The recording details screen allows previewing recordings and offers many possibilities for modifying their recording setup, as well as for processing and exporting them.

To preview the recording, click the Play button on the bottom left. The “Display” dropdown on the top left may be used to switch between the 3D point cloud preview and the raw image streams of any camera.

We are interested in processing the video, so locate the “Video” section in the side bar on the right:

../_images/recording_details_screen_video.png

Here, it is possible to set a start and an end timestamp in order to process only a sub-range of the video. This can be skipped if you would like to process the whole video.

Optionally, enter a descriptive name for the reconstruction, or leave it at the default. Like recording names, reconstruction names can easily be changed afterwards. Usually, naming reconstructions is not necessary, since only a single reconstruction will be created for a recording. However, names can be very useful for distinguishing reconstructions, for example if you create videos for different time ranges within a recording, or with different reconstruction settings.

Before starting the processing, you may also want to adjust the reconstruction settings, which may be done in the separate “Reconstruction” section above. These are the same settings mentioned above in the context of snapshots. Each recording saves its own reconstruction settings, which may differ from the reconstruction settings used for live snapshots.

Finally, click “Reconstruct video”. Depending on your PC hardware, this may take a while, though if you limited the video to a couple of seconds, it should not take long. During processing, keyframes of the resulting video are shown as a preview:

../_images/process_popup.jpg

Once the reconstruction has finished, you will be taken to the View & Edit tab with the new video open.

Note

If you later want to re-open the reconstructed video, you can do so quickly from the Manage & Process tab, where it will be listed to the right of its recording. Double-click the video to open it.

Alternatively, the video file (in the .xrv file format) can be opened from within the View & Edit tab by clicking the “Open” button there and selecting the file in the file system.

Playing back the video

../_images/edit_screen.png

Playing back the final processed video.

Congratulations! You just created your first volumetric video in ScannedReality Studio. You can start video playback by clicking the “Play” button on the bottom left, or by pressing the Space key.

Apart from playing videos, the View & Edit tab allows cutting and concatenating videos, and it offers options for exporting videos (and still frames from videos) in different file formats.

If you would like to use the resulting video file in a game engine or display it on the web, please see deploying videos for information on the available plugins. Each plugin comes with its own documentation.

To use the resulting video file directly in another application, you will need to locate the video file (in the .xrv file format) on disk. This may be done by clicking the “Show in file manager” button at the top right of the View & Edit tab.

To recap, an overview of the data flow for video creation is given in the image below: After creating a recording containing raw sensor data, the recording is processed to create snapshots (static 3D meshes) or volumetric videos. Volumetric videos may be exported as mesh sequences, and raw sensor data may be exported as image and audio files.

../_images/file_format_flow.svg
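
As a final, heavily hedged illustration of this flow: if you export a video as a mesh sequence, you can iterate over the exported frames in your own tooling. The folder name, the per-frame .obj naming, and the use of the third-party trimesh library below are illustrative assumptions, not part of ScannedReality Studio:

    from pathlib import Path

    import trimesh  # generic mesh-loading library, installed separately (pip install trimesh)

    export_dir = Path("my_first_video_export")      # assumed export folder
    frame_paths = sorted(export_dir.glob("*.obj"))  # assumed one mesh file per frame

    for index, frame_path in enumerate(frame_paths):
        # force="mesh" makes trimesh return a single mesh rather than a scene.
        mesh = trimesh.load(frame_path, force="mesh")
        print(f"Frame {index}: {len(mesh.vertices)} vertices, {len(mesh.faces)} faces")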

That is it for this tutorial! We hope that you have gained an overview of the video recording workflow with ScannedReality Studio. More in-depth documentation of the program’s functionality is available in the following chapters.