
LiveScan3D is open-source volumetric video capture software. It was originally developed by Marek Kowalski and Jacek Naruniec and is now maintained by BuildingVolumes, a group working to make volumetric video capture more widely available by developing open-source tools for capture, processing, and playback.
🧪 Do you want to help us build LiveScan3D by testing our public beta? Sign up for the beta preview by sending us a short mail.
Functionality
LiveScan3D uses multiple low-cost RGBD cameras, such as the Azure Kinect, to capture a volumetric representation of a scene. It synchronizes all cameras spatially and temporally, then captures the scene into widely used file formats. You can record with up to 9 cameras. LiveScan3D's distributed server/client structure lets you capture with a few low-end PCs linked via a local network instead of one very high-end PC, making capture more economically feasible.
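The server/client idea can be illustrated with a minimal sketch: one server process collects messages that several capture clients send over the local network. Note that the port handling, message format, and "frame captured" payload here are illustrative assumptions, not the actual LiveScan3D network protocol.

```python
# Minimal sketch of a server/client capture setup: one server PC gathers
# reports from several capture clients over the local network. The message
# format is a made-up placeholder, not LiveScan3D's real protocol.
import socket
import threading

HOST, PORT = "127.0.0.1", 0  # port 0: let the OS pick a free port

received = []

def serve(server_sock, n_clients):
    # Accept one connection per capture client and store what it sends.
    for _ in range(n_clients):
        conn, _addr = server_sock.accept()
        with conn:
            received.append(conn.recv(1024).decode())

server_sock = socket.create_server((HOST, PORT))
port = server_sock.getsockname()[1]
t = threading.Thread(target=serve, args=(server_sock, 2))
t.start()

# Two "capture clients" (in practice: separate low-end PCs) report a frame.
for cam_id in range(2):
    with socket.create_connection((HOST, port)) as c:
        c.sendall(f"camera {cam_id}: frame captured".encode())

t.join()
server_sock.close()
print(sorted(received))
```

In the real system each client would drive its own RGBD camera and stream depth and color data; the sketch only shows why the workload can be spread across several inexpensive machines instead of one high-end PC.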

You can capture point clouds in the .ply format, or raw color and depth data as .png and .tiff frames for further post-processing.
LiveScan3D also includes a playback application that lets you view your volumetric captures in real time as interactive 3D and experiment with the visualization.
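Since the exported point clouds are plain .ply files, they are easy to inspect or post-process with your own scripts. Below is a hedged sketch that writes and re-reads a tiny ASCII .ply point cloud; the exact property layout (x/y/z plus RGB) and the filename are assumptions for illustration, not a guarantee about LiveScan3D's exporter output.

```python
# Minimal ASCII .ply round-trip: write a tiny point cloud and read it back.
# The x/y/z + red/green/blue property layout is an illustrative assumption.

def write_ply(path, points):
    """points: list of (x, y, z, r, g, b) tuples."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for x, y, z, r, g, b in points:
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

def read_ply(path):
    with open(path) as f:
        lines = f.read().splitlines()
    # Vertex count comes from the "element vertex N" header line.
    n = int(next(l for l in lines if l.startswith("element vertex")).split()[-1])
    body = lines[lines.index("end_header") + 1:]
    pts = []
    for line in body[:n]:
        x, y, z, r, g, b = line.split()
        pts.append((float(x), float(y), float(z), int(r), int(g), int(b)))
    return pts

pts = [(0.0, 0.0, 0.0, 255, 0, 0), (0.1, 0.2, 0.3, 0, 255, 0)]
write_ply("frame_0000.ply", pts)
assert read_ply("frame_0000.ply") == pts
```

For serious post-processing you would normally reach for an established point cloud library rather than hand-rolled parsing, but the round-trip above shows how approachable the format is.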

Roadmap
At the moment, LiveScan3D is still in a fairly basic state and not ready for use in a production setting. The capture quality is low, the UI/UX is in a rough shape, and there are many bugs.
We’re aiming for a new release in mid-2023 that puts LiveScan3D on a solid base, with an improved workflow, better tools, and higher capture quality, moving it toward a production-ready state that allows small content creators, artists, and everybody else to produce volumetric videos.
The new version of LiveScan3D will feature the following improvements:
- Stable temporal synchronization of up to 9 cameras
- An improved spatial synchronization workflow
- An overhauled UI for a more comfortable workflow
- Control of camera settings, such as white balance and exposure
- A better preview, in 2D as well as 3D
- Point cloud and raw data capture, for further post-processing
- An improved video playback tool
- Playback in Unity3D
After this first release, we’re planning to add more features to close the gap to commercially available volumetric video capture software.
The next big step after the initial release is a meshing and texturing tool, which will let you capture your scene in much higher detail and play back your captures in well-known tools such as Blender, Unity, or Unreal Engine. This is scheduled for Q1 2025. You can follow the development of LiveScan3D on our blog!
👩🏽‍🔬 LiveScan3D is an open-source project developed by a very small team. If you share our vision and want to help accelerate development by contributing, testing, or managing, we’d be very happy to have you on board!