<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Christopher Remde</title>
	<atom:link href="https://chrisrem.de/feed/" rel="self" type="application/rss+xml" />
	<link>https://chrisrem.de</link>
	<description>Christopher Remde - XR Developer</description>
	<lastBuildDate>Mon, 20 Oct 2025 14:16:33 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.3</generator>

<image>
	<url>https://chrisrem.de/wp-content/uploads/2025/07/cropped-Logo-32x32.png</url>
	<title>Christopher Remde</title>
	<link>https://chrisrem.de</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Geometry Sequence Player Package</title>
		<link>https://chrisrem.de/geometry-sequence-player-package/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 27 May 2025 20:15:14 +0000</pubDate>
				<category><![CDATA[Project]]></category>
		<category><![CDATA[alembic]]></category>
		<category><![CDATA[cache]]></category>
		<category><![CDATA[Geometry]]></category>
		<category><![CDATA[mesh]]></category>
		<category><![CDATA[Package]]></category>
		<category><![CDATA[Player]]></category>
		<category><![CDATA[Plugin]]></category>
		<category><![CDATA[pointcloud]]></category>
		<category><![CDATA[Sequence]]></category>
		<category><![CDATA[textures]]></category>
		<category><![CDATA[Unity]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[Volumetric]]></category>
		<category><![CDATA[XR]]></category>
		<guid isPermaLink="false">https://chrisrem.de/?p=745</guid>

					<description><![CDATA[Quickstart &#124; Documentation &#124; Unity Asset Store &#124; Unity Forums Thread &#124; License The Geometry Sequence Player is a package for the Unity game engine that enables playback of large geometry sequences inside Unity. The package can be used to play back pointcloud, mesh or textured mesh sequences. It is available either on Github (for [&#8230;]]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Geometry Sequence Player for Unity" width="500" height="281" src="https://www.youtube.com/embed/5HA_HwtjIu0?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<p><strong><a href="https://buildingvolumes.github.io/Unity_Geometry_Sequence_Player/docs/quickstart/quick-start/">Quickstart</a> | <a href="https://buildingvolumes.github.io/Unity_Geometry_Sequence_Player/">Documentation</a> | <a href="https://u3d.as/3suF">Unity Asset Store</a> | <a href="https://discussions.unity.com/t/released-geometry-sequence-player/921802">Unity Forums Thread</a> | <a href="https://github.com/BuildingVolumes/Unity_Geometry_Sequence_Player#license">License</a></strong></p>



<p>The Geometry Sequence Player is a package for the Unity game engine that enables playback of large geometry sequences inside Unity. The package can be used to play back pointcloud, mesh or textured mesh sequences.<br><br>It is available either on GitHub (for non-commercial use) or on the Unity Asset Store (for commercial use):</p>



<p></p>



<div class="wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-a5331a9e wp-block-columns-is-layout-flex" style="padding-top:0;padding-right:0;padding-bottom:0;padding-left:0">
<div class="wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:50%">
<figure class="wp-block-image aligncenter size-large is-resized has-custom-border wp-duotone-default-filter" style="margin-top:0;margin-right:0;margin-bottom:0;margin-left:0"><a href="https://github.com/BuildingVolumes/Unity_Geometry_Sequence_Player" target="_blank" rel=" noreferrer noopener"><img fetchpriority="high" decoding="async" width="230" height="225" src="https://chrisrem.de/wp-content/uploads/2025/05/github-mark-white.png" alt="Link to Github Repo" class="wp-image-748" style="border-style:none;border-width:0px;border-radius:0px;width:67px;height:auto"/></a><figcaption class="wp-element-caption"><mark style="background-color:rgba(0, 0, 0, 0);color:#ffffff" class="has-inline-color"><a href="https://github.com/BuildingVolumes/Unity_Geometry_Sequence_Player" data-type="link" data-id="https://github.com/BuildingVolumes/Unity_Geometry_Sequence_Player">Github</a></mark></figcaption></figure>
</div>



<div class="wp-block-column is-vertically-aligned-center is-layout-flow wp-container-core-column-is-layout-a77db08e wp-block-column-is-layout-flow" style="flex-basis:50%">
<figure class="wp-block-image aligncenter size-large wp-duotone-default-filter" style="margin-top:0;margin-bottom:0"><a href="https://assetstore.unity.com/packages/tools/animation/geometry-sequence-player-307918" target="_blank" rel=" noreferrer noopener"><img decoding="async" width="866" height="156" src="https://chrisrem.de/wp-content/uploads/2025/05/AS-Logo.png" alt="" class="wp-image-749" srcset="https://chrisrem.de/wp-content/uploads/2025/05/AS-Logo.png 866w, https://chrisrem.de/wp-content/uploads/2025/05/AS-Logo-300x54.png 300w, https://chrisrem.de/wp-content/uploads/2025/05/AS-Logo-768x138.png 768w" sizes="(max-width: 866px) 100vw, 866px" /></a><figcaption class="wp-element-caption"><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-foreground-color"><a href="https://assetstore.unity.com/packages/tools/animation/geometry-sequence-player-307918" data-type="link" data-id="https://assetstore.unity.com/packages/tools/animation/geometry-sequence-player-307918">Get it on the Asset Store</a></mark></figcaption></figure>
</div>
</div>



<p></p>



<p><strong>What is a geometry sequence?</strong></p>



<p>In a geometry sequence, each frame consists of an individual mesh or pointcloud, which is shown at short intervals to create the illusion of an animation, kind of like a three-dimensional flipbook. As each frame is independent of the next, no limitations apply to your animation, unlike, for example, skinned meshes or blendshapes, where mesh topology can&#8217;t be changed. This applies, for example, to:<br><br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f30a.png" alt="🌊" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Pre-rendered fluid caches<br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/26d3-fe0f-200d-1f4a5.png" alt="⛓️‍💥" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Baked physics simulations<br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f7e7.png" alt="🟧" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Animated procedural meshes<br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f64b-200d-2640-fe0f.png" alt="🙋‍♀️" class="wp-smiley" style="height: 1em; max-height: 1em;" /> 4D scans with per-frame textures<br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f39e.png" alt="🎞" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Volumetric videos<br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4f7.png" alt="📷" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Pointclouds captured by RGBD sensors (Kinect, RealSense, Lidar)<br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f310.png" alt="🌐" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Data visualisations</p>
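


<p><em>To make the flipbook analogy a bit more concrete, here is a minimal Unity C# sketch of the idea: one independent mesh per frame, swapped in at a fixed interval. This is only an illustration, not the package&#8217;s actual implementation; <code>LoadMeshForFrame()</code> is a placeholder for however the per-frame geometry is loaded.</em></p>



<pre class="wp-block-code"><code>// Minimal sketch of the "3D flipbook" idea: every frame is an independent mesh
// that gets swapped in at a short interval. Not the package's implementation;
// LoadMeshForFrame() is a stand-in for streaming the per-frame geometry from disk.
using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class GeometryFlipbookSketch : MonoBehaviour
{
    public float framesPerSecond = 30f;
    public int frameCount = 100;

    MeshFilter meshFilter;

    void Start()
    {
        meshFilter = (MeshFilter)GetComponent(typeof(MeshFilter));
    }

    void Update()
    {
        // Derive the current frame index from the elapsed time and show that frame's mesh.
        int frame = (int)(Time.time * framesPerSecond) % frameCount;
        meshFilter.mesh = LoadMeshForFrame(frame);
    }

    // Placeholder: a real player streams and caches these meshes instead of creating them here.
    Mesh LoadMeshForFrame(int frame)
    {
        return new Mesh();
    }
}</code></pre>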



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">
<figure class="wp-block-image size-large wp-duotone-unset-1"><img decoding="async" src="https://media0.giphy.com/media/v1.Y2lkPTc5MGI3NjExb2w2NDlvd2E0ZjdoM2o1YjYyZ3p5NmowcjZnOGFvNDI1cHRlNnF4ZyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/H7APgpTRaAWa1uWj9d/giphy.gif" alt=""/></figure>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">
<figure class="wp-block-image size-large wp-duotone-unset-2"><img decoding="async" src="https://media1.giphy.com/media/v1.Y2lkPTc5MGI3NjExOHUwN2x1bjd4bWswcWVucjg0amQ3MmdldDVmYmgwNHdoNGY1eWR0YSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/6vdkqmfFheMQHmspm1/giphy.gif" alt=""/></figure>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">
<figure class="wp-block-image size-large wp-duotone-unset-3"><img decoding="async" src="https://media1.giphy.com/media/v1.Y2lkPTc5MGI3NjExNnBwY20zenduajFuNDZla3d5OGFtdmI2aWJzaHg5aWc0Yml2bXc4MSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/Pi6XDkVD3Yf5AtGfWI/giphy.gif" alt=""/></figure>
</div>
</div>



<p></p>



<p><strong>Features</strong></p>



<ul class="wp-block-list">
<li>Can import sequences from almost any source, with an included converter that covers most standard file formats.</li>



<li>Quick setup of sequences with an easy-to-use Editor UI and playback controls</li>



<li>Integrated into the Unity Timeline </li>



<li>Granular playback control for interactive experiences is provided through a scripting API (see the sketch after this list)</li>



<li>Sequences don&#8217;t need to be loaded into your Unity project or even into RAM; everything is streamed from disk at runtime</li>



<li>Highly efficient playback system that allows you to stream millions of points/polygons per second, even on lower-end hardware</li>
</ul>
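


<p><em>As a rough illustration of the kind of granular control the scripting API gives you, here is a hypothetical sketch. The component and method names below are placeholders invented for this example, not necessarily the package&#8217;s real API &#8211; please check the documentation linked above for the actual scripting reference.</em></p>



<pre class="wp-block-code"><code>// Hypothetical sketch only: "SequencePlayerStandIn" and its methods are placeholder
// names that illustrate scripted playback control. They are NOT necessarily the
// package's real class or method names - see the documentation for the actual API.
using UnityEngine;

public class SequencePlayerStandIn : MonoBehaviour
{
    public void OpenSequence(string name) { /* load sequence metadata from disk */ }
    public void SetLooping(bool loop)     { /* toggle looping */ }
    public void Play()                    { /* start streaming frames */ }
    public void GoToTime(float seconds)   { /* seek to a point in the sequence */ }
}

public class InteractivePlaybackSketch : MonoBehaviour
{
    public SequencePlayerStandIn player;

    void Start()
    {
        player.OpenSequence("MyCapture");
        player.SetLooping(true);
        player.Play();
    }

    // Called e.g. from a UI slider to scrub through the capture interactively.
    public void OnScrub(float seconds)
    {
        player.GoToTime(seconds);
    }
}</code></pre>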



<p></p>



<p><strong>The package</strong></p>



<p>Originally, this package was created to facilitate the playback of volumetric videos from <a href="https://chrisrem.de/livescan3d/" data-type="post" data-id="246">LiveScan3D</a>, captured with depth sensors such as the Azure Kinect or Intel RealSense. But I realized that it could also be useful for many more applications, such as getting fluid and physics simulations from Blender into Unity!<br><br>I have used the package intensively in many projects myself, so it&#8217;s well proven and tested. The package has been released on GitHub as open-source code for quite a while already, but to make the development and support a bit more sustainable, I have now decided to also offer the package on the Unity Asset Store.<br>If you want to use the package commercially, or if you want to support our development, you can <a href="https://assetstore.unity.com/packages/tools/animation/geometry-sequence-player-307918" data-type="link" data-id="https://assetstore.unity.com/packages/tools/animation/geometry-sequence-player-307918">buy it there</a>! Thank you very much for your support <img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f49a.png" alt="💚" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Sparse camera volumetric video applications. A comparison of visual fidelity, user experience, and adaptability</title>
		<link>https://chrisrem.de/sparse-camera-volumetric-video-applications-a-comparison-of-visual-fidelity-user-experience-and-adaptability/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Mon, 10 Mar 2025 12:04:27 +0000</pubDate>
				<category><![CDATA[Publication]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Brekel]]></category>
		<category><![CDATA[comparision]]></category>
		<category><![CDATA[Depthkit]]></category>
		<category><![CDATA[Livescan]]></category>
		<category><![CDATA[Livescan3D]]></category>
		<category><![CDATA[mesh]]></category>
		<category><![CDATA[pointcloud]]></category>
		<category><![CDATA[Review]]></category>
		<category><![CDATA[RGBD]]></category>
		<category><![CDATA[Studio]]></category>
		<category><![CDATA[texture]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[Volumetric]]></category>
		<category><![CDATA[VolumetricCapture]]></category>
		<category><![CDATA[XR]]></category>
		<guid isPermaLink="false">https://chrisrem.de/?p=752</guid>

					<description><![CDATA[A review paper comparing state-of-the-art sparse camera volumetric video applications in 2024. Comparison candidates are the software Depthkit Studio, VolumetricCapture, Brekel Pointcloud v3 and LiveScan3D Christopher Remde Moritz Queisner Igor M. Sauer 🌐 Read Article on Frontiers (Open Access)📄 Download PDF Abstract Introduction Volumetric video production in commercial studios is [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>A review paper comparing state-of-the-art sparse camera volumetric video applications in 2024. Comparison candidates are the software packages Depthkit Studio, VolumetricCapture, Brekel Pointcloud v3 and LiveScan3D.</em></p>



<div style="height:0px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-group is-nowrap is-layout-flex wp-container-core-group-is-layout-ecba6b92 wp-block-group-is-layout-flex" style="margin-top:0;margin-bottom:0">
<p><a href="https://orcid.org/0000-0003-4584-6379" data-type="link" data-id="https://orcid.org/0000-0003-4584-6379">Christopher Remde</a></p>



<p><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231">Moritz Queisner</a></p>



<p><a href="https://orcid.org/0000-0001-9355-937X">Igor M. Sauer</a></p>
</div>



<p class="has-text-align-left has-medium-font-size" style="padding-top:0;padding-bottom:0"><strong><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f310.png" alt="🌐" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://doi.org/10.3389/frsip.2025.1405808" data-type="link" data-id="https://doi.org/10.3389/frsip.2025.1405808">Read Article on Frontiers (Open Acces)</a></strong><br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4c4.png" alt="📄" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://chrisrem.de/wp-content/uploads/2025/06/Sparse-Camera-Volumetric-Video-Applications.pdf" data-type="link" data-id="https://chrisrem.de/wp-content/uploads/2025/06/Sparse-Camera-Volumetric-Video-Applications.pdf">Download PDF</a></p>



<div style="height:29px" aria-hidden="true" class="wp-block-spacer"></div>



<figure class="wp-block-video"><video controls src="https://zenodo.org/records/13908942/files/VV_Comparision_Videos.mp4?preview=0"></video></figure>



<div style="height:29px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Abstract</h4>



<p><strong>Introduction</strong></p>



<p>Volumetric video in commercial studios is predominantly produced using a multi-view stereo process that relies on a high two-digit number of cameras to capture a scene. Due to the hardware requirements and associated processing costs, this workflow is resource-intensive and expensive, making it unattainable for creators and researchers with smaller budgets. Low-cost volumetric video systems using RGBD cameras offer an affordable alternative. As these small, mobile systems are a relatively new technology, the available software applications vary in terms of workflow and image quality. In this paper, we provide an overview of the technical capabilities of sparse camera volumetric video capture applications and assess their visual fidelity and workflow.</p>



<p><strong>Materials and methods</strong></p>



<p>We selected volumetric video applications that are publicly available, support capture with multiple <em>Microsoft Azure Kinect</em> cameras and run on consumer-grade computer hardware. We compared the features, usability, and workflow of each application and benchmarked them in five different scenarios. Based on the benchmark footage, we analyzed spatial calibration accuracy, artifact occurrence and conducted a subjective perception study with 19 participants from a game design study program to assess the visual fidelity of the captures.</p>



<p><strong>Results</strong></p>



<p>We evaluated three applications: <em>Depthkit Studio</em>, <em>LiveScan3D</em> and <em>VolumetricCapture</em>. We found <em>Depthkit Studio</em> to provide the best experience for novice users, while <em>LiveScan3D</em> and <em>VolumetricCapture</em> require advanced technical knowledge to operate. The footage captured by <em>Depthkit Studio</em> showed the fewest artifacts by a large margin, followed by <em>LiveScan3D</em> and <em>VolumetricCapture</em>. These findings were confirmed by the participants, who preferred <em>Depthkit Studio</em> over <em>LiveScan3D</em> and <em>VolumetricCapture</em>.</p>



<p><strong>Discussion</strong></p>



<p>Based on the results, we recommend <em>Depthkit Studio</em> for the highest fidelity captures. <em>LiveScan3D</em> produces footage of only acceptable fidelity but is the only candidate that is available as open-source software. We therefore recommend it as a platform for research and experimentation. Due to the lower fidelity and high setup complexity, we recommend <em>VolumetricCapture</em> only for specific use-cases where its ability to handle a high number of sensors in a large capture volume is required.</p>
]]></content:encoded>
					
		
		<enclosure url="https://zenodo.org/records/13908942/files/VV_Comparision_Videos.mp4?preview=0" length="206890678" type="video/mp4" />

			</item>
		<item>
		<title>Immersive Mixed Reality Training Concept for Mastering Surgical Knot-tying</title>
		<link>https://chrisrem.de/immersive-mixed-reality-training-concept-for-mastering-surgical-knot-tying/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 08 Mar 2025 12:04:55 +0000</pubDate>
				<category><![CDATA[Publication]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Brekel]]></category>
		<category><![CDATA[comparision]]></category>
		<category><![CDATA[Depthkit]]></category>
		<category><![CDATA[Livescan]]></category>
		<category><![CDATA[Livescan3D]]></category>
		<category><![CDATA[mesh]]></category>
		<category><![CDATA[pointcloud]]></category>
		<category><![CDATA[Review]]></category>
		<category><![CDATA[RGBD]]></category>
		<category><![CDATA[Studio]]></category>
		<category><![CDATA[texture]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[Volumetric]]></category>
		<category><![CDATA[VolumetricCapture]]></category>
		<category><![CDATA[XR]]></category>
		<guid isPermaLink="false">https://chrisrem.de/?p=756</guid>

					<description><![CDATA[A concept and prototype implementation of a surgical knot tying trainer. Implemented in Unity3D with Varjo XR-3 hardware and featuring instructions in the form of volumetric videos. Moritz Queisner Christopher Remde Robert Luzsa Igor M. Sauer 🌐 Read article on IEEE📄 Download PDF Abstract This study presents a mixed reality training concept designed to enhance [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>A concept and prototype implementation of a surgical knot tying trainer. Implemented in Unity3D with Varjo XR-3 hardware and featuring instructions in the form of volumetric videos.</em></p>



<div style="height:0px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-group is-nowrap is-layout-flex wp-container-core-group-is-layout-ecba6b92 wp-block-group-is-layout-flex" style="margin-top:0;margin-bottom:0">
<p><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231">Moritz Queisner</a></p>



<p><a href="https://orcid.org/0000-0003-4584-6379" data-type="link" data-id="https://orcid.org/0000-0003-4584-6379">Christopher Remde</a></p>



<p><a href="https://orcid.org/0000-0001-5674-9158">Robert Luzsa</a></p>



<p><a href="https://orcid.org/0000-0001-9355-937X">Igor M. Sauer</a></p>
</div>



<p class="has-text-align-left has-medium-font-size" style="padding-top:0;padding-bottom:0"><strong><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f310.png" alt="🌐" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://ieeexplore.ieee.org/document/10972878" data-type="link" data-id="https://ieeexplore.ieee.org/document/10972878">Read article on IEEE</a></strong><br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4c4.png" alt="📄" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://chrisrem.de/wp-content/uploads/2025/06/KnotbAR-Conference-Paper.pdf" data-type="link" data-id="https://chrisrem.de/wp-content/uploads/2025/06/KnotbAR-Conference-Paper.pdf">Download PDF</a></p>



<div style="height:29px" aria-hidden="true" class="wp-block-spacer"></div>



<figure class="wp-block-video"><video controls src="https://zenodo.org/records/14712546/files/KnotBAR_SUpplementary_Video.mp4?preview=0"></video></figure>



<div style="height:29px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Abstract</h4>



<p>This study presents a mixed reality training concept designed to enhance medical students’ acquisition of surgical knot-tying skills, a fundamental component of surgical training critical for effective wound closure and tissue healing. Utilizing a virtual reality headset with video passthrough functionality, the system provides adaptive visual instructions tailored to the user’s hand movements during the knot-tying process. A prototype was developed based on the concept, featuring three-dimensional videos in which virtual instructor hands demonstrate each step of the procedure. The training concept was derived from an iterative, user-centered process encompassing requirement analysis, prototype development, and evaluation. Key functionalities include the ability to display thread tension and tensile strength, dynamically adapt learning speed to the user’s progress, and deliver personalized feedback by visually augmenting the hands and fingers. Evaluation results indicate that spatial and tangible interactions facilitated by the mixed reality training prototype support the acquisition of practical skills, bridging the gap between digital and physical simulation training.</p>
]]></content:encoded>
					
		
		<enclosure url="https://zenodo.org/records/14712546/files/KnotBAR_SUpplementary_Video.mp4?preview=0" length="246245601" type="video/mp4" />

			</item>
		<item>
		<title>LiveScan3D Prototype</title>
		<link>https://chrisrem.de/livescan3d-prototype/</link>
					<comments>https://chrisrem.de/livescan3d-prototype/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 30 Mar 2024 02:30:01 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://192.168.178.39:8080/?p=476</guid>

					<description><![CDATA[👋 Hello Media Lab Bayern! Thank you very much for taking a look at my application! If you are interested in taking a closer look at a current prototype of this project, you have come to the right place! To test the prototype, you will need a Windows PC. Download LiveScan3D Prototype LiveScan3D captures the volumetric video using special cameras, so-called depth cameras, [&#8230;]]]></description>
										<content:encoded><![CDATA[<figure class="wp-block-post-featured-image wp-duotone-unset-4"><img loading="lazy" decoding="async" width="2301" height="1283" src="https://chrisrem.de/wp-content/uploads/2024/03/Screenshot-2024-03-30-025955.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="" style="object-fit:cover;" srcset="https://chrisrem.de/wp-content/uploads/2024/03/Screenshot-2024-03-30-025955.png 2301w, https://chrisrem.de/wp-content/uploads/2024/03/Screenshot-2024-03-30-025955-300x167.png 300w, https://chrisrem.de/wp-content/uploads/2024/03/Screenshot-2024-03-30-025955-1024x571.png 1024w, https://chrisrem.de/wp-content/uploads/2024/03/Screenshot-2024-03-30-025955-768x428.png 768w, https://chrisrem.de/wp-content/uploads/2024/03/Screenshot-2024-03-30-025955-1536x856.png 1536w, https://chrisrem.de/wp-content/uploads/2024/03/Screenshot-2024-03-30-025955-2048x1142.png 2048w" sizes="auto, (max-width: 2301px) 100vw, 2301px" /></figure>


<h3 class="wp-block-heading has-text-align-center"><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f44b.png" alt="👋" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Hallo Media Lab Bayern!</h3>



<p>Thank you very much for taking a look at my application! If you are interested in taking a closer look at a current prototype of this project, you have come to the right place! To test the prototype, you will need a Windows PC.</p>



<p class="has-text-align-center"><a href="http://192.168.178.39:8080/wp-content/uploads/2024/03/LiveScan3D_Prototype.zip" data-type="link" data-id="http://192.168.178.39:8080/wp-content/uploads/2024/03/LiveScan3D_Prototype.zip"><strong>Download LiveScan3D Prototyp</strong>e</a></p>



<p>LiveScan3D captures the volumetric video using special cameras, so-called <em>depth cameras</em> or <em>RGBD cameras</em>. In addition to the color (RGB), these cameras also record the distance (D) of every pixel to the camera. At the moment, only the <em>Microsoft Azure Kinect</em> model is supported by LiveScan3D.</p>



<p><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f449.png" alt="👉" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Do you happen to have an Azure Kinect at hand? Great, then you only need to install the <a href="https://github.com/microsoft/Azure-Kinect-Sensor-SDK/blob/develop/docs/usage.md" data-type="link" data-id="https://github.com/microsoft/Azure-Kinect-Sensor-SDK/blob/develop/docs/usage.md">Azure Kinect SDK</a> and connect the Kinect to your PC via USB.</p>



<p><em><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f449.png" alt="👉" class="wp-smiley" style="height: 1em; max-height: 1em;" /> </em>You don&#8217;t have a Kinect? No problem! I have created a special version of the prototype in which the Kinects are simply simulated, so you don&#8217;t need anything else.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h4 class="wp-block-heading">Anleitung</h4>



<p>1: Unpack the file</p>



<p>2: Now start <strong><em>LiveScanServer.exe</em></strong> and, if an Azure Kinect is present, also <strong><em>LiveScanClient.exe</em></strong>. If you don&#8217;t have a Kinect, start <strong><em>LiveScanClient_Simulated.exe</em></strong> instead.</p>



<p>3: In the LiveScanClient, click the <strong>plus</strong> in the top left to add as many Azure Kinects as you have connected. In simulated mode, you can start up to 4 virtual Kinects.</p>



<figure class="wp-block-image size-full wp-duotone-unset-5"><img loading="lazy" decoding="async" width="549" height="161" src="http://192.168.178.39:8080/wp-content/uploads/2024/03/Screenshot-2024-03-30-032015.png" alt="" class="wp-image-479" srcset="https://chrisrem.de/wp-content/uploads/2024/03/Screenshot-2024-03-30-032015.png 549w, https://chrisrem.de/wp-content/uploads/2024/03/Screenshot-2024-03-30-032015-300x88.png 300w" sizes="auto, (max-width: 549px) 100vw, 549px" /></figure>



<p>4: Now click <strong>Connect All</strong> in the LiveScanClient. This connects the cameras to the server.</p>



<p>5: You can now freely explore the scene in the 3D viewport. To rotate the view, hold down the left mouse button and move the mouse. To pan the view, hold down the right mouse button and move the mouse. Use the mouse wheel to zoom.</p>



<figure class="wp-block-image size-large wp-duotone-unset-6"><img decoding="async" src="http://192.168.178.39:8080/wp-content/uploads/2023/03/LivescanServer-1024x706.png" alt="" class="wp-image-254"/></figure>



<p>6: You can start a recording with the <strong>Start Capture</strong> button. It is saved in the unpacked folder under Out/NameDerAufnahme in the .ply format.</p>



<p>7: Feel free to try out the remaining settings if you like! For the virtual cameras, some settings cannot be changed, in particular the exposure and white balance.</p>



<p></p>



<h4 class="wp-block-heading">Danke fürs Testen! <img src="https://s.w.org/images/core/emoji/16.0.1/72x72/270c.png" alt="✌" class="wp-smiley" style="height: 1em; max-height: 1em;" /></h4>



<p></p>



<p>Still have questions, or is something not working? Feel free to contact me via the email address in the form <img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
]]></content:encoded>
					
					<wfw:commentRss>https://chrisrem.de/livescan3d-prototype/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>LiveScan3D: Where we are</title>
		<link>https://chrisrem.de/livescan3d-where-we-are/</link>
					<comments>https://chrisrem.de/livescan3d-where-we-are/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 20 Jan 2024 19:39:58 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">http://192.168.178.39:8080/?p=456</guid>

					<description><![CDATA[With the start of 2024, and a new release of LiveScan3D coming closer and closer, I&#8217;d like to use this post to recap what has happened so far, where we are now, and most of all, what still needs to happen before the first big update of LiveScan3D. 👉 For those who are unfamiliar [&#8230;]]]></description>
										<content:encoded><![CDATA[<figure class="wp-block-post-featured-image wp-duotone-unset-7"><img loading="lazy" decoding="async" width="1280" height="720" src="https://chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Title.png" class="attachment-post-thumbnail size-post-thumbnail wp-post-image" alt="" style="object-fit:cover;" srcset="https://chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Title.png 1280w, https://chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Title-300x169.png 300w, https://chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Title-1024x576.png 1024w, https://chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Title-768x432.png 768w" sizes="auto, (max-width: 1280px) 100vw, 1280px" /></figure>


<p>With the start of 2024, and a new release of <a href="https://chrisrem.de/livescan3d/" data-type="link" data-id="chrisrem.de/livescan3d/">LiveScan3D</a> coming closer and closer, I&#8217;d like to use this post to recap what has happened so far, where we are now, and most of all, what still needs to happen before the first big update of LiveScan3D.</p>



<p><em><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f449.png" alt="👉" class="wp-smiley" style="height: 1em; max-height: 1em;" /> For those who are unfamiliar with the project, LiveScan3D is an open-source tool for capturing low-cost volumetric video with multiple depth sensors, like the Kinect. Read more about the project <a href="http://192.168.178.39:8080/livescan3d/">here</a>.</em></p>



<p>I&#8217;m writing this blog post partially to make the development process more transparent. But to be honest, the biggest reason is to set a fixed goal for myself and get my feature creep under control.</p>



<hr class="wp-block-separator has-alpha-channel-opacity" style="margin-top:var(--wp--preset--spacing--50);margin-bottom:var(--wp--preset--spacing--50)"/>



<h4 class="wp-block-heading">What happened so far</h4>



<p>Originally, LiveScan3D was developed as a research project and was meant to be used mainly in research as well, not for serious volumetric video production. My goal for the first big update is to bring LiveScan3D into a robust, usable and production-ready state that anyone can use to produce their own volumetric video. Let&#8217;s start with a quick comparison of the last official release version of LiveScan3D and the current development build:</p>



<figure class="wp-block-image size-large is-resized wp-duotone-unset-8"><img loading="lazy" decoding="async" width="1024" height="288" src="http://wp.chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Comparision_Server-1024x288.jpg" alt="" class="wp-image-465" style="width:700px;height:auto" srcset="https://chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Comparision_Server-1024x288.jpg 1024w, https://chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Comparision_Server-300x84.jpg 300w, https://chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Comparision_Server-768x216.jpg 768w, https://chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Comparision_Server-1536x433.jpg 1536w, https://chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Comparision_Server-2048x577.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">The old server windows (left) vs. the new server window (right)</figcaption></figure>



<figure class="wp-block-image size-large wp-duotone-unset-9"><img loading="lazy" decoding="async" width="1024" height="283" src="http://wp.chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Comparision_Client-1024x283.jpg" alt="" class="wp-image-464" srcset="https://chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Comparision_Client-1024x283.jpg 1024w, https://chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Comparision_Client-300x83.jpg 300w, https://chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Comparision_Client-768x212.jpg 768w, https://chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Comparision_Client-1536x424.jpg 1536w, https://chrisrem.de/wp-content/uploads/2024/01/WayToRelease_Comparision_Client-2048x566.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">The old client windows (left) vs. the new client window (right)</figcaption></figure>



<p>So, quite a bit has changed! Let&#8217;s start with the most visible part, the UI.</p>



<hr class="wp-block-separator has-alpha-channel-opacity" style="margin-top:var(--wp--preset--spacing--50);margin-bottom:var(--wp--preset--spacing--50)"/>



<h4 class="wp-block-heading">UI / UX</h4>



<p>The UI in the old LiveScan version is a bit all over the place. It uses a lot of windows and screen real estate, so I spent some time reducing and decluttering the UI. On the server app, which is the main window you&#8217;ll be interacting with, I merged the live 3D preview window with the control panel window and sorted the buttons into a categorized layout. You&#8217;ll also notice a lot more buttons in general, but more on that later. On the client side, you no longer need to open one app per camera! You can just open it once and then add as many cameras as you want with the new tab-based system.</p>



<p>Aesthetically speaking though, the UI will still be based on the relatively ugly Windows Forms for now. But behind the scenes, I&#8217;ve done a lot of work to untangle the application core from the UI code, so replacing the UI framework should not be too much work.</p>



<hr class="wp-block-separator has-alpha-channel-opacity" style="margin-top:var(--wp--preset--spacing--50);margin-bottom:var(--wp--preset--spacing--50)"/>



<h4 class="wp-block-heading">Volumetric Capture</h4>



<p>From a bigger-picture perspective, the capture &amp; calibration process is still the same. You calibrate your cameras with printed markers, and LiveScan3D captures the volumetric data as point clouds. But there have been many improvements which, combined, make a large difference in capture quality. Captures are now temporally synced between cameras, and you can control the resolution, depth mode, exposure and white balance. Besides pointclouds, you can now capture the raw image output, which gives you many more options for post-processing.<br>Thanks to incremental improvements in the code, application performance has increased a lot. Instead of being able to run only two clients at 30 FPS on a Ryzen 3600, this is now up to four (even more if you capture raw data).</p>



<hr class="wp-block-separator has-alpha-channel-opacity" style="margin-top:var(--wp--preset--spacing--50);margin-bottom:var(--wp--preset--spacing--50)"/>



<h4 class="wp-block-heading">Playback</h4>



<figure class="wp-block-image size-large has-custom-border wp-duotone-unset-10"><img loading="lazy" decoding="async" width="1024" height="80" src="http://wp.chrisrem.de/wp-content/uploads/2024/01/UnityGeometrySequenceStream-1024x80.png" alt="" class="wp-image-467" style="border-style:none;border-width:0px;border-radius:0px" srcset="https://chrisrem.de/wp-content/uploads/2024/01/UnityGeometrySequenceStream-1024x80.png 1024w, https://chrisrem.de/wp-content/uploads/2024/01/UnityGeometrySequenceStream-300x24.png 300w, https://chrisrem.de/wp-content/uploads/2024/01/UnityGeometrySequenceStream-768x60.png 768w, https://chrisrem.de/wp-content/uploads/2024/01/UnityGeometrySequenceStream-1536x121.png 1536w, https://chrisrem.de/wp-content/uploads/2024/01/UnityGeometrySequenceStream-2048x161.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Capturing volumetric video is nice, but also kind of useless if you can&#8217;t play back the recorded data. So I developed an open-source Unity3D package, creatively named &#8220;<a href="https://buildingvolumes.github.io/Unity_Geometry_Sequence_Streaming/">Unity Geometry Sequence Streaming</a>&#8221;. With it, you can stream large pointcloud and mesh sequences from disk into Unity. It&#8217;s based on Unity&#8217;s Mesh Jobs API, which lets it read and display geometric data very efficiently. Besides volumetric data, you can also use it to play back baked animations, like water and physics simulations, or animations with changing mesh topology. The plugin is already <a href="https://buildingvolumes.github.io/Unity_Geometry_Sequence_Streaming/">published and can be downloaded right now!</a></p>
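


<p><em>Just to sketch the kind of efficient per-frame mesh upload this relies on (shown here with Unity&#8217;s writable MeshData API as an illustration &#8211; this is not the plugin&#8217;s actual code):</em></p>



<pre class="wp-block-code"><code>// A rough sketch of uploading one pointcloud frame into a Mesh without per-vertex
// managed allocations, using Unity's writable MeshData API (usable with the C# Job
// System). Illustration only, not the plugin's actual implementation.
using Unity.Collections;
using UnityEngine;
using UnityEngine.Rendering;

public static class PointcloudFrameUpload
{
    public static void Apply(Mesh mesh, NativeArray&lt;Vector3&gt; points)
    {
        Mesh.MeshDataArray dataArray = Mesh.AllocateWritableMeshData(1);
        Mesh.MeshData data = dataArray[0];

        // One position attribute per vertex.
        data.SetVertexBufferParams(points.Length,
            new VertexAttributeDescriptor(VertexAttribute.Position));
        data.GetVertexData&lt;Vector3&gt;().CopyFrom(points);

        // A pointcloud simply indexes every vertex once.
        data.SetIndexBufferParams(points.Length, IndexFormat.UInt32);
        NativeArray&lt;int&gt; indices = data.GetIndexData&lt;int&gt;();
        for (int i = 0; i &lt; points.Length; i++) indices[i] = i;

        data.subMeshCount = 1;
        data.SetSubMesh(0, new SubMeshDescriptor(0, points.Length, MeshTopology.Points));

        // Hand the buffers over to the Mesh in one call and update its bounds.
        Mesh.ApplyAndDisposeWritableMeshData(dataArray, mesh);
        mesh.RecalculateBounds();
    }
}</code></pre>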



<hr class="wp-block-separator has-alpha-channel-opacity" style="margin-top:var(--wp--preset--spacing--50);margin-bottom:var(--wp--preset--spacing--50)"/>



<h4 class="wp-block-heading">What still needs to be done</h4>



<p>At the moment, LiveScan3D is still a bit of a loose stack of features, not all of which make sense to use, and some of which are misplaced in the workflow pipeline. I&#8217;ve seen too many open-source projects in this state; they become confusing and frustrating to use and are ultimately more or less dropped by the community. Because of this, it&#8217;s important to me that LiveScan3D always offers a well-rounded and user-friendly experience. So here&#8217;s what I still want to implement before going for the release (<em>if you want to, you can always check our <a href="https://github.com/orgs/BuildingVolumes/projects/1/views/1">Kanban board here</a> to see where we are exactly</em>):<br></p>



<p><strong>Untangling the mess and tying up loose ends</strong>: The server and client applications are where almost all new features have been implemented, even though that doesn&#8217;t really make sense for most of them. These two applications should really only concentrate on configuring and reliably capturing from the sensors. What happens with the data after that, e.g. any post-processing, should be handled in another application, the <em>LiveScan3D Editor</em>.<br>Just a few examples of the current bad state:</p>



<ul class="wp-block-list">
<li>The temporal synchronization process is happening directly after a capture, which is annoying as you have to wait until you can capture again. If something goes wrong, there is no way to sync your data again.</li>



<li>If you want to capture pointcloud data, the capture applications have to convert, stitch and save the pointclouds, which hurts performance, while you also lose access to your raw data.</li>



<li>If you capture raw data, then&#8230; you can&#8217;t do anything with it yet.</li>
</ul>



<p>So the plan is to pull out all components of the server and client that are not needed for recording and put them into a volumetric video post-processing application called <strong>LiveScan Editor</strong>. The pipeline for capturing a video would then look like this:</p>



<ol class="wp-block-list">
<li>Set up your sensors, then calibrate and preview your capture space with LiveScan <strong>Server &amp; Client</strong></li>



<li>Capture only the raw data from the sensors with LiveScan <strong>Server &amp; Client</strong></li>



<li>Load your capture into the LiveScan <strong>Editor</strong> and post-process it</li>



<li>Export your capture into the desired format from the LiveScan <strong>Editor</strong></li>
</ol>



<p>Besides the improved user workflow, this brings other major advantages. Capture performance should be boosted by a large margin, as no processing needs to happen at capture time anymore. And as you retain all the raw data from the sensors, you can revisit your captures later on, once more post-processing options are released, and improve their quality.</p>



<p><strong>Tests, Tests, Tests</strong>: LiveScan3D has not been the most stable software so far; crashes are frequent. This, of course, is bad in any software, but you especially don&#8217;t want your software to crash during a crucial moment in a capture you worked towards for days (ask me how I know). We&#8217;re dogfooding LiveScan in the research lab I&#8217;m currently working at, but as I only need to use it occasionally, I don&#8217;t catch all of the bugs. Also, long bug-hunts and manual tests are not really an option for me, as I&#8217;m doing all of this in my spare time. So before the release, I want to write an automated test suite that covers most of the functionality. My earlier work to separate the UI from the application core has proven especially useful here, as I can now test core functions without having to go through the UI.<br>Of course, this alone won&#8217;t do it, so with some manual tests before the release and with the help of the community, I hope to achieve a stable version of LiveScan soon enough.</p>
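


<p><em>Just to illustrate that point (with hypothetical placeholder names, not LiveScan3D&#8217;s real types), a core function that is decoupled from the UI can be exercised directly in a plain NUnit test:</em></p>



<pre class="wp-block-code"><code>// Hypothetical sketch: "CaptureSettingsStandIn" is a placeholder, not one of
// LiveScan3D's real classes. Only the NUnit attributes and asserts are real.
using NUnit.Framework;

public class CaptureSettingsStandIn
{
    public int ClientCount;

    // Stand-in for a core function that no longer needs any UI to run.
    public bool Validate()
    {
        // LiveScan supports at most 9 cameras, so reject anything outside 1..9.
        if (ClientCount &lt; 1) return false;
        if (ClientCount &gt; 9) return false;
        return true;
    }
}

[TestFixture]
public class CaptureSettingsTests
{
    [Test]
    public void Validate_RejectsTooManyClients()
    {
        var settings = new CaptureSettingsStandIn { ClientCount = 12 };
        Assert.That(settings.Validate(), Is.False);
    }
}</code></pre>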



<p><strong>Publication</strong>: When the software is finally ready, the publication itself is another work package of its own. Of course, I could go the usual open-source way, write a short how-to into the readme.md together with some incomplete build instructions and call it a day, but that&#8217;s not the way I&#8217;d like to go with LiveScan. From the start, I always wanted it to be software that can be used by anyone, regardless of their technical skillset. So for this reason, I&#8217;m creating a small static website right now, which contains everything you need to get started with LiveScan3D. This includes tutorials, documentation, downloads, videos and illustrations and is therefore some work in itself, which I&#8217;m hoping will pay off in the long run! In my development experience so far, well-written documentation avoids a lot of issues and support requests. After the release, I&#8217;m also hoping to add a forum to the website, where the community can exchange ideas on all topics regarding volumetric video!</p>



<hr class="wp-block-separator has-alpha-channel-opacity" style="margin-top:var(--wp--preset--spacing--50);margin-bottom:var(--wp--preset--spacing--50)"/>



<h4 class="wp-block-heading">Outro</h4>



<p>If anybody read through all of this, thanks a lot! <img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /><br>I hope this gives you a better sense of where LiveScan development currently stands and what to expect from a first release. I plan to write more blog posts as we get closer to the release and more features are done! See you soon!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://chrisrem.de/livescan3d-where-we-are/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Training Surgical Knot Tying in Extended Reality: First Results of the Project &#8220;GreifbAR&#8221;</title>
		<link>https://chrisrem.de/training-surgical-knot-tying-in-extended-reality-first-results-of-the-project-greifbar/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 31 Mar 2023 23:07:00 +0000</pubDate>
				<category><![CDATA[Publication]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Conference]]></category>
		<category><![CDATA[Poster]]></category>
		<category><![CDATA[Review]]></category>
		<category><![CDATA[Surgery]]></category>
		<category><![CDATA[Training]]></category>
		<category><![CDATA[XR]]></category>
		<guid isPermaLink="false">http://192.168.178.39:8080/?p=447</guid>

					<description><![CDATA[Poster of first results from our project &#8220;GreifbAR&#8221;, presented at the &#8220;Würtual Reality XR Meeting 2023&#8221; conference in Würzburg. A project overview as well as first user test evaluations are shown. Robert Luzsa Christopher Remde Moritz Queisner Susanne Mayr 🌐 Read Article on ResearchGate📄 Download PDF Abstract Background: Tying surgical knots is a basic but critical [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>Poster of first results from our project &#8220;GreifbAR&#8221;, presented at the &#8220;Würtual Reality XR Meeting 2023&#8221; conference in Würzburg. A project overview as well as first user test evaluations are shown.</em></p>



<div style="height:0px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-group is-nowrap is-layout-flex wp-container-core-group-is-layout-ecba6b92 wp-block-group-is-layout-flex" style="margin-top:0;margin-bottom:0">
<p><a href="https://orcid.org/0000-0001-5674-9158">Robert Luzsa</a></p>



<p><a href="https://orcid.org/0000-0003-4584-6379" data-type="link" data-id="https://orcid.org/0000-0003-4584-6379">Christopher Remde</a></p>



<p><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231">Moritz Queisner</a></p>



<p><a href="https://www.sobi.uni-passau.de/mensch-maschine-interaktion/lehrstuhlteam/lehrstuhlinhaberin#c198530">Susanne Mayr</a></p>
</div>



<p class="has-text-align-left has-medium-font-size" style="padding-top:0;padding-bottom:0"><strong><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f310.png" alt="🌐" class="wp-smiley" style="height: 1em; max-height: 1em;" /> </strong><a href="https://www.researchgate.net/publication/370004157_Training_Surgical_Knot_Tying_in_Extended_Reality_First_Results_of_the_Project_GreifbAR"><strong>Read Article on ResearchGate</strong></a><br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4c4.png" alt="📄" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="http://192.168.178.39:8080/wp-content/uploads/2024/01/Poster_final2.pdf">Download PDF</a></p>



<div style="height:29px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Abstract</h4>



<p><strong>Background: </strong></p>



<p>Tying surgical knots is a basic but critical skill that surgeons must master. Currently, knot tying is typically taught through instructor observation or instructional videos. These methods are either very resource-consuming or offer little interactivity and customizability. Knot-tying training in extended reality (XR) could address these weaknesses and improve the linking of observation and application: for example, spatial awareness and the ability to mentally rotate, i.e. to imagine knots from different perspectives, affect learning performance (Brandt &amp; Davies, 2006). XR may support spatial awareness by allowing knot tying from different perspectives superimposed on the real world.</p>



<p><strong>Method: </strong></p>



<p>The project GreifbAR develops an XR-based interactive knot tying training application. This application teaches the process of knot tying and provides individualized feedback based on hand pose and scene recognition. To achieve a learning-friendly design and user acceptance, the requirements of learners and experts must be considered. Therefore, based on a literature review, an online survey with 80 medical students and four interviews with experienced surgeons at Charité &#8211; Universitätsmedizin Berlin were conducted. </p>



<p><strong>Initial results: </strong></p>



<p>The respondents show openness towards knot tying training with XR, yet emphasize the importance of realistic learning situations and personal guidance. They also report little prior experience with XR. The talk integrates the survey results with findings from technology acceptance research and derives implications for the design of XR-based training systems for knot tying and similar procedural tasks.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>LiveScan3D</title>
		<link>https://chrisrem.de/livescan3d/</link>
					<comments>https://chrisrem.de/livescan3d/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 03 Mar 2023 18:36:24 +0000</pubDate>
				<category><![CDATA[Project]]></category>
		<guid isPermaLink="false">http://192.168.178.39:8080/?p=246</guid>

					<description><![CDATA[LiveScan3D is an open-source, volumetric video capturing software. It was originally developed by Marek Kowalski and Jacek Naruniec and is now being maintained by BuildingVolumes, a group trying to make volumetric video capture more widely available by developing open source tools for capture, processing and playback. 🧪 Do you want to help us build [&#8230;]]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-fe9cc265 wp-block-group-is-layout-flex">
<figure class="wp-block-image size-large wp-duotone-unset-11"><img loading="lazy" decoding="async" width="1024" height="576" src="http://wp.chrisrem.de/wp-content/uploads/2025/01/LivescanServer-1024x576.png" alt="The Livescan3D server window" class="wp-image-566" srcset="https://chrisrem.de/wp-content/uploads/2025/01/LivescanServer-1024x576.png 1024w, https://chrisrem.de/wp-content/uploads/2025/01/LivescanServer-300x169.png 300w, https://chrisrem.de/wp-content/uploads/2025/01/LivescanServer-768x432.png 768w, https://chrisrem.de/wp-content/uploads/2025/01/LivescanServer.png 1047w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">The Livescan3D server window</figcaption></figure>



<p>LiveScan3D is an open-source, volumetric video capturing software. It was originally developed by <a href="https://ieeexplore.ieee.org/document/7335499" data-type="URL" data-id="https://ieeexplore.ieee.org/document/7335499">Marek Kowalski and Jacek Naruniec</a> and is now being maintained by <a href="https://github.com/BuildingVolumes" data-type="link" data-id="https://github.com/BuildingVolumes">BuildingVolumes</a>, a group trying to make volumetric video capture more widely available by developing open-source tools for capture, processing and playback.</p>
</div>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><em><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f9ea.png" alt="🧪" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Do you want to help us build LiveScan3D by testing our public beta?</em> <em>Sign up for the beta preview by <a href="mailto:hey@chrisrem.de" data-type="mailto" data-id="mailto:hey@chrisrem.de">writing us a short mail</a></em></p>
</blockquote>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Functionality</h4>



<p>LiveScan3D uses multiple, low-cost RGBD cameras, such as the Azure Kinect, to capture a volumetric representation of the scene. It synchronizes all cameras spatially and temporally, and then captures the scene into widely used file formats. You can record with up to 9 cameras. The distributed structure of LiveScan allows you to capture with a few low-end PCs linked via a local network (a server-client structure) instead of one very high-end PC, making the capture more economically feasible.</p>
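


<p>To give a rough idea of how such a server-client setup can work (a minimal sketch only, not the actual LiveScan3D protocol; the hosts, port and message format are invented for illustration): the server sends the same capture trigger to every client PC on the local network, so that all cameras record the same moment.</p>



<pre class="wp-block-code"><code># Hypothetical sketch of a server triggering several capture clients over a
# local network. Not the real LiveScan3D protocol; the hosts, port and message
# format are invented for illustration.
import socket
import time

CLIENT_HOSTS = ["192.168.0.11", "192.168.0.12", "192.168.0.13"]  # capture PCs
PORT = 48001                                                     # example port

def trigger_capture(frame_id):
    """Send the same 'capture now' message to every client PC."""
    message = "CAPTURE {} {}".format(frame_id, time.time()).encode()
    for host in CLIENT_HOSTS:
        with socket.create_connection((host, PORT), timeout=1.0) as conn:
            conn.sendall(message)

if __name__ == "__main__":
    for frame in range(30):      # request 30 frames
        trigger_capture(frame)
        time.sleep(1 / 30)       # roughly 30 fps pacing
</code></pre>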



<figure class="wp-block-image size-large wp-duotone-unset-12"><img loading="lazy" decoding="async" width="1024" height="697" src="http://wp.chrisrem.de/wp-content/uploads/2023/03/Livescan_All-Clients-e1678219463764-1024x697.png" alt="Multiple LiveScan Client instances, you can open up to 9 clients on 9 different machines." class="wp-image-256" srcset="https://chrisrem.de/wp-content/uploads/2023/03/Livescan_All-Clients-e1678219463764-1024x697.png 1024w, https://chrisrem.de/wp-content/uploads/2023/03/Livescan_All-Clients-e1678219463764-300x204.png 300w, https://chrisrem.de/wp-content/uploads/2023/03/Livescan_All-Clients-e1678219463764-768x523.png 768w, https://chrisrem.de/wp-content/uploads/2023/03/Livescan_All-Clients-e1678219463764.png 1200w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Multiple LiveScan Client instances, you can open up to 9 clients on 9 different machines.</figcaption></figure>



<p>You can capture pointclouds in the .PLY format, or raw color &amp; depth data as .png and .tiff frames for further post-processing.</p>
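


<p>For illustration, this is roughly what writing a single colored pointcloud frame to an ASCII .PLY file looks like (a generic sketch, not code taken from LiveScan3D):</p>



<pre class="wp-block-code"><code># Minimal ASCII .PLY writer for a colored pointcloud (generic example, not
# taken from LiveScan3D). points: list of (x, y, z), colors: list of (r, g, b).
def write_ply(path, points, colors):
    header = [
        "ply",
        "format ascii 1.0",
        "element vertex {}".format(len(points)),
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        "end_header",
    ]
    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        for (x, y, z), (r, g, b) in zip(points, colors):
            f.write("{} {} {} {} {} {}\n".format(x, y, z, r, g, b))

# Example: a single red point at the origin
write_ply("frame_0001.ply", [(0.0, 0.0, 0.0)], [(255, 0, 0)])
</code></pre>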



<p>LiveScan also includes a playback application which lets you view your volumetric captures in interactive, real-time 3D and experiment with the visualization.</p>



<figure class="wp-block-image size-full wp-duotone-unset-13"><img loading="lazy" decoding="async" width="921" height="735" src="http://wp.chrisrem.de/wp-content/uploads/2023/03/LiveScanPlayer.png" alt="Livescan Player lets you interactivly view your volumetric captures." class="wp-image-257" srcset="https://chrisrem.de/wp-content/uploads/2023/03/LiveScanPlayer.png 921w, https://chrisrem.de/wp-content/uploads/2023/03/LiveScanPlayer-300x239.png 300w, https://chrisrem.de/wp-content/uploads/2023/03/LiveScanPlayer-768x613.png 768w" sizes="auto, (max-width: 921px) 100vw, 921px" /><figcaption class="wp-element-caption">Livescan Player lets you interactivly view your volumetric captures.</figcaption></figure>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Roadmap</h4>



<p>At the moment, LiveScan3D is still in a pretty basic state and not ready to use in a production setting. The capture quality is low, the UI/UX is in a rough shape, and there are many bugs.</p>



<p>We’re aiming for a new release in <strong>mid-2023</strong> that puts LiveScan3D on a solid base, with an improved workflow, better tools and higher capture quality, moving towards a production-ready state that allows small content creators, artists and everybody else to produce volumetric videos.</p>



<p>The new version of LiveScan3D will feature the following improvements:</p>



<ul class="wp-block-list">
<li>Stable temporal synchronization of up to 9 cameras</li>



<li>An improved spatial synchronization workflow</li>



<li>An overhauled UI, for a more comfortable workflow</li>



<li>Control of camera settings, such as white balance and exposure</li>



<li>A better preview, in 2D as well as 3D</li>



<li>Pointcloud and raw data capture, for further post-processing</li>



<li>An improved video playback tool</li>



<li>Playback in Unity3D</li>
</ul>



<p>After this first release, we&#8217;re planning to add more features to close the gap to commercially available volumetric video capture software. The next big step is to offer a <strong>meshing &amp; texturing</strong> tool, which allows you to capture your scene in much higher detail and play back your capture in well-known tools such as Blender, Unity or Unreal Engine. This is scheduled for <strong>Q1 of 2025</strong>. <a href="/blog-posts/" data-type="page" data-id="30">You can follow the development of Livescan3D in my blog!</a></p>



<div style="height:29px" aria-hidden="true" class="wp-block-spacer"></div>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f469-1f3fd-200d-1f52c.png" alt="👩🏽‍🔬" class="wp-smiley" style="height: 1em; max-height: 1em;" />LiveScan3D is an open-source project that is developed by a very small team. If you share our vision and want to help accelerate development by contributing, testing, or managing, we&#8217;d be very happy to have you onboard!</p>
</blockquote>



<div style="height:39px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Links</h4>



<p><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f3ae.png" alt="🎮" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong><a href="https://discord.gg/BvQdJdJqu6" data-type="URL" data-id="https://discord.gg/BvQdJdJqu6">BuildingVolumes on Discord</a></strong></p>



<p><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f419.png" alt="🐙" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong><a href="https://github.com/Elite-Volumetric-Capture-Sqad/LiveScan3D" data-type="URL" data-id="https://github.com/Elite-Volumetric-Capture-Sqad/LiveScan3D">BuildingVolumes on Github</a></strong></p>
]]></content:encoded>
					
					<wfw:commentRss>https://chrisrem.de/livescan3d/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>KlimaVR [WIP]</title>
		<link>https://chrisrem.de/klimavr-wip/</link>
					<comments>https://chrisrem.de/klimavr-wip/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 21 Jan 2023 14:35:31 +0000</pubDate>
				<category><![CDATA[Project]]></category>
		<guid isPermaLink="false">http://192.168.178.39:8080/?p=237</guid>

					<description><![CDATA[KlimaVR is a Virtual Reality Experience which aims to teach school children about the causes, scale and consequences of climate change. This experience is still under development and will be released for free on the Oculus App Lab for all Quest devices and for PCVR later this year. KlimaVR aims to teach about the climate [&#8230;]]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-group is-layout-constrained wp-block-group-is-layout-constrained">
<figure class="wp-block-image size-large wp-duotone-unset-14"><img loading="lazy" decoding="async" width="1024" height="576" src="http://wp.chrisrem.de/wp-content/uploads/2025/01/WelcomeToCurb-1024x576.png" alt="" class="wp-image-568" srcset="https://chrisrem.de/wp-content/uploads/2025/01/WelcomeToCurb-1024x576.png 1024w, https://chrisrem.de/wp-content/uploads/2025/01/WelcomeToCurb-300x169.png 300w, https://chrisrem.de/wp-content/uploads/2025/01/WelcomeToCurb-768x432.png 768w, https://chrisrem.de/wp-content/uploads/2025/01/WelcomeToCurb-1536x864.png 1536w, https://chrisrem.de/wp-content/uploads/2025/01/WelcomeToCurb-2048x1152.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>KlimaVR is a Virtual Reality Experience which aims to teach school children about the causes, scale and consequences of climate change. This experience is still under development and will be released <strong>for free on the Oculus App Lab</strong> for all Quest devices and <strong>for PCVR</strong> later this year. KlimaVR aims to teach about the climate crisis in three segments:</p>
</div>



<div style="height:49px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">The Mechanics</h4>



<p>This chapter aims to visualise how the system of climate change works and which role the different greenhouse gases play. It explains the effects of CO₂ over- and underproduction on a global scale, and what effects a rising CO₂ level in the atmosphere has on our planet and the ecosphere.</p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<figure class="wp-block-image aligncenter size-large is-resized wp-duotone-unset-15"><img decoding="async" src="https://media1.giphy.com/media/v1.Y2lkPTc5MGI3NjExMjB3ZmNqNWtubzQ1NnU0YnhyYmIycjVseHBhenhocG4yZ3lyeHYyciZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/4um44t9hUVERCyZrYe/giphy.gif" style="width:600px"/></figure>



<div style="height:50px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">The Scale</h4>



<p>This chapter aims to give an understanding of the scale of our overproduction of CO₂. What would CO₂ look like if we could see it? And how much do we produce ourselves, as a country, or as the world population in a year? The journey will start with your personal footprint, but grow steadily in scale, beyond the borders of the city, as we try to visualize the CO₂ footprint of humankind since the industrial revolution. This is the level we&#8217;re still working on the most. Visualizing the sheer amount of CO₂ is a huge, but interesting challenge, not only on a technical level, but also from a design perspective.</p>
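


<p>As a back-of-the-envelope example of this scale problem (the emission figure and gas density are rough assumptions for illustration, not numbers from the project), one year of global CO₂ emissions can be converted into a gas volume:</p>



<pre class="wp-block-code"><code># Back-of-the-envelope: what volume would one year of global CO2 emissions
# occupy as pure gas at roughly room temperature? All figures are approximate.
ANNUAL_EMISSIONS_TONNES = 37e9   # ~37 billion tonnes of CO2 per year (approx.)
CO2_DENSITY_KG_PER_M3 = 1.8      # ~1.8 kg per cubic metre near room temperature, 1 atm

mass_kg = ANNUAL_EMISSIONS_TONNES * 1000
volume_m3 = mass_kg / CO2_DENSITY_KG_PER_M3
cube_edge_km = volume_m3 ** (1 / 3) / 1000

print("Volume: {:.2e} cubic metres".format(volume_m3))   # on the order of 2e13 m^3
print("Cube edge: {:.0f} km".format(cube_edge_km))       # a cube roughly 27 km per side
</code></pre>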



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<figure class="wp-block-image aligncenter size-large is-resized wp-duotone-unset-16"><img decoding="async" src="https://media3.giphy.com/media/v1.Y2lkPTc5MGI3NjExNDQ3c2dmbXdnZ2VpeXYyaGg4OGhkeWRoNnhybThoc2xncHh4NmpwMSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/FwWOTGPUfJjKFfDbtW/giphy.gif" alt="" style="width:600px"/></figure>



<div style="height:50px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">The Consequences</h4>



<p>In this chapter, you&#8217;ll experience one of the many consequences of climate change as you stand inside of a forest fire. It&#8217;s just a short glimpse into the many crises climate change will bring with it, but it&#8217;s one of the most immersive parts of this app.</p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<figure class="wp-block-image aligncenter size-large is-resized has-custom-border wp-duotone-unset-17"><img decoding="async" src="https://media4.giphy.com/media/v1.Y2lkPTc5MGI3NjExd2Z1cHV1ZXc0dWFobXJndjYwdW9xbmlzbDlhYjh2emtqdGphZ240ZCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/soxT9nByEfFSn1k3qq/giphy.gif" alt="" style="border-style:none;border-width:0px;width:600px"/></figure>



<div style="height:50px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Funding</h4>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>This project was funded by the Prototype Fund, a project of the Open Knowledge Foundation, financed by the Federal Ministry of Education and Research of the Federal Republic of Germany.</p>
<cite><a href="https://prototypefund.de/" data-type="URL" data-id="https://prototypefund.de/">Prototype Fund</a></cite></blockquote>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<figure class="wp-block-image size-full is-resized wp-duotone-unset-18"><img loading="lazy" decoding="async" width="828" height="640" src="http://wp.chrisrem.de/wp-content/uploads/2023/01/Screenshot-2023-01-21-153146.png" alt="" class="wp-image-239" style="width:200px;height:auto" srcset="https://chrisrem.de/wp-content/uploads/2023/01/Screenshot-2023-01-21-153146.png 828w, https://chrisrem.de/wp-content/uploads/2023/01/Screenshot-2023-01-21-153146-300x232.png 300w, https://chrisrem.de/wp-content/uploads/2023/01/Screenshot-2023-01-21-153146-768x594.png 768w" sizes="auto, (max-width: 828px) 100vw, 828px" /></figure>
]]></content:encoded>
					
					<wfw:commentRss>https://chrisrem.de/klimavr-wip/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Adaptive Anatomy</title>
		<link>https://chrisrem.de/adaptive-anatomy/</link>
					<comments>https://chrisrem.de/adaptive-anatomy/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 20 Jan 2023 19:42:02 +0000</pubDate>
				<category><![CDATA[Project]]></category>
		<guid isPermaLink="false">http://192.168.178.39:8080/?p=215</guid>

					<description><![CDATA[Adaptive Anatomy is an augmented reality app concept, to explore the possibilities of live MRI/CT overlays during a surgical intervention Concept Surgical practice usually separates between imaging and intervention: i.e. a surgeon has to apply a CT scan to the patient&#8217;s anatomy cognitively. With this concept, we want to show how this cognitive gap can [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Adaptive Anatomy is an augmented reality app concept that explores the possibilities of live MRI/CT overlays during a surgical intervention.</p>



<figure class="wp-block-embed is-type-video is-provider-vimeo wp-block-embed-vimeo wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" title="Adaptive Anatomy Closeup View" src="https://player.vimeo.com/video/791276607?dnt=1&amp;app_id=122963" width="500" height="281" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media"></iframe>
</div></figure>



<div style="height:31px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Concept</h4>



<p>Surgical practice usually separates imaging from intervention: i.e. a surgeon has to map a CT scan onto the patient&#8217;s anatomy cognitively. With this concept, we want to show how this cognitive gap can be bridged with the use of Augmented Reality headsets.</p>



<div style="height:30px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Phantom</h4>



<p>A central part of our concept is the Phantom, a 3D-printed anatomic puppet based on real patient data. Not only is the phantom geometry true to reality, the tissue is also true to touch, e.g. a tumor is harder than fatty tissue. This allows us to test Augmented Reality concepts on a surface which is very close to the actual conditions in the operating room. The phantom is fitted with a marker, which allows us to detect its location and rotation in the real world with a Hololens 2 device.</p>



<figure class="wp-block-image size-large wp-duotone-unset-19"><img loading="lazy" decoding="async" width="1024" height="683" src="https://chrisrem.de/wp-content/uploads/2023/01/Photo-Setup-1024x683.jpg" alt="" class="wp-image-216"/></figure>



<div style="height:50px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Augmenting the liver</h4>



<p>We used the Unity3D engine in combination with the Hololens 2 to augment the liver tissue with different experimental visualisations. The Hololens 2 detects the Phantom&#8217;s presence by scanning a marker positioned on its bottom. You can then see different visualisations that are based on the same CT data as the phantom puppet. The visualisations are quite experimental and represent more of a testbed, to see which challenges might appear when augmenting human tissue and how our visual system interprets them.</p>
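


<p>A minimal sketch of the underlying math (purely illustrative; the names and the example pose are assumptions, not the project&#8217;s actual Unity code): once the marker pose is known, every vertex of the CT-based model can be mapped from model space into the room with a single rigid transform.</p>



<pre class="wp-block-code"><code># Illustrative only: mapping CT-based model coordinates into the room using a
# detected marker pose. The pose below is a made-up example.
import numpy as np

def make_pose(rotation, translation):
    """Build a 4x4 rigid transform from a 3x3 rotation and a translation."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

def model_to_world(points_model, marker_pose):
    """Transform Nx3 model-space points into world space via the marker pose."""
    homogeneous = np.hstack([points_model, np.ones((len(points_model), 1))])
    return (marker_pose @ homogeneous.T).T[:, :3]

# Example: marker found 1.2 m in front of the headset, rotated 90 degrees about Y.
rot_y_90 = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]], dtype=float)
pose = make_pose(rot_y_90, np.array([0.0, 0.0, 1.2]))
liver_points = np.array([[0.05, 0.10, 0.02]])   # a sample CT vertex (metres)
print(model_to_world(liver_points, pose))
</code></pre>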



<figure class="wp-block-gallery has-nested-images columns-2 is-cropped wp-block-gallery-24 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-full wp-duotone-unset-20"><img loading="lazy" decoding="async" width="439" height="208" data-id="226" src="http://wp.chrisrem.de/wp-content/uploads/2023/01/Closeups_2-e1674242669493-edited.png" alt="" class="wp-image-226" srcset="https://chrisrem.de/wp-content/uploads/2023/01/Closeups_2-e1674242669493-edited.png 439w, https://chrisrem.de/wp-content/uploads/2023/01/Closeups_2-e1674242669493-edited-300x142.png 300w" sizes="auto, (max-width: 439px) 100vw, 439px" /></figure>



<figure class="wp-block-image size-full wp-duotone-unset-21"><img loading="lazy" decoding="async" width="641" height="304" data-id="225" src="http://wp.chrisrem.de/wp-content/uploads/2023/01/Closeups_1-e1674242655844-edited.png" alt="" class="wp-image-225" srcset="https://chrisrem.de/wp-content/uploads/2023/01/Closeups_1-e1674242655844-edited.png 641w, https://chrisrem.de/wp-content/uploads/2023/01/Closeups_1-e1674242655844-edited-300x142.png 300w" sizes="auto, (max-width: 641px) 100vw, 641px" /></figure>



<figure class="wp-block-image size-full wp-duotone-unset-22"><img loading="lazy" decoding="async" width="452" height="214" data-id="223" src="http://wp.chrisrem.de/wp-content/uploads/2023/01/Closeups_4-e1674242732104-edited.png" alt="" class="wp-image-223" srcset="https://chrisrem.de/wp-content/uploads/2023/01/Closeups_4-e1674242732104-edited.png 452w, https://chrisrem.de/wp-content/uploads/2023/01/Closeups_4-e1674242732104-edited-300x142.png 300w" sizes="auto, (max-width: 452px) 100vw, 452px" /></figure>



<figure class="wp-block-image size-full wp-duotone-unset-23"><img loading="lazy" decoding="async" width="483" height="229" data-id="224" src="http://wp.chrisrem.de/wp-content/uploads/2023/01/Closeups_3-e1674242716202-edited.png" alt="" class="wp-image-224" srcset="https://chrisrem.de/wp-content/uploads/2023/01/Closeups_3-e1674242716202-edited.png 483w, https://chrisrem.de/wp-content/uploads/2023/01/Closeups_3-e1674242716202-edited-300x142.png 300w" sizes="auto, (max-width: 483px) 100vw, 483px" /></figure>
</figure>



<p>One interesting lesson we learned is that complete X-ray vision, i.e. always being able to see through the phantom body onto the augmented surfaces, is heavily distracting and an unpleasant experience. Therefore we created a mask which lets you see the augmented content only when you look directly into the abdominal cavity, with the skin acting as the border for the content.</p>
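


<p>One simple way to think about such a gaze-dependent mask (a geometric sketch with assumed names and dimensions, not the project&#8217;s actual implementation) is to show the overlay only when the viewing ray passes through the cavity opening:</p>



<pre class="wp-block-code"><code># Purely illustrative sketch: show the augmented overlay only when the viewing
# ray enters through the cavity opening, modelled here as a circular hole in
# the skin plane. Names and dimensions are assumptions.
import numpy as np

def ray_hits_opening(eye, gaze_dir, opening_center, opening_normal, opening_radius):
    """Intersect the gaze ray with the skin plane and test against the hole."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = np.dot(gaze_dir, opening_normal)
    if np.isclose(denom, 0.0):        # gaze parallel to the skin plane
        return False
    t = np.dot(opening_center - eye, opening_normal) / denom
    if t > 0:                         # opening lies in front of the viewer
        hit = eye + t * gaze_dir
        return bool(np.linalg.norm(hit - opening_center) <= opening_radius)
    return False

# Example: viewer 1 m above the phantom, looking straight down at a 6 cm hole.
eye = np.array([0.0, 1.0, 0.0])
print(ray_hits_opening(eye, np.array([0.0, -1.0, 0.0]),
                       opening_center=np.array([0.0, 0.0, 0.0]),
                       opening_normal=np.array([0.0, 1.0, 0.0]),
                       opening_radius=0.06))   # True: overlay would be visible
</code></pre>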



<div style="height:51px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Open-Source Code</h4>



<p class="has-text-align-left">The Code is freely available on our Github. You need Unity 2019.2.21f1 (exactly this version!) and a Hololens 2 device to be able to play this project:<br><br><a href="https://github.com/ExperimentalSurgery/AugmentedSurgeon" data-type="URL" data-id="https://github.com/ExperimentalSurgery/AugmentedSurgeon">View the Code</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://chrisrem.de/adaptive-anatomy/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>VolumetricOR</title>
		<link>https://chrisrem.de/volumetricor/</link>
					<comments>https://chrisrem.de/volumetricor/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 20 Jan 2023 18:48:55 +0000</pubDate>
				<category><![CDATA[Project]]></category>
		<guid isPermaLink="false">http://192.168.178.39:8080/?p=194</guid>

					<description><![CDATA[VolumetricOR is the exploration of a concept, which allows surgical staff to learn and train surgical interventions in a photorealistic virtual operating room. Situated in the virtual reality environment, users can experience surgical workflows based on Volumetric Video recorded directly in the operating theater. Photogrammetric Operating Room For this project, I photographed and reconstructed an [&#8230;]]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" title="Volumetric Operating Room – Virtual Reality Concept for Surgical Training" width="500" height="281" src="https://www.youtube.com/embed/jFQOU1nyThI?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<p>VolumetricOR explores a concept that allows surgical staff to learn and train surgical interventions in a photorealistic virtual operating room. Situated in the virtual reality environment, users can experience surgical workflows based on Volumetric Video recorded directly in the operating theater.</p>



<div style="height:50px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Photogrammetric Operating Room</h4>



<p>For this project, I photographed and reconstructed an operating room with Agisoft Metashape. Due to the large amount of reflective, transparent and monotone surfaces in this scan, I reconstructed most of the model by hand and then re-projected the textures onto the new surfaces in Metashape. You can use this model for free in your projects; download it on Sketchfab:</p>



<iframe frameborder="0" src="https://sketchfab.com/models/9ec46c4d615a4581a235eebfb162f574/embed?ui_theme=dark" class="" style="width:100%;max-width:100%;height:320px"></iframe>



<div style="height:49px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Volumetric Video Capture inside of the OR</h4>



<p>The biggest challenge of this project was capturing a volumetric video of a real kidney transplantation. We had to guarantee that our setup would not obstruct the surgical staff in any way, which meant that we had to make the setup as small as possible. Each capture unit contained a Kinect V2 depth sensor in combination with a DSLR camera and a mini desktop PC.</p>



<figure class="wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-27 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large wp-duotone-unset-25"><img loading="lazy" decoding="async" width="1024" height="576" data-id="198" src="https://chrisrem.de/wp-content/uploads/2023/01/P_20170920_204903-1024x576.jpg" alt="A DIY unit of a DLSR camera and a Kinect v2 Depth sensor" class="wp-image-198"/><figcaption class="wp-element-caption">A DIY unit of a DLSR camera and a Kinect v2 Depth sensor</figcaption></figure>



<figure class="wp-block-image size-full wp-duotone-unset-26"><img loading="lazy" decoding="async" width="700" height="394" data-id="197" src="https://chrisrem.de/wp-content/uploads/2023/01/P_20170920_214230-e1674242799880.jpg" alt="A local Wifi network was used for communication" class="wp-image-197" srcset="https://chrisrem.de/wp-content/uploads/2023/01/P_20170920_214230-e1674242799880.jpg 700w, https://chrisrem.de/wp-content/uploads/2023/01/P_20170920_214230-e1674242799880-300x169.jpg 300w" sizes="auto, (max-width: 700px) 100vw, 700px" /><figcaption class="wp-element-caption">A local Wifi network was used for communication</figcaption></figure>
</figure>



<p>In total, three of these setups were used and connected via a local Wi-Fi network. The capture was then controlled from a separate laptop outside of the operating room. We used DepthKit Pro and LiveScan3D as capture software, and then cleaned and imported the sequences into Unreal Engine. The quality of these captures is pretty rough, as you can see in the video at the top or in the static frame below. This is the state of 2018, however; things improve quickly in this field.</p>



<iframe frameborder="0" src="https://sketchfab.com/models/7194e5e0467f47b6b7fe7a5619d531f9/embed?ui_theme=dark" class="" style="width:100%;max-width:100%;height:320px"></iframe>



<div style="height:50px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Publication:</h4>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><a href="https://wp.chrisrem.de/volumetricor-a-new-approach-to-simulate-surgical-interventions-in-virtual-reality-for-training-and-education/" data-type="link" data-id="https://wp.chrisrem.de/volumetricor-a-new-approach-to-simulate-surgical-interventions-in-virtual-reality-for-training-and-education/">VolumetricOR: A new Approach to Simulate Surgical Interventions in Virtual Reality for Training and Education</a></p>
<cite>Queisner M, Pogorzhelskiy M, Remde C, Pratschke J,&nbsp; Sauer IM, Surgical Innovation. 2022</cite></blockquote>



<div style="height:50px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Open Source Code</h4>



<p>The complete project is available as open source. Open the source files in Unreal Engine 4.19. You need an HTC Vive or a similar SteamVR-compatible headset to view the project in VR.</p>



<p><a href="https://edoc.hu-berlin.de/handle/18452/21337" data-type="URL" data-id="https://edoc.hu-berlin.de/handle/18452/21337">Download</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://chrisrem.de/volumetricor/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
