<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>XR &#8211; Christopher Remde</title>
	<atom:link href="https://chrisrem.de/tag/xr/feed/" rel="self" type="application/rss+xml" />
	<link>https://chrisrem.de</link>
	<description>Christopher Remde - XR Developer</description>
	<lastBuildDate>Mon, 20 Oct 2025 14:16:33 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.3</generator>

<image>
	<url>https://chrisrem.de/wp-content/uploads/2025/07/cropped-Logo-32x32.png</url>
	<title>XR &#8211; Christopher Remde</title>
	<link>https://chrisrem.de</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Geometry Sequence Player Package</title>
		<link>https://chrisrem.de/geometry-sequence-player-package/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 27 May 2025 20:15:14 +0000</pubDate>
				<category><![CDATA[Project]]></category>
		<category><![CDATA[alembic]]></category>
		<category><![CDATA[cache]]></category>
		<category><![CDATA[Geometry]]></category>
		<category><![CDATA[mesh]]></category>
		<category><![CDATA[Package]]></category>
		<category><![CDATA[Player]]></category>
		<category><![CDATA[Plugin]]></category>
		<category><![CDATA[pointcloud]]></category>
		<category><![CDATA[Sequence]]></category>
		<category><![CDATA[textures]]></category>
		<category><![CDATA[Unity]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[Volumetric]]></category>
		<category><![CDATA[XR]]></category>
		<guid isPermaLink="false">https://chrisrem.de/?p=745</guid>

					<description><![CDATA[Quickstart &#124; Documentation &#124; Unity Asset Store &#124; Unity Forums Thread &#124; License The Geometry Sequence Player is a package for the Unity game engine that enables playback of large geometry sequences inside Unity. The package can be used to play back pointcloud, mesh, or textured mesh sequences. It is available either on GitHub (for [&#8230;]]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Geometry Sequence Player for Unity" width="500" height="281" src="https://www.youtube.com/embed/5HA_HwtjIu0?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<p><strong><a href="https://buildingvolumes.github.io/Unity_Geometry_Sequence_Player/docs/quickstart/quick-start/">Quickstart</a> | <a href="https://buildingvolumes.github.io/Unity_Geometry_Sequence_Player/">Documentation</a> | <a href="https://u3d.as/3suF">Unity Asset Store</a> | <a href="https://discussions.unity.com/t/released-geometry-sequence-player/921802">Unity Forums Thread</a> | <a href="https://github.com/BuildingVolumes/Unity_Geometry_Sequence_Player#license">License</a></strong></p>



<p>The Geometry Sequence Player is a package for the Unity game engine that enables playback of large geometry sequences inside Unity. The package can be used to play back pointcloud, mesh, or textured mesh sequences.<br><br>It is available either on GitHub (for non-commercial use) or on the Unity Asset Store (for commercial use):</p>



<p></p>



<div class="wp-block-columns are-vertically-aligned-center is-layout-flex wp-container-core-columns-is-layout-a5331a9e wp-block-columns-is-layout-flex" style="padding-top:0;padding-right:0;padding-bottom:0;padding-left:0">
<div class="wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:50%">
<figure class="wp-block-image aligncenter size-large is-resized has-custom-border wp-duotone-default-filter" style="margin-top:0;margin-right:0;margin-bottom:0;margin-left:0"><a href="https://github.com/BuildingVolumes/Unity_Geometry_Sequence_Player" target="_blank" rel=" noreferrer noopener"><img fetchpriority="high" decoding="async" width="230" height="225" src="https://chrisrem.de/wp-content/uploads/2025/05/github-mark-white.png" alt="Link to Github Repo" class="wp-image-748" style="border-style:none;border-width:0px;border-radius:0px;width:67px;height:auto"/></a><figcaption class="wp-element-caption"><mark style="background-color:rgba(0, 0, 0, 0);color:#ffffff" class="has-inline-color"><a href="https://github.com/BuildingVolumes/Unity_Geometry_Sequence_Player" data-type="link" data-id="https://github.com/BuildingVolumes/Unity_Geometry_Sequence_Player">Github</a></mark></figcaption></figure>
</div>



<div class="wp-block-column is-vertically-aligned-center is-layout-flow wp-container-core-column-is-layout-a77db08e wp-block-column-is-layout-flow" style="flex-basis:50%">
<figure class="wp-block-image aligncenter size-large wp-duotone-default-filter" style="margin-top:0;margin-bottom:0"><a href="https://assetstore.unity.com/packages/tools/animation/geometry-sequence-player-307918" target="_blank" rel=" noreferrer noopener"><img decoding="async" width="866" height="156" src="https://chrisrem.de/wp-content/uploads/2025/05/AS-Logo.png" alt="" class="wp-image-749" srcset="https://chrisrem.de/wp-content/uploads/2025/05/AS-Logo.png 866w, https://chrisrem.de/wp-content/uploads/2025/05/AS-Logo-300x54.png 300w, https://chrisrem.de/wp-content/uploads/2025/05/AS-Logo-768x138.png 768w" sizes="(max-width: 866px) 100vw, 866px" /></a><figcaption class="wp-element-caption"><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-foreground-color"><a href="https://assetstore.unity.com/packages/tools/animation/geometry-sequence-player-307918" data-type="link" data-id="https://assetstore.unity.com/packages/tools/animation/geometry-sequence-player-307918">Get it on the Asset Store</a></mark></figcaption></figure>
</div>
</div>



<p></p>



<p><strong>What is a geometry sequence?</strong></p>



<p>In a geometry sequence, each frame consists of an individual mesh or pointcloud, which is shown for a short interval to create the illusion of animation, much like a three-dimensional flipbook. As each frame is independent of the next, no limitations apply to your animation, unlike, for example, skinned meshes or blendshapes, where the mesh topology can&#8217;t be changed. This makes it suitable, for example, for:<br><br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f30a.png" alt="🌊" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Pre-rendered fluid caches<br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/26d3-fe0f-200d-1f4a5.png" alt="⛓️‍💥" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Baked physics simulations<br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f7e7.png" alt="🟧" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Animated procedural meshes<br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f64b-200d-2640-fe0f.png" alt="🙋‍♀️" class="wp-smiley" style="height: 1em; max-height: 1em;" /> 4D scans with per-frame textures<br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f39e.png" alt="🎞" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Volumetric videos<br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4f7.png" alt="📷" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Pointclouds captured by RGBD sensors (Kinect, RealSense, Lidar)<br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f310.png" alt="🌐" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Data visualisations</p>
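


<p><em>To make the flipbook idea concrete, here is a minimal, illustrative Unity C# sketch: it simply swaps a pre-loaded array of Mesh assets on a MeshFilter at a fixed rate. This is not the package&#8217;s own code; the actual player streams each frame from disk at runtime instead of keeping the whole sequence in memory.</em></p>



<pre class="wp-block-code"><code>using UnityEngine;

// Illustrative sketch only: a naive "3D flipbook" that cycles through
// pre-loaded Mesh assets at a fixed rate. The actual Geometry Sequence
// Player streams frames from disk instead of holding them all in RAM.
[RequireComponent(typeof(MeshFilter))]
public class NaiveGeometryFlipbook : MonoBehaviour
{
    [SerializeField] Mesh[] frames;               // one mesh per frame
    [SerializeField] float framesPerSecond = 30f; // playback rate

    MeshFilter meshFilter;
    float timer;
    int currentFrame;

    void Awake()
    {
        meshFilter = GetComponent(typeof(MeshFilter)) as MeshFilter;
    }

    void Update()
    {
        if (frames == null || frames.Length == 0)
            return;

        timer += Time.deltaTime;
        float frameTime = 1f / framesPerSecond;

        // Advance to the next mesh whenever enough time has passed,
        // wrapping around at the end like a looping flipbook.
        while (timer >= frameTime)
        {
            timer -= frameTime;
            currentFrame = (currentFrame + 1) % frames.Length;
            meshFilter.sharedMesh = frames[currentFrame];
        }
    }
}</code></pre>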



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">
<figure class="wp-block-image size-large wp-duotone-unset-1"><img decoding="async" src="https://media0.giphy.com/media/v1.Y2lkPTc5MGI3NjExb2w2NDlvd2E0ZjdoM2o1YjYyZ3p5NmowcjZnOGFvNDI1cHRlNnF4ZyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/H7APgpTRaAWa1uWj9d/giphy.gif" alt=""/></figure>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">
<figure class="wp-block-image size-large wp-duotone-unset-2"><img decoding="async" src="https://media1.giphy.com/media/v1.Y2lkPTc5MGI3NjExOHUwN2x1bjd4bWswcWVucjg0amQ3MmdldDVmYmgwNHdoNGY1eWR0YSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/6vdkqmfFheMQHmspm1/giphy.gif" alt=""/></figure>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">
<figure class="wp-block-image size-large wp-duotone-unset-3"><img decoding="async" src="https://media1.giphy.com/media/v1.Y2lkPTc5MGI3NjExNnBwY20zenduajFuNDZla3d5OGFtdmI2aWJzaHg5aWc0Yml2bXc4MSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/Pi6XDkVD3Yf5AtGfWI/giphy.gif" alt=""/></figure>
</div>
</div>



<p></p>



<p><strong>Features</strong></p>



<ul class="wp-block-list">
<li>Can import sequences from almost any source, with an included converter that covers most standard file formats.</li>



<li>Quick setup of sequences with an easy-to-use editor UI and playback controls</li>



<li>Integrated into the Unity Timeline (see the sketch after this list)</li>



<li>Granular playback control for interactive experiences is provided through a scripting API</li>



<li>Sequences don&#8217;t need to be loaded into your Unity project or even into RAM; everything is streamed from disk at runtime</li>



<li>Highly efficient playback system that allows you to stream millions of points/polygons per second, even on lower-end hardware</li>
</ul>
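


<p><em>Since the player integrates with the Unity Timeline, one simple way to drive playback from a script is to control the PlayableDirector that hosts the sequence&#8217;s Timeline. The sketch below is generic Unity code, not part of the package; it assumes the sequence has been placed on a Timeline, and the package&#8217;s own, more granular scripting API is covered in the Documentation linked above.</em></p>



<pre class="wp-block-code"><code>using UnityEngine;
using UnityEngine.Playables;

// Generic Unity sketch (not package-specific code): coarse playback
// control of a Timeline that contains the geometry sequence.
[RequireComponent(typeof(PlayableDirector))]
public class TimelinePlaybackToggle : MonoBehaviour
{
    PlayableDirector director;

    void Awake()
    {
        director = GetComponent(typeof(PlayableDirector)) as PlayableDirector;
    }

    void Update()
    {
        // Space toggles between playing and pausing the Timeline.
        if (Input.GetKeyDown(KeyCode.Space))
        {
            if (director.state == PlayState.Playing)
                director.Pause();
            else
                director.Play();
        }

        // R restarts playback from the beginning.
        if (Input.GetKeyDown(KeyCode.R))
        {
            director.time = 0;
            director.Play();
        }
    }
}</code></pre>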



<p></p>



<p><strong>The package</strong></p>



<p>Originally, this package was created to enable the playback of volumetric videos from <a href="https://chrisrem.de/livescan3d/" data-type="post" data-id="246">LiveScan3D</a>, captured with depth sensors such as the Azure Kinect or Intel RealSense. But I realized that it could also be useful for many other applications, such as getting fluid and physics simulations from Blender into Unity!<br><br>I have used the package intensively in many of my own projects, so it&#8217;s well proven and tested. The package has been available on GitHub as open-source code for quite a while already, but to make its development and support a bit more sustainable, I have now decided to also offer it on the Unity Asset Store.<br>If you want to use the package commercially, or if you want to support its development, you can <a href="https://assetstore.unity.com/packages/tools/animation/geometry-sequence-player-307918" data-type="link" data-id="https://assetstore.unity.com/packages/tools/animation/geometry-sequence-player-307918">buy it there</a>! Thank you very much for your support <img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f49a.png" alt="💚" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Sparse camera volumetric video applications. A comparison of visual fidelity, user experience, and adaptability</title>
		<link>https://chrisrem.de/sparse-camera-volumetric-video-applications-a-comparison-of-visual-fidelity-user-experience-and-adaptability/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Mon, 10 Mar 2025 12:04:27 +0000</pubDate>
				<category><![CDATA[Publication]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Brekel]]></category>
		<category><![CDATA[comparison]]></category>
		<category><![CDATA[Depthkit]]></category>
		<category><![CDATA[Livescan]]></category>
		<category><![CDATA[Livescan3D]]></category>
		<category><![CDATA[mesh]]></category>
		<category><![CDATA[pointcloud]]></category>
		<category><![CDATA[Review]]></category>
		<category><![CDATA[RGBD]]></category>
		<category><![CDATA[Studio]]></category>
		<category><![CDATA[texture]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[Volumetric]]></category>
		<category><![CDATA[VolumetricCapture]]></category>
		<category><![CDATA[XR]]></category>
		<guid isPermaLink="false">https://chrisrem.de/?p=752</guid>

					<description><![CDATA[A review paper comparing state-of-the-art sparse camera volumetric video applications in the year 2024. Comparison candidates are the applications Depthkit Studio, VolumetricCapture, Brekel Pointcloud v3 and LiveScan3D Christopher Remde Moritz Queisner Igor M. Sauer 🌐 Read Article on Frontiers (Open Access)📄 Download PDF Abstract Introduction Volumetric video production in commercial studios predominantly [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>A review paper comparing state-of-the-art sparse camera volumetric video applications in the year 2024. Comparison candidates are the applications Depthkit Studio, VolumetricCapture, Brekel Pointcloud v3 and LiveScan3D.</em></p>



<div style="height:0px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-group is-nowrap is-layout-flex wp-container-core-group-is-layout-ecba6b92 wp-block-group-is-layout-flex" style="margin-top:0;margin-bottom:0">
<p><a href="https://orcid.org/0000-0003-4584-6379" data-type="link" data-id="https://orcid.org/0000-0003-4584-6379">Christopher Remde</a></p>



<p><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231">Moritz Queisner</a></p>



<p><a href="https://orcid.org/0000-0001-9355-937X">Igor M. Sauer</a></p>
</div>



<p class="has-text-align-left has-medium-font-size" style="padding-top:0;padding-bottom:0"><strong><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f310.png" alt="🌐" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://doi.org/10.3389/frsip.2025.1405808" data-type="link" data-id="https://doi.org/10.3389/frsip.2025.1405808">Read Article on Frontiers (Open Access)</a></strong><br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4c4.png" alt="📄" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://chrisrem.de/wp-content/uploads/2025/06/Sparse-Camera-Volumetric-Video-Applications.pdf" data-type="link" data-id="https://chrisrem.de/wp-content/uploads/2025/06/Sparse-Camera-Volumetric-Video-Applications.pdf">Download PDF</a></p>



<div style="height:29px" aria-hidden="true" class="wp-block-spacer"></div>



<figure class="wp-block-video"><video controls src="https://zenodo.org/records/13908942/files/VV_Comparision_Videos.mp4?preview=0"></video></figure>



<div style="height:29px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Abstract</h4>



<p><strong>Introduction</strong></p>



<p>Volumetric video production in commercial studios predominantly relies on a multi-view stereo process that uses a high two-digit number of cameras to capture a scene. Due to the hardware requirements and associated processing costs, this workflow is resource-intensive and expensive, making it unattainable for creators and researchers with smaller budgets. Low-cost volumetric video systems using RGBD cameras offer an affordable alternative. As these small, mobile systems are a relatively new technology, the available software applications vary in terms of workflow and image quality. In this paper, we provide an overview of the technical capabilities of sparse camera volumetric video capture applications and assess their visual fidelity and workflow.</p>



<p><strong>Materials and methods</strong></p>



<p>We selected volumetric video applications that are publicly available, support capture with multiple <em>Microsoft Azure Kinect</em> cameras and run on consumer-grade computer hardware. We compared the features, usability, and workflow of each application and benchmarked them in five different scenarios. Based on the benchmark footage, we analyzed spatial calibration accuracy and artifact occurrence, and conducted a subjective perception study with 19 participants from a game design study program to assess the visual fidelity of the captures.</p>



<p><strong>Results</strong></p>



<p>We evaluated three applications: <em>Depthkit Studio</em>, <em>LiveScan3D</em> and <em>VolumetricCapture</em>. We found <em>Depthkit Studio</em> to provide the best experience for novice users, while <em>LiveScan3D</em> and <em>VolumetricCapture</em> require advanced technical knowledge to operate. The footage captured by <em>Depthkit Studio</em> showed the fewest artifacts by a large margin, followed by <em>LiveScan3D</em> and <em>VolumetricCapture</em>. These findings were confirmed by the participants, who preferred <em>Depthkit Studio</em> over <em>LiveScan3D</em> and <em>VolumetricCapture</em>.</p>



<p><strong>Discussion</strong></p>



<p>Based on the results, we recommend <em>Depthkit Studio</em> for the highest fidelity captures. <em>LiveScan3D</em> produces footage of only acceptable fidelity but is the only candidate that is available as open-source software. We therefore recommend it as a platform for research and experimentation. Due to the lower fidelity and high setup complexity, we recommend <em>VolumetricCapture</em> only for specific use-cases where its ability to handle a high number of sensors in a large capture volume is required.</p>
]]></content:encoded>
					
		
		<enclosure url="https://zenodo.org/records/13908942/files/VV_Comparision_Videos.mp4?preview=0" length="206890678" type="video/mp4" />

			</item>
		<item>
		<title>Immersive Mixed Reality Training Concept for Mastering Surgical Knot-tying</title>
		<link>https://chrisrem.de/immersive-mixed-reality-training-concept-for-mastering-surgical-knot-tying/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 08 Mar 2025 12:04:55 +0000</pubDate>
				<category><![CDATA[Publication]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Brekel]]></category>
		<category><![CDATA[comparison]]></category>
		<category><![CDATA[Depthkit]]></category>
		<category><![CDATA[Livescan]]></category>
		<category><![CDATA[Livescan3D]]></category>
		<category><![CDATA[mesh]]></category>
		<category><![CDATA[pointcloud]]></category>
		<category><![CDATA[Review]]></category>
		<category><![CDATA[RGBD]]></category>
		<category><![CDATA[Studio]]></category>
		<category><![CDATA[texture]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[Volumetric]]></category>
		<category><![CDATA[VolumetricCapture]]></category>
		<category><![CDATA[XR]]></category>
		<guid isPermaLink="false">https://chrisrem.de/?p=756</guid>

					<description><![CDATA[A concept and prototype implementation of a surgical knot-tying trainer, implemented in Unity3D with Varjo XR-3 hardware and featuring instructions in the form of volumetric videos. Moritz Queisner Christopher Remde Robert Luzsa Igor M. Sauer 🌐 Read article on IEEE📄 Download PDF Abstract This study presents a mixed reality training concept designed to enhance [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>A concept and prototype implementation of a surgical knot-tying trainer, implemented in Unity3D with Varjo XR-3 hardware and featuring instructions in the form of volumetric videos.</em></p>



<div style="height:0px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-group is-nowrap is-layout-flex wp-container-core-group-is-layout-ecba6b92 wp-block-group-is-layout-flex" style="margin-top:0;margin-bottom:0">
<p><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231">Moritz Queisner</a></p>



<p><a href="https://orcid.org/0000-0003-4584-6379" data-type="link" data-id="https://orcid.org/0000-0003-4584-6379">Christopher Remde</a></p>



<p><a href="https://orcid.org/0000-0001-5674-9158">Robert Luzsa</a></p>



<p><a href="https://orcid.org/0000-0001-9355-937X">Igor M. Sauer</a></p>
</div>



<p class="has-text-align-left has-medium-font-size" style="padding-top:0;padding-bottom:0"><strong><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f310.png" alt="🌐" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://ieeexplore.ieee.org/document/10972878" data-type="link" data-id="https://ieeexplore.ieee.org/document/10972878">Read article on IEEE</a></strong><br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4c4.png" alt="📄" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://chrisrem.de/wp-content/uploads/2025/06/KnotbAR-Conference-Paper.pdf" data-type="link" data-id="https://chrisrem.de/wp-content/uploads/2025/06/KnotbAR-Conference-Paper.pdf">Download PDF</a></p>



<div style="height:29px" aria-hidden="true" class="wp-block-spacer"></div>



<figure class="wp-block-video"><video controls src="https://zenodo.org/records/14712546/files/KnotBAR_SUpplementary_Video.mp4?preview=0"></video></figure>



<div style="height:29px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Abstract</h4>



<p>This study presents a mixed reality training concept designed to enhance medical students’ acquisition of surgical knot-tying skills, a fundamental component of surgical training critical for effective wound closure and tissue healing. Utilizing a virtual reality headset with video passthrough functionality, the system provides adaptive visual instructions tailored to the user’s hand movements during the knot-tying process. A prototype was developed based on the concept, featuring three-dimensional videos in which virtual instructor hands demonstrate each step of the procedure. The training concept was derived from an iterative, user-centered process encompassing requirement analysis, prototype development, and evaluation. Key functionalities include the ability to display thread tension and tensile strength, dynamically adapt learning speed to the user’s progress, and deliver personalized feedback by visually augmenting the hands and fingers. Evaluation results indicate that spatial and tangible interactions facilitated by the mixed reality training prototype support the acquisition of practical skills, bridging the gap between digital and physical simulation training.</p>
]]></content:encoded>
					
		
		<enclosure url="https://zenodo.org/records/14712546/files/KnotBAR_SUpplementary_Video.mp4?preview=0" length="246245601" type="video/mp4" />

			</item>
		<item>
		<title>Training Surgical Knot Tying in Extended Reality: First Results of the Project &#8220;GreifbAR&#8221;</title>
		<link>https://chrisrem.de/training-surgical-knot-tying-in-extended-reality-first-results-of-the-project-greifbar/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 31 Mar 2023 23:07:00 +0000</pubDate>
				<category><![CDATA[Publication]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Conference]]></category>
		<category><![CDATA[Poster]]></category>
		<category><![CDATA[Review]]></category>
		<category><![CDATA[Surgery]]></category>
		<category><![CDATA[Training]]></category>
		<category><![CDATA[XR]]></category>
		<guid isPermaLink="false">http://192.168.178.39:8080/?p=447</guid>

					<description><![CDATA[Poster with first results from our project &#8220;GreifbAR&#8221;, presented at the &#8220;Würtual Reality XR Meeting 2023&#8221; conference in Würzburg. A project overview and first user test evaluations are shown. Robert Luzsa Christopher Remde Moritz Queisner Susanne Mayr 🌐 Read Article on ResearchGate📄 Download PDF Abstract Background: Tying surgical knots is a basic but critical [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>Poster with first results from our project &#8220;GreifbAR&#8221;, presented at the &#8220;Würtual Reality XR Meeting 2023&#8221; conference in Würzburg. A project overview and first user test evaluations are shown.</em></p>



<div style="height:0px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-group is-nowrap is-layout-flex wp-container-core-group-is-layout-ecba6b92 wp-block-group-is-layout-flex" style="margin-top:0;margin-bottom:0">
<p><a href="https://orcid.org/0000-0001-5674-9158">Robert Luzsa</a></p>



<p><a href="https://orcid.org/0000-0003-4584-6379" data-type="link" data-id="https://orcid.org/0000-0003-4584-6379">Christopher Remde</a></p>



<p><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231">Moritz Queisner</a></p>



<p><a href="https://www.sobi.uni-passau.de/mensch-maschine-interaktion/lehrstuhlteam/lehrstuhlinhaberin#c198530">Susanne Mayr</a></p>
</div>



<p class="has-text-align-left has-medium-font-size" style="padding-top:0;padding-bottom:0"><strong><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f310.png" alt="🌐" class="wp-smiley" style="height: 1em; max-height: 1em;" /> </strong><a href="https://www.researchgate.net/publication/370004157_Training_Surgical_Knot_Tying_in_Extended_Reality_First_Results_of_the_Project_GreifbAR"><strong>Read Article on ResearchGate</strong></a><br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4c4.png" alt="📄" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="http://192.168.178.39:8080/wp-content/uploads/2024/01/Poster_final2.pdf">Download PDF</a></p>



<div style="height:29px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Abstract</h4>



<p><strong>Background: </strong></p>



<p>Tying surgical knots is a basic but critical skill that surgeons must master. Currently, knot tying is typically taught through instructor observation or instructional videos. These methods are either very resource-intensive or offer little interactivity and customizability. Knot-tying training in extended reality (XR) could address these weaknesses and improve the link between observation and application: for example, spatial awareness and the ability to mentally rotate, i.e. to imagine knots from different perspectives, affect learning performance (Brandt &amp; Davies, 2006). XR may support spatial awareness by allowing knots to be tied from different perspectives, superimposed on the real world.</p>



<p><strong>Method: </strong></p>



<p>The project GreifbAR develops an XR-based interactive knot tying training application. This application teaches the process of knot tying and provides individualized feedback based on hand pose and scene recognition. To achieve a learning-friendly design and user acceptance, the requirements of learners and experts must be considered. Therefore, based on a literature review, an online survey with 80 medical students and four interviews with experienced surgeons at Charité &#8211; Universitätsmedizin Berlin were conducted. </p>



<p><strong>Initial results: </strong></p>



<p>The respondents show openness towards knot-tying training with XR, yet emphasize the importance of realistic learning situations and personal guidance. They also report little prior experience with XR. The talk integrates the survey results with findings from technology acceptance research and derives implications for the design of XR-based training systems for knot tying and similar procedural tasks.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>VolumetricOR: A New Approach to Simulate Surgical Interventions in Virtual Reality for Training and Education</title>
		<link>https://chrisrem.de/volumetricor-a-new-approach-to-simulate-surgical-interventions-in-virtual-reality-for-training-and-education/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sun, 09 Jan 2022 22:24:00 +0000</pubDate>
				<category><![CDATA[Publication]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Learning]]></category>
		<category><![CDATA[Paper]]></category>
		<category><![CDATA[Surgery]]></category>
		<category><![CDATA[Training]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[Volumetric]]></category>
		<category><![CDATA[XR]]></category>
		<guid isPermaLink="false">http://192.168.178.39:8080/?p=434</guid>

					<description><![CDATA[VolumetricOR explores new means of enabling virtual observerships. We capture volumetric videos of real surgical interventions and allow medical students to view them inside a photorealistic virtual operating room. Moritz Queisner Michael Pogorzhelskiy Christopher Remde Igor M. Sauer Johann Pratschke 🌐 Read Article on Surgical Innovation📄 Download PDF Abstract Background Surgical training is primarily [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>VolumetricOR</em> explores new means of enabling virtual observerships. We capture volumetric videos of real surgical interventions and allow medical students to view them inside a photorealistic virtual operating room.</p>



<div style="height:0px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-group is-nowrap is-layout-flex wp-container-core-group-is-layout-ecba6b92 wp-block-group-is-layout-flex" style="margin-top:0;margin-bottom:0">
<p><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231">Moritz Queisner</a></p>



<p><a href="https://www.interdisciplinary-laboratory.hu-berlin.de/de/content/michael-pogorzhelskiy/index.html">Michael Pogorzhelskiy</a></p>



<p><a href="https://orcid.org/0000-0003-4584-6379" data-type="link" data-id="https://orcid.org/0000-0003-4584-6379">Christopher Remde</a></p>



<p><a href="https://orcid.org/0000-0001-9355-937X" data-type="link" data-id="https://orcid.org/0000-0001-9355-937X">Igor M. Sauer</a></p>



<p>Johann Pratschke</p>
</div>



<p class="has-medium-font-size" style="margin-top:var(--wp--preset--spacing--50);margin-bottom:var(--wp--preset--spacing--50);padding-top:0;padding-bottom:0"><strong><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f310.png" alt="🌐" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://journals.sagepub.com/doi/10.1177/15533506211054240">Read Article on Surgical Innovation</a></strong><br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4c4.png" alt="📄" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="http://192.168.178.39:8080/wp-content/uploads/2024/01/queisner-et-al-2022-volumetricor-a-new-approach-to-simulate-surgical-interventions-in-virtual-reality-for-training-and.pdf">Download PDF</a></p>



<div style="height:17px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Abstract</h4>



<p><strong>Background</strong></p>



<p>Surgical training is primarily carried out through observation during assistance or on-site classes, by watching videos as well as by different formats of simulation. The simulation of physical presence in the operating theatre in virtual reality might complement these necessary experiences. A prerequisite is a new education concept for virtual classes that communicates the unique workflows and decision-making paths of surgical health professions (i.e. surgeons, anesthesiologists and surgical assistants) in an authentic and immersive way. For this project, media scientists, designers and surgeons worked together to develop the foundations for new ways of conveying knowledge using virtual reality in surgery.</p>



<p><strong>Materials and method</strong></p>



<p>A technical workflow to record and present volumetric videos of surgical interventions in a photorealistic virtual operating room was developed. Situated in the virtual reality demonstrator called&nbsp;<em>VolumetricOR</em>, users can experience and navigate through surgical workflows as if they are physically present. The concept is compared with traditional video-based formats of digital simulation in surgical training.</p>



<p><strong>Results</strong></p>



<p><em>VolumetricOR</em>&nbsp;lets trainees experience surgical action and workflows (a) three-dimensionally, (b) from any perspective and (c) in real scale. This improves the linking of theoretical expertise and practical application of knowledge and shifts the learning experience from observation to participation.</p>



<p><strong>Discussion</strong></p>



<p>Volumetric training environments allow trainees to acquire procedural knowledge before going to the operating room and could improve the efficiency and quality of the learning and training process for professional staff by communicating techniques and workflows when the possibilities of training on-site are limited.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Real world usability analysis of two augmented reality headsets in visceral surgery</title>
		<link>https://chrisrem.de/real-world-usability-analysis-of-two-augmented-reality-headsets-in-visceral-surgery/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 28 Nov 2018 22:42:00 +0000</pubDate>
				<category><![CDATA[Publication]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Review]]></category>
		<category><![CDATA[Surgery]]></category>
		<category><![CDATA[XR]]></category>
		<guid isPermaLink="false">http://192.168.178.39:8080/?p=390</guid>

					<description><![CDATA[In this (now quite outdated) article, we compare the suitability of two augmented reality headsets for use in visceral surgery: the Microsoft HoloLens 1 against the Meta 2 (notably, neither the Quest nor a product of Meta, formerly Facebook). Simon Moosburner Christopher Remde Peter Tang Moritz Queisner Nils Haep Johann Pratschke Igor M. Sauer 🌐 Read Article [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>In this (now quite outdated) article, we compare the suitability of two augmented reality headsets for use in visceral surgery: the Microsoft HoloLens 1 against the Meta 2 (notably, neither the Quest nor a product of Meta, formerly Facebook).</p>



<div style="height:0px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-group is-layout-flex wp-block-group-is-layout-flex" style="margin-top:0;margin-bottom:0">
<p><a href="https://orcid.org/0000-0003-1879-4788" data-type="link" data-id="https://orcid.org/0000-0003-1879-4788">Simon Moosburner</a></p>



<p><a href="https://orcid.org/0000-0003-4584-6379" data-type="link" data-id="https://orcid.org/0000-0003-4584-6379">Christopher Remde</a></p>



<p><a href="https://orcid.org/0000-0001-5007-7952" data-type="link" data-id="https://orcid.org/0000-0001-5007-7952">Peter Tang</a></p>



<p><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231">Moritz Queisner</a></p>



<p><a href="https://orcid.org/0000-0001-8149-4011" data-type="link" data-id="https://orcid.org/0000-0001-8149-4011">Nils Haep</a></p>



<p>Johann Pratschke</p>



<p><a href="https://orcid.org/0000-0001-9355-937X" data-type="link" data-id="https://orcid.org/0000-0001-9355-937X">Igor M. Sauer</a></p>
</div>



<p class="has-medium-font-size" style="padding-top:var(--wp--preset--spacing--20);padding-bottom:var(--wp--preset--spacing--20)"><a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/aor.13396" data-type="link" data-id="https://onlinelibrary.wiley.com/doi/abs/10.1111/aor.13396"><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f310.png" alt="🌐" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>Read Article on Wiley</strong></a><br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4c4.png" alt="📄" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="http://wp.chrisrem.de/wp-content/uploads/2025/01/AR_Headset_Analysis.pdf" data-type="link" data-id="http://wp.chrisrem.de/wp-content/uploads/2025/01/AR_Headset_Analysis.pdf">Download PDF</a></p>



<h4 class="wp-block-heading">Abstract</h4>



<p>Recent developments in the field of augmented reality (AR) have enabled new use cases in surgery. Initial set-up of an appropriate infrastructure for maintaining an AR surgical workflow requires investment in appropriate hardware. We compared the usability of the <em>Microsoft HoloLens</em> and <em>Meta 2</em> head-mounted displays (HMDs). Fifteen medical students tested each device and were questioned with a variant of the <em>System Usability Scale</em> (SUS). In our adapted SUS, ergonomics, ease of use, and visual clarity of the display did not differ significantly between HMD groups. The field of view (FOV) was smaller in the <em>Microsoft HoloLens</em> than in the <em>Meta 2</em>, and significantly more study subjects felt limited by the FOV. Intraoperatively, decreased mobility due to the necessity of an AC adapter and an additional computing device for the <em>Meta 2</em> proved to be limiting. Object stability was rated superior in the <em>Microsoft HoloLens</em> compared to the <em>Meta 2</em> by our surgeons and led to increased use. In summary, after examination of the <em>Meta 2</em> and the <em>Microsoft HoloLens</em>, we found key advantages in the <em>Microsoft HoloLens</em>, which provided palpable benefits in a surgical setting.</p>



<p><a href="http://192.168.178.39:8080/research/" data-type="page" data-id="383">Back to Publication List</a></p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
