<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>texture &#8211; Christopher Remde</title>
	<atom:link href="https://chrisrem.de/tag/texture/feed/" rel="self" type="application/rss+xml" />
	<link>https://chrisrem.de</link>
	<description>Christopher Remde - XR Developer</description>
	<lastBuildDate>Tue, 03 Jun 2025 12:24:49 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.3</generator>

<image>
	<url>https://chrisrem.de/wp-content/uploads/2025/07/cropped-Logo-32x32.png</url>
	<title>texture &#8211; Christopher Remde</title>
	<link>https://chrisrem.de</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Sparse camera volumetric video applications. A comparison of visual fidelity, user experience, and adaptability</title>
		<link>https://chrisrem.de/sparse-camera-volumetric-video-applications-a-comparison-of-visual-fidelity-user-experience-and-adaptability/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Mon, 10 Mar 2025 12:04:27 +0000</pubDate>
				<category><![CDATA[Publication]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Brekel]]></category>
		<category><![CDATA[comparision]]></category>
		<category><![CDATA[Depthkit]]></category>
		<category><![CDATA[Livescan]]></category>
		<category><![CDATA[Livescan3D]]></category>
		<category><![CDATA[mesh]]></category>
		<category><![CDATA[pointcloud]]></category>
		<category><![CDATA[Review]]></category>
		<category><![CDATA[RGBD]]></category>
		<category><![CDATA[Studio]]></category>
		<category><![CDATA[texture]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[Volumetric]]></category>
		<category><![CDATA[VolumetricCapture]]></category>
		<category><![CDATA[XR]]></category>
		<guid isPermaLink="false">https://chrisrem.de/?p=752</guid>

					<description><![CDATA[A review paper comparing state-of-the-art sparse camera volumetric video applications in 2024. Comparison candidates are the software packages Depthkit Studio, VolumetricCapture, Brekel Pointcloud v3 and LiveScan3D. Christopher Remde Moritz Queisner Igor M. Sauer 🌐 Read Article on Frontiers (Open Access)📄 Download PDF Abstract Introduction Volumetric video in commercial studios is [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>A review paper comparing state-of-the-art sparse camera volumetric video applications in 2024. Comparison candidates are the software packages Depthkit Studio, VolumetricCapture, Brekel Pointcloud v3 and LiveScan3D.</em></p>



<div style="height:0px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-group is-nowrap is-layout-flex wp-container-core-group-is-layout-ecba6b92 wp-block-group-is-layout-flex" style="margin-top:0;margin-bottom:0">
<p><a href="https://orcid.org/0000-0001-5674-9158"></a><a href="https://orcid.org/0000-0001-5674-9158"></a><a href="https://www.sobi.uni-passau.de/mensch-maschine-interaktion/lehrstuhlteam/lehrstuhlinhaberin#c198530"></a><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231"></a><a href="https://orcid.org/0000-0001-5674-9158"></a><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231"></a><a href="https://orcid.org/0000-0003-4584-6379" data-type="link" data-id="https://orcid.org/0000-0003-4584-6379">Christopher Remde</a></p>



<p><a href="https://orcid.org/0000-0001-5674-9158"></a><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231"></a><a href="https://orcid.org/0000-0001-5674-9158"></a><a href="https://orcid.org/0000-0003-4584-6379" data-type="link" data-id="https://orcid.org/0000-0003-4584-6379"></a><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231">Moritz Queisner</a></p>



<p><a href="https://orcid.org/0000-0001-9355-937X">Igor M. Sauer</a></p>
</div>



<p class="has-text-align-left has-medium-font-size" style="padding-top:0;padding-bottom:0"><strong><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f310.png" alt="🌐" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://doi.org/10.3389/frsip.2025.1405808" data-type="link" data-id="https://doi.org/10.3389/frsip.2025.1405808">Read Article on Frontiers (Open Acces)</a></strong><br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4c4.png" alt="📄" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://chrisrem.de/wp-content/uploads/2025/06/Sparse-Camera-Volumetric-Video-Applications.pdf" data-type="link" data-id="https://chrisrem.de/wp-content/uploads/2025/06/Sparse-Camera-Volumetric-Video-Applications.pdf">Download PDF</a></p>



<div style="height:29px" aria-hidden="true" class="wp-block-spacer"></div>



<figure class="wp-block-video"><video controls src="https://zenodo.org/records/13908942/files/VV_Comparision_Videos.mp4?preview=0"></video></figure>



<div style="height:29px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Abstract</h4>



<p><strong>Introduction</strong></p>



<p>Volumetric video in commercial studios is predominantly produced using a multi-view stereo process that relies on a high two-digit number of cameras to capture a scene. Due to the hardware requirements and associated processing costs, this workflow is resource-intensive and expensive, making it unattainable for creators and researchers with smaller budgets. Low-cost volumetric video systems using RGBD cameras offer an affordable alternative. As these small, mobile systems are a relatively new technology, the available software applications vary in terms of workflow and image quality. In this paper, we provide an overview of the technical capabilities of sparse camera volumetric video capture applications and assess their visual fidelity and workflow.</p>



<p><strong>Materials and methods</strong></p>



<p>We selected volumetric video applications that are publicly available, support capture with multiple <em>Microsoft Azure Kinect</em> cameras, and run on consumer-grade computer hardware. We compared the features, usability, and workflow of each application and benchmarked them in five different scenarios. Based on the benchmark footage, we analyzed spatial calibration accuracy and artifact occurrence, and conducted a subjective perception study with 19 participants from a game design study program to assess the visual fidelity of the captures.</p>
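


<p>The abstract only summarizes the calibration-accuracy analysis. Purely as a rough, hypothetical illustration (not the pipeline used in the paper), one common way to quantify how well two calibrated RGBD point clouds line up is the root-mean-square error of nearest-neighbour distances; the Python sketch below uses NumPy and SciPy with synthetic data in place of real captures.</p>



<pre class="wp-block-code"><code># Minimal sketch, not the paper's analysis: estimate how well two calibrated
# point clouds are aligned via the RMSE of nearest-neighbour distances.
# Assumes both clouds are already in a shared world coordinate system.
import numpy as np
from scipy.spatial import cKDTree

def calibration_rmse(reference_points, test_points):
    """reference_points, test_points: (N, 3) arrays in metres."""
    tree = cKDTree(reference_points)
    distances, _ = tree.query(test_points, k=1)
    return float(np.sqrt(np.mean(distances ** 2)))

# Synthetic example: a random cloud and a copy shifted by 2 mm along x.
rng = np.random.default_rng(0)
cloud_a = rng.uniform(0.0, 1.0, size=(5000, 3))
cloud_b = cloud_a + np.array([0.002, 0.0, 0.0])
print(f"RMSE: {calibration_rmse(cloud_a, cloud_b) * 1000:.2f} mm")</code></pre>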



<p><strong>Results</strong></p>



<p>We evaluated three applications, <em>Depthkit Studio</em>, <em>LiveScan3D</em> and <em>VolumetricCapture</em>. We found <em>Depthkit Studio</em> to provide the best experience for novice users, while <em>LiveScan3D</em> and <em>VolumetricCapture</em> require advanced technical knowledge to operate. The footage captured by <em>Depthkit Studio</em> showed the fewest artifacts by a large margin, followed by <em>LiveScan3D</em> and <em>VolumetricCapture</em>. These findings were confirmed by the participants, who preferred <em>Depthkit Studio</em> over <em>LiveScan3D</em> and <em>VolumetricCapture</em>.</p>



<p><strong>Discussion</strong></p>



<p>Based on the results, we recommend <em>Depthkit Studio</em> for the highest-fidelity captures. <em>LiveScan3D</em> produces footage of only acceptable fidelity but is the only candidate available as open-source software; we therefore recommend it as a platform for research and experimentation. Due to its lower fidelity and high setup complexity, we recommend <em>VolumetricCapture</em> only for specific use cases where its ability to handle a large number of sensors in a large capture volume is required.</p>
]]></content:encoded>
					
		
		<enclosure url="https://zenodo.org/records/13908942/files/VV_Comparision_Videos.mp4?preview=0" length="206890678" type="video/mp4" />

			</item>
		<item>
		<title>Immersive Mixed Reality Training Concept for Mastering Surgical Knot-tying</title>
		<link>https://chrisrem.de/immersive-mixed-reality-training-concept-for-mastering-surgical-knot-tying/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 08 Mar 2025 12:04:55 +0000</pubDate>
				<category><![CDATA[Publication]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Brekel]]></category>
		<category><![CDATA[comparision]]></category>
		<category><![CDATA[Depthkit]]></category>
		<category><![CDATA[Livescan]]></category>
		<category><![CDATA[Livescan3D]]></category>
		<category><![CDATA[mesh]]></category>
		<category><![CDATA[pointcloud]]></category>
		<category><![CDATA[Review]]></category>
		<category><![CDATA[RGBD]]></category>
		<category><![CDATA[Studio]]></category>
		<category><![CDATA[texture]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[Volumetric]]></category>
		<category><![CDATA[VolumetricCapture]]></category>
		<category><![CDATA[XR]]></category>
		<guid isPermaLink="false">https://chrisrem.de/?p=756</guid>

					<description><![CDATA[A concept and prototype implementation of a surgical knot-tying trainer. Implemented in Unity3D with Varjo XR-3 hardware, it features instructions in the form of volumetric videos. Moritz Queisner Christopher Remde Robert Luzsa Igor M. Sauer 🌐 Read article on IEEE📄 Download PDF Abstract This study presents a mixed reality training concept designed to enhance [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>A concept and prototype implementation of a surgical knot-tying trainer. Implemented in Unity3D with Varjo XR-3 hardware, it features instructions in the form of volumetric videos.</em></p>



<div style="height:0px" aria-hidden="true" class="wp-block-spacer"></div>



<div class="wp-block-group is-nowrap is-layout-flex wp-container-core-group-is-layout-ecba6b92 wp-block-group-is-layout-flex" style="margin-top:0;margin-bottom:0">
<p><a href="https://orcid.org/0000-0001-5674-9158"></a><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231"></a><a href="https://orcid.org/0000-0001-5674-9158"></a><a href="https://orcid.org/0000-0003-4584-6379" data-type="link" data-id="https://orcid.org/0000-0003-4584-6379"></a><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231">Moritz Queisner</a></p>



<p><a href="https://orcid.org/0000-0001-5674-9158"></a><a href="https://orcid.org/0000-0001-5674-9158"></a><a href="https://www.sobi.uni-passau.de/mensch-maschine-interaktion/lehrstuhlteam/lehrstuhlinhaberin#c198530"></a><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231"></a><a href="https://orcid.org/0000-0001-5674-9158"></a><a href="https://orcid.org/0000-0001-7917-9231" data-type="link" data-id="https://orcid.org/0000-0001-7917-9231"></a><a href="https://orcid.org/0000-0003-4584-6379" data-type="link" data-id="https://orcid.org/0000-0003-4584-6379">Christopher Remde</a></p>



<p><a href="https://orcid.org/0000-0001-5674-9158">Robert Luzsa</a></p>



<p><a href="https://orcid.org/0000-0001-9355-937X">Igor M. Sauer</a></p>
</div>



<p class="has-text-align-left has-medium-font-size" style="padding-top:0;padding-bottom:0"><strong><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f310.png" alt="🌐" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://ieeexplore.ieee.org/document/10972878" data-type="link" data-id="https://ieeexplore.ieee.org/document/10972878">Read article on IEEE</a></strong><br><img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f4c4.png" alt="📄" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <a href="https://chrisrem.de/wp-content/uploads/2025/06/KnotbAR-Conference-Paper.pdf" data-type="link" data-id="https://chrisrem.de/wp-content/uploads/2025/06/KnotbAR-Conference-Paper.pdf">Download PDF</a></p>



<div style="height:29px" aria-hidden="true" class="wp-block-spacer"></div>



<figure class="wp-block-video"><video controls src="https://zenodo.org/records/14712546/files/KnotBAR_SUpplementary_Video.mp4?preview=0"></video></figure>



<div style="height:29px" aria-hidden="true" class="wp-block-spacer"></div>



<h4 class="wp-block-heading">Abstract</h4>



<p>This study presents a mixed reality training concept designed to enhance medical students’ acquisition of surgical knot-tying skills, a fundamental component of surgical training critical for effective wound closure and tissue healing. Utilizing a virtual reality headset with video passthrough functionality, the system provides adaptive visual instructions tailored to the user’s hand movements during the knot-tying process. A prototype was developed based on the concept, featuring three-dimensional videos in which virtual instructor hands demonstrate each step of the procedure. The training concept was derived from an iterative, user-centered process encompassing requirement analysis, prototype development, and evaluation. Key functionalities include the ability to display thread tension and tensile strength, dynamically adapt learning speed to the user’s progress, and deliver personalized feedback by visually augmenting the hands and fingers. Evaluation results indicate that spatial and tangible interactions facilitated by the mixed reality training prototype support the acquisition of practical skills, bridging the gap between digital and physical simulation training.</p>
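


<p>The adaptive pacing mentioned above is described here only at a high level. Purely as an illustrative sketch (not the prototype's Unity implementation), one simple way to adapt instruction speed is to compare how long the learner needed for the last step against a target duration and nudge the playback rate accordingly; the durations, gain, and clamping range in the Python below are arbitrary assumptions.</p>



<pre class="wp-block-code"><code># Illustrative sketch only: adapt instruction playback speed to learner
# progress. All numeric values are arbitrary assumptions, not values from
# the published prototype.
def adapt_playback_rate(current_rate, step_duration_s, target_duration_s,
                        min_rate=0.5, max_rate=1.5, gain=0.1):
    """Speed up when the learner is faster than the target, slow down when
    slower, and keep the rate within a sensible range."""
    if step_duration_s > target_duration_s:
        current_rate -= gain   # learner is struggling: slow the demonstration
    else:
        current_rate += gain   # learner is ahead: speed up slightly
    return max(min_rate, min(max_rate, current_rate))

rate = 1.0
for observed_s, target_s in [(12.0, 10.0), (8.0, 10.0), (9.0, 10.0)]:
    rate = adapt_playback_rate(rate, observed_s, target_s)
    print(f"next step plays at {rate:.1f}x speed")</code></pre>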
]]></content:encoded>
					
		
		<enclosure url="https://zenodo.org/records/14712546/files/KnotBAR_SUpplementary_Video.mp4?preview=0" length="246245601" type="video/mp4" />

			</item>
	</channel>
</rss>
