Publication details

Bibliographic description

Mixed reality 6D object pose estimation using deep learning for visual markerless surgical navigation / Michał TROJAK, Artur JURGAS, Maciej Stanuch, Łukasz Kownacki, Andrzej SKALSKI // In: VRW 2025 [electronic document] : 2025 IEEE conference on Virtual Reality and 3D user interfaces workshops : 8–12 March 2025, Saint-Malo, France : proceedings. — Windows version. — Adobe Reader. — Piscataway : The Institute of Electrical and Electronics Engineers, cop. 2025. — Additional ISBN: 979-8-3315-2563-7. — e-ISBN: 979-8-3315-1484-6. — pp. 947–952. — System requirements: Adobe Reader. — Bibliography pp. 951–952, abstract. — Available online since: 2025-04-24. — M. Trojak, A. Jurgas, A. Skalski: additional affiliation MedApp S.A.

Authors (5)

Keywords

mixed reality, pose estimation, surgical navigation, augmented reality, computer vision

Bibliometric data

BaDAP ID: 159683
Added to BaDAP: 2025-05-20
Source text: URL
DOI: 10.1109/VRW66409.2025.00193
Year of publication: 2025
Publication type: conference proceedings (author's contribution)
Open access: yes
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Conference: IEEE Conference on Virtual Reality and 3D User Interfaces 2025

Abstract

Modern surgical navigation systems usually use external trackers to estimate tool pose. They recognize and track markers such as ArUco/QR codes or reflective spheres. As a consequence, there is additional equipment in the operating room that must be calibrated prior to each procedure. This paper introduces a novel approach that uses only visual information from a Microsoft HoloLens 2 mixed reality headset to estimate the pose of the object. The proposed method is based on PVNet to determine the position and orientation of objects in RGB images and transforms the parameters into the headset coordinate system by incorporating depth data from the built-in ToF camera. The experiments demonstrated an angle estimation error of 1.91° ± 1.16° for orientation angles between 0° and 90° and a translation estimation error of 0.64 ± 0.19 cm for translations between 0 cm and 45 cm. These errors meet clinical requirements in terms of accuracy and allow the system to be used in practical conditions.
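The core geometric step the abstract describes — taking an object pose estimated in the RGB camera frame and expressing it in the headset coordinate system — amounts to composing homogeneous transforms. The sketch below illustrates that composition only; the function name, the example rotation/translation values, and the camera-to-headset extrinsics are illustrative stand-ins, not the paper's actual calibration or PVNet output.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical PVNet-style output: object pose in the RGB camera frame,
# e.g. 45 cm straight in front of the camera with no rotation.
T_cam_obj = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, 0.45]))

# Hypothetical camera->headset extrinsics (here: identity rotation and a
# small lateral offset, standing in for the headset's calibration data).
T_hs_cam = pose_to_matrix(np.eye(3), np.array([0.02, 0.0, 0.0]))

# Object pose expressed in the headset coordinate system.
T_hs_obj = T_hs_cam @ T_cam_obj
print(np.round(T_hs_obj[:3, 3], 3))  # object translation in the headset frame
```

In the actual method, depth from the built-in ToF camera would additionally constrain the translation component; this sketch assumes the full pose is already metric.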

Publications that may interest you

book chapter
#159682 — added: 2025-05-20
Markerless registration and visualization of volumetric medical data for enhanced surgical precision using Head-Mounted Displays / Artur JURGAS, Michał TROJAK, Maciej Stanuch, Andrzej SKALSKI // In: VRW 2025 [electronic document] : 2025 IEEE conference on Virtual Reality and 3D user interfaces workshops : 8–12 March 2025, Saint-Malo, France : proceedings. — Windows version. — Adobe Reader. — Piscataway : The Institute of Electrical and Electronics Engineers, cop. 2025. — Additional ISBN: 979-8-3315-2563-7. — e-ISBN: 979-8-3315-1484-6. — pp. 925–930. — System requirements: Adobe Reader. — Bibliography pp. 929–930, abstract. — Available online since: 2025-04-24. — A. Jurgas, M. Trojak, A. Skalski: additional affiliation MedApp S.A.
book chapter
#159686 — added: 2025-05-20
From tabletop to Augmented Reality: exploring navigation mesh and waypoint navigation approaches in role-playing game transformation / Jakub Jan Żaba, Magdalena Igras-Cybulska, Sławomir Konrad Tadeja, Marek FRANKOWSKI // In: VRW 2025 [electronic document] : 2025 IEEE conference on Virtual Reality and 3D user interfaces workshops : 8–12 March 2025, Saint-Malo, France : proceedings. — Windows version. — Adobe Reader. — Piscataway : The Institute of Electrical and Electronics Engineers, cop. 2025. — Additional ISBN: 979-8-3315-2563-7. — e-ISBN: 979-8-3315-1484-6. — pp. 589–595. — System requirements: Adobe Reader. — Bibliography pp. 594–595, abstract. — Available online since: 2025-04-24