Skin feature point tracking using deep feature encodings
Department of Mechanical Engineering, National Cheng Kung University, Tainan, Taiwan.
Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics; Department of Mechanical Engineering, National Cheng Kung University, Tainan, Taiwan. ORCID iD: 0000-0003-4867-6707
2025 (English). In: International Journal of Machine Learning and Cybernetics, ISSN 1868-8071, E-ISSN 1868-808X, Vol. 16, p. 2503-2521. Article in journal (Refereed). Published.
Abstract [en]

Facial feature tracking is a key component of imaging ballistocardiography (BCG), where accurate quantification of the displacement of facial keypoints is needed for good heart rate estimation. Skin feature tracking also enables video-based quantification of motor degradation in Parkinson's disease. While traditional computer vision algorithms such as the Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and the Lucas-Kanade method (LK) have been benchmarks due to their efficiency and accuracy, they often struggle with challenges such as affine transformations and changes in illumination. In response, we propose a feature-tracking pipeline that applies a convolutional stacked autoencoder to identify the crop in an image most similar to a reference crop containing the feature of interest. The autoencoder learns to encode image crops into deep feature encodings specific to the object category it is trained on. We train the autoencoder on facial images and validate its ability to track skin features in general using manually labelled face and hand videos of small and large motion recorded in our lab. Our evaluation protocol is comprehensive, including quantification of errors in human annotation. The tracking errors for distinctive skin features (moles) are so small that, based on a χ2-test, we cannot exclude that they stem from the manual labelling. With a mean error of 0.6-3.3 pixels, our method outperformed the other methods in all but one scenario. More importantly, our method was the only one that did not diverge. We also compare our method with Omnimotion, the latest state-of-the-art transformer for feature matching by Google. Our results indicate that our method is superior at tracking different skin features under large-motion conditions and that it creates better feature descriptors for tracking, matching, and image registration than both traditional algorithms and Omnimotion.
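The core idea of the abstract, matching each candidate crop against a reference crop in a learned encoding space, can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the trained convolutional autoencoder's encoder is stood in for by a fixed random projection (the `encode` function and all names here are hypothetical), and the search is an exhaustive sliding window with an L2 distance in encoding space.

```python
import numpy as np

def encode(crop, proj):
    # Stand-in for the trained autoencoder's encoder: a fixed linear
    # projection of the flattened crop into a low-dimensional encoding.
    # (The paper uses a convolutional stacked autoencoder instead.)
    return proj @ crop.ravel()

def track_feature(frame, ref_encoding, crop_size, proj):
    """Slide a window over the frame and return the top-left corner of
    the crop whose encoding is closest (L2) to the reference encoding."""
    h, w = frame.shape
    best_dist, best_pos = np.inf, (0, 0)
    for y in range(h - crop_size + 1):
        for x in range(w - crop_size + 1):
            crop = frame[y:y + crop_size, x:x + crop_size]
            dist = np.linalg.norm(encode(crop, proj) - ref_encoding)
            if dist < best_dist:
                best_dist, best_pos = dist, (y, x)
    return best_pos
```

In practice a learned encoder makes the distance robust to the affine and illumination changes that trip up SIFT, SURF, and LK, whereas the raw-pixel projection above is only robust to exact matches.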

Place, publisher, year, edition, pages
Springer Nature, 2025. Vol. 16, p. 2503-2521
Keywords [en]
Autoencoder, Feature matching, Feature tracking, Image registration, Omnimotion
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:umu:diva-231322
DOI: 10.1007/s13042-024-02405-y
ISI: 001339254800001
Scopus ID: 2-s2.0-105002934932
OAI: oai:DiVA.org:umu-231322
DiVA id: diva2:1910189
Available from: 2024-11-04. Created: 2024-11-04. Last updated: 2025-05-06. Bibliographically approved.

Open Access in DiVA

fulltext (3337 kB)
File information
File name: FULLTEXT01.pdf
File size: 3337 kB
Checksum (SHA-512): 888c7fc128b3c103f792376489344575baa790e004ba11567210a616cb670dda0d1393b610cc1aa05039e32719e9e605141d48486430608d21a5a2204f3478a0
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text | Scopus

Authority records

Nordling, Torbjörn E. M.
