TY - GEN
T1 - Earable Authentication via Acoustic Toothprint
AU - Wang, Zi
AU - Ren, Yili
AU - Chen, Yingying
AU - Yang, Jie
N1 - Funding Information:
We thank the anonymous reviewers for their insightful feedback. This work was partially supported by the NSF Grants CNS-2131143, CNS-1910519, and DGE-1565215.
Publisher Copyright:
© 2021 Owner/Author.
PY - 2021/11/12
Y1 - 2021/11/12
N2 - Earables (ear wearables) are rapidly emerging as a new platform for a variety of personal applications. Traditional authentication methods, however, are less applicable and inconvenient for earables due to their limited input interfaces. Earables often feature rich around-the-head sensing capabilities that can be leveraged to capture new types of biometrics. In this work, we propose ToothSonic, which leverages the toothprint-induced sonic effect produced when a user performs teeth gestures for user authentication. In particular, we design several representative teeth gestures that produce effective sonic waves carrying toothprint information. To reliably capture the acoustic toothprint, ToothSonic leverages the occlusion effect of the ear canal and the inward-facing microphone of the earable. It then extracts multi-level acoustic features that represent the intrinsic acoustic toothprint for authentication. The key advantages of ToothSonic are that it is suitable for earables and resistant to various spoofing attacks, as the acoustic toothprint is captured via the user's private teeth-ear channel, which is unknown to others. Our preliminary studies with 20 participants show that ToothSonic achieves 97% accuracy with only three teeth gestures.
AB - Earables (ear wearables) are rapidly emerging as a new platform for a variety of personal applications. Traditional authentication methods, however, are less applicable and inconvenient for earables due to their limited input interfaces. Earables often feature rich around-the-head sensing capabilities that can be leveraged to capture new types of biometrics. In this work, we propose ToothSonic, which leverages the toothprint-induced sonic effect produced when a user performs teeth gestures for user authentication. In particular, we design several representative teeth gestures that produce effective sonic waves carrying toothprint information. To reliably capture the acoustic toothprint, ToothSonic leverages the occlusion effect of the ear canal and the inward-facing microphone of the earable. It then extracts multi-level acoustic features that represent the intrinsic acoustic toothprint for authentication. The key advantages of ToothSonic are that it is suitable for earables and resistant to various spoofing attacks, as the acoustic toothprint is captured via the user's private teeth-ear channel, which is unknown to others. Our preliminary studies with 20 participants show that ToothSonic achieves 97% accuracy with only three teeth gestures.
KW - biometrics
KW - earable
KW - toothprint
KW - user authentication
UR - http://www.scopus.com/inward/record.url?scp=85119366237&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85119366237&partnerID=8YFLogxK
U2 - 10.1145/3460120.3485340
DO - 10.1145/3460120.3485340
M3 - Conference contribution
AN - SCOPUS:85119366237
T3 - Proceedings of the ACM Conference on Computer and Communications Security
SP - 2390
EP - 2392
BT - CCS 2021 - Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security
PB - Association for Computing Machinery
T2 - 28th ACM Annual Conference on Computer and Communications Security, CCS 2021
Y2 - 15 November 2021 through 19 November 2021
ER -