

Title:
HEADBAND WITH BONE CONDUCTION SPEAKERS
Document Type and Number:
WIPO Patent Application WO/2023/230588
Kind Code:
A1
Abstract:
Novel tools and techniques are provided for implementing a headband with bone conduction speakers. In various embodiments, a headband-based speaker system may include a headband portion and bone conduction speaker assemblies disposed on inner surfaces of the headband portion. Each bone conduction speaker assembly includes a vibration plate, a transducer, and a speaker housing. The transducer mechanically couples to the vibration plate to form a bone conduction speaker unit. The speaker housing may enclose a portion of the transducer with an air gap between an interior surface of the speaker housing and at least portions of the transducer. The speaker housing includes a deformable material configured to compress toward the transducer within the air gap when the headband portion is pressed up against the head of the user, without the pressed-up headband portion causing a shift in alignment of the corresponding vibration plate relative to the user's head.

Inventors:
VU TAM (US)
POGONCHEFF GALEN (US)
Application Number:
PCT/US2023/067518
Publication Date:
November 30, 2023
Filing Date:
May 25, 2023
Assignee:
EARABLE INC (US)
UNIV COLORADO REGENTS (US)
International Classes:
H04R1/10; H04R11/02
Foreign References:
US20200077204A12020-03-05
US20170347181A12017-11-30
US20190014425A12019-01-10
US20140185822A12014-07-03
US20190082264A12019-03-14
US8699742B22014-04-15
Attorney, Agent or Firm:
KING, Chad, E. et al. (US)
Claims:

1. A headband-based speaker system, comprising: a headband portion configured to wrap around at least a portion of a head of a user when the headband-based speaker system is worn by the user, the headband portion comprising an inner surface; and one or more bone conduction speaker assemblies disposed on corresponding one or more first portions of the inner surface of the headband portion, each bone conduction speaker assembly comprising a bone conduction speaker device and a deformable speaker housing, the bone conduction speaker device comprising a vibration plate and a transducer, the vibration plate comprising a proximal portion facing the head of the user when the headband-based speaker system is worn by the user and a distal portion, the transducer comprising a proximal portion, one or more side portions, and a distal portion facing the inner surface of the headband portion, the proximal portion of the transducer being mechanically coupled to the distal portion of the vibration plate, the deformable speaker housing enclosing a portion of the transducer with an air gap between an interior surface of the deformable speaker housing and each of the one or more side portions and the distal portion of the transducer, the deformable speaker housing comprising a deformable material configured to compress toward the transducer within the air gap when the headband portion is pressed up against the head of the user when the headband-based speaker system is worn by the user, without the pressed-up headband portion causing a shift in alignment of the corresponding vibration plate relative to the head of the user.

2. The headband-based speaker system of claim 1, wherein, when the headband-based speaker system is worn by the user, each vibration plate and corresponding one of the one or more first portions of the inner surface of the headband portion align with one of parietal bone, temporal bone, sphenoid bone, or frontal bone of the head of the user, while minimizing pressure contact with blood vessels and nerves on the head of the user.

3. The headband-based speaker system of claim 1, wherein, when the headband-based speaker system is worn by the user, the headband portion is wrapped around a forehead of the user and above both ears of the user.

4. The headband-based speaker system of claim 3, further comprising: a pair of ear coverings, each ear covering being attachable to the headband portion and configured to cover an ear of the user when the headband-based speaker system is worn by the user, wherein each ear covering is one of removably attachable to the headband portion, permanently attachable to the headband portion, or integrated with the headband portion, wherein the ear coverings are made of one or more materials comprising at least one of cloth, foam, polyurethane, thermoplastic polyurethane ("TPU"), silicone, polycarbonate ("PC"), polyamide ("PA"), or acrylonitrile butadiene styrene ("ABS").

5. The headband-based speaker system of claim 4, wherein each ear covering further comprises at least one of a sound isolation material, a passive noise cancellation device, or an active noise cancellation device.

6. The headband-based speaker system of claim 4, further comprising: one or more acoustic speakers each disposed in one of a surface portion of the headband portion or a portion of one of the pair of ear coverings, wherein each acoustic speaker has a form factor that is configured to reduce contact pressure on the user when pressed against the head of the user, wherein the one of the surface portion of the headband portion or the portion of one of the pair of ear coverings is selected to minimize pressure contact between each speaker and blood vessels and nerves on the head of the user when the headband-based speaker system is worn by the user and when a portion of the headband portion corresponding to the one of the surface portion of the headband portion or the portion of one of the pair of ear coverings is being pressed up against the head of the user.

7. The headband-based speaker system of claim 3, further comprising: at least one eye covering comprising a pair of eye coverings each configured to cover an eye of the user or a single eye covering configured to cover both eyes of the user, each eye covering being attachable to the headband portion, wherein the at least one eye covering is one of removably attachable to the headband portion, permanently attachable to the headband portion, or integrated with the headband portion, wherein the at least one eye covering is made of one or more materials comprising at least one of cloth, foam, polyurethane, thermoplastic polyurethane ("TPU"), silicone, polycarbonate ("PC"), polyamide ("PA"), or acrylonitrile butadiene styrene ("ABS").

8. The headband-based speaker system of claim 7, wherein the at least one eye covering further comprises at least one of a head-mounted flexible display device, a head-mounted micro-projector display device, a head-mounted flexible semitransparent display device, a virtual reality ("VR") display device, an augmented reality ("AR") display device, or a mixed reality ("MR") display device.

9. The headband-based speaker system of claim 1, wherein each bone conduction speaker assembly further comprises a padding material disposed between the vibration plate and the head of the user when the headband-based speaker system is worn by the user.

10. The headband-based speaker system of claim 9, wherein the headband portion is made of one or more materials comprising at least one of polyurethane, thermoplastic polyurethane ("TPU"), silicone, or polycarbonate ("PC"), wherein the padding material is made of one or more materials comprising at least one of polyurethane, TPU, silicone, or PC, wherein the deformable material of the deformable speaker housing is made of one or more materials comprising at least one of polyamide ("PA") or acrylonitrile butadiene styrene ("ABS").

11. The headband-based speaker system of claim 1, wherein each bone conduction speaker device comprises one of a single-element speaker or a multi-element speaker, wherein the multi-element speaker is configured for adjusting at least one of phase modulation, phase cancellation, or beating effects.

12. The headband-based speaker system of claim 1, wherein the one or more bone conduction speaker assemblies are part of a stereo speaker system comprising one of a stereo 2.0 channel speaker system, a stereo 2.1 channel speaker system, a stereo 5.1 channel speaker system, or a stereo 7.1 channel speaker system.

13. The headband-based speaker system of claim 1, wherein each transducer comprises a cross-sectional shape comprising one of an ellipse, a circle, a rectangle, or other polygon.

14. The headband-based speaker system of claim 1, wherein the headband portion further comprises one or more straps that are configured to tighten the headband portion around the head of the user in a closed band, wherein tightening of the headband portion around the head of the user enables at least one of consistent alignment or contact pressure of each vibration plate relative to the head of the user.

15. The headband-based speaker system of claim 1, further comprising: at least one stimulation device disposed on one or more second portions of the inner surface of the headband portion, each stimulation device comprising one of an electrical stimulation device, a vibration-based stimulation device, an audio-based stimulation device, or a lightbased stimulation device, and each stimulation device configured to stimulate a physiological response in the user when activated.

16. The headband-based speaker system of claim 1, further comprising: one or more sensors disposed within the headband portion, each sensor comprising at least one of an electroencephalography ("EEG") sensor, an electrooculography (EOG) sensor, an electromyography ("EMG") sensor, an electrocardiography ("ECG") sensor, a photoplethysmography ("PPG") sensor, an inertial measurement unit ("IMU") sensor, an accelerometer, a gyroscope, a sound sensor, a microphone, a temperature sensor, a moisture sensor, a sweat sensor, an oximeter, a heart rate sensor, a blood pressure sensor, or a light sensor.

17. The headband-based speaker system of claim 1, further comprising: at least one wireless transceiver disposed within the headband portion; and at least one processor disposed within the headband portion, the at least one processor configured to: receive, via the at least one wireless transceiver, wireless audio signals from a user device that is external to, and separate from, the headband-based speaker system; and control playback of audio content through each bone conduction speaker that is housed within each of the one or more bone conduction speaker assemblies, based on the received wireless audio signals.

18. The headband-based speaker system of claim 1, further comprising: a stabilizing structure disposed within the air gap to maintain contact between the vibration plate and the transducer.

19. A bone conduction speaker system, comprising: one or more bone conduction speaker assemblies disposed on corresponding one or more first portions of an inner surface of an article of headwear, each bone conduction speaker assembly comprising a bone conduction speaker device and a deformable speaker housing, the bone conduction speaker device comprising a vibration plate and a transducer, the vibration plate comprising a proximal portion facing a head of a user when the article of headwear is worn by the user and a distal portion, the transducer comprising a proximal portion, one or more side portions, and a distal portion facing the inner surface of the article of headwear, the proximal portion of the transducer being mechanically coupled to the distal portion of the vibration plate, the deformable speaker housing enclosing a portion of the transducer with an air gap between an interior surface of the deformable speaker housing and each of the one or more side portions and the distal portion of the transducer, the deformable speaker housing comprising a deformable material configured to compress toward the transducer within the air gap when the article of headwear is pressed up against the head of the user when the article of headwear is worn by the user without the pressed-up article of headwear causing a shift in alignment of the corresponding vibration plate relative to the head of the user.

20. The bone conduction speaker system of claim 19, wherein the article of headwear comprises one of a headband-based speaker system, a headband, a hat, a cap, a toque, a beanie, a beret, a bonnet, a helmet, a hairband, a pair of goggles, a headset, a virtual reality ("VR") headset, an augmented reality ("AR") headset, a mixed reality ("MR") headset, or a bandana, wherein each bone conduction speaker assembly is one of affixed to, removably attachable to, or integrated with the inner surface of the article of headwear.

21. The bone conduction speaker system of claim 19, wherein each transducer comprises a cross-sectional shape comprising one of an ellipse, a circle, a rectangle, or other polygon.

22. The bone conduction speaker system of claim 19, wherein each bone conduction speaker assembly further comprises a padding material disposed between the vibration plate and the head of the user when the article of headwear is worn by the user, wherein the padding material is made of one or more materials comprising at least one of polyurethane, thermoplastic polyurethane ("TPU"), silicone, or polycarbonate ("PC"), wherein the deformable material of the deformable speaker housing is made of one or more materials comprising at least one of polyamide ("PA") or acrylonitrile butadiene styrene ("ABS").

23. The bone conduction speaker system of claim 19, further comprising: at least one wireless transceiver; and at least one processor communicatively coupled with the at least one wireless transceiver and each of the one or more bone conduction speaker assemblies, the at least one processor configured to: receive, via the at least one wireless transceiver, wireless audio signals from a user device that is external to, and separate from, the bone conduction speaker system; and control playback of audio content through each bone conduction speaker that is housed within each of the one or more bone conduction speaker assemblies, based on the received wireless audio signals.

24. A method, comprising: analyzing, using a computing system, sensor data received from one or more sensors to determine whether a headband portion of a headband-based speaker system is shifting or has shifted relative to a head of a user on which the headband-based speaker system is being worn, the one or more sensors disposed on the headband portion, the one or more sensors comprising at least one of one or more electrophysiological ("EP") sensors or one or more non-EP sensors; based on a determination that the headband portion is shifting or has shifted relative to the head of the user, determining, using the computing system, whether the headband portion shifting or having been shifted relative to the head of the user has caused at least one of audio signal noise or audio signal distortion in audio signals being output through one or more bone conduction speakers disposed on corresponding one or more portions of an inner surface of the headband portion; based on a determination that at least one of audio signal noise or audio signal distortion in the audio signals being output through the one or more bone conduction speakers has been caused due to the headband portion shifting or having been shifted relative to the head of the user, determining, using the computing system, one or more compensating parameters for adjusting the output of the audio signals to compensate for the at least one of audio signal noise or audio signal distortion; and adjusting, using the computing system, the audio signals being output by the one or more bone conduction speakers based at least in part on the determined one or more compensating parameters.
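For illustration only (this sketch is not part of the application and is not the claimed implementation), the detect-shift / determine-parameters / adjust-output flow of the method above might be prototyped as follows. The IMU-variance heuristic, the threshold value, and the spectral-gain compensation are all hypothetical stand-ins:

```python
import numpy as np

def headband_shifted(imu_window, threshold=0.15):
    """Hypothetical heuristic: flag a shift when short-term IMU
    acceleration variance exceeds a threshold."""
    return float(np.var(imu_window)) > threshold

def compensating_gains(reference_spectrum, measured_spectrum, eps=1e-9):
    """Per-frequency-bin gains that restore the reference spectral
    envelope (a simple stand-in for the 'compensating parameters')."""
    return np.abs(reference_spectrum) / (np.abs(measured_spectrum) + eps)

def adjust_audio(audio, gains):
    """Apply the compensating gains in the frequency domain."""
    spectrum = np.fft.rfft(audio)
    return np.fft.irfft(spectrum * gains, n=audio.size)

# Toy run: model a shifted headband as a uniform 50% attenuation.
fs = 8000
t = np.arange(0, 0.1, 1.0 / fs)
clean = np.sin(2 * np.pi * 440 * t)
degraded = 0.5 * clean                       # crude distortion model
gains = compensating_gains(np.fft.rfft(clean), np.fft.rfft(degraded))
restored = adjust_audio(degraded, gains)
print(np.allclose(restored, clean, atol=1e-6))  # True
```

In a real device the reference spectrum would not be available directly; it would have to be estimated from the sensor data, which is what the compensating-parameter determination step covers.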

25. The method of claim 24, wherein the computing system comprises at least one of a microprocessor, a microcontroller, a digital signal processor, a processor of the headband-based speaker system, a processor of one or more user devices among at least one user device, a server computer over a network, a cloud-based computing system over a network, or a distributed computing system, wherein the at least one user device each comprises one of a smart phone, a mobile phone, a tablet computer, a wearable device, a laptop computer, a desktop computer, a dedicated user interface ("UI") device associated with the headband-based speaker system, or a dedicated controller device associated with the headband-based speaker system.

26. The method of claim 24, wherein the one or more EP sensors each comprises at least one of an electroencephalography ("EEG") sensor, an electrooculography ("EOG") sensor, an electromyography ("EMG") sensor, or an electrocardiography ("ECG") sensor, wherein the one or more non-EP sensors each comprises at least one of a photoplethysmography ("PPG") sensor, an inertial measurement unit ("IMU") sensor, an accelerometer, a gyroscope, a sound sensor, a microphone, a temperature sensor, a moisture sensor, a sweat sensor, an oximeter, a heart rate sensor, a blood pressure sensor, or a light sensor.

27. The method of claim 24, wherein each bone conduction speaker comprises a vibration plate and a transducer, the vibration plate comprising a proximal portion facing the head of the user when the headband-based speaker system is worn by the user and a distal portion, the transducer comprising a proximal portion, one or more side portions, and a distal portion facing the inner surface of the headband portion, the proximal portion of the transducer being mechanically coupled to the distal portion of the vibration plate, wherein each bone conduction speaker is housed within a bone conduction speaker assembly, each bone conduction speaker assembly further comprising a deformable speaker housing enclosing a portion of the transducer with an air gap between an interior surface of the deformable speaker housing and each of the one or more side portions and the distal portion of the transducer, the deformable speaker housing comprising a deformable material configured to compress toward the transducer within the air gap when the headband portion is pressed up against the head of the user when the headband-based speaker system is worn by the user.

28. The method of claim 24, wherein each bone conduction speaker comprises one of a single-element speaker or a multi-element speaker, wherein the multi-element speaker is configured for adjusting at least one of phase modulation, phase cancellation, or beating effects.

29. The method of claim 24, wherein the one or more bone conduction speakers are part of a stereo speaker system comprising one of a stereo 2.0 channel speaker system, a stereo 2.1 channel speaker system, a stereo 5.1 channel speaker system, or a stereo 7.1 channel speaker system.

30. The method of claim 24, wherein analyzing the sensor data received from the one or more sensors, determining whether the headband portion shifting or having been shifted relative to the head of the user has caused at least one of audio signal noise or audio signal distortion in audio signals being output through one or more bone conduction speakers, and determining the one or more compensating parameters for adjusting the output of the audio signals to compensate for the at least one of audio signal noise or audio signal distortion are performed in real-time or near-real-time.

31. The method of claim 24, wherein determining the one or more compensating parameters for adjusting the output of the audio signals to compensate for the at least one of audio signal noise or audio signal distortion and adjusting the audio signals being output by the one or more bone conduction speakers are performed based at least in part on machine learning-based noise or distortion compensation, which is based on one of generative models, unsupervised models, semi-supervised models, supervised models, or self-supervised models.

Description:
Headband with Biosensor Data Monitoring

Cross-References

[0001] This application claims the benefit of provisional U.S. Patent Application Nos. 63/345,517 and 63/345,520, both filed May 25, 2022, the entire disclosure of each of which is incorporated herein by reference.

[0002] This application may be related to U.S. Patent Application No. --/ - , filed on even date herewith by Vu et al. and titled "Headband with Bone Conduction Speakers," the entire disclosure of which is incorporated herein by reference.

[0003] Collectively, these incorporated applications are described herein as the "Incorporated Applications."

Copyright Statement

[0004] A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.

Technical Field

[0005] The present disclosure relates, in general, to a headband-based electronics device, and, more particularly, to a headband with biosensor data monitoring including, but not limited to, a headband-based biosensor system and a bone conduction speaker system.

Background

[0006] Conventional head-based biosensor systems that utilize biosensor data monitoring do not allow for comfortable use during day or night, or during activity, rest, or sleep. When a user lays their head against such conventional head-based biosensor systems, the structural form of these systems and/or the biosensor data monitoring typically presses against blood vessels and nerves on the head of the user, resulting in discomfort (and possibly injury) to the user. The structure of conventional biosensor data monitoring also fails to mitigate such effects. Also, when the user is active or there is otherwise contact (or impact) with the conventional head-based biosensor systems, resultant movement of the conventional biosensor data monitoring results in misalignment with portions of the head of the user for providing optimal sound transmission.

[0007] Conventional head-based biosensor systems also require different electrodes for each different type of electrophysiological ("EP") sensor, which results in a large number of electrodes and connection points leading to a processing system, signal losses due to contact loss of particular electrodes with movement and/or compression of the conventional head-based biosensor systems, and/or an overly complicated hardware or mechanical structure that increases the risk of failure, and/or the like.

[0008] Hence, there is a need for more robust and scalable solutions for implementing a headband-based electronics device, and, more particularly, implementing a headband with biosensor data monitoring.

Brief Description of the Drawings

[0009] A further understanding of the nature and advantages of particular embodiments may be realized by reference to the remaining portions of the specification and the drawings, in which like reference numerals are used to refer to similar components. In some instances, a sub-label is associated with a reference numeral to denote one of multiple similar components. When reference is made to a reference numeral without specification to an existing sub-label, it is intended to refer to all such multiple similar components.

[0010] Figs. 1A and 1B are schematic diagrams illustrating various non-limiting examples of a system for implementing a headband with biosensor data monitoring, in accordance with various embodiments.

[0011] Figs. 2A-2D are schematic diagrams illustrating various non-limiting examples of a headband with biosensor data monitoring, in accordance with various embodiments.

[0012] Figs. 3A-3D are schematic diagrams illustrating a non-limiting example of the bone conduction speaker assembly (and its components) of Figs. 1 and 2, in accordance with various embodiments.

[0013] Fig. 4 is a diagram illustrating a non-limiting example of a head of a user with labels for potential locations on the head for alignment and positioning of bone conduction speakers and/or sensors, in accordance with various embodiments.

[0014] Figs. 5A and 5B are diagrams illustrating various non-limiting examples of decomposing mixed signal data into multiple distinct sensor signal data each corresponding to one of the two or more different types of electrophysiological ("EP") sensors, in accordance with various embodiments.

[0015] Figs. 6A and 6B are flow diagrams illustrating a method for implementing a headband with biosensor data monitoring, in accordance with various embodiments.

[0016] Fig. 7 is a block diagram illustrating an example of computer or system hardware architecture, in accordance with various embodiments.

Detailed Description

Overview

[0017] Various embodiments provide tools and techniques for implementing a headband-based electronics device, and, more particularly, for implementing a headband with bone conduction speakers including, but not limited to, a headbandbased speaker system and a bone conduction speaker system. The following detailed description illustrates a few embodiments in further detail to enable one of skill in the art to practice such embodiments. The described examples are provided for illustrative purposes and are not intended to limit the scope of the invention.

[0018] As described in further detail below, some embodiments provide a headband with biosensor data monitoring. The structure and configuration of the headband with biosensor data monitoring enables comfortable use, where pressure contact with blood vessels on the head of the user is minimized or avoided even when the headband is pressed against the head of the user (such as when the user is laying down, sitting back, or otherwise lounging or resting), with the side of the user's head on a pillow, cushion, or other surface. In particular, the materials of the headband portion of the headband facilitate such pressure contact mitigation, even during activities that may otherwise cause shifting of biosensors in conventional head-based biosensor systems. Further, the use of one or more electrodes that each collect raw sensor signal data from each of two or more different types of electrophysiological ("EP") sensors as mixed signal data, and subsequent decomposition of the mixed signal data into their constituent signals, provide for a more robust and more compact form factor for the headband-based biosensor system. In this manner, the headband-based biosensor system is enabled to monitor EP data from the user's head regardless of movement of the headband portion of the headband-based biosensor system and/or temporary loss of contact, and/or the like. Also, a single contact point (e.g., electrode) that collects different signals for the different EP sensors enables collection of all these types of signals while avoiding use of numerous potential pressure points that would accompany a larger number of electrodes (thereby contributing to the comfort in wearing of the headband-based biosensor system). In this manner, acquisition of data from specific regions of the brain and/or acquisition of data from specific facial muscle groups may be implemented in a manner that is robust, sustainable, accurate, and/or precise.
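Purely as an illustrative sketch (not the disclosed implementation), decomposing one electrode's mixed signal into per-sensor estimates could be prototyped with simple frequency-band filtering. The band edges below are hypothetical, and the bands overlap, which is exactly why practical systems would also apply source-separation techniques (e.g., ICA) rather than filtering alone:

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Ideal FFT bandpass: zero every bin outside [lo, hi] Hz."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=x.size)

# Nominal frequency ranges per sensor type (hypothetical values).
BANDS = {"EOG": (0.1, 10.0), "EEG": (1.0, 40.0),
         "EMG": (20.0, 124.0), "ECG": (0.5, 40.0)}

def decompose(mixed, fs=250.0):
    """Split one electrode's mixed EP signal into per-sensor estimates."""
    return {name: bandpass_fft(mixed, fs, lo, hi)
            for name, (lo, hi) in BANDS.items()}

fs = 250.0
t = np.arange(0, 4.0, 1.0 / fs)
# Synthetic mixture: a 10 Hz "alpha" rhythm plus a 0.25 Hz eye-drift term.
mixed = np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 0.25 * t)
parts = decompose(mixed, fs)
print(sorted(parts))  # ['ECG', 'EEG', 'EMG', 'EOG']
```

Note that the slow drift lands only in the EOG estimate, while the 10 Hz rhythm appears in the EEG estimate; the EMG estimate is essentially empty because the synthetic mixture contains no high-frequency content.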

[0019] Some embodiments described herein can be combined with embodiments described in the Incorporated Applications. Merely by way of example, a headband in accordance with some embodiments can include bone conduction speakers and biosensor data monitoring. For instance, in some cases, biosensor data monitoring can be used to adjust sound played by the bone conduction speakers. In similar fashion, various operations of the methods described herein can be combined with operations of the methods described in the Incorporated Applications.

[0020] In the following description, for the purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the described embodiments. It will be apparent to one skilled in the art, however, that other embodiments of the present invention may be practiced without some of these details. In other instances, some structures and devices are shown in block diagram form. Several embodiments are described herein, and while various features are ascribed to different embodiments, it should be appreciated that the features described with respect to one embodiment may be incorporated with other embodiments as well. By the same token, however, no single feature or features of any described embodiment should be considered essential to every embodiment of the invention, as other embodiments of the invention may omit such features.

[0021] Unless otherwise indicated, all numbers used herein to express quantities, dimensions, and so forth should be understood as being modified in all instances by the term "about." In this application, the use of the singular includes the plural unless specifically stated otherwise, and use of the terms "and" and "or" means "and/or" unless otherwise indicated. Moreover, the use of the term "including," as well as other forms, such as "includes" and "included," should be considered non-exclusive. Also, terms such as "element" or "component" encompass both elements and components comprising one unit and elements and components that comprise more than one unit, unless specifically stated otherwise.

[0022] Various embodiments as described herein - while embodying (in some cases) software products, computer-performed methods, and/or computer systems - represent tangible, concrete improvements to existing technological areas, including, without limitation, headband-based electronic device technology, headband-based biosensor technology, and/or the like. In other aspects, some embodiments can improve the functioning of user equipment or systems themselves (e.g., headband-based electronic device systems, headband-based biosensor systems, etc.), for example, by receiving, using a computing system, first electrophysiological ("EP") sensor data from a first electrode disposed on a first portion of a headband portion of a headband-based biosensor system, the received first EP sensor data from the first electrode comprising first mixed signal data that superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to the first electrode, the two or more different types of EP sensors each comprising one of an electroencephalography ("EEG") sensor, an electrooculography ("EOG") sensor, an electromyography ("EMG") sensor, or an electrocardiography ("ECG") sensor; applying, using the computing system, signal processing to the received first EP sensor data to decompose the first mixed signal data into two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; analyzing, using the computing system, at least one of the two or more decomposed, distinct sensor signal data each individually to perceive at least one biological and/or psychological state or condition of a user who was wearing the headband-based biosensor system when the first EP sensor data was collected, wherein the biological state or condition comprises at least one of physiological, neurological, or cognitive state or condition; analyzing, using the computing system, the at least one of the two or more decomposed, distinct sensor signal data in a correlated manner with at least one of one or more other decomposed, distinct sensor signal data or one or more non-EP sensor data to determine whether the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user or a false reading; and based on a determination that the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user, sending, using the computing system, data regarding the perceived at least one biological and/or psychological state or condition of the user to at least one user device; and/or the like.
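The perceive-individually-then-verify-in-a-correlated-manner logic described above can be sketched as follows. This is a toy illustration only: the "drowsy" rule, the thresholds, and the contamination heuristic are all hypothetical, not taken from the disclosure:

```python
def perceive_drowsiness(relative_alpha_power):
    """Perceive a candidate state from one decomposed signal alone
    (hypothetical rule: elevated EEG alpha power suggests drowsiness)."""
    return "drowsy" if relative_alpha_power > 0.6 else "alert"

def verify_state(perceived, blink_rate_hz, motion_level):
    """Cross-check the perceived state against other decomposed and
    non-EP signals: rapid blinking (EOG) or heavy motion (IMU) implies
    the EEG estimate was likely contaminated, so the perceived state is
    treated as a false reading (hypothetical thresholds)."""
    if perceived == "drowsy" and (blink_rate_hz > 0.5 or motion_level > 0.3):
        return "false reading"
    return perceived

state = perceive_drowsiness(0.75)  # 'drowsy' from the EEG estimate alone
print(verify_state(state, blink_rate_hz=0.1, motion_level=0.05))  # drowsy
print(verify_state(state, blink_rate_hz=1.2, motion_level=0.05))  # false reading
```

Only states that survive the correlated cross-check would then be reported to the user device, matching the final step of the paragraph above.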

[0023] In particular, to the extent any abstract concepts are present in the various embodiments, those concepts can be implemented as described herein by devices and systems that involve novel functionality, such as the use of one or more electrodes that each collect raw sensor signal data from each of two or more different types of EP sensors as mixed signal data, where subsequent decomposition of the mixed signal data into its constituent signals provides for a more robust and more compact form factor for the headband-based biosensor system; and the use of a single contact point (e.g., an electrode) that collects different signals for the different EP sensors, which enables collection of all these types of signals while avoiding the numerous potential pressure points that would accompany a larger number of electrodes (thereby contributing to the comfort of wearing the headband-based biosensor system); and/or the like, to name a few examples that extend beyond mere conventional computer processing operations. These functionalities can produce tangible results outside of the implementing computer system, including, merely by
way of example, comfortable use of the headband-based biosensor system during day or night and during active use or while resting (by using a smaller number of electrodes that each collect sensor data for different types of EP sensors, resulting in fewer, and better-designed placement of, potential pressure points due to the electrodes themselves) while enabling acquisition of data from specific regions of the brain and/or acquisition of data from specific facial muscle groups that may be implemented in a manner that is robust, sustainable, accurate, and/or precise, at least some of which may be observed or measured by users, head-based biosensor device manufacturers, and/or manufacturers of head-based electronics devices that include biosensor data monitoring.

[0024] In an aspect, a method may comprise receiving, using a computing system, first electrophysiological ("EP") sensor data from a first electrode disposed on a first portion of a headband portion of a headband-based biosensor system, the received first EP sensor data from the first electrode comprising first mixed signal data that superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to the first electrode, the two or more different types of EP sensors each comprising one of an electroencephalography ("EEG") sensor, an electrooculography (EOG) sensor, an electromyography ("EMG") sensor, or an electrocardiography ("ECG") sensor, and/or the like. The method may also comprise applying, using the computing system, signal processing to the received first EP sensor data to decompose the first mixed signal data into two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; and analyzing, using the computing system, at least one of the two or more decomposed, distinct sensor signal data each individually to perceive at least one biological and/or psychological state or condition of a user who was wearing the headband-based biosensor system when the first EP sensor data was collected. In some cases, the biological state or condition comprises at least one of physiological, neurological, or cognitive state or condition, and/or the like. The method perceives the at least one state or condition by determining that the sensor signals indicate a significant likelihood that the state or condition exists. The method may further comprise analyzing, using the computing system, the at least one of the two or more decomposed, distinct sensor signal data in a correlated manner with at
least one of one or more other decomposed, distinct sensor signal data or one or more non-EP sensor data to determine whether the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user or a false reading; and, based on a determination that the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user, sending, using the computing system, data regarding the perceived at least one biological and/or psychological state or condition of the user to at least one user device.

[0025] In some embodiments, the computing system may comprise at least one of a microprocessor, a microcontroller, a digital signal processor, a processor of the headband-based biosensor system, a processor of one or more user devices among the at least one user device, a server computer over a network, a cloud-based computing system over a network, or a distributed computing system, and/or the like. In some instances, the at least one user device may each comprise one of a smart phone, a mobile phone, a tablet computer, a wearable device, a laptop computer, a desktop computer, a dedicated user interface ("UI") device associated with the headband-based biosensor system, or a dedicated controller device associated with the headband-based biosensor system, and/or the like. In some cases, the at least one user device may each be associated with one of the user who was wearing the headband-based biosensor system when the first EP sensor data was collected, a family member of the user, a friend of the user, a guardian of the user, one or more medical professionals providing medical care to the user, or one or more other designated entities, and/or the like. In some instances, when the headband-based biosensor system is worn by the user, the headband portion may be wrapped around a forehead of the user and above both ears of the user, or the like. In some cases, the headband portion may be made of one or more materials comprising at least one of polyurethane, thermoplastic polyurethane ("TPU"), silicone, or polycarbonate ("PC"), and/or the like.

[0026] According to some embodiments, the method may further comprise receiving, using the computing system, first non-EP sensor data from each of one or more first non-EP sensors, the received first non-EP sensor data comprising the one
or more non-EP sensor data; and analyzing, using the computing system, the received first non-EP sensor data individually, wherein perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving at least one biological and/or psychological state or condition of the user based at least in part on individual analysis of the at least one of the two or more decomposed, distinct sensor signal data and based at least in part on individual analysis of the first non-EP sensor data. In some instances, the one or more first non-EP sensors may each comprise at least one of a photoplethysmography ("PPG") sensor, an inertial measurement unit ("IMU") sensor, an accelerometer, a gyroscope, a sound sensor, a microphone, a temperature sensor, a moisture sensor, a sweat sensor, an oximeter, a heart rate sensor, a blood pressure sensor, or a light sensor, and/or the like. In some cases, the one or more first non-EP sensors may comprise at least one of one or more non-EP sensors that are disposed within or on the headband-based biosensor system or one or more non-EP sensors that are disposed external, yet communicatively coupled, to the headband-based biosensor system.

[0027] In some instances, perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving the at least one biological and/or psychological state or condition of the user based at least in part on at least one of: one or more detected events comprising at least one of one or more one-time events, one or more recurring events, one or more short duration events, one or more long duration events, one or more instantaneous events, or two or more concurrent events, and/or the like, the one or more detected events corresponding to at least one of time, frequency, time-frequency, or latent representations, and/or the like, of the at least one of the two or more decomposed, distinct sensor signal data, the first non-EP sensor data, or a combination of the at least one of the two or more decomposed, distinct sensor signal data and the first non-EP sensor data, and/or the like; one or more identified patterns in the at least one of the two or more decomposed, distinct sensor signal data, the first non-EP sensor data, or the combination of the at least one of the two or more decomposed, distinct sensor signal data and the first non-EP sensor data, and/or the like, that are indicative of at least one of one or more cognitive conditions, one or more health conditions, one or more diseases, or one or more physiological phenomena, and/or the like; or a
comparison of biological data of the user with corresponding biological data of other users belonging to at least one of one or more general populations, one or more sub-groups, one or more regional groups, one or more ethnic groups, one or more gender groups, or one or more age groups, and/or the like. In some cases, the biological data of each user may comprise the at least one of the two or more decomposed, distinct sensor signal data, the first non-EP sensor data, or the combination of the at least one of the two or more decomposed, distinct sensor signal data and the first non-EP sensor data, and/or the like.

[0028] In some embodiments, the method may further comprise: receiving, using the computing system, second EP sensor data from each of one or more second electrodes disposed on corresponding one or more second portions of the headband portion of the headband-based biosensor system, the one or more second electrodes being separate from the first electrode, the received second EP sensor data from each of the one or more second electrodes comprising one or more second mixed signal data that each superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to a corresponding second electrode among the one or more second electrodes; applying, using the computing system, signal processing to the received second EP sensor data to decompose the second mixed signal data from each second electrode into two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; and individually analyzing, using the computing system, the two or more decomposed, distinct sensor signal data corresponding to each second electrode, wherein perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving at least one biological and/or psychological state or condition of the user based at least in part on individual analysis of the at least one of the two or more decomposed, distinct sensor signal data and based at least in part on individual analysis of the two or more decomposed, distinct sensor signal data corresponding to each second electrode.

[0029] In some cases, high fidelity of at least one of the first EP sensor data, the second EP sensor data, or the one or more non-EP sensor data, and/or the like, regardless of motion of the user, orientation of a head of the user, or the headband-based biosensor system being pressed up against the head of the user, may be achieved based at least in part on at least one of: one or more first algorithms configured to evaluate channel quality of at least one of a first channel corresponding to the first electrode or one or more second channels corresponding to the one or more second electrodes; one or more second algorithms configured to perform at least one of selecting, referencing, or rejecting one or more portions or components of at least one of the first channel corresponding to the first electrode or one or more second channels corresponding to the one or more second electrodes; one or more third algorithms configured to reduce or suppress at least one of signal artifacts or signal noise in at least one of the first EP sensor data, the second EP sensor data, or the one or more non-EP sensor data, and/or the like; hardware-based amplification of signal data in at least one of the first EP sensor data, the second EP sensor data, or the one or more non-EP sensor data, and/or the like, to improve at least one of signal quality or signal noise suppression, and/or the like; or acquisition of signal data from at least one of the first electrode or one or more second electrodes over a frequency range between 0 Hz and a frequency value corresponding to half of a corresponding sampling rate of each electrode; and/or the like.
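A non-limiting sketch of the channel-quality and channel-selection algorithms listed above: the disclosure does not give acceptance criteria, so this example assumes a simple variance gate, rejecting channels that are flat (suggesting a disconnected electrode) or implausibly large (suggesting saturation or gross artifact):

```python
# Hypothetical channel-quality gate; thresholds and the variance criterion
# are illustrative assumptions, not taken from the disclosure.

def channel_variance(samples):
    """Population variance of one channel's samples."""
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

def select_channels(channels, min_var=1e-6, max_var=1e4):
    """Keep channels whose variance falls in a plausible range;
    reject flat (disconnected) or saturated/noisy channels."""
    good = {}
    for name, samples in channels.items():
        v = channel_variance(samples)
        if min_var <= v <= max_var:
            good[name] = samples
    return good

channels = {
    "first": [0.1, -0.2, 0.15, -0.1],    # plausible EP channel
    "second_flat": [0.0, 0.0, 0.0, 0.0],  # disconnected electrode
}
assert list(select_channels(channels)) == ["first"]
```

The final acquisition option in the paragraph, sampling from 0 Hz up to half the sampling rate, is simply the Nyquist band: for example, a 250 Hz sampling rate implies usable content from 0 Hz to 125 Hz.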

[0030] In some instances, at least one of the one or more first algorithms, the one or more second algorithms, the one or more third algorithms, one or more first models corresponding to the one or more first algorithms, one or more second models corresponding to the one or more second algorithms, or one or more third models corresponding to the one or more third algorithms, and/or the like, may be at least one of developed or updated using at least one of supervised machine learning, unsupervised machine learning, semi-supervised machine learning, self-supervised machine learning, reinforcement-based machine learning, statistical modeling, heuristic-based machine learning, or rule-based machine learning, and/or the like.

[0031] According to some embodiments, any sensor signal noise in at least one of the first EP sensor data or the one or more non-EP sensor data due to motion of the user may be reduced based at least in part on at least one of filtering, adaptive filtering, independent component analysis, principal component analysis, blind source separation analysis, or machine learning-based noise filtering, and/or the like. In some cases, the machine learning-based noise filtering may be based on one of generative models, unsupervised models, semi-supervised models, supervised models, or self-supervised models, and/or the like. In some instances, the motion of the user may comprise at least one of micro motions of the user or macro motions of the user, and/or the like.
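One of the listed techniques, adaptive filtering, can be sketched as a one-tap least-mean-squares (LMS) canceller that uses a motion reference (e.g., an accelerometer channel) to estimate and subtract motion artifact from an EP channel. The single-tap design, step size, and signal values below are illustrative assumptions:

```python
# Illustrative one-tap LMS adaptive filter: the weight w learns how much of
# the motion reference leaks into the primary EP channel, and the error
# signal (primary minus estimated artifact) is the cleaned output.

def lms_cancel(primary, reference, mu=0.05):
    w = 0.0
    out = []
    for d, x in zip(primary, reference):
        y = w * x          # current artifact estimate
        e = d - y          # error = cleaned sample
        w += mu * e * x    # LMS weight update
        out.append(e)
    return out

# Toy case: the "EP" channel is pure motion artifact (twice the reference),
# so a converged canceller should drive the output toward zero.
ref = [1.0, -1.0] * 50
noisy = [2.0 * r for r in ref]
cleaned = lms_cancel(noisy, ref, mu=0.2)
assert abs(cleaned[-1]) < 0.01 < abs(cleaned[0])
```

With a real mixture, the cleaned output would retain the physiological signal while suppressing only the component correlated with the motion reference.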

[0032] In some embodiments, the headband portion may further comprise one or more straps that may be configured to tighten the headband portion around a head of the user in a closed band, wherein sensor signal variances of at least one of the first EP sensor data or the one or more non-EP sensor data due to loose fit of the headband portion around the head of the user compared with a tight fit of the headband portion around the head of the user are compensated based at least in part on at least one of: one or more fourth algorithms configured to monitor signal quality in the at least one of the first EP sensor data or the one or more non-EP sensor data; one or more fifth algorithms configured to reduce noise in the at least one of the first EP sensor data or the one or more non-EP sensor data; placement of the first electrode on the first portion of the headband portion, wherein the first portion may be determined to result in minimal sensor signal variances regardless of loose fit or tight fit of the headband portion around the head of the user; or form factor of the first electrode that is configured to provide contact with skin on the head of the user regardless of loose fit or tight fit of the headband portion around the head of the user; and/or the like.

[0033] According to some embodiments, the signal processing of the received first EP sensor data may comprise multimodal processing comprising at least one of real-time processing (e.g., within one millisecond), near-real-time processing, online processing, offline processing, on-microcontroller-unit ("on-MCU") processing, on-user-device processing, or on-server processing, and/or the like. In some instances, the individual and correlated analysis of the at least one of the two or more decomposed, distinct sensor signal data may comprise multimodal analysis comprising at least one of real-time analysis, near-real-time analysis, online analysis, offline analysis, on-microcontroller-unit ("on-MCU") analysis, on-user-device analysis, or on-server analysis, and/or the like.

[0034] In some embodiments, the method may further comprise activating, using the computing system, at least one stimulation device disposed on one or more third
portions of the headband portion, each stimulation device comprising one of an electrical stimulation device, a vibration-based stimulation device, an audio-based stimulation device, or a light-based stimulation device, and/or the like. Each stimulation device may be configured to stimulate a physiological response in the user when activated. In some cases, activating the at least one stimulation device may be performed after determining that the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user. In some instances, the method may also comprise receiving, using the computing system, updated first EP sensor data from the first electrode; applying, using the computing system, signal processing to the received updated first EP sensor data to decompose updated mixed signal data into updated two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; analyzing, using the computing system, at least one of the updated two or more decomposed, distinct sensor signal data each individually; analyzing, using the computing system, at least one of the updated two or more decomposed, distinct sensor signal data in a correlated manner with at least one of one or more other updated decomposed, distinct sensor signal data or one or more updated non-EP sensor data; determining, using the computing system, whether and to what extent the perceived at least one biological and/or psychological state or condition of the user has changed; and sending, using the computing system, data regarding any changes to the perceived at least one biological and/or psychological state or condition of the user to the at least one user device; and/or the like.
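The perceive-stimulate-remeasure-report sequence described above can be condensed into a small control loop. All function names below are hypothetical placeholders for the unspecified processing steps, provided as a non-limiting sketch only:

```python
# Hypothetical control loop for paragraph [0034]: perceive a state,
# activate a stimulation device, re-measure, and report any change.

def monitor_and_stimulate(read_sensor, perceive, stimulate, notify):
    before = perceive(read_sensor())   # perceived state pre-stimulation
    stimulate()                        # e.g., vibration or audio stimulus
    after = perceive(read_sensor())    # updated perceived state
    if after != before:
        notify({"before": before, "after": after})
    return before, after

# Toy run: a drowsiness score that drops after stimulation.
events = []
samples = iter([[0.9], [0.1]])
b, a = monitor_and_stimulate(
    read_sensor=lambda: next(samples),
    perceive=lambda s: "drowsy" if s[0] > 0.5 else "alert",
    stimulate=lambda: None,
    notify=events.append,
)
assert (b, a) == ("drowsy", "alert") and len(events) == 1
```

In a deployed system each callback would wrap the decomposition and correlated-analysis steps described earlier, and `notify` would send the change to the at least one user device.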

[0035] In another aspect, a headwear-based biosensor system may comprise: a first portion; a first electrode disposed on the first portion; at least one processor; and a non-transitory computer readable medium communicatively coupled to the at least one processor. The non-transitory computer readable medium may have stored thereon computer software comprising a set of instructions that, when executed by the at least one processor, causes the at least one processor to: receive first electrophysiological ("EP") sensor data from the first electrode, the received first EP sensor data from the first electrode comprising first mixed signal data that superimposes raw sensor signal data from each of two or more different types of EP
sensors that are communicatively coupled to the first electrode, the two or more different types of EP sensors each comprising one of an electroencephalography ("EEG") sensor, an electrooculography (EOG) sensor, an electromyography ("EMG") sensor, or an electrocardiography ("ECG") sensor, and/or the like; apply signal processing to the received first EP sensor data to decompose the first mixed signal data into two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; analyze at least one of the two or more decomposed, distinct sensor signal data each individually to perceive at least one biological and/or psychological state or condition of a user who was wearing the headwear-based biosensor system when the first EP sensor data was collected, wherein the biological state or condition comprises at least one of physiological, neurological, or cognitive state or condition, and/or the like; analyze the at least one of the two or more decomposed, distinct sensor signal data in a correlated manner with at least one of one or more other decomposed, distinct sensor signal data or one or more non-EP sensor data to determine whether the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user or a false reading; and, based on a determination that the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user, send data regarding the perceived at least one biological and/or psychological state or condition of the user to at least one user device.

[0036] In some embodiments, the headwear-based biosensor system may be one of a stand-alone electronic device having a headband form factor, affixed to one or more portions of an inner surface of an article of headwear, removably attachable to the one or more portions of the inner surface of the article of headwear, or integrated within the inner surface of the article of headwear, and/or the like. In some instances, the article of headwear may comprise one of a headband, a hat, a cap, a toque, a beanie, a beret, a bonnet, a helmet, a hairband, a pair of goggles, a headset, a virtual reality ("VR") headset, an augmented reality ("AR") headset, a mixed reality ("MR") headset, or a bandana, and/or the like.

[0037] According to some embodiments, the at least one user device may each comprise one of a smart phone, a mobile phone, a tablet computer, a wearable device, a laptop computer, a desktop computer, a dedicated user interface ("UI") device associated with the headband-based biosensor system, or a dedicated controller device associated with the headband-based biosensor system, and/or the like. In some cases, the at least one user device may each be associated with one of the user who was wearing the headband-based biosensor system when the first EP sensor data was collected, a family member of the user, a friend of the user, a guardian of the user, one or more medical professionals providing medical care to the user, or one or more other designated entities, and/or the like.

[0038] In yet another aspect, a method may comprise receiving, using a computing system, first electrophysiological ("EP") sensor data from a first electrode disposed on a first portion of a headband portion of a headband-based biosensor system, the received first EP sensor data from the first electrode comprising first mixed signal data that superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to the first electrode, the two or more different types of EP sensors each comprising one of an electroencephalography ("EEG") sensor, an electrooculography (EOG) sensor, an electromyography ("EMG") sensor, or an electrocardiography ("ECG") sensor, and/or the like. The method may also comprise analyzing, using the computing system, the first mixed signal data to perceive at least one biological and/or psychological state or condition of a user who was wearing the headband-based biosensor system when the first EP sensor data was collected, wherein the biological state or condition comprises at least one of physiological, neurological, or cognitive state or condition. The method may further comprise analyzing, using the computing system, the first mixed signal data in a correlated manner with one or more non-EP sensor data to determine whether the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user or a false reading; and, based on a determination that the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user, sending, using the
computing system, data regarding the perceived at least one biological and/or psychological state or condition of the user to at least one user device.

[0039] In some embodiments, the computing system may comprise at least one of a microprocessor, a microcontroller, a digital signal processor, a processor of the headband-based biosensor system, a processor of one or more user devices among the at least one user device, a server computer over a network, a cloud-based computing system over a network, or a distributed computing system, and/or the like. In some instances, the at least one user device may each comprise one of a smart phone, a mobile phone, a tablet computer, a wearable device, a laptop computer, a desktop computer, a dedicated user interface ("UI") device associated with the headband-based biosensor system, or a dedicated controller device associated with the headband-based biosensor system, and/or the like.

[0040] The method may further comprise receiving, using the computing system, second EP sensor data from each of one or more second electrodes disposed on corresponding one or more second portions of the headband portion of the headband-based biosensor system, the one or more second electrodes being separate from the first electrode, the received second EP sensor data from each of the one or more second electrodes comprising one or more second mixed signal data that each superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to a corresponding second electrode among the one or more second electrodes; and analyzing, using the computing system, the second mixed signal data from each second electrode, wherein perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving at least one biological and/or psychological state or condition of the user based at least in part on analysis of the first mixed signal data and based at least in part on analysis of the second mixed signal data from each second electrode.

Exemplary Embodiments

[0041] Certain exemplary embodiments are described below. Each of the described embodiments can be implemented separately or in any combination, as would be
appreciated by one skilled in the art. Thus, no single embodiment or combination of embodiments should be considered limiting.

[0042] Figs. 1-7 illustrate some of the features of the system and apparatus for implementing a headband-based electronics device, and, more particularly, to systems and apparatuses for implementing a headband with biosensor data monitoring including, but not limited to, a headband-based biosensor system and a bone conduction speaker system, as referred to above. The methods, systems, and apparatuses illustrated by Figs. 1-7 refer to examples of different embodiments that include various components and steps, which can be considered alternatives or which can be used in conjunction with one another in the various embodiments. The description of the illustrated methods, systems, and apparatuses shown in Figs. 1-7 is provided for purposes of illustration and should not be considered to limit the scope of the different embodiments.

[0043] With reference to the figures, Figs. 1A and 1B are schematic diagrams illustrating various non-limiting examples 100 and 100' of a system for implementing a headband with biosensor data monitoring, in accordance with various embodiments.

[0044] In the non-limiting embodiment of Fig. 1A, system 100 may comprise a headband-based biosensor system 105, which may include a headband portion 110. The headband portion 110 may be configured to wrap around at least a portion of a head 155a of a user 155 when the headband-based biosensor system 105 is worn by the user 155. The headband portion 110 may include, without limitation, one or more bone conduction speaker assemblies 115a-115n (collectively, "bone conduction speaker assemblies 115" or the like), at least one processor 135, at least one wireless transceiver 140, one or more sensors 145, and one or more stimulation devices 150 (optional), and/or the like.

[0045] In some embodiments, system 100 may further comprise one or more servers 135', one or more user devices 170a-170n (collectively, "user devices 170" or the like), user device(s) 175, and media content server(s) 180 and corresponding database(s) 185, or the like. Headband-based biosensor system 105 may communicatively couple with each of the one or more user devices 170 via the at least one wireless transceiver 140 (as depicted in Fig. 1 by the lightning bolt
symbol(s) between each user device 170 and the at least one transceiver 140), and may communicatively couple with the server 135', the user device(s) 175, and/or the media content server(s) 180 (and corresponding database(s) 185) via network(s) 190 and via the at least one wireless transceiver 140 (as depicted in Fig. 1 by the lightning bolt symbol(s) between network(s) 190 and the at least one transceiver 140). Similarly, user device(s) 170 may communicatively couple with the server 135', the user device(s) 175, and/or the media content server(s) 180 (and corresponding database(s) 185) via network(s) 190 (as depicted in Fig. 1 by the lightning bolt symbol(s) between network(s) 190 and each user device 170). The at least one transceiver 140 is capable of communicating using protocols including, but not limited to, at least one of Bluetooth communications protocol, Wi-Fi communications protocol, or other 802.11 suite of communications protocols, ZigBee communications protocol, Z-Wave communications protocol, or other 802.15.4 suite of communications protocols, cellular communications protocol (e.g., 3G, 4G, 4G LTE, 5G, etc.), or other suitable communications protocols, and/or the like.

[0046] In some instances, network(s) 190 may each include, without limitation, one of a local area network ("LAN"), including, without limitation, a fiber network, an Ethernet network, a Token-Ring network, and/or the like; a wide-area network ("WAN"); a wireless wide area network ("WWAN"); a virtual network, such as a virtual private network ("VPN"); the Internet; an intranet; an extranet; a public switched telephone network ("PSTN"); an infra-red network; a wireless network, including, without limitation, a network operating under any of the IEEE 802.11 suite of protocols, the Bluetooth protocol known in the art, and/or any other wireless protocol; and/or any combination of these and/or other networks. In a particular embodiment, the network(s) 190 may include an access network of the service provider (e.g., an Internet service provider ("ISP")). In another embodiment, the network(s) 190 may include a core network of the service provider and/or the Internet.

[0047] According to some embodiments, the one or more bone conduction speaker assemblies 115 may be disposed on corresponding one or more first portions of an inner surface of the headband portion 110. Each bone conduction speaker assembly
115 may include, but is not limited to, a bone conduction speaker device 120 and a deformable speaker housing 125a. The bone conduction speaker device 120 may include, without limitation, a vibration plate 120a and a transducer 120b. The vibration plate 120a may include, but is not limited to, a proximal portion facing the head of the user when the headband-based biosensor system 105 is worn by the user 155 and a distal portion. The transducer 120b may include, without limitation, a proximal portion, one or more side portions, and a distal portion facing the inner surface of the headband portion 110, the proximal portion of the transducer 120b being mechanically coupled to the distal portion of the vibration plate 120a. The deformable speaker housing 125a may enclose a portion of the transducer 120b with an air gap 125b between an interior surface of the deformable speaker housing 125a and each of the one or more side portions and the distal portion of the transducer 120b (referred to herein as "the hanging chamber" speaker design, or the like). The deformable speaker housing 125a may include, but is not limited to, a deformable material configured to compress toward the transducer 120b within the air gap 125b when the headband portion 110 is pressed up against the head 155a of the user 155 when the headband-based biosensor system 105 is worn by the user 155, without the pressed-up headband portion 110 causing a shift in alignment of the corresponding vibration plate 120a relative to the head 155a of the user 155.

[0048] In some embodiments, each bone conduction speaker assembly 115 may further include a padding material 130 disposed between the vibration plate 120a and the head 155a of the user 155 when the headband-based biosensor system 105 is worn by the user 155. In some instances, the headband portion 110 may be made of one or more materials including, without limitation, at least one of polyurethane, thermoplastic polyurethane ("TPU"), silicone, or polycarbonate ("PC"), and/or the like. In some cases, the padding material 130 may be made of one or more materials including, but not limited to, at least one of polyurethane, TPU, silicone, or PC, and/or the like. According to some embodiments, even if the padding material 130 is made of a material that is similar to the material of the headband portion 110, the material of the padding material 130 may be designed to have a lower hardness rating (e.g., TPU with hardness of 35-40 Shore A, or the like, although not limited to such) compared with that of the material for the headband portion (e.g., TPU with
hardness of 60 Shore A, or the like, although not limited to such). In some instances, the deformable material of the deformable speaker housing 125a may be made of one or more materials including, without limitation, at least one of polyamide ("PA") or acrylonitrile butadiene styrene ("ABS"), and/or the like.

[0049] In some cases, each bone conduction speaker device 120 may include, but is not limited to, one of a single-element speaker or a multi-element speaker. In some instances, the multi-element speaker may be configured for adjusting at least one of phase modulation, phase cancellation, or beating effects, and/or the like.

[0050] According to some embodiments, the one or more bone conduction speaker assemblies 115 may be part of a stereo speaker system including, but not limited to, one of a stereo 2.0 channel speaker system, a stereo 2.1 channel speaker system, a stereo 5.1 channel speaker system, or a stereo 7.1 channel speaker system, and/or the like, in which case the processor(s) 135 may utilize synchronization algorithms among the speaker components to enable optimal surround sound, or the like. In some cases, each transducer 120b may include a cross-sectional shape including, without limitation, one of an ellipse, a circle, a rectangle, or other polygon, and/or the like. In some instances, the headband portion 110 may further include one or more straps (shown in Figs. 2A and 2C, or the like) that are configured to tighten the headband portion 110 around the head 155a of the user 155 in a closed band, where tightening of the headband portion 110 around the head 155a of the user 155 enables at least one of consistent alignment or contact pressure of each vibration plate 120a relative to the head 155a of the user 155.
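By way of non-limiting illustration only, one simple form of synchronization among speaker components is latency alignment: estimating the relative delay between two channels and compensating for it. The following sketch (in Python) is not part of the disclosed embodiments, and the function names are hypothetical; it estimates an integer-sample offset by cross-correlation and advances the lagging channel accordingly.

```python
import numpy as np

def estimate_offset(ref: np.ndarray, ch: np.ndarray) -> int:
    """Estimate how many samples `ch` lags behind `ref` by locating the
    peak of their full cross-correlation."""
    corr = np.correlate(ch, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

def align(ref: np.ndarray, ch: np.ndarray) -> np.ndarray:
    """Advance `ch` by its estimated lag so both channels line up
    (vacated samples are zero-padded)."""
    lag = estimate_offset(ref, ch)
    out = np.zeros_like(ch)
    if lag >= 0:
        out[: len(ch) - lag] = ch[lag:]
    else:
        out[-lag:] = ch[: len(ch) + lag]
    return out
```

In a real system the offsets would stem from the wireless transport and per-speaker buffering; here they are simply measured from the waveforms themselves.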

[0051] In some embodiments, when the headband-based biosensor system 105 is worn by the user 155, each vibration plate 120a and corresponding one of the one or more first portions of the inner surface of the headband portion 110 may align with one of parietal bone, temporal bone, sphenoid bone, or frontal bone, and/or the like, of the head 155a of the user 155, while minimizing pressure contact with blood vessels and nerves on the head 155a of the user 155.

[0052] According to some embodiments, when the headband-based biosensor system 105 is worn by the user 155, the headband portion 110 may be wrapped around a forehead of the user 155 and above both ears of the user 155.

[0053] In some embodiments, the headband-based biosensor system 105 may further include a pair of ear coverings 160. Each ear covering 160 may be attachable to the headband portion 110 and may be configured to cover an ear of the user 155 when the headband-based biosensor system 105 is worn by the user 155. In some instances, each ear covering 160 may be one of removably attachable to the headband portion 110, permanently attachable to the headband portion 110, or integrated with the headband portion 110, and/or the like. For instance, each ear covering 160 may attach to the headband portion 110 via one of one or more sewn-on threads, glue or other adhesive, one or more hook-and-loop fasteners (e.g., Velcro®, or the like), one or more button-based fasteners, one or more magnetic fasteners, one or more wire fasteners, one or more wire clasps, one or more hinges, one or more screws, one or more clips, or other fasteners, and/or the like. In some cases, the pair of ear coverings 160 may be made of one or more materials, including, but not limited to, at least one of cloth, foam, polyurethane, thermoplastic polyurethane ("TPU"), silicone, polycarbonate ("PC"), polyamide ("PA"), or acrylonitrile butadiene styrene ("ABS"), and/or the like. In some instances, each ear covering 160 may further include, without limitation, at least one of a sound isolation material, a passive noise cancellation device, or an active noise cancellation device, and/or the like (collectively, "sound isolation and/or cancellation material 160a" or "sound isolation and/or cancellation device 160a" or the like). In some cases, the headband-based biosensor system 105 may further include, but is not limited to, one or more acoustic speakers 160b, each disposed in one of a surface portion of the headband portion 110 or a portion of one of the pair of ear coverings 160.
In some embodiments, the acoustic speakers 160b may be configured to be directional such that sound is focused toward the opening of the ear(s) of the user, without the acoustic speakers being placed in the ear(s) (as this results in undesired pressure contact within and on the ear of the user when pressed up against the user (such as when sleeping or lying with their head against a pillow, cushion, or other surface), or the like). According to some embodiments, directional sound may be achieved using micro-speaker arrays or the like that are configured to generate sound fields of desired directionality and shape, or the like. In some instances, each acoustic speaker 160b may have a form factor that is configured to reduce contact
pressure on the user 155 when pressed up against the head 155a of the user 155. In some cases, the one of the surface portion of the headband portion 110 or the portion of one of the pair of ear coverings 160 may be selected to minimize pressure contact between each acoustic speaker 160b and blood vessels and nerves on the head 155a of the user 155 when the headband-based biosensor system 105 is worn by the user 155 and when a portion of the headband portion 110 corresponding to the one of the surface portion of the headband portion 110 or the portion of one of the pair of ear coverings 160 is being pressed up against the head 155a of the user 155.

[0054] According to some embodiments, the headband-based biosensor system 105 may further include, but is not limited to, at least one eye covering 165 including a pair of eye coverings 165 each configured to cover an eye of the user 155 or a single eye covering 165 configured to cover both eyes of the user 155, each eye covering 165 being attachable to the headband portion 110. In some instances, the at least one eye covering 165 may be one of removably attachable to the headband portion 110, permanently attachable to the headband portion 110, or integrated with the headband portion 110, and/or the like. For instance, the at least one eye covering 165 may attach to the headband portion 110 via one of one or more sewn-on threads, glue or other adhesive, one or more hook-and-loop fasteners (e.g., Velcro®, or the like), one or more button-based fasteners, one or more magnetic fasteners, one or more wire fasteners, one or more wire clasps, one or more hinges, one or more screws, one or more clips, or other fasteners, and/or the like. In some cases, the at least one eye covering 165 may be made of one or more materials including, without limitation, at least one of cloth, foam, polyurethane, thermoplastic polyurethane ("TPU"), silicone, polycarbonate ("PC"), polyamide ("PA"), or acrylonitrile butadiene styrene ("ABS"), and/or the like. In some instances, the at least one eye covering 165 may further include display device(s) 165a, including, but not limited to, at least one of a head-mounted flexible display device, a head-mounted micro-projector display device, a head-mounted flexible semi-transparent display device, a virtual reality ("VR") display device, an augmented reality ("AR") display device, or a mixed reality ("MR") display device, and/or the like.

[0055] In some embodiments, the headband-based biosensor system 105 may include, without limitation, one or more sensors 145 disposed within the headband
portion 110. The one or more sensors 145 may include one or more electrophysiological ("EP") sensors 145a, which may each include, without limitation, at least one of an electroencephalography ("EEG") sensor 145b, an electrooculography ("EOG") sensor 145c, an electromyography ("EMG") sensor 145d, an electrocardiography ("ECG") sensor 145e, and/or the like. The one or more sensors 145 may each further include at least one of a photoplethysmography ("PPG") sensor 145g, an inertial measurement unit ("IMU") sensor 145h, or one or more other sensors 145i, and/or the like (collectively, "non-EP sensors" or the like). In some instances, the one or more other sensors 145i may each include, but is not limited to, at least one of an accelerometer, a gyroscope, a sound sensor, a microphone, a temperature sensor, a moisture sensor, a sweat sensor, an oximeter, a heart rate sensor, a blood pressure sensor, or a light sensor, and/or the like.

[0056] According to some embodiments, the headband-based biosensor system 105 may further include, but is not limited to, at least one stimulation device 150 disposed on one or more second portions of the inner surface of the headband portion 110, each stimulation device 150 may include, without limitation, one of an electrical stimulation device, a vibration-based stimulation device, an audio-based stimulation device, or a light-based stimulation device, and/or the like, and each stimulation device 150 may be configured to stimulate a physiological response in the user 155 when activated.

[0057] In some embodiments, the headband-based biosensor system 105 may further include, but is not limited to, the at least one wireless transceiver 140 disposed within the headband portion 110; and the at least one processor 135 disposed within the headband portion 110. The at least one processor 135 may be configured to: receive, via the at least one wireless transceiver 140, wireless audio signals from a user device (e.g., user device(s) 170a-170n and/or 175 or media content server(s) via user device(s) 170a-170n and/or 175, or the like) that is external to, and separate from, the headband-based biosensor system 105; and control playback of audio content through each bone conduction speaker 120 that is housed within each of the one or more bone conduction speaker assemblies 115, based on the received wireless audio signals.
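By way of non-limiting illustration, the receive-and-route behavior described above (wireless audio frames arriving via the transceiver and being played through each bone conduction speaker) might be sketched as follows; the Python interface names such as `PlaybackController` are hypothetical and form no part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BoneConductionSpeaker:
    """Stand-in for one bone conduction speaker unit; in a real device,
    `play` would drive the transducer (interface is hypothetical)."""
    played: List[bytes] = field(default_factory=list)

    def play(self, frame: bytes) -> None:
        self.played.append(frame)

class PlaybackController:
    """Receives audio frames from a wireless transceiver callback and fans
    them out to every bone conduction speaker assembly."""

    def __init__(self, speakers: List[BoneConductionSpeaker]) -> None:
        self.speakers = speakers

    def on_wireless_frame(self, frame: bytes) -> None:
        # In the described system this would be invoked for each frame
        # delivered by the wireless transceiver 140.
        for speaker in self.speakers:
            speaker.play(frame)
```

A real implementation would additionally handle codec decoding, buffering, and per-channel gain; this sketch only shows the routing step.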

[0058] In another aspect, a bone conduction speaker system (although not shown in Fig. 1) may comprise one or more bone conduction speaker assemblies (similar to bone conduction speaker assemblies 115a-115n, or the like) disposed on corresponding one or more first portions of an inner surface of an article of headwear (not shown in Fig. 1). Each bone conduction speaker assembly may include, without limitation, a bone conduction speaker device (similar to bone conduction speaker device 120, or the like) and a deformable speaker housing (similar to deformable speaker housing 125a, or the like). The bone conduction speaker device may include, but is not limited to, a vibration plate (similar to vibration plate 120a, or the like) and a transducer (e.g., transducer 120b, or the like). The vibration plate may include, but is not limited to, a proximal portion facing the head (similar to head 155a, or the like) of the user (similar to user 155, or the like) when the article of headwear is worn by the user and a distal portion. The transducer may include, without limitation, a proximal portion, one or more side portions, and a distal portion facing the inner surface of the article of headwear. The proximal portion of the transducer may be mechanically coupled to the distal portion of the vibration plate. The deformable speaker housing may enclose a portion of the transducer with an air gap (similar to air gap 125b, or the like) between an interior surface of the deformable speaker housing and each of the one or more side portions and the distal portion of the transducer.
The deformable speaker housing may include, but is not limited to, a deformable material configured to compress toward the transducer within the air gap when the article of headwear is pressed up against the head of the user when the article of headwear is worn by the user without the pressed-up article of headwear causing a shift in alignment of the corresponding vibration plate relative to the head of the user.

[0059] In some embodiments, the article of headwear may include, without limitation, one of a headband-based biosensor system, a headband, a hat, a cap, a toque, a beanie, a beret, a bonnet, a helmet, a hairband, a pair of goggles, a headset, a virtual reality ("VR") headset, an augmented reality ("AR") headset, a mixed reality ("MR") headset, or a bandana, and/or the like. In some cases, each bone conduction speaker assembly may be one of affixed to, removably attachable to, or integrated with the inner surface of the article of headwear. In some instances, each
transducer may include, but is not limited to, a cross-sectional shape comprising one of an ellipse, a circle, a rectangle, or other polygon, and/or the like.

[0060] According to some embodiments, each bone conduction speaker assembly may further include a padding material (similar to padding material 130, or the like) disposed between the vibration plate and the head of the user when the article of headwear is worn by the user. In some cases, the padding material may be made of one or more materials including, but not limited to, at least one of polyurethane, thermoplastic polyurethane ("TPU"), silicone, or polycarbonate ("PC"), and/or the like. In some instances, the deformable material of the deformable speaker housing may be made of one or more materials including, without limitation, at least one of polyamide ("PA") or acrylonitrile butadiene styrene ("ABS"), and/or the like.

[0061] In some embodiments, the bone conduction speaker system may further include, but is not limited to, at least one wireless transceiver (similar to wireless transceiver(s) 140, or the like); and at least one processor (similar to processor(s) 135, or the like) communicatively coupled with the at least one wireless transceiver and each of the one or more bone conduction speaker assemblies. The at least one processor may be configured to: receive, via the at least one wireless transceiver, wireless audio signals from a user device (e.g., user device(s) 170a-170n and/or 175, or the like) that is external to, and separate from, the bone conduction speaker system; and control playback of audio content through each bone conduction speaker that is housed within each of the one or more bone conduction speaker assemblies, based on the received wireless audio signals.

[0062] Turning to the sensor functionalities of the headband-based biosensor system 105, in operation, the at least one processor 135, the server 135', or other computing systems (collectively, "computing system" or the like) may receive first EP sensor data from a first electrode disposed on a first portion of a headband portion of a headband-based biosensor system, the received first EP sensor data from the first electrode including first mixed signal data that superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to the first electrode (e.g., two or more different types of the EP sensors 145a, or the like). In some instances, the other computing systems may include, but are not limited to, at least one of a microprocessor, a microcontroller, a
digital signal processor, a processor of one or more user devices among the at least one user device, a cloud-based computing system over a network, or a distributed computing system, and/or the like.

[0063] The computing system may apply signal processing to the received first EP sensor data to decompose the first mixed signal data into two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; may analyze at least one of the two or more decomposed, distinct sensor signal data each individually to perceive at least one biological and/or psychological state or condition of a user who was wearing the headband-based biosensor system when the first EP sensor data was collected; and may analyze the at least one of the two or more decomposed, distinct sensor signal data in a correlated manner with at least one of one or more other decomposed, distinct sensor signal data or one or more non-EP sensor data to determine whether the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user or a false reading. Herein, biological state or condition may include, without limitation, at least one of physiological, neurological, or cognitive state or condition, and/or the like. Based on a determination that the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user, the computing system may send data regarding the perceived at least one biological and/or psychological state or condition of the user to at least one user device (e.g., user device(s) 170a-170n and/or 175, via transceiver(s) 140 and/or network(s) 190, or the like).
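The disclosure does not prescribe a particular decomposition technique. As a non-limiting illustration only, and under the simplifying assumption that the superimposed EP modalities occupy largely distinct frequency bands, the mixed signal data can be split by masking FFT bins; the band names, cutoff frequencies, and sampling rate below are assumptions for the sketch, not part of the disclosure.

```python
import numpy as np

def decompose_bands(mixed: np.ndarray, fs: float, bands: dict) -> dict:
    """Split a mixed EP signal into per-band components by zeroing FFT bins
    outside each band (a crude stand-in for the decomposition step)."""
    spectrum = np.fft.rfft(mixed)
    freqs = np.fft.rfftfreq(len(mixed), d=1.0 / fs)
    parts = {}
    for name, (lo, hi) in bands.items():
        masked = np.where((freqs >= lo) & (freqs < hi), spectrum, 0.0)
        parts[name] = np.fft.irfft(masked, n=len(mixed))
    return parts

fs = 250.0  # Hz; an assumed sampling rate
t = np.arange(0, 2, 1.0 / fs)
# Synthetic mixture: slow EOG-like drift, EEG-band oscillation, EMG-like tone.
mixed = (np.sin(2 * np.pi * 1.0 * t)
         + 0.5 * np.sin(2 * np.pi * 10.0 * t)
         + 0.2 * np.sin(2 * np.pi * 60.0 * t))
parts = decompose_bands(mixed, fs, {"EOG": (0.1, 5.0),
                                    "EEG": (5.0, 30.0),
                                    "EMG": (30.0, 100.0)})
```

Real EP modalities overlap in frequency, which is why the text also contemplates approaches such as source separation and learned models rather than fixed band splitting.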

[0064] In some instances, the at least one user device may each comprise one of a smart phone, a mobile phone, a tablet computer, a wearable device, a laptop computer, a desktop computer, a dedicated user interface ("UI") device associated with the headband-based biosensor system, or a dedicated controller device associated with the headband-based biosensor system, and/or the like. In some cases, the at least one user device may each be associated with one of the user who was wearing the headband-based biosensor system when the first EP sensor data was collected, a family member of the user, a friend of the user, a guardian of the user, one or more medical professionals providing medical care to the user, or one or more other designated entities, and/or the like. In some instances, when the headband-based biosensor system is worn by the user, the headband portion may be wrapped around a forehead of the user and above both ears of the user, or the like.

[0065] According to some embodiments, the computing system may receive first non-EP sensor data from each of one or more first non-EP sensors (e.g., PPG sensor 145g, IMU sensor 145h, and/or other sensor(s) 145i, or the like), the received first non-EP sensor data comprising the one or more non-EP sensor data; may analyze the received first non-EP sensor data individually, where perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving at least one biological and/or psychological state or condition of the user based at least in part on individual analysis of the at least one of the two or more decomposed, distinct sensor signal data and based at least in part on individual analysis of the first non-EP sensor data. In some cases, the one or more first non-EP sensors may include, without limitation, at least one of one or more non-EP sensors that are disposed within or on the headband-based biosensor system 105 or one or more non-EP sensors that are disposed external, yet communicatively coupled, to the headband-based biosensor system 105.

[0066] In some instances, perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving the at least one biological and/or psychological state or condition of the user based at least in part on at least one of: one or more detected events comprising at least one of one or more one-time events, one or more recurring events, one or more short duration events, one or more long duration events, one or more instantaneous events, or two or more concurrent events, and/or the like, the one or more detected events corresponding to at least one of time, frequency, time-frequency, or latent representations, and/or the like, of the at least one of the two or more decomposed, distinct sensor signal data, the first non-EP sensor data, or a combination of the at least one of the two or more decomposed, distinct sensor signal data and the first non-EP sensor data, and/or the like; one or more identified patterns in the at least one of the two or more decomposed, distinct sensor signal data, the first non-EP sensor data, or the combination of the at least one of the two or more decomposed, distinct sensor
signal data and the first non-EP sensor data, and/or the like, that are indicative of at least one of one or more cognitive conditions, one or more health conditions, one or more diseases, or one or more physiological phenomena, and/or the like; or a comparison between biological data of the user with corresponding biological data of other users belonging to at least one of one or more general populations, one or more sub-groups, one or more regional groups, one or more ethnic groups, one or more gender groups, or one or more age groups, and/or the like. In some cases, the biological data of each user may include, but is not limited to, the at least one of the two or more decomposed, distinct sensor signal data, the first non-EP sensor data, or the combination of the at least one of the two or more decomposed, distinct sensor signal data and the first non-EP sensor data, and/or the like.

[0067] In some embodiments, the computing system may receive second EP sensor data from each of one or more second electrodes disposed on corresponding one or more second portions of the headband portion of the headband-based biosensor system, the one or more second electrodes being separate from the first electrode, the received second EP sensor data from each of the one or more second electrodes comprising one or more second mixed signal data that each superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to a corresponding second electrode among the one or more second electrodes. The computing system may apply signal processing to the received second EP sensor data to decompose the second mixed signal data from each second electrode into two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; and may individually analyze the two or more decomposed, distinct sensor signal data corresponding to each second electrode, where perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving at least one biological and/or psychological state or condition of the user based at least in part on individual analysis of the at least one of the two or more decomposed, distinct sensor signal data and based at least in part on individual analysis of the two or more decomposed, distinct sensor signal data corresponding to each second electrode. Herein, "individually analyzing two or more signals or data" refers to analyzing each of the two or more signals or data in an individual manner.

[0068] In some cases, high fidelity of at least one of the first EP sensor data, the second EP sensor data, or the one or more non-EP sensor data, and/or the like, regardless of motion of the user, orientation of a head of the user, or the headband-based biosensor system being pressed up against the head of the user, may be achieved based at least in part on at least one of: one or more first algorithms configured to evaluate channel quality of at least one of a first channel corresponding to the first electrode or one or more second channels corresponding to the one or more second electrodes; one or more second algorithms configured to perform at least one of selecting, referencing, or rejecting one or more portions or components of at least one of the first channel corresponding to the first electrode or one or more second channels corresponding to the one or more second electrodes; one or more third algorithms configured to reduce or suppress at least one of signal artifacts or signal noise in at least one of the first EP sensor data, the second EP sensor data, or the one or more non-EP sensor data, and/or the like; hardware-based amplification of signal data in at least one of the first EP sensor data, the second EP sensor data, or the one or more non-EP sensor data, and/or the like, to improve at least one of signal quality or signal noise suppression, and/or the like; or acquisition of signal data from at least one of the first electrode or one or more second electrodes over a frequency range between 0 Hz and a frequency value corresponding to half of a corresponding sampling rate of each electrode; and/or the like.
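As a non-limiting illustration of the "one or more first algorithms" that evaluate channel quality, a toy check might reject flatlined (disconnected) channels and channels that spend too much time at the amplifier rails. The thresholds below are hypothetical placeholders; the actual criteria are not specified in the text.

```python
import numpy as np

def channel_quality_ok(x: np.ndarray,
                       flat_tol: float = 1e-6,
                       clip_level: float = 1.0,
                       sat_frac: float = 0.2) -> bool:
    """Toy channel-quality gate for an EP channel `x` (normalized units):
    reject flatlined channels and channels that are frequently clipped."""
    if np.std(x) < flat_tol:  # flatline, e.g., an open or detached electrode
        return False
    if np.mean(np.abs(x) >= clip_level) > sat_frac:  # amplifier saturation
        return False
    return True
```

Channels failing such a gate could then be candidates for the rejecting or re-referencing steps described by the "second algorithms" above.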

[0069] In some instances, at least one of the one or more first algorithms, the one or more second algorithms, the one or more third algorithms, one or more first models corresponding to the one or more first algorithms, one or more second models corresponding to the one or more second algorithms, or one or more third models corresponding to the one or more third algorithms, and/or the like, may be at least one of developed or updated using at least one of supervised machine learning, unsupervised machine learning, semi-supervised machine learning, self-supervised machine learning, reinforcement-based machine learning, statistical modeling, heuristic-based machine learning, or rule-based machine learning, and/or the like.

[0070] According to some embodiments, any sensor signal noise in at least one of the first EP sensor data or the one or more non-EP sensor data due to motion of the
user may be reduced based at least in part on at least one of filtering, adaptive filtering, independent component analysis, principal component analysis, blind source separation analysis, or machine learning-based noise filtering, and/or the like. In some cases, the machine learning-based noise filtering may be based on one of generative models, unsupervised models, semi-supervised models, supervised models, or self-supervised models, and/or the like. In some instances, the motion of the user may include, but is not limited to, at least one of micro motions of the user (including, without limitation, motion due to breathing, motion due to snoring, eye movement (when the user is awake or during REM sleep, etc.), motion due to heart beating, etc.) or macro motions of the user (including, without limitation, walking, running, or performing other physical activities, movement during sleep, etc.), and/or the like.
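As a non-limiting sketch of the adaptive filtering option, a single-tap least-mean-squares (LMS) canceller can subtract the portion of a sensor signal that is correlated with a motion reference (e.g., IMU output). The coupling gain of 0.8, step size, and signal shapes below are hypothetical test values, not parameters from the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

def lms_cancel(d: np.ndarray, x: np.ndarray, mu: float = 0.01):
    """Single-tap LMS adaptive noise canceller. `d` is the motion-corrupted
    sensor signal, `x` a motion reference. Returns the error signal (the
    cleaned estimate) and the learned coupling weight."""
    w = 0.0
    e = np.empty_like(d)
    for i in range(len(d)):
        y = w * x[i]            # current estimate of the motion artifact
        e[i] = d[i] - y         # cleaned sample
        w += mu * e[i] * x[i]   # LMS weight update
    return e, w

n = 5000
x = rng.standard_normal(n)                          # motion reference
s = 0.3 * np.sin(2 * np.pi * 0.01 * np.arange(n))   # underlying EP signal
d = s + 0.8 * x                                     # corrupted sensor signal
cleaned, w = lms_cancel(d, x)
```

As the weight `w` learns the artifact coupling, the error signal `cleaned` converges toward the underlying EP signal; multi-tap variants handle frequency-dependent coupling.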

[0071] In some embodiments, the headband portion may further comprise one or more straps (as shown in Figs. 2A and 2C, or the like, although not limited to the type of straps as shown) that may be configured to tighten the headband portion around a head of the user in a closed band. In such cases, sensor signal variances of at least one of the first EP sensor data or the one or more non-EP sensor data due to loose fit of the headband portion around the head of the user compared with a tight fit of the headband portion around the head of the user may be compensated based at least in part on at least one of: one or more fourth algorithms configured to monitor signal quality in the at least one of the first EP sensor data or the one or more non-EP sensor data; one or more fifth algorithms configured to reduce noise in the at least one of the first EP sensor data or the one or more non-EP sensor data; placement of the first electrode on the first portion of the headband portion, where the first portion may be determined to result in minimal sensor signal variances regardless of loose fit or tight fit of the headband portion around the head of the user; or form factor of the first electrode that is configured to provide contact with skin on the head of the user regardless of loose fit or tight fit of the headband portion around the head of the user; and/or the like.

[0072] According to some embodiments, the signal processing of the received first EP sensor data may comprise multimodal processing comprising at least one of real-time processing, near-real-time processing, online processing, offline processing, on-microcontroller-unit ("on-MCU") processing, on-user-device processing, or on-server processing, and/or the like. In some instances, the individual and correlated analysis of the at least one of the two or more decomposed, distinct sensor signal data may comprise multimodal analysis comprising at least one of real-time analysis, near-real-time analysis, online analysis, offline analysis, on-MCU analysis, on-user-device analysis, or on-server analysis, and/or the like.

[0073] In some embodiments, the computing system may activate at least one stimulation device (e.g., stimulation device(s) 150, or the like) that is disposed on one or more third portions of the headband portion. In some cases, activating the at least one stimulation device may be performed after determining that the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user. In some instances, the computing system may receive updated first EP sensor data from the first electrode; may apply signal processing to the received updated first EP sensor data to decompose updated mixed signal data into updated two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors; may analyze at least one of the updated two or more decomposed, distinct sensor signal data each individually; may analyze at least one of the updated two or more decomposed, distinct sensor signal data in a correlated manner with at least one of one or more other updated decomposed, distinct sensor signal data or one or more updated non-EP sensor data; may determine whether and to what extent the perceived at least one biological and/or psychological state or condition of the user has changed; and may send data regarding any changes to the perceived at least one biological and/or psychological state or condition of the user to the at least one user device; and/or the like.
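As a non-limiting illustration of the closed loop described above (activate stimulation only after the perceived state is confirmed as an actual rather than false reading, then continue monitoring for changes), a minimal sketch follows; the state label "drowsy" and the interface names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class StimulationDevice:
    """Stand-in for one of the stimulation devices 150
    (hypothetical interface)."""
    active: bool = False

    def activate(self) -> None:
        self.active = True

    def deactivate(self) -> None:
        self.active = False

def closed_loop_step(perceived_state: str, confirmed: bool,
                     device: StimulationDevice) -> bool:
    """One pass of the loop: stimulate only when the perceived state has
    been confirmed as actual; otherwise leave the device off."""
    if confirmed and perceived_state == "drowsy":
        device.activate()
    else:
        device.deactivate()
    return device.active
```

In the described system, the confirmation flag would come from the correlated analysis of decomposed EP and non-EP sensor data, and each pass would be followed by re-acquisition of updated sensor data.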

[0074] As an alternative to decomposing the EP sensor data and then analyzing based on the decomposed EP sensor data, according to some embodiments, analysis (whether correlated or not) may be performed on the mixed signal data corresponding to each electrode. In such cases, algorithms, machine learning approaches, and/or learning models (as described herein) may be used to facilitate as well as enhance the results of analysis based on the mixed signal data.

[0075] Alternatively, or additionally, perception of at least one biological and/or psychological state or condition of the user may be performed based on correlated analysis of the EP sensor data (regardless of whether decomposed or mixed signal data is used) and the non-EP sensor data, rather than based on individual analysis of each type of data.

[0076] Although Fig. 1 is directed to a headband-based biosensor system, the various embodiments are not so limited, and a headwear-based biosensor system having similar functionality and at least some of the components of the above-described headband-based biosensor system may be used in a similar manner as described with respect to the headband-based biosensor system. In some embodiments, the headwear-based biosensor system may be one of a stand-alone electronic device having a headband form factor, affixed to one or more portions of an inner surface of an article of headwear, removably attachable to the one or more portions of the inner surface of the article of headwear, or integrated within the inner surface of the article of headwear, and/or the like. In some instances, the article of headwear may include, but is not limited to, one of a headband, a hat, a cap, a toque, a beanie, a beret, a bonnet, a helmet, a hairband, a pair of goggles, a headset, a virtual reality ("VR") headset, an augmented reality ("AR") headset, a mixed reality ("MR") headset, or a bandana, and/or the like.

[0077] Referring to the non-limiting example 100' of Fig. 1B, sensor data from at least one of EP sensor(s) 145a, PPG sensor(s) 145g, IMU sensor(s) 145h, microphone(s) 145j, and/or other sensor(s) 145i, or the like may be processed and/or analyzed by processor 135", in some cases, using digital signal processor 135a", and, in some instances, using inference algorithms 135b" (similar to first through fifth algorithms as described above), which may be part of the code that is stored in memory 135c", or the like. According to some embodiments, prior to being processed and/or analyzed by processor 135", the sensor data may be processed through one or more hardware-based signal quality improvement mechanisms 195, including, but not limited to, hardware-based amplification circuits (including, but not limited to, operational amplifier ("op-amp") circuits, or the like) to amplify signal data in the sensor data to improve at least one of signal quality or signal noise suppression, and/or the like. As shown in Fig. 1B, processor 135" may be at least

one of local processing 135 (with DSP 135a and inference algorithms 135b, or the like) on a local device (e.g., either headband-based biosensor system 105 and/or user device 170 among user devices 170a-170n of Fig. 1A, or the like) or remote processing 135' (with DSP 135a', inference algorithms 135b', and database 135c', or the like) on a cloud-based computing system(s) over a network 190', or the like. The components, features, and functionalities of example 100' of Fig. 1B are otherwise similar, if not identical, to the corresponding components, features, and functionalities of example 100 of Fig. 1A; thus, the description of such corresponding components, features, and functionalities of example 100 of Fig. 1A is applicable to the components, features, and functionalities of example 100' of Fig. 1B.

[0078] These and other functions of the system 100 (and its components) are described in greater detail below with respect to Figs. 2-6.

[0079] Figs. 2A-2D (collectively, "Fig. 2") are schematic diagrams illustrating various non-limiting examples 200 and 200' of a headband with biosensor data monitoring, in accordance with various embodiments.

[0080] As shown in the non-limiting embodiment 200 of Figs. 2A-2C, a headband-based biosensor system 205 (similar to headband-based biosensor system 105 of Fig. 1, or the like) may include a headband portion 210 (similar to headband portion 110 of Fig. 1, or the like). Headband portion 210 may include, without limitation, a front cover 210a (also referred to as "outer surface" or the like), a back cover 210b (also referred to as "inner surface" or the like), a backstrap 210c (also referred to as "strap(s)" or the like), a power button 210d, one or more volume buttons 210e, or a cable port 210f (which may include, but is not limited to, a USB port or other data and/or power supply port, or the like), and/or the like. Headband-based biosensor system 205 may further include, without limitation, one or more bone conduction speakers or speaker assemblies 215 (similar to bone conduction speaker assemblies 115a-115n of Fig. 1, or the like). Headband-based biosensor system 205 may further include one or more sensors 245, including, but not limited to, one or more spider electrodes 245a and one or more behind-the-ear electrodes 245b, or the like. In some embodiments, the one or more spider electrodes 245a, which may be made of a flexible conductive material (including, without limitation, conductive silicone, or the like), may be configured to extend through any head hair of a user wearing the

headband-based biosensor system 205 to contact skin on the head of the user, and may be configured to provide optimal conductivity (through said head hair and with the contacted skin) while providing comfort and stability. In some embodiments, the one or more spider electrodes 245a may further be configured to achieve quick stabilization times (e.g., on the order of less than 1 minute stabilization) of signal detection. Similarly, the one or more behind-the-ear electrodes 245b may be configured to make contact with the skin on the head of the user that is, as the name suggests, behind the ear(s) of the user.

[0081] According to some embodiments, each of the one or more spider electrodes 245a and the one or more behind-the-ear electrodes 245b may be further configured to monitor, track, and/or collect biosensor data of the user, the biosensor data including, but not limited to, at least one of electroencephalography ("EEG") sensor data, electrooculography ("EOG") sensor data, electromyography ("EMG") sensor data, electrocardiography ("ECG") sensor data, or photoplethysmography ("PPG") sensor data, and/or the like. Although not shown in Figs. 2A-2C, in some embodiments, the one or more sensors 245 may further include, without limitation, at least one of one or more forehead sensors disposed on the back cover 210b (and positioned to align with the forehead of the user when the headband-based biosensor system is worn by the user), one or more motion- and/or orientation-based sensors disposed within the headband portion 210, or one or more other sensors disposed within and/or on a surface of the headband portion 210, and/or the like. The one or more forehead sensors (similar to sensors or electrodes 245c to 245g of Fig. 2D, or the like) may also be configured to monitor, track, and/or collect biosensor data of the user (including, but not limited to, at least one of the EEG sensor data, the EOG sensor data, or the EMG sensor data, and/or the like). Herein, "electrode" may refer to an electrically conductive contact surface that collects electrophysiological signals from the user (in this case, from the head of the user), while "sensor" may refer to one of the sensors 145a-145j in Fig. 1, or the like. In the case that only one sensor is communicatively coupled to one electrode, "electrode" and "sensor" are synonymous and interchangeable. However, in the case that many sensors are communicatively coupled to one electrode, each "sensor" should be individually referenced, while the "electrode" refers to an electrically conductive

contact surface that collects and mixes (or superimposes) the individual sensor signals from each of the communicatively coupled "sensors."

[0082] The raw data collected by each of these sensors 245 may contain all of these biosensor data superimposed on one another. A processor(s) (similar to processor(s) 135 of Fig. 1, or the like) of the headband-based biosensor system 205 and/or the processor(s) of an external device (e.g., user device(s) 170a-170n and/or 175 of Fig. 1, or the like) may apply algorithms for filtering, for data cleaning, and/or for otherwise signal processing the collected raw data (which conventional devices would treat as noise signals) to extract (and, in some cases, amplify) individual signals each corresponding to the at least one of the EEG sensor data, the EOG sensor data, or the EMG sensor data, and/or the like.
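One simplified way to picture the extraction step described above is frequency-band separation, since EOG, EEG, and EMG activity concentrate in different bands. The sketch below uses a crude FFT mask; the sampling rate, band edges, and synthetic signals are illustrative assumptions rather than details from the application.

```python
import numpy as np

def band_extract(raw, fs, low, high):
    """Isolate one frequency band from raw electrode data via FFT masking."""
    spectrum = np.fft.rfft(raw)
    freqs = np.fft.rfftfreq(raw.size, d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    spectrum[~mask] = 0.0           # zero out everything outside the band
    return np.fft.irfft(spectrum, n=raw.size)

fs = 250.0                          # an assumed EP sampling rate
t = np.arange(0, 4, 1 / fs)
eog = np.sin(2 * np.pi * 0.5 * t)      # slow, EOG-like component
eeg = np.sin(2 * np.pi * 10 * t)       # alpha-band, EEG-like component
emg = 0.5 * np.sin(2 * np.pi * 60 * t) # fast, EMG-like component
raw = eog + eeg + emg                  # what a single electrode records

eeg_est = band_extract(raw, fs, 8.0, 13.0)
# The extracted band tracks the EEG-like component far better than raw does:
print(np.corrcoef(eeg_est, eeg)[0, 1] > 0.99)  # → True
```

A deployed system would use proper filters rather than a hard FFT mask, and (as the description notes) the bands overlap in reality, which is why correlated analysis across channels is also used.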

[0083] In some instances, the one or more motion- and/or orientation-based sensors may include, but are not limited to, at least one of one or more inertial measurement unit ("IMU") sensors, one or more accelerometers, or one or more gyroscopes, and/or the like. The one or more motion- and/or orientation-based sensors may be configured to track motion and/or relative motion as well as orientation of the headband-based biosensor system, and thus the head of the user. Correlation of data associated with motion and/or orientation of the head of the user (via motion and/or orientation of the headband-based biosensor system) with the biosensor data of the user (collected via the biosensors) [collectively, "correlated data" or the like] may be useful in monitoring the health and physiological status of the user. Where audio signals that are output through the bone conduction speakers and/or the acoustic speakers are intended for therapy purposes for the user, such audio signals may be modulated and/or adjusted in response to at least one of the biosensor data, the data associated with motion and/or orientation of the head of the user, and/or the correlated data. In some cases, data associated with motion and/or orientation of the headband-based biosensor system may be used to suppress noise in the biosensor data and/or noise effects in the audio signals (regardless of whether the audio signals are therapy-based audio signals, music, voice signals (such as voice signals for a telephone call, voice over Internet protocol ("VoIP") call, a voice chat via software application

("app") chat, or an audio book, and/or the like), or other audio signals (e.g., sounds of nature, white-noise sounds, etc.), and/or the like).

[0084] In some cases, the one or more other sensors may include, without limitation, at least one of a sound sensor, a microphone, a temperature sensor, a moisture sensor, a sweat sensor, an oximeter, a heart rate sensor, a blood pressure sensor, or a light sensor, and/or the like. The sound sensor and/or the microphone may be configured to monitor breathing sounds of the user, other head-based sounds of the user, and/or ambient sounds. In some instances, the breathing sounds, the other head-based sounds, and/or the ambient sounds may be used as inputs for health monitoring and/or therapy purposes for the user. Alternatively, or additionally, the breathing sounds, the other head-based sounds, and/or the ambient sounds may be used to suppress noise effects in the audio signals (regardless of whether the audio signals are therapy-based audio signals, music, voice signals (such as voice signals for a telephone call, voice over Internet protocol ("VoIP") call, a voice chat via software application ("app") chat, or an audio book, and/or the like), or other audio signals (e.g., sounds of nature, white-noise sounds, etc.), and/or the like). In some cases, the microphone may also be used as an input device for voice control and communication. The temperature sensor may be configured to monitor either temperature of the user or ambient temperature, or both, and resultant temperature data may be used for health monitoring and/or therapy purposes for the user. The moisture sensor may be configured to monitor ambient moisture levels around the user (e.g., around the headband-based biosensor system, etc.) and/or sweat levels of the user, while the sweat sensor may be configured to monitor sweat levels of the user and/or salinity of the sweat of the user. The oximeter may be configured to monitor blood oxygen levels of the user, while the heart rate sensor may be configured to measure the heart rate or pulse of the user, and a pulse oximeter may be configured to perform both functions. The blood pressure sensor may be configured to measure blood pressure of the user. The light sensor may be configured to monitor ambient light conditions, and resultant light sensor data may be used for health monitoring and/or therapy purposes for the user, and/or (in the case that eye covering(s) with a display device are used, such as described above with respect to Fig. 1, or the like) may be used to adjust brightness,

contrast, or other display characteristics of the display device of the eye covering(s) to counter effects of the monitored ambient light conditions, or the like. In some cases, the light sensor may also be used to control and modulate light stimulation implemented by the eye covering(s) with display device (such as described above with respect to Fig. 1, or the like), or the like. Alternatively, or additionally, if the eye covering(s) have light dampening mechanisms, such light dampening mechanisms may be controlled based on ambient light conditions monitored by the light sensor.

[0085] In some embodiments, the power button 210d may be configured to initiate one or more functions. For example, holding the power button 210d for longer than a second (e.g., 1-3 seconds, or the like) may cause the headband-based biosensor system 205 to switch between a powered on state or a powered off state. Alternatively, or additionally, pressing and releasing (e.g., clicking) the power button 210d may cause the headband-based biosensor system 205 to change (or cycle through) a plurality of modes (including, but not limited to, one or more audio playback modes, one or more voice call connection modes, one or more stimulation modes, or the like). Alternatively, or additionally, pressing and releasing (e.g., clicking) both the power button 210d and one of the volume buttons 210e may cause the headband-based biosensor system 205 to change (or cycle through) the plurality of modes in one direction, while pressing and releasing (e.g., clicking) both the power button 210d and the other of the volume buttons 210e may cause the headband-based biosensor system 205 to change (or cycle through) the plurality of modes in the other direction.
According to some embodiments, pressing and releasing (e.g., clicking) one of the volume buttons 210e may cause the headband-based biosensor system 205 to increase perceived volume of the bone conduction speakers 215, while pressing and releasing (e.g., clicking) the other of the volume buttons 210e may cause the headband-based biosensor system 205 to decrease perceived volume of the bone conduction speakers 215. In some instances, holding one of the volume buttons 210e for longer than a second (e.g., 1-3 seconds, or the like) may cause the headband-based biosensor system 205 either to forward through an audio track or to skip the audio track, while holding the other of the volume buttons 210e for longer than a second (e.g., 1-3 seconds, or the like) may cause the headband-based biosensor system 205 either to rewind through an audio track or to

skip back to a previous audio track. The various embodiments are not limited to these particular functions and/or actuation mechanisms of the power button 210d and/or the volume buttons 210e, and may include any suitable set, combination, or other actuation sequences involving the power button 210d and/or the volume buttons 210e for actuating or initiating functions of the headband-based biosensor system 205.
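The button behavior described in paragraph [0085] can be summarized as a small dispatch function. The sketch below is hypothetical firmware-style logic; the action names, the one-second hold threshold, and the mode list are assumptions, not details from the filing.

```python
# Hypothetical sketch of the paragraph [0085] button logic; names, the hold
# threshold, and the mode list are illustrative assumptions.
HOLD_THRESHOLD_S = 1.0
MODES = ["audio_playback", "voice_call", "stimulation"]

def interpret_press(buttons, duration_s, state):
    """Return an action for a press of one or more buttons.

    buttons: frozenset of pressed button names.
    duration_s: how long the buttons were held.
    state: dict with "powered" (bool) and "mode_index" (int); mutated.
    """
    if buttons == frozenset({"power"}):
        if duration_s >= HOLD_THRESHOLD_S:
            state["powered"] = not state["powered"]   # hold: power toggle
            return "power_toggled"
        state["mode_index"] = (state["mode_index"] + 1) % len(MODES)
        return f"mode:{MODES[state['mode_index']]}"   # click: cycle modes
    if buttons == frozenset({"power", "vol_up"}):
        state["mode_index"] = (state["mode_index"] + 1) % len(MODES)
        return f"mode:{MODES[state['mode_index']]}"   # combo: cycle forward
    if buttons == frozenset({"power", "vol_down"}):
        state["mode_index"] = (state["mode_index"] - 1) % len(MODES)
        return f"mode:{MODES[state['mode_index']]}"   # combo: cycle backward
    if buttons == frozenset({"vol_up"}):
        return "track_forward" if duration_s >= HOLD_THRESHOLD_S else "volume_up"
    if buttons == frozenset({"vol_down"}):
        return "track_back" if duration_s >= HOLD_THRESHOLD_S else "volume_down"
    return "ignored"

state = {"powered": False, "mode_index": 0}
print(interpret_press(frozenset({"power"}), 2.0, state))   # → power_toggled
print(interpret_press(frozenset({"vol_up"}), 0.2, state))  # → volume_up
```

As the paragraph notes, the embodiments are not limited to these mappings; the point of the sketch is only that each actuation is a (buttons, duration) pair dispatched to a function.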

[0086] Similarly, as shown in the non-limiting embodiment 200' of Fig. 2D, a headband-based biosensor system 205' (similar to headband-based biosensor system 105 or 205 of Fig. 1 or Figs. 2A-2C, or the like) may include a headband portion 210 (similar to headband portion 110 or 210 of Fig. 1 or Figs. 2A-2C, or the like). Headband portion 210 may include, without limitation, a front cover 210a (also referred to as "outer surface" or the like), a back cover 210b (also referred to as "inner surface" or the like), or a cable port 210f (which may include, but is not limited to, a USB port or other data and/or power supply port, or the like), and/or the like. Headband-based biosensor system 205' may further include, without limitation, one or more bone conduction speakers or speaker assemblies 215 (similar to bone conduction speaker assemblies 115a-115n or 215 of Fig. 1 or Figs. 2A-2C, or the like). Headband-based biosensor system 205' may further include one or more sensors 245, including, but not limited to, one or more spider electrodes 245a, one or more behind-the-ear electrodes 245b, a left forehead sensor 245c, a ground electrode 245d, a reference electrode 245e, a photoplethysmography ("PPG") sensor 245f, and/or a right forehead sensor 245g, and/or the like. In some embodiments, each of the sensors 245c, 245d, 245e, and 245g (collectively, in some cases together with PPG sensor 245f, "forehead sensors" or the like) may be configured to make contact with the skin on the forehead of the user. In some embodiments, each of the one or more spider electrodes 245a, the one or more behind-the-ear electrodes 245b, the left forehead sensor 245c, and the right forehead sensor 245g may be configured to monitor, track, and/or collect biosensor data of the user (including, but not limited to, at least one of the EEG sensor data, the EOG sensor data, or the EMG sensor data, and/or the like), as described in detail above with respect to Figs. 2A-2C.
The ground electrode 245d may be used as a stabilization source during signal acquisition, or the like, while the reference electrode 245e may be used (in some

cases, as the default case) as the reference source to compare with electrical activity as detected by the behind-the-ear sensor(s) 245b, the left forehead sensor 245c, and/or the right forehead sensor 245g, and/or the like. Headband-based biosensor system 205' may further include a microphone 210g.

[0087] According to some embodiments, although not shown, the headband portion may further comprise a flexible printed circuit board ("PCB") substrate(s) disposed between the front cover 210a and the back cover 210b. The processor(s) (similar to processor(s) 135 of Fig. 1A, or the like), the transceiver(s) (similar to transceiver(s) 140 of Fig. 1A, or the like), the memory devices (similar to memory 135c" of Fig. 1B, or the like), signal quality improvement systems (similar to HW-based signal quality improvement mechanism(s) 195 of Fig. 1B, or the like), power supply (not shown; e.g., battery(ies), other portable power sources, etc.), and/or the like, may be disposed on corresponding portions of the flexible PCB substrate(s), which allows flexibility and compression of the headband portion of the headband-based biosensor system without damaging the control circuitry disposed on the flexible PCB substrate(s). In some embodiments, sensors (similar to sensors 145 and 145a-145j of Fig. 1, or the like) may each be one of disposed on the flexible PCB substrate(s), communicatively coupled to one or more components on the flexible PCB substrate(s) via a wired connection to the flexible PCB substrate(s), or wirelessly connected or connectable to the transceiver that is disposed on the flexible PCB substrate(s), or the like. In some cases, one or more electronically actuated switches may be disposed on the flexible PCB substrate(s) to selectively turn off particular sensors and/or to selectively block signal lines leading to the particular sensors or to selectively cause signal lines to be open-circuited between the electrodes (i.e., contact points or channels, etc.) and the particular sensors (effectively providing for "selective deactivation of sensors" or the like). In this manner, at least one of sensor calibration, signal decomposition tests, or signal distinction of signals monitored by other sensors, and/or the like, may be easily performed.

[0088] Many signals - including, but not limited to, brain (or brain wave) signals, eye motion signals, facial muscle signals, head motion signals, head posture signals, and other head-related signals, or the like - are not possible to measure without making

direct contact with the head. For the various embodiments, positions of electrodes are carefully selected so that they can optimally (and comfortably, for the user) capture the brain, eye, and facial muscle signals (e.g., EMG sensor signals, or the like). In some cases, electrode locations may be driven by medical advisory and data-driven studies. In some instances, forehead electrode location(s) enable EEG sensing from the frontal lobe of the brain. Additionally, offset from the center of the forehead enables acquisition of eye movement activity in comparable quality to dedicated EOG electrodes used in clinical studies. Common mode sensing ("CMS") electrode(s) may be placed at the center of the forehead because it is a stable contact point for the reference electrode when the user is active, lying down, or working, and/or the like. Additionally, hair rarely covers this region, so its contact is infrequently disturbed. Electrodes on the sides of the head enable EEG sensing directly over the temporal lobe. Additionally, their proximity to the mastoid bone enables them to act as an additional source for signal referencing. These electrodes sit directly on top of the temporalis muscle group for high-fidelity EMG data acquisition for head-based muscle activity sensing. EEG, EMG, and EOG signals have been compared against those from corresponding FDA-approved clinical equipment, which act as sources of ground truth for improvement and optimization of the EP sensors described herein. Similarly, the position of the PPG sensor is carefully selected to provide robust sensing of heart and respiratory data. Documented experiments were performed to evaluate signal quality at different horizontal and vertical positions along the forehead, and the signals were compared against those from corresponding FDA-approved clinical equipment, which act as sources of ground truth for improvement and optimization of the PPG sensor.
Likewise, the position of the IMU sensor is carefully selected to provide sensing of head movement and respiratory data. The signals were compared against those from corresponding FDA-approved clinical equipment, which act as sources of ground truth for improvement and optimization of the IMU sensor.

[0089] A headband is a form factor that is easy for a user to sleep with when implemented correctly (e.g., with the headband portion of the headband-based biosensor system being well-padded and thin, with stress points appropriately positioned at suitable locations around the head of the user when worn). Sleep

posture may impact sensing quality of electrodes in typical biosensing devices. Careful selection of electrode locations and algorithms built for signal quality sensing and dynamic re-referencing mitigate issues associated with disruptions to electrode subsets. Studies conducted by Applicant demonstrate improvements in algorithm availability and accuracy resulting from these considerations.
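The dynamic re-referencing idea above can be pictured with a short sketch: when a quality check flags electrodes disturbed by sleep posture, the shared reference is rebuilt from only the remaining trusted channels, so the disturbed contact does not corrupt the rest. The quality mask, array shapes, and average-reference scheme below are illustrative assumptions, not the application's actual algorithm.

```python
import numpy as np

def rereference(channels, good):
    """Re-reference each channel to the mean of currently trusted channels.

    channels: (n_channels, n_samples) electrode data.
    good: boolean mask of channels whose contact is currently undisturbed
    (e.g., not pressed into a pillow by sleep posture).
    """
    reference = channels[good].mean(axis=0)   # reference from trusted subset
    return channels - reference

rng = np.random.default_rng(0)
clean = rng.normal(size=(4, 100))
disturbed = clean.copy()
disturbed[2] += 50.0                 # channel 2 loses contact and drifts badly

good = np.array([True, True, False, True])  # quality check flags channel 2
rerefed = rereference(disturbed, good)
# The shared reference is built only from trusted channels, so channel 2's
# drift does not leak into the others:
print(np.allclose(rerefed[0], clean[0] - clean[[0, 1, 3]].mean(axis=0)))  # → True
```

Had the reference been built from all four channels, the 50-unit drift would have shifted every re-referenced channel by 12.5 units, which is the failure mode dynamic re-referencing avoids.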

[0090] The electrodes, according to various embodiments, are designed so that they work well across various demographics, hairstyles, and head shapes of users; this universal compatibility is supported by the mechanical design and form factor of the headband itself, as well as by the electrode and sensor positioning. The various embodiments also accommodate different user behaviors in wearing the headband-based biosensor system - for example, tight versus loose fit - which is addressed by a set of algorithms (including, but not limited to, first through fifth algorithms described above with respect to Fig. 1, or the like) for signal quality monitoring and evaluation, as well as for noise reduction (via hardware, mechanical, and/or software-based approaches). Herein, hardware ("HW") approaches may include, without limitation, three-fold cascaded amplifying ("3CA") approaches. In some cases, mechanical approaches may include, but are not limited to, the hanging chamber design of the bone conduction speaker (as described above with respect to Fig. 1, or the like), placement of electrodes (placed in locations that will generalize well across different head sizes, shapes, etc.), form factor of electrodes (e.g., spider electrodes penetrate into hair and can make connection even if the band is being loosely worn, due to the spider "arms," or the like), and/or the like. The headband is designed to fit various sizes of the head.

[0091] The way that the EMG signal is captured, according to the various embodiments, is different from conventional EMG signal capturing techniques and systems. In particular, in some embodiments, a through-hair sensor (e.g., spider electrode, or the like) is used. For the hardware, mechanical, and/or software-based approaches, sensor data from the IMU and all four channels (or electrodes) may be combined and processed. For example, if the IMU signal is strong, then correlation with the EMG signal may indicate that the EMG signal does not actually correspond to facial EMG. Some application-dependent examples include, but are not limited to:

(A) deriving known events and indicators in the PPG data to inform signal separation for analysis of heart-related information; (B) using movement data (e.g., acquired from the IMU sensor) to suppress noise in EEG, EOG, and EMG signals for sensing and inferencing in dynamic environments; (C) synthesizing information from a subset of signal data for the detection of conditions, diseases, mental states, etc. (explicitly, an example of this is the use of brain, heart, and respiratory data together to detect the presence of obstructive sleep apnea, or the like); (D) referencing IMU sensor data to determine if high-frequency activity is truly EMG signal data or movement-related artifacts; or the like.
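Example (D) can be sketched as a correlation check: high-frequency activity whose envelope tracks the IMU motion magnitude is more plausibly a movement artifact than facial EMG. The envelope window, the 0.7 correlation threshold, and the synthetic signals below are illustrative assumptions, not values from the filing.

```python
import numpy as np

def smoothed_envelope(x, win=25):
    """Rectify-and-average envelope; the window length is an assumption."""
    w = np.ones(win)
    # Normalized moving average of |x| (denominator corrects edge effects).
    return np.convolve(np.abs(x), w, mode="same") / np.convolve(
        np.ones_like(x), w, mode="same")

def looks_like_motion_artifact(emg_channel, imu_magnitude, threshold=0.7):
    """Flag EMG-channel activity whose envelope follows head motion."""
    corr = np.corrcoef(smoothed_envelope(emg_channel), imu_magnitude)[0, 1]
    return bool(corr > threshold)

carrier = np.sin(np.linspace(0, 60 * np.pi, 200))   # fast, EMG-band tone
motion = np.repeat([0.0, 1.0, 0.0], [70, 60, 70])   # head moves mid-recording

artifact = motion * carrier          # activity only while the head moves
genuine = (1.0 - motion) * carrier   # activity only while the head is still

print(looks_like_motion_artifact(artifact, motion))  # → True
print(looks_like_motion_artifact(genuine, motion))   # → False
```

Real decisions would also weigh frequency content and per-channel quality, as the surrounding paragraphs describe, rather than a single correlation.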

[0092] The way that noise caused by human motion (including, but not limited to, micro motions such as breathing and heartbeat and/or macro motions such as walking, etc.) is reduced with HW solutions is different from conventional approaches. In particular, the various embodiments reduce such noise using 3CA (as described above). In some cases, the various embodiments may also utilize software ("SW") approaches, including, but not limited to, at least one of adaptive filtering, independent component analysis, principal component analysis, blind source separation, filtering, or machine learning-based approaches, and/or the like. In some instances, the machine learning-based approaches may include, without limitation, at least one of generative models, unsupervised models, semi-supervised models, supervised models, or self-supervised models, and/or the like. In some embodiments, sensor synthesis (i.e., combining information from various sources) may be used for signal improvement, and/or sensor synthesis may be used for noise suppression, or the like.
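Of the SW approaches listed above, adaptive filtering is the easiest to sketch: a least-mean-squares ("LMS") filter subtracts the part of a contaminated EP channel that is predictable from a motion reference (e.g., the IMU). The step size, filter order, and synthetic signals below are illustrative assumptions, not the application's implementation.

```python
import numpy as np

def lms_denoise(primary, reference, mu=0.01, order=4):
    """Adaptive noise cancellation via LMS.

    Subtracts the component of `primary` predictable from `reference`
    (e.g., IMU-measured motion); the residual is the cleaned signal.
    """
    w = np.zeros(order)
    cleaned = np.zeros_like(primary)
    for i in range(order, len(primary)):
        x = reference[i - order + 1:i + 1][::-1]  # recent reference samples
        noise_est = w @ x                         # predicted motion leakage
        cleaned[i] = primary[i] - noise_est       # error = cleaned output
        w += 2 * mu * cleaned[i] * x              # LMS weight update
    return cleaned

t = np.linspace(0, 2, 1000)
eeg = np.sin(2 * np.pi * 10 * t)        # signal of interest
motion = np.sin(2 * np.pi * 1.2 * t)    # IMU-visible motion
contaminated = eeg + 0.8 * motion       # motion leaks into the EP channel

cleaned = lms_denoise(contaminated, motion)
tail = slice(500, None)                 # after the filter has adapted
err_before = np.mean((contaminated[tail] - eeg[tail]) ** 2)
err_after = np.mean((cleaned[tail] - eeg[tail]) ** 2)
print(err_after < err_before)  # → True
```

The design choice worth noting is that the filter never needs to know the EEG waveform; it only needs a reference correlated with the noise, which is exactly what the IMU provides.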

[0093] A headband form factor enables the following locations on the head of users to be monitored by EP and non-EP sensors: (1) the channel right above the ear may be used to monitor a large muscle group called the temporalis muscles that is part of the masticatory muscle group (including the masseter muscle), which is critical for many mental states (such as stress, meditation, etc.) as well as sleep stages; (2) the channel on the forehead above the eye may be used to monitor electrical signals corresponding to EMG and/or EOG activity (corresponding to eye muscle movement and electrical activity, etc.) that reflect anger, focus, and the like, which are important for predicting mental states of the user; (3) the offset forehead channels

may be used for acquisition of data relevant to lateral eye movement; (4) one or more locations along the headband portion allow for easily capturing breathing sounds that would otherwise be very difficult to capture from other locations (empirical data collected in lab settings support this, demonstrating the ability to capture such signals from various locations on the head and with various HW including, but not limited to, piezoelectric sensors, condenser electret microphones, and micro-electromechanical system ("MEMS")-based microphones, and/or the like); (5) one or more locations along the headband portion (e.g., forehead, or the like) allow for sweat to be captured well; and/or the like.

[0094] The various embodiments provide for multi/single channel sensing. Hereinbelow, "channel" and "electrode" are used interchangeably. In general, a channel corresponds to a dimension of data (for example, an IMU sensor may have 3 channels corresponding to x, y, and z axes, while a PPG sensor may have different channels corresponding to the frequency of the light being measured). In terms of the EP signals, however, an electrode that senses data is considered a channel. In some embodiments, two channels may be used to monitor the left side and the right side of the head of the user or to monitor the forehead region and the temporalis muscle group region, or the like. Alternatively, six channels may be used to monitor the left and right temporalis muscles using two spider electrodes 245a (one on each side for through-hair contact with the skin), two behind-the-ear electrodes 245b (one on each side for skin contact behind the ears), and two forehead electrodes 245c and 245g (one on either side of the forehead), or the like. Alternatively, a single electrode with multiple channels or contact points may be used to communicatively couple these six channels. Algorithms may then be used to extract different information or sensor data signals from each electrode communicatively coupled to multiple sensors (such as the single electrode described above). Here, the use of multiple location contact points (regardless of whether these are implemented as separate, individual channels or as consolidated (or communicatively coupled) channels) enables the headband-based biosensor system to monitor EP data from the user's head regardless of movement of the headband portion of the headband-based biosensor system and/or temporary loss of contact, and/or the like. In some cases, a single contact point (e.g., electrode) that collects different

signals for the different EP sensors enables collection of all these types of signals while avoiding use of numerous potential pressure points with the use of a larger number of electrodes (thereby contributing to the comfort in wearing of the headband-based biosensor system). In this manner, acquisition of data from specific regions of the brain and/or acquisition of data from specific facial muscle groups may be implemented in a manner that is robust, sustainable, accurate, and/or precise.
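The robustness-through-redundancy idea of paragraph [0094] can be sketched as a weighted fusion of redundant contact points: channels that momentarily lose contact are down-weighted so monitoring continues despite headband movement. The quality weights, shapes, and weighting scheme below are hypothetical, not the application's algorithm.

```python
import numpy as np

def fuse_channels(channels, quality):
    """Combine redundant contact points into one robust stream.

    channels: (n_channels, n_samples) redundant recordings of one signal.
    quality: (n_channels, n_samples) per-sample contact-quality weights in
    [0, 1] (e.g., from impedance checks; an assumption here). A weighted
    average ignores channels that momentarily lose contact.
    """
    weights = quality / np.clip(quality.sum(axis=0), 1e-9, None)
    return (weights * channels).sum(axis=0)

true_signal = np.sin(np.linspace(0, 4 * np.pi, 400))
left = true_signal.copy()
right = true_signal.copy()
right[200:] = 0.0                      # right contact lost halfway through

quality = np.ones((2, 400))
quality[1, 200:] = 0.0                 # quality check notices the dropout

fused = fuse_channels(np.vstack([left, right]), quality)
print(np.allclose(fused, true_signal))  # → True
```

With both contacts healthy the fused stream is an average (which also reduces uncorrelated noise); when one drops out, the output falls back to the surviving contact without interruption.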

[0095] The various embodiments may also provide for monitoring of some, if not all, of the following types of head-based signals: (a) Brain signals (or brain wave signals) - based on EEG data; (b) Eye motion signals - based on EOG data; (c) Facial muscle contraction signals - based on EMG data; (d) Head motion and position (e.g., posture) signals - based on IMU data; (e) Saturation of peripheral oxygen ("SpO2"), heart rate ("HR"), and heart rate variability ("HRV") signals - based on PPG data or pulse oximeter data; (f) Breathing and other head-based sound signals - based on microphone data or other sound sensor data, based on vibration-based sensor data, based on electrical sensor data (such as data from a piezoelectric sensor, etc.), based on PPG data, based on IMU data, and/or based on a combination of these sensor data; (g) Breathing motion signals - based on IMU data and/or EMG data; (h) Heart signals - based on ECG data (also known as EKG data) and/or PPG data; (i) Blood pressure signals - based on blood pressure monitor data; (j) Sweat level and/or skin conductivity signals - based on sweat sensor data or moisture sensor data; (k) Temperature signals - based on temperature sensor data; and/or the like. This list of potential sensors and corresponding usable applications to each of the listed biological processes is merely illustrative and not exhaustive.

[0096] The various embodiments may also provide for high fidelity sensing during all times of day, by using at least one of the following: (i) Algorithms for channel quality evaluation; (ii) Algorithms for channel (or partial channel) rejection, selection, and/or referencing; (iii) Algorithms for artifact reduction and noise suppression; (iv) HW-based amplification for noise suppression and signal quality improvement; (v) Acquisition of signals from a large frequency range (e.g., 0 Hz - Hz) (which improves on the monitoring capabilities of conventional head-based systems that appear to ignore lower frequency data).
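
A minimal sketch of items (i) and (ii) above, channel quality evaluation and channel rejection, might look like the following. The metrics, thresholds, and function names here are illustrative assumptions of this sketch, not the application's actual algorithms:

```python
import numpy as np

def channel_quality(x, fs):
    """Score one EP channel with a few simple heuristics (illustrative only).

    Returns quality indicators: a flatline flag (near-zero amplitude range)
    and the fraction of spectral power in the 45-65 Hz mains-noise band.
    Thresholds are placeholder values, not validated criteria.
    """
    x = np.asarray(x, dtype=float)
    ptp = np.ptp(x)  # peak-to-peak range; ~0 suggests a disconnected electrode
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs >= 45) & (freqs <= 65)
    line_ratio = spec[band].sum() / max(spec.sum(), 1e-12)
    return {"flatline": ptp < 1e-6, "line_noise_ratio": float(line_ratio)}

def select_channels(channels, fs, max_line_ratio=0.5):
    """Keep channels that are neither flat nor dominated by line noise."""
    kept = {}
    for name, x in channels.items():
        q = channel_quality(x, fs)
        if not q["flatline"] and q["line_noise_ratio"] <= max_line_ratio:
            kept[name] = x
    return kept
```

In a full system, the rejected channels would be dropped or re-referenced before the decomposition and analysis steps described below.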

[0097] The various embodiments may also provide for multimodal processing of the collected signals, including, but not limited to, at least one of: Real-time; Offline; On MCU; On mobile device; or On server; and/or the like.

[0098] Regarding electrode and sensor location criteria, Applicant has conducted many experiments to validate the locations of all sensors on the headband-based biosensor system, as shown and described in Fig. 2, for example. These locations have been validated by comparing important signal characteristics from the time, frequency, and time-frequency domains to those from clinically validated and FDA approved medical equipment (acting as a source of ground truth). Additionally, sensor placement decisions have also been driven by medical expertise and human biology. As examples, the frontal, temporal, and parietal bones of the head offer optimal locations for placement of sensing equipment on users from a wide demographic. Current electrodes placed over the frontal, temporal, and parietal lobes of the brain enable sensing of activity in these lobes that is highly relevant to cognition, focus, learning, memory, emotion, processing, etc. In addition, the temporalis muscle is active in a wide variety of head-based movement and muscular activities.

[0099] Regarding algorithm design, algorithms have been developed to sense aspects of user health and cognition. These algorithms may be developed to: (a) Detect events (including, but not limited to, one-time, recurring, short, long, instantaneous, and/or the like) that are apparent in the time, frequency, time-frequency, or latent representations of the sensor data; (b) Identify patterns reflective of cognitive or health conditions, diseases, phenomena, and/or the like; (c) Assess a user's biological data in comparison to general populations, sub-groups, etc.; and/or the like. In some cases, these algorithms may be designed using at least one of: manually annotated data; unlabeled data; data that has been annotated automatically with algorithms or semi-automatically by algorithms and manual annotators; statistical and historical data from users, surveys, and/or reports; and/or the like. In some embodiments, algorithms and/or models may be developed using at least one of: Supervised, unsupervised, semi-supervised, self-supervised, or reinforcement learning approaches, and/or the like; Statistical modeling; Heuristic-based approaches; or Rule-based approaches driven by expert or common knowledge, and/or the like.
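
As a toy illustration of item (a) above, a generic threshold-based detector of time-domain events might be sketched as follows; the function name, threshold, and minimum-length parameters are assumptions of this sketch:

```python
import numpy as np

def detect_events(x, threshold, min_len=1):
    """Find contiguous runs where |x| exceeds threshold (a generic stand-in
    for the event detectors described above; parameters are illustrative).

    Returns a list of (start_index, end_index_exclusive) tuples.
    """
    above = np.abs(np.asarray(x, dtype=float)) > threshold
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                      # run begins
        elif not flag and start is not None:
            if i - start >= min_len:       # keep only long-enough runs
                events.append((start, i))
            start = None
    if start is not None and len(above) - start >= min_len:
        events.append((start, len(above)))  # run extends to end of trace
    return events
```

Frequency- or latent-domain detectors would apply the same idea to a transformed representation of the trace rather than the raw samples.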

[0100] These and other functions of the examples 200 and 200' (and their components) are described in greater detail herein with respect to Figs. 1 and 3-6.

[0101] Figs. 3A-3D (collectively, "Fig. 3") are schematic diagrams illustrating a non-limiting example 300 of the bone conduction speaker assembly (and its components) of Figs. 1 and 2, in accordance with various embodiments.

[0102] In the non-limiting embodiment 300 of Figs. 3A-3D, a bone conduction speaker assembly 315 (similar to bone conduction speaker assemblies 115a-115n and 215 of Figs. 1 and 2, or the like) is sandwiched between a tail cap 330 (which corresponds to padding material 130 in Fig. 1, or the like) and (an inner surface) of headband portion 310 of a headband-based biosensor system (such as headband-based biosensor systems 105, 205, and 205' of Figs. 1, 2A-2C, and 2D, or the like). The bone conduction speaker assembly 315 may include, without limitation, a bone conduction speaker device 320 (similar to bone conduction speaker device 120 of Fig. 1, or the like; including, but not limited to, vibration plate 320a and transducer 320b (similar to vibration plate 120a and transducer 120b, respectively, of Fig. 1, or the like)), a deformable speaker housing 325a (similar to deformable speaker housing 125a of Fig. 1, or the like), and an air gap 325b (similar to air gap 125b of Fig. 1, or the like) between an interior surface of the deformable speaker housing 325a and each of one or more side portions and a distal portion of the transducer 320b (e.g., "the hanging chamber" speaker design, or the like). The deformable speaker housing 325a may include, but is not limited to, a deformable material configured to compress toward the transducer 320b within the air gap 325b when the headband portion 310 is pressed up against the head of the user when the headband-based biosensor system is worn by the user, without the pressed-up headband portion 310 causing a shift in alignment of the corresponding vibration plate 320a relative to the head of the user.

[0103] Fig. 3A depicts a perspective view of a portion of headband portion 310 showing the tail cap 330, while Fig. 3B depicts a see-through view of the tail cap 330 showing the bone conduction speaker assembly 315 (and its components) disposed within tail cap 330 and sandwiched between tail cap 330 and the headband portion
310. Fig. 3C depicts an exploded view showing the relative positions of the headband portion 310, the tail cap 330, the bone conduction speaker 320, and the deformable speaker housing 325a. Fig. 3D depicts a cross-section view showing the relative positions of the headband portion 310, the tail cap 330, the bone conduction speaker 320, and the deformable speaker housing 325a, as well as showing the air gap 325b between the transducer 320b and the deformable speaker housing 325a. In the non-limiting embodiment of Fig. 3D, the vibration plate 320a is fixed in place (e.g., using adhesive, or the like) to an interior surface of the tail cap 330 (as also shown with respect to Figs. 3E and 3F below).

[0104] These and other functions of the example 300 (and its components) are described in greater detail herein with respect to Figs. 1, 2, and 4-6.

[0105] Fig. 4 is a diagram illustrating a non-limiting example 400 of a head of a user with labels for potential locations on the head for alignment and positioning of bone conduction speakers and/or sensors, in accordance with various embodiments.

[0106] With reference to Fig. 4, a head 405 of a user is shown with labels numbered 1 through 8 indicating locations or regions 410 on the head 405. In some cases, the locations or regions 410 denote potential locations for which sensors or bone conduction speakers may be aligned or placed.

[0107] In the various embodiments, a headband-based biosensor system (such as headband-based biosensor systems 105, 205, and 205' of Figs. 1, 2A-2C, and 2D, or the like) may have a headband portion (similar to headband portions 110, 210, and 310 of Figs. 1-3, or the like) that wraps around the head 405 of the user, in some cases, overlapping with locations or regions #3 - #7 (410), as shown in Fig. 4, when worn with the headband portion wrapping around the forehead of the user and above both ears of the user. In some embodiments, the headband-based biosensor system is designed such that the bone conduction speakers or bone conduction speaker assembly (similar to bone conduction speaker devices 120 and 320 and bone conduction speaker assemblies 115, 215, and 315 of Figs. 1-3, or the like) are positioned within the headband portion of the headband-based biosensor system to align with, and to make contact with, at least a portion of location or region #4 (410) in Fig. 4. Location or region #4 (also referred to herein as "behind the ear(s)" or the like) provides the bone conduction speaker with contact with the parietal bone
and/or the temporal bone, while minimizing pressing or pressure contact on blood vessels and nerves on the head 405 of the user when the headband portion is pressed up against the head of the user when the headband-based biosensor system is worn by the user (such as when the user is lying down, resting their head on a cushion, pillow, or other surface while sleeping or while resting, or resting their head on a seatback cushion (e.g., headrest portion of an office chair, a car seat, or an airplane seat, etc.)). The structure of the bone conduction speaker assembly (e.g., the hanging chamber speaker design) in conjunction with the deformable materials of the headband-based biosensor system (including the materials of the headband portion, the materials of the tail cap or padding material, and/or the materials of the deformable speaker housing 325a, and/or the like), as described in detail above with respect to Figs. 1-3, also aids in minimizing the pressing or pressure contact.

[0108] In alternative embodiments, the bone conduction speaker assemblies may be modular components that are disposed on corresponding one or more first portions of an inner surface of an article of headwear. In some embodiments, the article of headwear may include, without limitation, one of a headband-based biosensor system, a headband, a hat, a cap, a toque, a beanie, a beret, a bonnet, a helmet, a hairband, a pair of goggles, a headset, a virtual reality ("VR") headset, an augmented reality ("AR") headset, a mixed reality ("MR") headset, or a bandana, and/or the like. In some cases, each bone conduction speaker assembly may be one of affixed to, removably attachable to, or integrated with the inner surface of the article of headwear. In some instances, each transducer may include, but is not limited to, a cross-sectional shape comprising one of an ellipse, a circle, a rectangle, or other polygon, and/or the like. In such embodiments, the modular bone conduction speaker assemblies may be affixed to, removably attachable to, and/or integrated with the inner surface of the article of headwear to align with, and to make contact with, at least a portion of location or region #4 (410) in Fig. 4, for similar reasons as described above.

[0109] Alternatively, the bone conduction speaker assemblies may be modular components that may each be temporarily affixed to the skin of the head 405 of the user (such as, but not limited to, affixing to at least a portion of location or region #4 (410) in Fig. 4 for the reasons described above) via use of temporary adhesive, or the
like. In some instances, the temporary adhesive may include, but is not limited to, adhesives used for medical bandages, medical electrode adhesives, hydrogel adhesives, or stretchable hydrogel adhesives, and/or the like.

[0110] Figs. 5A and 5B (collectively, "Fig. 5") are diagrams illustrating various non-limiting examples 500 and 500' of decomposing mixed signal data into multiple distinct sensor signal data each corresponding to one of the two or more different types of electrophysiological ("EP") sensors, in accordance with various embodiments.

[0111] In the non-limiting examples 500 and 500' of Figs. 5A and 5B, a sample visualization of mixed signal data 505 or 525 is shown. The mixed signal data 505 or 525 contains or superimposes an EEG signal 510 or 530, an EMG signal 515 or 535, and an EOG signal 520 or 540 (i.e., superimposing, in this case, signals from three different types of EP sensors). In accordance with the various embodiments, signal processing is performed on the mixed signal data 505 or 525 to decompose the mixed signal data 505 or 525 into (in these cases) three distinct sensor signals corresponding to the EEG signal 510 or 530, the EMG signal 515 or 535, and the EOG signal 520 or 540, as shown in Fig. 5. Once decomposed, the individual or distinct sensor signals may be analyzed individually (and, in some cases, in a correlated manner) to perceive or identify at least one biological and/or psychological state or condition of the user.
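
As a simplified illustration of this decomposition step, the sketch below splits a mixed trace into rough EOG/EEG/EMG frequency bands with an FFT mask. Real EEG, EOG, and EMG content overlaps in frequency, so this band-split is only a stand-in for the signal processing used in practice (the application elsewhere mentions adaptive filtering and source-separation methods); the band edges and function names are assumptions of this sketch:

```python
import numpy as np

def band_component(x, fs, lo, hi):
    """Extract the [lo, hi] Hz band of x by zeroing FFT bins outside it."""
    X = np.fft.rfft(np.asarray(x, dtype=float))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(X, n=len(x))

def decompose_mixed(x, fs):
    """Split a mixed EP trace into rough EOG / EEG / EMG bands (illustrative)."""
    return {
        "eog": band_component(x, fs, 0.1, 4.0),     # slow eye-movement drifts
        "eeg": band_component(x, fs, 4.0, 30.0),    # typical EEG rhythms
        "emg": band_component(x, fs, 30.0, fs / 2), # high-frequency muscle tone
    }
```

For a mixture of well-separated tones, the three components sum back to the original trace (minus any DC offset), which makes the split easy to sanity-check.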

[0112] For example, analysis of EMG signal 515 in Fig. 5A may result in determination of very low muscular activity, as indicated by the absence of any high amplitude activity throughout the depicted EMG signal 515. On the other hand, analysis of the EOG signal 520 in Fig. 5A may result in determination of rapid eye movements, as indicated by the signal activity toward the end of the depicted EOG signal 520. Correlated analysis of both the EMG signal 515 and the EOG signal 520 may result in perceiving that the user is in the rapid eye movement ("REM") stage of sleep.
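
The correlated EMG/EOG reasoning in this example can be caricatured as a two-threshold rule; the thresholds, labels, and function names below are arbitrary illustrative values chosen for this sketch, not clinical criteria:

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude of a signal segment."""
    return float(np.sqrt(np.mean(np.square(np.asarray(x, dtype=float)))))

def infer_state(emg, eog, emg_quiet=5.0, eog_active=20.0):
    """Toy correlated analysis: low EMG tone plus active eye movement
    suggests REM sleep; high EMG tone suggests wakefulness.
    """
    if rms(emg) >= emg_quiet:   # muscle tone present -> likely awake
        return "wake"
    if rms(eog) >= eog_active:  # atonia + rapid eye movements -> REM
        return "rem"
    return "non-rem"
```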

[0113] In another example, analysis of EMG signal 535 in Fig. 5B may result in determination of muscular activity, as indicated by the high amplitude activity toward the beginning of the depicted EMG signal 535. Similarly, analysis of the EOG signal 540 in Fig. 5B may result in determination of eye movements, as indicated by
the signal deflections toward the beginning and near the middle of the depicted EOG signal 540. Correlated analysis of both the EMG signal 535 and the EOG signal 540 may result in perceiving that the user is in a wakeful state.

[0114] Although Fig. 5 depicts mixed signal data containing three types of EP signals, the various embodiments are not so limited, and the mixed signal data may contain any number of different types of EP signals as desired or as required. In some embodiments, as described above with respect to Fig. 2, selective deactivation of sensors may allow for at least one of sensor calibration, signal decomposition tests, or signal distinction of signals monitored by other sensors, and/or the like, regardless of the number of EP sensors communicatively coupled to an electrode or channel.

[0115] Figs. 6A and 6B (collectively, "Fig. 6") are flow diagrams illustrating a method 600 for implementing a headband with biosensor data monitoring, in accordance with various embodiments. Method 600 of Fig. 6A continues onto Fig. 6B following the circular marker denoted, "A."

[0116] While the techniques and procedures are depicted and/or described in a certain order for purposes of illustration, it should be appreciated that certain procedures may be reordered and/or omitted within the scope of various embodiments. Moreover, while the method 600 illustrated by Fig. 6 can be implemented by or with (and, in some cases, is described below with respect to) the systems, examples, or embodiments 100, 100', 200, 200', 300, 400, 500, and 500' of Figs. 1A, 1B, 2A-2C, 2D, 3, 4, 5A, and 5B, respectively (or components thereof), such methods may also be implemented using any suitable hardware (or software) implementation. Similarly, while each of the systems, examples, or embodiments 100, 100', 200, 200', 300, 400, 500, and 500' of Figs. 1A, 1B, 2A-2C, 2D, 3, 4, 5A, and 5B, respectively (or components thereof), can operate according to the method 600 illustrated by Fig. 6 (e.g., by executing instructions embodied on a computer readable medium), the systems, examples, or embodiments 100, 100', 200, 200', 300, 400, 500, and 500' of Figs. 1A, 1B, 2A-2C, 2D, 3, 4, 5A, and 5B can each also operate according to other modes of operation and/or perform other suitable procedures.

[0117] In the non-limiting embodiment of Fig. 6A, method 600, at block 605, may comprise receiving, using a computing system, first electrophysiological ("EP")
sensor data from a first electrode disposed on a first portion of a headband portion of a headband-based biosensor system. In some cases, the received first EP sensor data from the first electrode may include, without limitation, first mixed signal data that superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to the first electrode, the two or more different types of EP sensors each including, but not limited to, one of an electroencephalography ("EEG") sensor, an electrooculography ("EOG") sensor, an electromyography ("EMG") sensor, or an electrocardiography ("ECG") sensor, and/or the like.

[0118] At optional block 610, method 600 may comprise receiving, using the computing system, second EP sensor data from each of one or more second electrodes disposed on corresponding one or more second portions of the headband portion of the headband-based biosensor system, the one or more second electrodes being separate from the first electrode. In some instances, the received second EP sensor data from each of the one or more second electrodes may include, without limitation, one or more second mixed signal data that each superimposes raw sensor signal data from each of two or more different types of EP sensors that are communicatively coupled to a corresponding second electrode among the one or more second electrodes.

[0119] Method 600 may further comprise, at block 615, applying, using the computing system, signal processing to the received EP sensor data (including at least one of the first EP sensor data (from block 605) or the second EP sensor data (from optional block 610; if applicable), and/or the like) to decompose each mixed signal data (e.g., the corresponding at least one of the first mixed signal data from the first electrode or the second mixed signal data from each second electrode (if applicable), and/or the like) into two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors. Method 600 may further comprise, at block 620, analyzing the sensor data using the computing system; in some cases, this analysis includes individually analyzing at least one of the two or more decomposed, distinct sensor signal data corresponding to the first electrode and/or at least one of the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable).

[0120] Method 600 may further comprise receiving, using the computing system, first non-EP sensor data from each of one or more first non-EP sensors (optional block 625); and analyzing, using the computing system, the received first non-EP sensor data individually (optional block 630).

[0121] At optional block 635a, method 600 may comprise analyzing, using the computing system, the EP sensor signal data corresponding to the first electrode, the sensor data corresponding to each second electrode (if applicable), and the received first non-EP sensor data (if applicable) in a correlated manner. Method 600, at block 635b, may comprise perceiving at least one biological and/or psychological state or condition of a user who was wearing the headband-based biosensor system when the first EP sensor data, the second EP sensor data (if applicable), and/or the first non-EP sensor data (if applicable) were collected, based at least in part on analysis of the sensor signal data corresponding to the first electrode, based at least in part on individual analysis of sensor data corresponding to each second electrode (if applicable), based at least in part on individual analysis of the first non-EP sensor data (if applicable), and/or based at least in part on correlated analysis of the two or more decomposed, distinct sensor signal data corresponding to the first electrode, the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and the received first non-EP sensor data (if applicable).

[0122] At block 640, method 600 may comprise determining whether the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user or a false reading, based at least in part on correlated analysis of the at least one of the two or more decomposed, distinct sensor signal data with at least one of one or more other decomposed, distinct sensor signal data corresponding to the first electrode, the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and/or the first non-EP sensor data (if applicable), or the like. If so, method 600 may continue to the process at block 645. If not, method 600 may return to the process at block 605 (and, in some cases, optional block 610). At block 645, method 600 may comprise, based on a determination that the perceived at least one biological and/or psychological state
or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user, sending, using the computing system, data regarding the perceived at least one biological and/or psychological state or condition of the user to at least one user device. Method 600 either may return to the process at block 605 (and, in some cases, optional block 610) or may continue onto the process at block 650 in Fig. 6B following the circular marker denoted, "A."

[0123] In some embodiments, the computing system may include, without limitation, at least one of a microprocessor, a microcontroller, a digital signal processor, a processor of the headband-based biosensor system, a processor of one or more user devices among the at least one user device, a server computer over a network, a cloud-based computing system over a network, or a distributed computing system, and/or the like. In some instances, the at least one user device may each include, without limitation, one of a smart phone, a mobile phone, a tablet computer, a wearable device, a laptop computer, a desktop computer, a dedicated user interface ("UI") device associated with the headband-based biosensor system, or a dedicated controller device associated with the headband-based biosensor system, and/or the like. In some cases, the at least one user device may each be associated with one of the user who was wearing the headband-based biosensor system when the first EP sensor data was collected, a family member of the user, a friend of the user, a guardian of the user, one or more medical professionals providing medical care to the user, or one or more other designated entities, and/or the like. In some instances, when the headband-based biosensor system is worn by the user, the headband portion may be wrapped around a forehead of the user and above both ears of the user, or the like.
In some cases, the headband portion may be made of one or more materials including, but not limited to, at least one of polyurethane, thermoplastic polyurethane ("TPU"), silicone, or polycarbonate ("PC"), and/or the like.

[0124] In some instances, the one or more first non-EP sensors may each include, but is not limited to, at least one of a photoplethysmography ("PPG") sensor, an inertial measurement unit ("IMU") sensor, an accelerometer, a gyroscope, a sound sensor, a microphone, a temperature sensor, a moisture sensor, a sweat sensor, an oximeter, a heart rate sensor, a blood pressure sensor, or a light sensor, and/or the
like. In some cases, the one or more first non-EP sensors may comprise at least one of one or more non-EP sensors that are disposed within or on the headband-based biosensor system or one or more non-EP sensors that are disposed external, yet communicatively coupled, to the headband-based biosensor system.

[0125] In some instances, perceiving the at least one biological and/or psychological state or condition of the user may comprise perceiving the at least one biological and/or psychological state or condition of the user based at least in part on at least one of: (i) one or more detected events comprising at least one of one or more one-time events, one or more recurring events, one or more short duration events, one or more long duration events, one or more instantaneous events, or two or more concurrent events, and/or the like, the one or more detected events corresponding to at least one of time, frequency, time-frequency, or latent representations, and/or the like, of the at least one of the two or more decomposed, distinct sensor signal data with at least one of one or more other decomposed, distinct sensor signal data corresponding to the first electrode, the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and/or the first non-EP sensor data (if applicable), or a combination of the at least one of the two or more decomposed, distinct sensor signal data with at least one of one or more other decomposed, distinct sensor signal data corresponding to the first electrode, the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and/or the first non-EP sensor data (if applicable), and/or the like; (ii) one or more identified patterns in the at least one of the two or more decomposed, distinct sensor signal data with at least one of one or more other decomposed, distinct sensor signal data corresponding to the first electrode, the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and/or the first non-EP sensor data (if applicable), or the combination of the at least one of the two or more decomposed, distinct sensor signal data with at least one of one or more other decomposed, distinct sensor signal data corresponding to the first electrode, the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and/or the first non-EP sensor data (if applicable), and/or the like, that are indicative of at least one of one or more cognitive conditions, one or more health
conditions, one or more diseases, or one or more physiological phenomena, and/or the like; or (iii) a comparison between biological data of the user with corresponding biological data of other users belonging to at least one of one or more general populations, one or more sub-groups, one or more regional groups, one or more ethnic groups, one or more gender groups, or one or more age groups, and/or the like. In some cases, the biological data of each user may include, without limitation, the at least one of the two or more decomposed, distinct sensor signal data with at least one of one or more other decomposed, distinct sensor signal data corresponding to the first electrode, the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and/or the first non-EP sensor data (if applicable), or the combination of the at least one of the two or more decomposed, distinct sensor signal data with at least one of one or more other decomposed, distinct sensor signal data corresponding to the first electrode, the two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and/or the first non-EP sensor data (if applicable), and/or the like.

[0126] In some cases, high fidelity of at least one of the first EP sensor data, the second EP sensor data, or the first non-EP sensor data, and/or the like, regardless of motion of the user, orientation of a head of the user, or the headband-based biosensor system being pressed up against the head of the user, may be achieved based at least in part on at least one of: (1) one or more first algorithms configured to evaluate channel quality of at least one of a first channel corresponding to the first electrode or one or more second channels corresponding to the one or more second electrodes; (2) one or more second algorithms configured to perform at least one of selecting, referencing, or rejecting one or more portions or components of at least one of the first channel corresponding to the first electrode or one or more second channels corresponding to the one or more second electrodes; (3) one or more third algorithms configured to reduce or suppress at least one of signal artifacts or signal noise in at least one of the first EP sensor data, the second EP sensor data, or the one or more non-EP sensor data, and/or the like; (4) hardware-based amplification of signal data in at least one of the first EP sensor data, the second EP sensor data, or the one or more non-EP sensor data, and/or the like, to
improve at least one of signal quality or signal noise suppression, and/or the like; or (5) acquisition of signal data from at least one of the first electrode or one or more second electrodes over a frequency range between 0 Hz and a frequency value corresponding to half of a corresponding sampling rate of each electrode; and/or the like.

[0127] In some instances, at least one of the one or more first algorithms, the one or more second algorithms, the one or more third algorithms, one or more first models corresponding to the one or more first algorithms, one or more second models corresponding to the one or more second algorithms, or one or more third models corresponding to the one or more third algorithms, and/or the like, may be at least one of developed or updated using at least one of supervised machine learning, unsupervised machine learning, semi-supervised machine learning, self-supervised machine learning, reinforcement-based machine learning, statistical modeling, heuristic-based machine learning, or rule-based machine learning, and/or the like.

[0128] According to some embodiments, any sensor signal noise in at least one of the first EP sensor data, the second EP sensor data, or the first non-EP sensor data, and/or the like, that is due to motion of the user may be reduced based at least in part on at least one of filtering, adaptive filtering, independent component analysis, principal component analysis, blind source separation analysis, or machine learning-based noise filtering, and/or the like. In some cases, the machine learning-based noise filtering may be based on one of generative models, unsupervised models, semi-supervised models, supervised models, or self-supervised models, and/or the like. In some instances, the motion of the user may include, but is not limited to, at least one of micro motions of the user (including, without limitation, motion due to breathing, motion due to snoring, eye movement (when the user is awake or during REM sleep, etc.), motion due to heart beating, etc.) or macro motions of the user (including, without limitation, walking, running, or performing other physical activities, movement during sleep, etc.), and/or the like.
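
Of the listed options, adaptive filtering is the easiest to sketch: a least-mean-squares (LMS) filter can subtract the component of the EP signal that is predictable from a motion reference such as the IMU. The step size, filter order, and function name below are illustrative assumptions of this sketch, not parameters from the application:

```python
import numpy as np

def lms_artifact_removal(signal, reference, mu=0.01, order=4):
    """Adaptive (LMS) cancellation of motion artifact, one of the noise
    reduction options listed above. `reference` would come from a motion
    sensor such as the IMU. Returns the cleaned signal (the filter error).
    """
    signal = np.asarray(signal, dtype=float)
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(order)                  # adaptive FIR weights
    cleaned = np.empty_like(signal)
    for n in range(len(signal)):
        # most recent `order` reference samples, newest first, zero-padded
        x = reference[max(0, n - order + 1): n + 1][::-1]
        x = np.pad(x, (0, order - len(x)))
        y = w @ x                        # filter's estimate of the artifact
        e = signal[n] - y                # error = artifact-free estimate
        w += 2 * mu * e * x              # LMS weight update
        cleaned[n] = e
    return cleaned
```

When the "EP" input is pure scaled artifact, the residual after convergence should be small relative to the input, which gives a quick sanity check.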

[0129] In some embodiments, the headband portion may further include, but is not limited to, one or more straps that may be configured to tighten the headband portion around a head of the user in a closed band. In such cases, any sensor signal variances of at least one of the first EP sensor data, the second EP sensor data,
and/or the first non-EP sensor data that are due to loose fit of the headband portion around the head of the user compared with a tight fit of the headband portion around the head of the user may be compensated based at least in part on at least one of: (a) one or more fourth algorithms configured to monitor signal quality in the at least one of the first EP sensor data or the one or more non-EP sensor data; (b) one or more fifth algorithms configured to reduce noise in the at least one of the first EP sensor data or the one or more non-EP sensor data; (c) placement of the first electrode on the first portion of the headband portion, wherein the first portion may be determined to result in sensor signal variances regardless of loose fit or tight fit of the headband portion around the head of the user; or (d) form factor of the first electrode that is configured to provide contact with skin on the head of the user regardless of loose fit or tight fit of the headband portion around the head of the user; and/or the like. In some cases, the one or more fourth algorithms may be similar, if not identical, to the one or more first algorithms, which are as described above. In some instances, the one or more fifth algorithms may be similar, if not identical, to the one or more third algorithms, which are as described above.

[0130] According to some embodiments, the signal processing of the received first EP sensor data may comprise multimodal processing including, but not limited to, at least one of real-time processing, near-real-time processing, online processing, offline processing, on-microcontroller-unit ("on-MCU") processing, on-user-device processing, or on-server processing, and/or the like. In some instances, the individual and correlated analysis of the at least one of the two or more decomposed, distinct sensor signal data may comprise multimodal analysis including, but not limited to, at least one of real-time analysis, near-real-time analysis, online analysis, offline analysis, on-microcontroller-unit ("on-MCU") analysis, on-user-device analysis, or on-server analysis, and/or the like.

[0131] At block 650 in Fig. 6B (following the circular marker denoted, "A"), method 600 may comprise activating, using the computing system, at least one stimulation device disposed on one or more third portions of the headband portion. In some instances, each stimulation device may include, without limitation, one of an electrical stimulation device, a vibration-based stimulation device, an audio-based stimulation device, or a light-based stimulation device, and/or the like. Each stimulation device may be configured to stimulate a physiological response in the user when activated. In some cases, activating the at least one stimulation device may be performed after determining that the perceived at least one biological and/or psychological state or condition of the user is indicative of at least one actual biological and/or psychological state or condition of the user.

[0132] Method 600 may further comprise: receiving, using the computing system, updated first EP sensor data from the first electrode (block 655); receiving, using the computing system, updated second EP sensor data from each of the one or more second electrodes (optional block 660); applying, using the computing system, signal processing to the received updated EP sensor data (including at least one of the updated first EP sensor data (from block 655) or the updated second EP sensor data (from optional block 660, if applicable), and/or the like) to decompose each updated mixed signal data (e.g., the corresponding at least one of the updated first mixed signal data from the first electrode or the updated second mixed signal data from each second electrode (if applicable), and/or the like) into updated two or more distinct sensor signal data each corresponding to one of the two or more different types of EP sensors (block 665); individually analyzing, using the computing system, at least one of the updated two or more decomposed, distinct sensor signal data corresponding to the first electrode and/or at least one of the updated two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable) (block 670); receiving, using the computing system, updated first non-EP sensor data from each of one or more first non-EP sensors (optional block 675); analyzing, using the computing system, the received updated first non-EP sensor data individually (optional block 680); analyzing, using the computing system, the updated two or more decomposed, distinct sensor signal data corresponding to the first electrode, the updated two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and the received updated first non-EP sensor data (if applicable) in a correlated manner (optional block 685a); determining, using the computing system, whether and to what extent the perceived at least one biological and/or psychological state or condition of the user has changed (block 685b); and sending, using the computing system, data regarding any changes to the perceived at least one biological and/or psychological state or condition of the user to the at least one user device (block 690); and/or the like. In some cases, determining, using the computing system, whether and to what extent the perceived at least one biological and/or psychological state or condition of the user has changed may be based at least in part on individual analysis of each of at least one of the updated two or more decomposed, distinct sensor signal data corresponding to the first electrode, the updated two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), or the received updated first non-EP sensor data (if applicable), and/or based at least in part on correlated analysis of the at least one of the updated two or more decomposed, distinct sensor signal data corresponding to the first electrode with at least one of updated one or more other decomposed, distinct sensor signal data corresponding to the first electrode, the updated two or more decomposed, distinct sensor signal data corresponding to each second electrode (if applicable), and/or the updated first non-EP sensor data (if applicable), or the like.
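By way of illustration only (this sketch is not part of the application), the flow of blocks 650 through 690 amounts to a stimulate, re-read, re-decompose, re-analyze, and report loop. The skeleton below shows only that control structure; every callable and the state representation are placeholders supplied by the caller, not any specific device API:

```python
def monitoring_loop(read_ep, decompose, analyze, stimulate, notify, max_iters=3):
    """Skeleton of the block 650-690 control flow: activate a stimulation
    device, receive updated EP data, decompose it into distinct sensor
    signals, re-analyze, and report any change in the perceived state.

    All callables are hypothetical placeholders for illustration.
    """
    state = None
    for _ in range(max_iters):
        stimulate()                    # block 650: activate stimulation device
        mixed = read_ep()              # blocks 655/660: updated EP sensor data
        channels = decompose(mixed)    # block 665: mixed -> distinct signals
        new_state = analyze(channels)  # blocks 670-685a: individual/correlated
        if new_state != state:         # block 685b: has the state changed?
            notify(new_state)          # block 690: report change to user device
            state = new_state
    return state

# Toy usage with stub callables standing in for real sensors and analysis.
events = []
states = iter(["awake", "awake", "drowsy"])
final = monitoring_loop(
    read_ep=lambda: [0.0],
    decompose=lambda mixed: {"eeg": mixed},
    analyze=lambda channels: next(states),
    stimulate=lambda: None,
    notify=events.append,
)
```

Note that `notify` fires only on a change, mirroring the "data regarding any changes" language of block 690.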

[0133] As an alternative to decomposing the EP sensor data and then analyzing based on the decomposed EP sensor data, according to some embodiments, analysis (whether correlated or not) may be performed on the mixed signal data corresponding to each electrode. In such cases, algorithms, machine learning approaches, and/or learning models (as described herein) may be used to facilitate, and to enhance the results of, analysis based on the mixed signal data.

[0134] Alternatively, or additionally, perception of at least one biological and/or psychological state or condition of the user may be performed based on correlated analysis of the EP sensor data (regardless of whether decomposed or mixed signal data is used) and the non-EP sensor data, rather than based on individual analysis of each type of data.

[0135] Although Fig. 6 is directed to a headband-based biosensor system, the various embodiments are not so limited, and a headwear-based biosensor system having similar functionality and at least some of the components of the above-described headband-based biosensor system may be used in a similar manner as described with respect to the headband-based biosensor system. In some embodiments, the headwear-based biosensor system may be one of a stand-alone electronic device having a headband form factor, affixed to one or more portions of an inner surface of an article of headwear, removably attachable to the one or more portions of the inner surface of the article of headwear, or integrated within the inner surface of the article of headwear, and/or the like. In some instances, the article of headwear may include, but is not limited to, one of a headband, a hat, a cap, a toque, a beanie, a beret, a bonnet, a helmet, a hairband, a pair of goggles, a headset, a virtual reality ("VR") headset, an augmented reality ("AR") headset, a mixed reality ("MR") headset, or a bandana, and/or the like.

[0136] Examples of System and Hardware Implementation

[0137] Fig. 7 is a block diagram illustrating an example of computer or system hardware architecture, in accordance with various embodiments. Fig. 7 provides a schematic illustration of one embodiment of a computer system 700 of the service provider system hardware that can perform the methods provided by various other embodiments, as described herein, and/or can perform the functions of the computer or hardware system (i.e., headband-based biosensor systems 105, 205, and 205', server(s) 135', user devices 170a-170n and 175, and media content server(s) 180, etc.), as described above. It should be noted that Fig. 7 is meant only to provide a generalized illustration of various components, of which one or more (or none) of each may be utilized as appropriate. Fig. 7, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner.

[0138] The computer or hardware system 700 - which might represent an embodiment of the computer or hardware system (i.e., headband-based biosensor systems 105, 205, and 205', server(s) 135', user devices 170a-170n and 175, and media content server(s) 180, etc.), described above with respect to Figs. 1-6 - is shown comprising hardware elements that can be electrically coupled via a bus 705 (or may otherwise be in communication, as appropriate). The hardware elements may include one or more processors 710, including, without limitation, one or more general-purpose processors and/or one or more special-purpose processors (such as microprocessors, digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 715, which can include, without limitation, a mouse, a keyboard, and/or the like; and one or more output devices 720, which can include, without limitation, a display device, a printer, and/or the like.

[0139] The computer or hardware system 700 may further include (and/or be in communication with) one or more storage devices 725, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, or a solid-state storage device such as a random access memory ("RAM") and/or a read-only memory ("ROM"), which can be programmable, flash-updateable, and/or the like. Such storage devices may be configured to implement any appropriate data stores, including, without limitation, various file systems, database structures, and/or the like.

[0140] The computer or hardware system 700 might also include a communications subsystem 730, which can include, without limitation, a modem, a network card (wireless or wired), an infra-red communication device, a wireless communication device and/or chipset (such as a Bluetooth device, an 802.11 device, a Wi-Fi device, a WiMax device, a WWAN device, cellular communication facilities, etc.), and/or the like. The communications subsystem 730 may permit data to be exchanged with a network (such as the network described below, to name one example), with other computer or hardware systems, and/or with any other devices described herein. In many embodiments, the computer or hardware system 700 will further comprise a working memory 735, which can include a RAM or ROM device, as described above.

[0141] The computer or hardware system 700 also may comprise software elements, shown as being currently located within the working memory 735, including an operating system 740, device drivers, executable libraries, and/or other code, such as one or more application programs 745, which may comprise computer programs provided by various embodiments (including, without limitation, hypervisors, VMs, and the like), and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, one or more procedures described with respect to the method(s) discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods.

[0142] A set of these instructions and/or code might be encoded and/or stored on a non-transitory computer readable storage medium, such as the storage device(s) 725 described above. In some cases, the storage medium might be incorporated within a computer system, such as the system 700. In other embodiments, the storage medium might be separate from a computer system (i.e., a removable medium, such as a compact disc, etc.), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon. These instructions might take the form of executable code, which is executable by the computer or hardware system 700 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer or hardware system 700 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.) then takes the form of executable code.

[0143] It will be apparent to those skilled in the art that substantial variations may be made in accordance with particular requirements. For example, customized hardware (such as programmable logic controllers, field-programmable gate arrays, application-specific integrated circuits, and/or the like) might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed.

[0144] As mentioned above, in one aspect, some embodiments may employ a computer or hardware system (such as the computer or hardware system 700) to perform methods in accordance with various embodiments of the invention. According to a set of embodiments, some or all of the procedures of such methods are performed by the computer or hardware system 700 in response to processor 710 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 740 and/or other code, such as an application program 745) contained in the working memory 735. Such instructions may be read into the working memory 735 from another computer readable medium, such as one or more of the storage device(s) 725. Merely by way of example, execution of the sequences of instructions contained in the working memory 735 might cause the processor(s) 710 to perform one or more procedures of the methods described herein.

[0145] The terms "machine readable medium" and "computer readable medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in some fashion. In an embodiment implemented using the computer or hardware system 700, various computer readable media might be involved in providing instructions/code to processor(s) 710 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals). In many implementations, a computer readable medium is a non-transitory, physical, and/or tangible storage medium. In some embodiments, a computer readable medium may take many forms, including, but not limited to, non-volatile media, volatile media, or the like. Non-volatile media includes, for example, optical and/or magnetic disks, such as the storage device(s) 725. Volatile media includes, without limitation, dynamic memory, such as the working memory 735. In some alternative embodiments, a computer readable medium may take the form of transmission media, which includes, without limitation, coaxial cables, copper wire, and fiber optics, including the wires that comprise the bus 705, as well as the various components of the communication subsystem 730 (and/or the media by which the communications subsystem 730 provides communication with other devices). In an alternative set of embodiments, transmission media can also take the form of waves (including without limitation radio, acoustic, and/or light waves, such as those generated during radio-wave and infra-red data communications).

[0146] Common forms of physical and/or tangible computer readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.

[0147] Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 710 for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer or hardware system 700. These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals, and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention.

[0148] The communications subsystem 730 (and/or components thereof) generally will receive the signals, and the bus 705 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 735, from which the processor(s) 710 retrieves and executes the instructions. The instructions received by the working memory 735 may optionally be stored on a storage device 725 either before or after execution by the processor(s) 710.

Conclusion

[0149] In the foregoing description, for the purposes of explanation, numerous details are set forth to provide a thorough understanding of the described embodiments. It will be apparent to one skilled in the art, however, that other embodiments may be practiced without some of these details. In other instances, structures and devices are shown in block diagram form without full detail for the sake of clarity. Several embodiments are described herein, and while various features are ascribed to different embodiments, it should be appreciated that the features described with respect to one embodiment may be incorporated with other embodiments as well. By the same token, however, no single feature or features of any described embodiment should be considered essential to every embodiment of the invention, as other embodiments of the invention may omit such features.

[0150] Thus, the foregoing description provides illustration and description of some features and aspects of various embodiments, but it is not intended to be exhaustive or to limit the implementations to the precise form disclosed. One skilled in the art will recognize that modifications may be made in light of the above disclosure or may be acquired from practice of the implementations, all of which can fall within the scope of various embodiments. For example, the methods and processes described herein may be implemented using hardware components, custom integrated circuits (ICs), programmable logic, and/or any combination thereof.

[0151] Further, while various methods and processes described herein may be described with respect to particular structural and/or functional components for ease of description, methods provided by various embodiments are not limited to any particular structural and/or functional architecture but instead can be implemented in any suitable hardware configuration. Similarly, while some functionality is ascribed to one or more system components, unless the context dictates otherwise, this functionality can be distributed among various other system components in accordance with the several embodiments.

[0152] Moreover, while the procedures of the methods and processes described herein are described in a particular order for ease of description, unless the context dictates otherwise, various procedures may be reordered, added, and/or omitted in accordance with various embodiments. Moreover, the procedures described with respect to one method or process may be incorporated within other described methods or processes; likewise, system components described according to a particular structural architecture and/or with respect to one system may be organized in alternative structural architectures and/or incorporated within other described systems. Hence, while various embodiments are described with or without some features for ease of description and to illustrate aspects of those embodiments, the various components and/or features described herein with respect to a particular embodiment can be substituted, added and/or subtracted from among other described embodiments, unless the context dictates otherwise.

[0153] As used herein, the term "component" is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods does not limit the implementations unless specifically recited in the claims below. Thus, when the operation and behavior of the systems and/or methods are described herein without reference to specific software code, one skilled in the art would understand that software and hardware can be used to implement the systems and/or methods based on the description herein.

[0154] In this disclosure, when an element is referred to herein as being "connected" or "coupled" to another element, it is to be understood that one element can be directly connected to the other element, or have intervening elements present between the elements. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that no intervening elements are present in the "direct" connection between the elements. However, the existence of a direct connection does not preclude other connections, in which intervening elements may be present.

Similarly, while the methods and processes described herein may be described in a particular order for ease of description, it should be understood that, unless the context dictates otherwise, intervening processes may take place before and/or after any portion of the described process, and, as noted above, described procedures may be reordered, added, and/or omitted in accordance with various embodiments.

[0155] In this application, the use of the singular includes the plural unless specifically stated otherwise, and use of the term "and" means "and/or" unless otherwise indicated. Also, as used herein, the term "or" is intended to be inclusive when used in a series and also may be used interchangeably with "and/or," unless explicitly stated otherwise (e.g., if used in combination with "either" or "only one of"). Moreover, the use of the term "including," as well as other forms, such as "includes" and "included," should be considered non-exclusive. Also, terms such as "element" or "component" encompass both elements and components comprising one unit and elements and components that comprise more than one unit, unless specifically stated otherwise.

[0156] Unless otherwise indicated, all numbers used herein to express quantities, dimensions, and so forth should be understood as being modified in all instances by the term "about." As used herein, the articles "a" and "an" are intended to include one or more items and may be used interchangeably with "one or more." Similarly, as used herein, the article "the" is intended to include one or more items referenced in connection with the article "the" and may be used interchangeably with "the one or more." As used herein, the term "set" is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, and/or the like), and may be used interchangeably with "one or more." Where only one item is intended, the phrase "only one" or similar language is used. Also, as used herein, the terms "has," "have," "having," or the like are intended to be open-ended terms. Further, the phrase "based on" is intended to mean "based, at least in part, on" unless explicitly stated otherwise. In the foregoing description, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, or equal to the threshold, and/or the like.

[0157] Although particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Thus, while each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set. No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such.
