Working with biometric data? The FTC is watching you
The Federal Trade Commission (FTC), under the continuing high-profile leadership of its Chair Lina Khan, issued a warning on May 18 that it will not hesitate to combat unfair or deceptive acts related to the collection and use of consumers' biometric information, and to the marketing and use of biometric information technologies.
Khan has been vocal in saying that the FTC will use its broad existing powers in relation to rapidly emerging technologies. The release of the FTC's 12-page policy statement on this issue is the latest evidence that the FTC will do just that, proactively using its existing powers to protect consumer rights.
The policy statement is made under the continually evolving Section 5 of the Federal Trade Commission Act (FTC Act) (15 USC 45), which prohibits "unfair or deceptive acts or practices in or affecting commerce". Biometric data and artificial intelligence form a rapidly developing market segment, and the FTC's shot across the bows is directed at players whom it fears have a "risk appetite" matched by their drive to innovate.
The policy statement calls out several use cases as being of particular concern. For example, using biometric tech to identify individuals' presence at certain locations could reveal sensitive information about:
- whether they have accessed particular types of healthcare
- their religious practices and orientation
- political leanings
- participation in organised labour.
It also calls out facial recognition tech that may have higher error rates for some people than others, the risks of "deepfakes", and the temptation that large databases of biometric information pose to malicious actors.
The Commission states that it will scrutinize a number of practices for compliance with Section 5 of the FTC Act, including false claims about the accuracy and fairness of biometric tech, and whether the tech itself is unfair. In making that assessment, it will take into account (for example) whether an organisation has:
- assessed foreseeable harms
- addressed known or foreseeable risks
- engaged in "surreptitious and unexpected collection or use of biometric information"
- failed to "evaluate the practices and capabilities of third parties"
- provided appropriate training to employees and contractors.
The Commission also noted the trend of individual U.S. states and localities enacting biometric privacy laws, such as the Illinois Biometric Information Privacy Act, which provides a private right of action. Municipalities, such as New York City and Portland, OR, have also passed tailored biometric privacy measures. In addition, various states, including California, Colorado, Utah and Virginia have passed consumer privacy laws that will govern processing of biometric data.
Take-aways
- Organisations developing or deploying biometric tech, whether built in-house or (more likely) sourced from a vendor, should manage their risk carefully.
- Organisations executing complex agreements, or considering acquisitions or disposals, should consider whether the businesses and arrangements involved are resilient against enforcement risk.
- Boards, C-level executives, and managers should be alive to the fact that biometric information technologies belong high on risk registers, regardless of the absence of specific federal biometric legislation. The FTC has already brought enforcement actions against photo app maker Everalbum and Facebook, alleging that they misrepresented their uses of facial recognition tech.
For more information on this or any tech policy issues, please contact me or any member of our Global Tech Group.