PUBLIC FAQs
Everything you need to know about how our facial recognition services work, how your data is handled, and what your rights are.
Facial Recognition involves the use of sophisticated software to plot the distance between multiple points on a person's face and any identifying features. This is usually done from a digital photograph of a person's face.
All the observed points on a person's face are then encoded using specialist algorithms into a very long string of numbers, known as a biometric template.
Biometric templates cannot be reversed back into a likeness of a face, nor used for anything else. They can only be compared against another biometric template created by the same algorithm, to determine whether the two templates match.
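To picture the comparison step, here is a minimal sketch that assumes templates are numeric vectors and uses cosine similarity as the comparison measure. The vectors, the threshold, and the metric are all illustrative assumptions for this example, not FaiceTech's actual algorithm or parameters.

```python
import math

def cosine_similarity(a, b):
    """Compare two biometric templates (numeric vectors) produced by the
    same algorithm. Returns a score between -1 and 1; higher is more alike."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative templates; real templates are far longer strings of numbers.
template_live = [0.12, -0.40, 0.88, 0.05]
template_watchlist = [0.10, -0.38, 0.90, 0.07]

MATCH_THRESHOLD = 0.95  # hypothetical threshold; real systems tune this value
is_match = cosine_similarity(template_live, template_watchlist) >= MATCH_THRESHOLD
```

Note that the comparison only says how alike two templates are; it never reconstructs a face.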
Businesses that use our Facial Recognition software use one of their existing CCTV cameras, usually at the entrance, to view everyone who enters their premises. The camera sends digital images of their faces to the cloud, where our software converts each facial image into a biometric template.
These biometric templates are then compared by our software to the Watchlist of Subjects of Interest that the specific business has stored securely in the cloud.
This whole process takes less than a second, and if the person entering the store is not a match for someone on the business's Watchlist then all images of them and the resultant biometric templates are deleted immediately and permanently.
If the biometric template of a person entering the store is a likely match for a biometric template stored on their Watchlist then our software alerts designated, trained members of the business's staff to firstly verify that the facial match is indeed correct, and then to decide upon the best course of action to follow.
How businesses respond to a verified match to their Watchlist is their decision, not FaiceTech's, but this usually just involves a human continuing to monitor the Subject of Interest within the store to prevent any further incidents occurring.
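The flow described above can be sketched as follows. This is an illustrative outline only: the `similarity` function, the threshold, and the function names are assumptions made for this example, not FaiceTech's actual implementation.

```python
def similarity(a, b):
    # Illustrative cosine similarity; real systems use tuned, certified metrics.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def process_entry(live_template, watchlist, alert_staff, threshold=0.95):
    """Compare a visitor's template against one business's own Watchlist.
    Alerts trained staff only on a likely match; otherwise nothing is kept."""
    best_ref, best_score = None, 0.0
    for ref_code, stored_template in watchlist.items():
        score = similarity(live_template, stored_template)
        if score > best_score:
            best_ref, best_score = ref_code, score
    if best_score >= threshold:
        alert_staff(best_ref)  # a trained human verifies before any action
        return best_ref
    return None  # no match: the live image and template are discarded
```

The key property the sketch illustrates is that a non-matching visitor leaves no trace: the function returns nothing and retains nothing.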
FaiceTech's software provides businesses with the tools they need to manage their own Watchlists, privately and securely. FaiceTech cannot access any of the data, personal or biometric, that is stored on these Watchlists.
As part of the process to decide whether to use Facial Recognition services or not, each business must consider and set out what safeguards it will put in place to ensure that their Watchlists are managed appropriately by their staff.
These safeguards must be documented as part of each business's Data Protection Impact Assessment process.
FaiceTech do not operate shared Watchlists, and each business customer must create and maintain their own specific Watchlists.
FaiceTech have developed all our software and systems to be secure by design. This means that data security is a key consideration from day one of developing new software and systems, not an afterthought.
All data that our customers enter into our software and cloud systems is encrypted both in transit and at rest, and we also insist upon stringent security measures when our customers' staff access those cloud-based systems.
We only choose the most secure datacentres to host our services, and ensure that they are transparent about the independent security testing they carry out and the certifications they hold.
In fact, we have achieved accreditations with multiple independent bodies that demonstrate our commitment to data security by design; details can be found in the Data Protection section of our website.
We use well-tested, independently verified, NIST-approved algorithms. These algorithms are what is known as pre-trained: they are developed using AI to improve their accuracy until they are independently tested and proven reliable, after which they become passive and are not developed any further, so that the accuracy levels achieved during testing are not eroded.
Facial Recognition has received some bad press in the past over the risks of racial bias. Bias can arise when an algorithm was not trained on a sufficiently diverse range of faces, causing it to judge two different people as more alike simply because their skin colour is similar. Such a flaw could result in the wrong person being flagged.
At FaiceTech we license third-party algorithms and require that they perform exceptionally well against the FairFace dataset, which was created to test for and eliminate racial bias.
We require all third-party algorithms to have been tested by NIST, with published results. We perform our own tests against all our algorithms and services, constantly review their performance, and act on any concerns immediately.
We do not store any images or biometric facial templates of any persons for our own usage. Our software and cloud services allow our business customers to store images and biometric templates relating to their separate Watchlists.
When storing a facial recognition Watchlist, our clients do not need to include any additional personal data. All they need is a reasonable photograph of a data subject and a unique reference number that can be matched back to the personal data our clients already hold in their existing records about that data subject.
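A Watchlist record of the kind described above might look like the following sketch. The field names and the record shape are hypothetical, chosen only to illustrate that the record carries a reference code rather than a name or other personal details.

```python
from dataclasses import dataclass

@dataclass
class WatchlistEntry:
    """Hypothetical shape of a Watchlist record: no name, no personal
    details, just a reference code linking back to the business's own
    records, plus the template derived from the enrolment photograph."""
    reference_code: str       # e.g. an internal incident reference
    biometric_template: list  # derived from the photograph
    added_on: str             # ISO date, used for retention checks

entry = WatchlistEntry("REF-2024-0117", [0.12, -0.40, 0.88], "2024-01-17")
```

Keeping the identifying details in the business's own master systems, and only the reference code here, is a standard pseudonymization pattern.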
As part of the extensive Data Protection Impact Assessment completed with each client before any service can go live, we identify and document the relevant data retention policies.
Our platform automatically deletes ALL personal data and images according to the data retention policies.
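Automatic deletion under a retention policy can be pictured with this sketch. The 365-day period and the function names are assumptions for illustration; the real retention period is whatever each client's Data Protection Impact Assessment sets out.

```python
from datetime import date, timedelta

RETENTION_DAYS = 365  # hypothetical; the real period comes from each DPIA

def purge_expired(watchlist, today, retention_days=RETENTION_DAYS):
    """Remove every entry older than the agreed retention period.
    watchlist maps reference codes to the date each entry was added."""
    cutoff = today - timedelta(days=retention_days)
    return {ref: added for ref, added in watchlist.items() if added >= cutoff}

watchlist = {
    "REF-A": date(2023, 1, 10),
    "REF-B": date(2024, 6, 1),
}
remaining = purge_expired(watchlist, today=date(2024, 8, 1))
```

After the purge, only entries added within the retention window remain.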
No. Our systems only record data relating to individuals who match a Subject of Interest on a specific business's Watchlist. All other images of individuals are deleted immediately after processing, when no match is found.
As our business customers only have access to their own Watchlist data, they can only know when a Subject of Interest from their Watchlist has entered their premises. We also strongly discourage business customers from using personal data to refer to Subjects of Interest on their Watchlists, insisting instead that they use a unique reference code linking back to that Subject's personal data stored in their own master systems.
Hence it would not be possible to use our systems to track or search for an individual, as business customers only have access to their own data, and will not be able to search for a person by name. All they can do is search their own data for when and where a Subject of Interest on their Watchlist last appeared.
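The isolation described above can be sketched as a store where every query is scoped to the calling business. The class and method names here are hypothetical, chosen only to illustrate the access pattern: each business reads its own partition and nothing else.

```python
class WatchlistStore:
    """Illustrative sketch of per-customer data isolation: every query is
    scoped to the calling business's own Watchlist, so no customer can
    search another customer's data or look anyone up by name."""

    def __init__(self):
        self._data = {}  # business_id -> {reference_code: last_sighting}

    def record_sighting(self, business_id, reference_code, where_when):
        self._data.setdefault(business_id, {})[reference_code] = where_when

    def last_sighting(self, business_id, reference_code):
        # A business can only ever read its own partition.
        return self._data.get(business_id, {}).get(reference_code)

store = WatchlistStore()
store.record_sighting("shop-a", "REF-7", ("Entrance cam", "2024-08-01 10:02"))
```

In this sketch, "shop-a" can retrieve its own sighting record, while any other business querying the same reference code gets nothing back.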
Due to the high accuracy levels of the independently verified algorithms that we use, the chances of our systems producing a suggested match that turns out to be wrong are extremely low. However, it is still possible, especially if someone has obscured a large portion of their face in some way.
For this reason there is legislation in the UK, and in most countries around the globe, preventing significant decisions being taken about people based solely on automated identification by a computer. All potential facial recognition matches must be checked and verified by a properly trained human before they can be acted upon. For the person matched, this usually means receiving closer attention than other shoppers as a preventative measure, though in some cases it may mean exclusion from a retailer's premises.
Organisations using our services cannot add people to a Facial Recognition Watchlist unless they have previously been involved in an incident on that business's premises. Safeguards are in place to prevent anyone from being added to a business's Watchlist without verified justification.
UK Data Protection legislation requires businesses using Facial Recognition to be transparent about it, with signage and accessible information about how you can exercise your rights with regard to any data they may hold relating to you.
It also requires them to inform you, where they are able to contact you, when they have decided to add you to their Watchlist, for what reasons, and for how long.
No, probably not. Being on a Watchlist is specific to a particular business, and you can only be flagged up when entering one of their premises.
Businesses using our software and systems do not operate shared Watchlists, so you will only be flagged in premises belonging to the same company, not in other businesses' shops.
However, depending upon the nature of the incident that led to your addition to their Watchlist and their operational procedures, it is likely that staff in a shop that has flagged you will still allow you to continue shopping at their premises.
As part of UK Data Protection legislation any businesses using Facial Recognition on their premises are required to make visitors aware of this fact using prominent signage, and to make further information regarding their use of Facial Recognition available to you.
No. We only work with businesses and partners that agree to our code of ethics.
The avoidance of harm is a crucial element of our business. Harm, in this sense, refers to a negative impact on privacy, dignity, or human rights. We regularly review the application of our services to ensure they cannot be used in any way that causes harm to individuals.
FaiceTech adopts the OECD AI Values-based Principles.
Please note: FaiceTech does not have access to any of the data stored in our systems by our business customers, and therefore we cannot process any requests for data relating to individuals stored in our systems. All Subject Access Requests and the like will need to be directed to the specific business that may be storing that data.