Faces are considered very personal data, and yours may be getting collected and used for questionable ML applications without your consent or knowledge. Companies and academic institutions collect facial images from various user-submitted sources like dating sites, or worse, in public spaces, as in the BrainWash project that captured images of unwitting cafe patrons. The uses of this data are ethically concerning: some of it was used to train models to identify ethnic Uighurs in China, a group subject to documented targeting and mistreatment.
There is currently little oversight of this type of data: collection in the BrainWash case was not disclosed to those entering the premises, and users of dating sites had no idea where their photos were being shared. Even with disclosure and an opt-out, your face would still need to be stored so the system knows not to use it, and you would have to trust that entity. There is already at least one enforcement case: Sweden's Data Protection Authority issued its first fine under the European Union's General Data Protection Regulation (GDPR) after a school launched a facial recognition pilot program to track students' attendance without proper consent.
What Do You Think? Now that our faces are being collected without consent for machine learning purposes, should we start using tools that add adversarial noise to images of ourselves before posting them publicly, to reduce their usefulness for training or cause misclassification? Or will we see a time when people wear clothing and accessories, like glasses, that distort our identity to an algorithm? See this defense as a possible glimpse into the future.
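As a rough illustration of what "adversarial noise" means here, the sketch below applies an FGSM-style perturbation (the fast gradient sign method) to a toy image. This is a hypothetical, minimal example: the random 8x8 "image" and linear "classifier" stand in for a real face photo and face-matching model, which would require a deep-learning framework.

```python
import numpy as np

# Hedged sketch of FGSM-style adversarial noise. The toy linear
# "classifier" is an assumption for illustration, not a real face model.
rng = np.random.default_rng(0)

# Toy 8x8 grayscale "image" with pixel values in [0, 1].
image = rng.random((8, 8))

# Toy linear classifier: score = sum(w * x); higher score = "identity matched".
w = rng.standard_normal((8, 8))

def score(x):
    return float(np.sum(w * x))

# FGSM: nudge each pixel a small step against the gradient of the match
# score. For this linear score, the gradient w.r.t. the image is just w.
epsilon = 0.05  # small, so the image stays recognizable to a human
adversarial = np.clip(image - epsilon * np.sign(w), 0.0, 1.0)

print(score(image), score(adversarial))              # adversarial score is lower
print(float(np.abs(adversarial - image).max()))      # change per pixel <= epsilon
```

The key property is that every pixel moves by at most `epsilon`, so a human sees essentially the same picture, while the model's match score drops; real tools apply the same idea against much larger networks.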
This is a very interesting topic, not to mention a worrying one.
Personally I feel this should not be done without consent, but I wonder whether people will just accept it... similar to how companies are already using personal data without consent.
Btw, by how much could those tools degrade a model's use of face images while still leaving the photo clear enough for a human to see?