
Good Morning!


Morning morning!

Ben Hammond 09:03:52

to the AI wonks out there: articles like this deride facial recognition for a bad signal-to-noise ratio, but reports on facial recognition in China imply that they have a very good hit ratio. How does this happen? Are the Chinese underestimating their false positives? Do they have better training sets, leading to a more reliable system? Do they have less facial diversity to reckon with, leading to a more reliable system?


It is perhaps intuitive that if you are a surveillance state you may be better at surveillance techniques

😂 16
👍 8
Wes Hall 11:03:21

@ben.hammond Less facial diversity would make the problem harder. The article doesn't really deride it. The results are pretty good. Though that conclusion is a little dependent on how many of the 8600 or so people who were "scanned" were actually in the database. If it was only 1 and they found him, that's a recall of 100%. The tuning of these systems is always a balance of precision vs recall, but since you'd want to tune something like this for recall (as in, "warn us if you think somebody might be in the database", as opposed to, "warn us only if you are absolutely sure that they are"), then accepting a few false positives is par for the course. These stats look pretty good to me, if you are into that sort of thing.
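A quick worked version of the precision/recall point above, as a sketch. The numbers are hypothetical, loosely based on the thread's figures (8600 scanned, a handful flagged, one real match); the unflagged-wanted-person count is invented for illustration:

```python
# Toy illustration of the precision/recall trade-off described above.
# tp = true positives, fp = false positives, fn = false negatives.
def precision(tp, fp):
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if (tp + fn) else 0.0

# Hypothetical: 8600 people scanned, 8 flagged, 1 flag is a real match,
# and (assumed, for illustration) 1 wanted person slipped through unflagged.
tp, fp, fn = 1, 7, 1

print(f"precision: {precision(tp, fp):.3f}")  # 1/8 = 0.125
print(f"recall:    {recall(tp, fn):.3f}")     # 1/2 = 0.500
```

Tuning for recall, as described above, means accepting a low precision (lots of false flags) so that few genuine matches slip through.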


I find it worrying that facial recognition is being steamrolled across the country - we all know how badly it works.


Won't be long before we all have a "social" number akin to the scheme in China


if you don't rate high enough, then no cakes for you!


I probably should eat less cake...

Wes Hall 11:03:04

@dharrigan I also find this quite worrying, from a certain perspective. Vaguely libertarian "tech-bro" that I am 😉. That said, I think the article demonstrates that it does work. The goal is recall, not precision. Again, if 8600 people were "scanned", 8 were highlighted as people to investigate and 1 of those was a serious criminal (say a wanted rapist or terrorist), then I'd definitely feel a little on the back foot arguing that the costs outweigh the benefits, though I might be tempted to give it the old college try 🙂

Wes Hall 11:03:48

Distrust of the state though.... very healthy 🙂

Wes Hall 11:03:40

I also kind of wonder whether it will be thermal cameras that people will want live soon. Catch those fevers early.

Ben Hammond 11:03:39

and it's not possible to measure the false negatives in the wild, presumably

Wes Hall 11:03:28

Yeah, very difficult to properly test this. "Let's let a bunch of dangerous prisoners out and see how many we recapture" 😂.

😂 4

I never know how to handle this. The state shouldn't have this power. Because a future iteration could horrifically abuse it. But private individuals could band together and create this functionality themselves. Google could for example.


Well this is a great conversation I walked into =)…

Wes Hall 13:03:20

@dominicm Probably just one of those many things that are "coming" whether we like it or not. Like the day that somebody can download "ebola.stl" and send it to the 3d printer. All of us tech guys are basically just super-villains in denial 😉


I figure someone needs to create an equal level of defence.


Yeah. Exactly.


There will be a cat and mouse game I suppose


Maybe, but it'll only happen if enough people worth catching adopt the counter-measures, and facial recognition or the counter-measures are proven effective in practice.


Won't it only happen if individuals feel threatened by their state?

Wes Hall 13:03:56

@dominicm Almost posted a gif from Four Lions where they walk around shaking their heads because it "comes out blurry" 🙂


I hear that juggalo makeup is the way to defeat it. 🤔

Wes Hall 13:03:13

"Why are you going around in juggalo make-up?", "I want to reduce the chances of being questioned by the police..."


Indeed - it's a bit of an obvious ploy. Also, further evidence that clowns are a malign force.


In principle it's possible to defeat facial recognition and similar ML techniques by making tiny changes to the source that are imperceptible, at least to the human eye

Ben Hammond 13:03:28

is there a boundary condition that can provoke excessive processing as the system tries to figure out face-or-not?

Ben Hammond 13:03:59

a face so ugly that it melted all the recognition systems would be quite cool

Ben Hammond 13:03:48

if you can hijack voice assistants with ultrasonics, do facial recognition systems use more of the light spectrum than the human eye, rendering the dazzle invisible to the human eye?

Wes Hall 15:03:47

These are attacks mostly on precision. Making changes that are imperceptible to the human eye is perhaps a useful strategy against ML platforms trained for very extreme precision, but these crowd-oriented facial recognition systems are not tuned like that. They're not looking for perfect matches; they are looking for any match that might warrant a bit of extra attention. They are going to be tuned such that they still work if you wear glasses or style your hair differently etc. If you have the resources to do extra checks of, say, 15% of passengers going through airport security, then they will help you target the right 15%. It's a lot harder to defeat that kind of algorithm.


Yeah, but that's also not how the attacks against all these kinds of systems work either. E.g. for the different but related facial recognition case where you want to put photos of your family online, but don't want Google et al to recognise those faces at scale, you can completely derail classification by subtly changing just a few pixels of the source image. See also this Google paper where simply including a small psychedelic sticker next to a banana in a photograph causes a complete misclassification, swinging from 98% or so confidence that it's a banana to near-100% confidence that it's a toaster. Obviously such attacks are harder when you need to account for different angles etc., but it's not beyond the realms of possibility that you could do something like this by, for example, wearing a rucksack with such a pattern on it repeated at different orientations.


They also show in the paper how you can do this with a tie-dye-like pattern etc.
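The "tiny perturbation" idea above can be sketched on a toy linear classifier. This is purely illustrative: the "banana vs toaster" scorer, the weights, and the input are all made up, and real attacks (e.g. FGSM-style gradient attacks) target actual trained networks. The point it demonstrates is that a small, uniform per-pixel nudge in the right direction can flip a classifier's decision:

```python
# Minimal sketch of a gradient-sign-style perturbation on a toy linear
# classifier (hypothetical weights and input; not a real face/image model).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=784)   # weights of a toy linear "banana vs toaster" scorer
x = rng.normal(size=784)   # a toy flattened "image"

score = w @ x              # sign of the score decides the class

# Pick a per-pixel budget just big enough to flip the sign: each pixel
# moves by the same small eps, in the direction that hurts the true class.
eps = 1.1 * abs(score) / np.sum(np.abs(w))
x_adv = x - eps * np.sign(w) * np.sign(score)

assert np.sign(w @ x_adv) == -np.sign(score)   # classification flipped
print(f"per-pixel change: {eps:.4f}")
print(f"original score: {score:.2f}, adversarial score: {w @ x_adv:.2f}")
```

The algebra: the perturbation shifts the score by `-1.1 * |score| * sign(score)`, so the new score is `-0.1 * score` and the sign flips, while each individual pixel moves only by a tiny `eps`.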


I'd love to carry an ultrasonic device with me constantly. Being in an office means that I'm constantly being recorded by Google.


It's kinda annoying that others can give up my privacy.


Any tool that can parse a source codebase and look for unused functions/defs/vars?


Borkdude has a library for this


I wanna say scalpel, but I know I'm wrong


Anyone got any recommendations for code-level monitoring a la Rollbar or Sentry?


I use sentry


clj-sentry to be precise


Neither of them ^^ has official Clojure support, though they are both well supported by community projects (Rollbar in particular has a library that is maintained by CircleCI)


@dharrigan - Thanks that's another vote for that one then


Works pretty well


@borkdude thanks for carve - reading.


I have a recollection of one that used to be around that is/was Clojure-centric


also using sentry, we find it very useful


carve is pretty darn useful


Gap in the market?


3% gap that is


@dominicm ..? Are you referring to a Clojure-centric code-monitoring tool..?


Or have I missed something?


Not an existing one, a hypothetical one :)