The tech companies are begging the NSA to let them say how many requests they've acceded to, and my bet is that the NSA will concede. It doesn't really matter what the number is: it will be tiny relative to the number of users, and it will seem insignificant to most law-abiding, voting citizens. It seems to me that the plan is that at that stage it will all go away. However, they're answering the wrong question - a favourite tactic of politicians: if cornered and asked a question you don't want to answer, answer a different one.
The question the NSA should be answering is this: how do you know which requests to make of the tech companies? In reality, information discovery - demanding personal information about people of interest - has not changed in years, whether that's phone records, bank records, or - these days - emails and internet records. That stuff is frankly uninteresting, but the narrative is all heading in that direction. The real trick is knowing which questions to ask.
The NSA seems to have a mechanism to apply big data techniques - in facilities like the one in Utah - to run algorithms over internet traffic. That's the whole thing. Major service providers are collaborating to an extent, but that is primarily an architecture exercise. The idea is to model and identify patterns in the data that allow them to predict who is likely to do bad things, and to model people networks within the data. Just as Anne-Marie Slaughter explained in one of her TED talks that international relations is no longer about states but about networks, international terrorism is similarly constructed.
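To make "modelling people networks" concrete, here is a minimal sketch of the idea: treat recorded contacts as edges in a graph and rank people by how connected they are. The names, the (caller, callee) data shape, and the degree-centrality measure are all illustrative assumptions on my part, not anything the NSA has described.

```python
# Hypothetical sketch: contacts as an undirected graph, ranked by
# degree centrality. Data and measure are illustrative assumptions.
from collections import defaultdict

def build_graph(contacts):
    """Turn a list of (a, b) contact pairs into an adjacency map."""
    graph = defaultdict(set)
    for a, b in contacts:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def most_connected(graph, n=1):
    """Rank people by number of distinct contacts (degree centrality)."""
    return sorted(graph, key=lambda p: len(graph[p]), reverse=True)[:n]

contacts = [("ann", "bob"), ("bob", "carol"), ("bob", "dave"), ("carol", "dave")]
print(most_connected(build_graph(contacts)))  # ['bob'] - the hub of this network
```

Real systems would use far richer measures than degree, but the point stands: the interesting output is not any one record, it is the shape of the network.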
These models run on everybody's data. That's really important: if you don't know what the standard patterns are, how will you recognise a deviation? You therefore need to monitor the school run just as you monitor a potential suspect casing a possible target. Anomalies in behaviour - outliers - are interesting, but they are relative positions, and to recognise them you need a baseline to measure against. Now, that's not the same thing as saying that they're spying on people. But their computers would have to trawl through all the data in order to build the patterns and models that help them identify persons of interest.
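The baseline argument can be sketched in a few lines of code. This is a toy illustration under my own assumptions - each person reduced to a single activity score, and a simple z-score threshold - not the NSA's actual method; its only point is that the outlier is defined by the whole population, so the "ordinary" records cannot be left out.

```python
# Toy sketch: outliers are relative to a baseline computed over everyone.
# The z-score method and threshold are illustrative assumptions.

def zscores(values):
    """Standardise each value against the mean and spread of the population."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    std = variance ** 0.5
    return [(v - mean) / std for v in values]

def flag_outliers(values, threshold=3.0):
    """Return indices that deviate strongly from the baseline.

    The baseline must cover the whole population - discard the
    'school run' records first and there is nothing to deviate from.
    """
    return [i for i, z in enumerate(zscores(values)) if abs(z) > threshold]

# 999 ordinary users plus one wildly unusual one
population = [10.0 + (i % 5) for i in range(999)] + [500.0]
print(flag_outliers(population))  # [999] - only the anomaly is flagged
```

Drop the 999 ordinary records and the same 500.0 is no longer anomalous at all: the deviation only exists relative to everyone else's data.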
Once those determinations are made, they can make requests of Google, Facebook, and whomever else to get more detailed and specific information and proceed with an investigation. The tech companies are anxious to tell us how many such requests they have received, because they know how inconsequential that number is likely to seem. What they are not discussing is how the NSA figured out which requests to make.