Monthly Archives: November 2014

Federated Authorization

For a while now the use of federated userids has been the norm. A site, called the Service Provider, SP, enters into an agreement with another, called the Identity Provider, IP, that it will trust the userid the IP sends it as being properly authenticated and as accurately identifying the user.
There are a number of ways this can be done; passing a SAML token from the IP to the SP is a popular one. The SAML token contains the userid, and the SP trusts this because the token is signed by the IP.
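As a rough sketch of what the SP's side of that trust boils down to, here is a bit of Python. It is not real SAML processing; the assertion bytes, the signature and the key name are invented for illustration, and RSA with SHA-256 is simply assumed for the signing.

# Illustrative only: the SP trusts the asserted userid solely because the
# signature verifies against the IP's published public key.
# Uses the "cryptography" package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_public_key
from cryptography.exceptions import InvalidSignature

def trusted_userid(assertion: bytes, signature: bytes, idp_public_key_pem: bytes):
    """Return the asserted userid if the IP's signature is valid, else None."""
    idp_key = load_pem_public_key(idp_public_key_pem)
    try:
        idp_key.verify(signature, assertion, padding.PKCS1v15(), hashes.SHA256())
    except InvalidSignature:
        return None
    # In real SAML the userid sits inside an XML assertion; here we pretend
    # the signed payload is simply the userid itself.
    return assertion.decode("utf-8")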

So much for users and their userids, but what about what they have access to: the authorization?
Userids are tied to revenue in significant ways. In businesses deriving revenue from gathering information about their users, the userid is clearly of primary concern. More generally, in business applications it is not the userid itself that is central, but what the user has access to and how that access control is enforced.

Consider a standard access control deployment with federated users: the user logs in somewhere else (at the IP) and clicks a link to (return to) the application in question, with the userid being passed along. This is a federated login. The application (the SP) still examines the userid and refers to its own access control infrastructure to determine what the user is authorized to do. Could this step be federated too? Should it?
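To make the division of labour concrete, here is a small sketch of the status quo; the userids, actions and permission table are entirely made up.

# Sketch of the status quo: identity is federated, authorization is not.
# LOCAL_PERMISSIONS stands in for the SP's own access control infrastructure.
LOCAL_PERMISSIONS = {
    "alice@partner.example": {"invoices:read", "invoices:approve"},
    "bob@partner.example": {"invoices:read"},
}

def handle_federated_login(asserted_userid: str, requested_action: str) -> bool:
    # The userid is taken on trust from the IP (federated login)...
    permissions = LOCAL_PERMISSIONS.get(asserted_userid, set())
    # ...but what the user may do is still the SP's own decision.
    return requested_action in permissions

print(handle_federated_login("bob@partner.example", "invoices:approve"))  # False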
Just as a login and authentication infrastructure costs money, so does maintaining an access control infrastructure. Federated login cuts the cost of the first; can Federated Authorization, FA, cut the second?
From the cost point of view, FA makes sense.
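Federated Authorization would move the lookup out of the SP entirely. A sketch of what that could look like, assuming, purely for illustration, that the IP includes entitlements alongside the userid in the signed token:

# Sketch of Federated Authorization: the SP honours entitlements asserted by
# the IP instead of consulting its own access control store.
ASSERTION = {
    "userid": "alice@partner.example",
    "entitlements": ["invoices:read", "invoices:approve"],  # asserted by the IP
}

def handle_federated_authorization(assertion: dict, requested_action: str) -> bool:
    # No local lookup: whatever the signed, trusted token says, goes.
    return requested_action in assertion.get("entitlements", [])

print(handle_federated_authorization(ASSERTION, "invoices:approve"))  # True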

Can it be done, and does it make sense from a policy and governance point of view? It is one thing to let someone else identify the user on your behalf. Digital certificates have been around for a while, so the issues surrounding outsourcing the work of identifying users are not new, and the risks are largely accepted. But authorization cuts close to the essentials of governance: what are people permitted to do? True, if the user is misidentified, having a thorough access control system does not make much difference.
One significant issue is change. A userid doesn’t change much, if at all. What a user has access to does, sometimes with great urgency. Having direct control of authorization is then desirable.

The day-to-day problem with access control is not primarily the enforcement, though that can be tricky enough. It is the creation of and updates to policy, the rules governing the access: who gets to decide, and how can those decisions be captured in an enforceable way?
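One way to capture those decisions in an enforceable form is to keep the policy as plain data, separate from the code that enforces it, so the people who get to decide can change the rules without touching the application. A toy illustration, with roles and actions invented:

# Toy illustration: policy as data that the deciders can edit and audit,
# with enforcement reduced to a mechanical lookup.
POLICY = {
    "invoices:approve": {"finance-manager"},
    "invoices:read": {"finance-manager", "auditor"},
}

def allowed(user_roles: set, action: str) -> bool:
    # Enforcement is trivial once the decisions are written down as data.
    return bool(POLICY.get(action, set()) & user_roles)

print(allowed({"auditor"}, "invoices:approve"))  # False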

another Silk Road takedown

Here we go again… The powers that be have taken down some evildoers hiding behind TOR. I must ask pardon for being a bit jaundiced about the whole thing.
Clearly there are people doing undesirable things taking advantage of the protection offered by TOR. But TOR has a legitimate purpose, and the support behind TOR is impeccable. TOR is not perfect, though; specifically, it is not foolproof. User error compromises it.
But I’m wondering if there is another way for the vendors on Silk Road to conduct their business. Leaving aside the problems of payment for the moment: Bitcoin has its own weaknesses.
Let’s concentrate merely on how two parties can get in touch with each other: where A is looking for something that B has to offer, put very generically.

If my understanding of the attack against Silk Road is correct, it is based on having control over a significant portion of the onion routers; sufficient to establish a pattern in the traffic from yourself to a particular .onion address.
Each request packet is routed every which way by the TOR protocol. But with control of enough routing nodes a pattern will nevertheless emerge, since the request packets must eventually end up in the same place: where the concealed service is hosted.
This last bit is true even without implementation errors, DNS leakage and the like.
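To see why, here is a toy model, nothing like the real TOR protocol and with all numbers and names invented: an observer controlling a fraction of the relays only ever sees fragments of each circuit, yet the one endpoint that never varies still stands out in the tally.

# Toy model: an adversary controls 30 of 100 relays. Circuits take random
# middle hops, but every circuit ends at the host of the hidden service.
import random
from collections import Counter

RELAYS = [f"relay-{i}" for i in range(100)]
OBSERVED = set(random.sample(RELAYS, 30))   # relays the adversary controls
HIDDEN_SERVICE_HOST = "hidden-host"         # where the concealed service really runs

def build_circuit() -> list:
    # Three random middle relays, then the fixed final destination.
    return random.sample(RELAYS, 3) + [HIDDEN_SERVICE_HOST]

sightings = Counter()
for _ in range(10_000):
    circuit = build_circuit()
    for hop, next_hop in zip(circuit, circuit[1:]):
        if hop in OBSERVED:
            sightings[next_hop] += 1   # a controlled relay sees who it forwards to

# The fixed endpoint dominates the count: it is in every circuit, the rest is noise.
print(sightings.most_common(3))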
What if the hidden service is not hosted on just one particular server, but on many different ones, so that using the service involves traffic to many different locations? Is this possible, and would it make a service concealed behind an onion routing scheme undetectable?
That depends. If the hidden service is hosted on a fixed set of servers rather than just one, there is no real difference; it just takes more traffic analysis to pinpoint them. This could be counteracted by moving the service often, but that is a defence mechanism independent of how many hosts the hidden service is using. And controlling more onion routers will still help the tracker.

Trusted VPNs are of course an option, where both the client and the hidden service use a VPN to tunnel to some other part of the internet. The tracker is then left with finding a proxy rather than the real host. It is still possible that the tracker might use traffic analysis to get from the proxy to the real host, as long as the service stays put long enough.

This suggests a possible course of action. Can the hidden service be dynamically located? Perhaps even randomly.
I think I’ll work on that. A fascinating challenge. Watch this space.