When you hear the word “permission,” what’s the first thing that comes to mind?
For me, it’s the image of me telling my kids not to get into mischief. It seems so obvious that permissions in the computer world would be the same: “A user is or is not allowed to do something”, right?
We have all seen this before: User XYZ is (or is not) allowed to read from this file. In other words, it answers the question of who is allowed to do what. But is that really all there is to it?
At first glance this does seem to cover everything, but more often than not, in order to decide whether you should be allowed to access a file or a system while maintaining sufficient security, the system also needs to know from where and when you are trying to access it. For example: if a system is not meant to be accessed from certain locations, say a government-related website, then regardless of who you are, that access should be blocked. Another example: a secretary trying to log in to their account at, say, 02:30 AM is so abnormal that the identity of the user hardly matters anymore, and that access should also be blocked.
Today, most systems are already protected by rules that govern what a user (the who) can do. Many systems are also protected by rules based on location, i.e. the where (yes, I am aware that accurately detecting a user’s location is problematic because of proxies and VPNs, but at least these systems do the best they can to detect it correctly). Fewer still use SIEM systems (or others with similar capabilities) to issue an alert when users connect to the system at unusual hours (the when) or from unusual locations (the where).
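To make the who/where/when idea concrete, here is a minimal sketch of a context-aware access check. The user list, allowed countries, and working hours are all hypothetical placeholders; a real system would load its policies from configuration and would not trust a country code this naively.

```python
from datetime import time

# Hypothetical policy values for illustration only.
PERMITTED_USERS = {"alice", "bob"}         # the "who"
ALLOWED_COUNTRIES = {"US", "CA"}           # the "where"
WORK_HOURS = (time(7, 0), time(19, 0))     # the "when"

def is_access_allowed(user: str, country: str, login_time: time) -> bool:
    if user not in PERMITTED_USERS:
        return False   # who: not an authorized user
    if country not in ALLOWED_COUNTRIES:
        return False   # where: blocked regardless of who is asking
    if not (WORK_HOURS[0] <= login_time <= WORK_HOURS[1]):
        return False   # when: a 02:30 AM login looks abnormal
    return True

print(is_access_allowed("alice", "US", time(2, 30)))   # False, outside work hours
print(is_access_allowed("alice", "US", time(10, 15)))  # True
```

Even this toy version shows how each extra factor narrows what a valid identity alone would otherwise permit.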
In this post I would like to argue that there is another factor that should be considered: the why. When you think about it, it is probably the most important factor of all. For example, the law obviously forbids killing people, but there are exceptions to that rule, and they are all based on that one factor: the purpose. In most countries you are permitted to kill someone who is trying to kill you (and to be honest, I don’t know of any country that differs in that respect), and there are many other examples of purpose changing what is allowed.
And yet, in the computing world, we still don’t use this important factor at all. When you install a new app on your phone, you grant it permissions: to use the camera (Instagram, for example) or the microphone (Waze, for example), but we never stop to think about what the app needs them for. You give Instagram camera permission so it can take and upload your great photos, but in fact it now has permission to use the camera ALL THE TIME, even while you are asleep, busy with other things, or using other apps. You give Waze microphone permission so you can tell it where you want to go, but now it can use the microphone ALL THE TIME…
So why don’t the companies that dominate this area of technology add this? Well, because it’s hard, very hard. How can you be sure that an app uses a specific resource ONLY for the purpose you allowed?
In this post I would like to challenge everyone who reads it to harness the power of AI for this problem. It is possible to build metrics that indicate how a feature or resource is being used, and then use those metrics to decide whether an app is ALLOWED to use that feature or resource for that specific purpose. At least some of the signals needed to achieve this ambitious goal are already available. If the phone is not in anyone’s hand (i.e. it’s not moving at all), it isn’t normally used by other apps at this time of day, it’s lying flat, and it’s connected to a charger, what are the chances that the user is in fact asleep, and that apps with permission to use the camera to upload images to the web should not have access to the camera right now?
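Here is a minimal sketch of that heuristic. The sensor fields, thresholds, and the “upload_photos” purpose label are all assumptions made up for illustration; a real implementation would learn the user’s usual hours from history rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class PhoneState:
    acceleration: float       # movement beyond gravity, in m/s^2
    is_horizontal: bool       # lying flat on a surface
    is_charging: bool
    hour_of_day: int          # 0-23
    usual_usage_hours: range  # hours when this user normally uses the phone

def user_probably_asleep(state: PhoneState) -> bool:
    # All four signals from the paragraph above: still, flat, charging,
    # and outside the hours this phone is normally in use.
    return (
        state.acceleration < 0.05
        and state.is_horizontal
        and state.is_charging
        and state.hour_of_day not in state.usual_usage_hours
    )

def camera_access_allowed(state: PhoneState, purpose: str) -> bool:
    # If the user is probably asleep, there is no plausible "why" for a
    # photo-sharing app to be using the camera right now.
    if purpose == "upload_photos" and user_probably_asleep(state):
        return False
    return True

night = PhoneState(0.0, True, True, hour_of_day=3, usual_usage_hours=range(7, 23))
print(camera_access_allowed(night, "upload_photos"))  # False: deny the camera
```

The point is not these particular rules but the shape of the decision: the purpose the user granted, checked against the context the device can observe.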
Furthermore, sensors that measure, for example, the warmth of the hand holding the phone, together with other sensors and some basic ground rules, can help the system predict which access will be needed and automatically deny a request that appears abnormal, as in the sketch below.
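As a hedged illustration of what “appears abnormal” might mean, here is a tiny anomaly score over a few sensor readings. The feature names (grip warmth, motion, screen state), the weights, and the threshold are invented for this example; in practice they would be learned from the user’s own history of legitimate use.

```python
from typing import Dict

# Typical readings observed when this user legitimately uses the camera
# (hypothetical baseline), plus per-feature weights and a cutoff.
BASELINE: Dict[str, float] = {"grip_warmth": 32.0, "motion": 0.6, "screen_on": 1.0}
WEIGHTS:  Dict[str, float] = {"grip_warmth": 0.1, "motion": 1.0, "screen_on": 2.0}
THRESHOLD = 2.5

def anomaly_score(reading: Dict[str, float]) -> float:
    # Weighted distance from the baseline: the larger, the more unusual.
    return sum(
        WEIGHTS[name] * abs(reading[name] - BASELINE[name])
        for name in BASELINE
    )

def should_deny(reading: Dict[str, float]) -> bool:
    return anomaly_score(reading) > THRESHOLD

# Camera requested while the phone is cold, still, and the screen is off.
suspicious = {"grip_warmth": 22.0, "motion": 0.0, "screen_on": 0.0}
print(should_deny(suspicious))  # True: the request looks abnormal, deny it
```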
Personally, I think the best AI for such a case would be NuPIC by Numenta: because it is a biologically constrained machine learning system, it would be easier for the user to understand why a certain access was blocked. But more on that in another post…