Jorge Villalobos
Ideas, technology and the occasional rant

Wednesday, May 17, 2006

Improving security for people - Part 1

Software security is a very big deal. It's constantly on everybody's minds. Operating system, enterprise application and small application developers alike are very concerned (or should be very concerned) with creating products that are both stable and secure. Otherwise, they'd better pray their products remain unpopular and unimportant. The Mozilla Foundation is learning this with its increasingly popular Firefox browser. I completely vouch for Firefox's security (now), but a great number of stories have been published downplaying its powerful security due to a series of vulnerability disclosures. Mozilla has always responded swiftly, and to this day I haven't seen a single successful exploit affecting a Firefox user. Exploits have been created, but they lacked any repercussion beyond making headline favorites for certain "news" sites. But the fact remains: the vulnerabilities were there and they were exploitable, just like in most browsers and applications.
This "IE vs. Firefox" security bout has its parallel in the "Windows vs. Linux, Mac OS X, or pretty much any other OS" security bout. Viruses, spyware, Internet worms, you name it: Windows has them all. The other OSs don't lack security issues either, but those issues haven't come remotely close to what Sasser did in its time. Still, many wonder if the alleged security of these systems is due to their lack of usage by the average Joe, who has proved to step into every exploit known to man. I am one of them (the people who wonder about it, not the Joes). These systems are more secure than Windows, I'll give you that with little hesitation. But if we were to replace every installation of Windows with Mac OS X or Ubuntu or any other, and assuming everyone would instantaneously learn to use it as easily as Windows, what would happen? Immediate security for everybody? That's laughable. A considerable security boost for everybody? I'm not so sure.
The main motivation of black hat hackers today is to gain instant worldwide popularity for their feats. Whoever's on top will be hacked, no matter what. What makes me so sure? This: software will never be perfect. Not while I'm still out there coding, at least. Most of you will agree with me. It is inevitable. Meticulous specification, proper design and thorough testing will minimize bugs, but they will not eliminate all of them. And bugs are just the tip of the iceberg.
It's not very common to find security exploit postings that say something like: "This exploit will allow an attacker to read the user's files". I think it's more common to read: "This exploit will allow an attacker to gain unlimited access to the operating system and run arbitrary code". Why is this more common? Well, that's easy. Application programs are allowed to do too much in the system. The problem: security rights and grants are ridiculously simple.
Let's take a common Internet browser as an example. Modern browsers need to be able to run Flash and Java applications, show PDF files (which are essentially scripts) and sometimes even open other more complex types of document. Using current security measures this means that a browser should be allowed to do pretty much anything it wants. Executing arbitrary scripts is by far the most insecure activity possible and the browser has to be able to do it, period. This means that any attacker that successfully takes control of the browser (through some bug or other type of exploit) will usually be able to do everything the browser does, which is, as I said, everything.
We should all be grateful that the average Joe has no idea how software works or should work. Otherwise he would be extremely angry when a hacker takes control of his computer just because he clicked some malformed link. Some will say: "If he knew anything about anything he wouldn't have clicked on the damn link in the first place". Well, I don't know about you, but I don't look at the status bar every single time I click on a link. I do look when I'm not sure where it's leading, but accidents happen, and they happen to everyone.
What shouldn't happen is the utter lack of containment within the system. If I click on a link, is it normal for the browser to execute a shell script? rm -rf comes to mind... Is it normal for it to execute anything? No, it isn't. Except for a few well-defined cases, you can be sure the browser is not going to execute anything when the user clicks on a link. So why not contain it?
I'm no expert in security, but I have a brain, so here it goes: operating system and user application security should try to resemble normal human security. I'll give you a simple analogy: let's say you're in your house. In your house you have permission to walk around pretty much anywhere, and that's normal. In your house there are all sorts of tools, such as knives or hammers, which you may use. Is it normal for you to walk around your house with a knife or hammer in hand? If you're living with someone else, that person is definitely going to ask you what the hell you're doing. Because it is not normal. You may walk around with a hammer, but that only happens when you have to fix something (which should be rare) and only when you are in fact going to the place where you have to fix that thing, or when you're coming back. It's a very localized circumstance. Other than that, you don't walk around with a hammer. That's it. To sum it up: having access and permission to some resource does not imply having access or permission to that resource at all times.

The Program Safe Zone: Give me the usual

Application programs have far too many liberties and too much access. If we want programs to be secure, tighter restrictions must be established so that the system can easily recognize security breaches. An important concept to deal with is the "normal" domain in which a program works and to which it should have access. Think of your daily routine. There's a set of objects you frequently access and activities you frequently perform that constitute your "normal" daily activities. People around you won't find it strange to see you doing any of those, because they know they're the usual. The same concept should apply to programs. There's a very well defined set of objects and activities a program usually engages with, and this is what I call the Program Safe Zone.
This is what I think should be included in the Program Safe Zone:
  • The program folder. This is the location of the program executable binary or script, and includes all files within the folder and its subfolders. Here one would usually find binaries, libraries and program-wide configuration files. The general rule for this region is that the program should only have read and execute access for all files. Exceptions to this rule would be installation of patches and changes in program-wide configuration, where write access is needed. I think the latter should be avoided as much as possible; most modern applications handle configuration at the user level, and program-wide configuration is usually performed once at installation time and registered elsewhere (see next point: Configuration registry). Either way, write access to the program folder can be restricted to very specific use cases.
  • Configuration registry. This has been more formalized on Windows systems with the infamous Registry, but it has also been adopted by other systems, so it's worth mentioning. This is a centralized configuration repository where programs can register configuration settings in a few formats. Programs should have read and write access to specific locations within the registry, and should only be able to write to it on well-defined cases.
  • User configuration. Some user-level configuration is kept in user profile folders, and some is kept in the configuration registry. The latter is covered in the previous point. Both are necessary because the configuration registry holds relatively small pieces of data, while the user configuration folders allow storage of an arbitrary number of files of arbitrary sizes and formats. Programs should have read and write access to these folders with little restriction. Programs like web browsers are constantly reading and writing in these folders, storing history, bookmarks, cookies, etc. Execution should be avoided as much as possible. In the case of user-level plugins and extensions, these should be registered as libraries (see next point: Libraries) so that they have special execution permissions. They can still be stored in the user configuration folders, but can only be executed through library interfaces.
  • Libraries. Most modern programs require a somewhat large set of system and third-party libraries to function. These libraries can be dynamically linked during execution, but they are very well known at installation time and modified only at patching and plugin installation time. This means the system can enforce very strict execution restrictions, allowing a program only to execute the binaries and libraries it registered at installation time. Libraries should have execution access only.
  • The temporary folder. For whatever reason, programs often need to write temporary files to disk. This is very useful for word processors, for example, to keep a backup copy of the unsaved in-memory document that can be recovered in case of power or system failure. All of these files should be written to a centralized location, which already exists on Windows and most (if not all) Unix systems. Programs should have only read and write access to a specific subfolder of the temporary folder, assigned by the system.
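To make the idea concrete, here's a minimal sketch of how a system might represent a Program Safe Zone as the list above describes it: a table mapping folder prefixes to the permissions the program holds there. All names, paths and the `SafeZone` class itself are invented for illustration; a real implementation would live in the kernel or a security layer, not in application code.

```python
from pathlib import PurePosixPath

class SafeZone:
    """Hypothetical per-program access policy: maps folder prefixes
    to the permissions ('r', 'w', 'x') the program holds there."""

    def __init__(self, rules):
        # rules: {path_prefix: string of permission letters}
        self.rules = {PurePosixPath(p): set(perms)
                      for p, perms in rules.items()}

    def allows(self, path, perm):
        # A path is allowed if some registered prefix contains it
        # and that prefix carries the requested permission.
        path = PurePosixPath(path)
        return any(perm in perms and (path == prefix or prefix in path.parents)
                   for prefix, perms in self.rules.items())

# A browser's Safe Zone, following the list above (paths are made up):
zone = SafeZone({
    "/opt/browser":       "rx",  # program folder: read + execute only
    "/home/joe/.browser": "rw",  # user configuration: read + write
    "/tmp/browser-1234":  "rw",  # temporary subfolder assigned by the system
})

print(zone.allows("/opt/browser/browser.bin", "x"))  # True
print(zone.allows("/opt/browser/browser.bin", "w"))  # False: no patching by default
print(zone.allows("/etc/passwd", "r"))               # False: outside the Safe Zone
```

Note how the last check fails even though a real filesystem ACL would probably allow the read; the Safe Zone is deliberately narrower than the user's own permissions.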
Note I haven't mentioned user files, such as documents. These are outside the Safe Zone, even though they are of regular use in most applications. The next section better explains handling of files and other objects outside the Safe Zone.

Outside the Safe Zone: Authorized personnel only

What should a program be allowed to do outside its Safe Zone? Current file handling APIs allow you to open arbitrary files in the system, provided you have proper access permissions. This is very dangerous, especially because most system-critical files are found in well-defined locations and most users work with administrative access on their computers. A program can easily cripple a system because it has free access to all of the system's files, unnecessarily, in my opinion.
Programs should not be allowed to open files outside their Safe Zone. The system should recognize these breaches and return access errors. Programs should know nothing past their Safe Zone. What about user files and documents, then? They are added to the Safe Zone, by user request, through system APIs.
Current high-level file selection APIs do the following:
  1. Program calls the file API, to show a Select File dialog.
  2. Dialog is shown to the user.
  3. User selects any file (using masks and whatever) and submits.
  4. The API returns the path to the file to the program.
The program can then open the file using the provided path. It could just as well open any arbitrary path, because the API allows it. It could also write to the selected file, even if the user thought the program would open it read-only. This gives a lot of freedom to malicious applications, granting them access to the whole filesystem, or most of it. I think APIs should work in the following way:
  1. Program calls the file API, to show a Select File dialog. The program indicates the access grants it requires for the file.
  2. Dialog is shown to the user. Optionally it should show the user the type of access the program is requesting for the file (read, write, execute or any combination).
  3. User selects any file (using masks and whatever) and submits.
  4. The API adds the file to the program's Safe Zone, with the access grants requested by the program.
  5. The API returns the path to the file to the program.
Notice that the program will get an access error if it tries to open the file anytime before step 4. This effectively closes the system for the program, unless the user explicitly indicates otherwise.
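The five steps above can be sketched in a few lines. This is a toy model, not a real API: `select_file`, `open_file`, the grant table and the simulated user choice are all invented here to show the ordering that matters, namely that the grant is recorded in step 4, before the path is ever returned to the program.

```python
# Per-program grant table the system would keep (path -> granted permissions).
safe_zone = {}

def select_file(requested):
    """Steps 1-5: show the dialog (simulated here), record the grant,
    then hand the path back to the program."""
    path = "/home/joe/report.odt"      # step 3: simulate the user's choice
    safe_zone[path] = set(requested)   # step 4: grant added BEFORE returning
    return path                        # step 5

def open_file(path, mode):
    """Every open is checked against the Safe Zone, not filesystem ACLs."""
    if mode not in safe_zone.get(path, set()):
        raise PermissionError(f"{path!r} is outside this program's Safe Zone")
    return f"<handle {path} mode={mode}>"

# Opening before any selection fails, exactly as described above:
try:
    open_file("/home/joe/report.odt", "r")
except PermissionError:
    print("access error")   # the system is closed to the program

doc = select_file("r")      # user picks the file; the program asked to read only
open_file(doc, "r")         # now allowed
# open_file(doc, "w") would still raise: the grant was read-only.
```

The last comment is the other half of the gain: even for a file the user handed over, the program only gets the access it declared up front in step 1.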
There's a series of issues here that have to be dealt with. First, some programs don't handle single files but rather have project folders. This is very common with compilers. The same concept can be extended for these cases, adding the folder, subfolders and their files to the Safe Zone, with user approval, that is. Some security is lost here, as a compiler would have to ask for a folder with all permissions (read, write and execute), but most applications handle single files with read and write access, so the gain is considerable in the average case. Secondly, the appearance of the dialog has to be somehow templatable. Some applications such as games require a more customized look but are still bound to the rules of the system. No file opening will happen without going through this interface.
Another issue is non-graphical applications. Yes, I haven't forgotten you, my monochromatic friends, even though I tend to. This is a complicated issue that might be solved by offering different "flavors" of the file selection interface. In the case of very simple console applications, the file API could show a line stating the access grants requested, and on the next line the user can manually input the path to the file. It has the added benefit that it could allow consistent path autocompletion on file inputs. But as I said, this only works for the simplest cases. Many console applications have a semi-graphical text-only interface that makes things more complicated. Most of these have very different looks and there's no way to create a dialog that agrees with all of them. An option would be for the system to overlay a frame with a box where the user can type the file name. It's not pretty, I know, but it would be necessary to have everyone using the proper API.
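The simple console flavor described above is easy to picture. In this sketch everything is hypothetical (the function name, the prompt format, the grant table); the point is only that the same step 4 from the graphical flow happens here too, between the prompt and the return.

```python
import io

safe_zone = {}  # the per-program grant table, as in the dialog case

def select_file_console(requested, stream):
    """Hypothetical console 'flavor' of the file-selection API: one line
    states the requested grants, the next is the user's typed path."""
    print(f"[system] program requests '{requested}' access; enter path:")
    path = stream.readline().strip()
    safe_zone[path] = set(requested)   # step 4, same as the graphical dialog
    return path

# Simulate the user typing a path on stdin:
user_input = io.StringIO("/home/joe/notes.txt\n")
path = select_file_console("r", user_input)
print(path in safe_zone)   # True
```

A real version would read from the terminal directly (and could offer path autocompletion there), but the contract is identical.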
We also have to consider what happens when a program receives file names as execution parameters. The most common case of this happens when a user double-clicks on a document and it is opened with the associated program. The system is actually calling the program with the clicked file name as a parameter. Another case is with console applications, where you can type commands like "ls jorge/*.txt", to list the files in a directory optionally using a special mask. In these cases the system can somewhat safely assume the user has explicitly asked the program to open the specified files, so the file opening process would begin on step 4. This implies that the program has to register that it can receive file paths as parameters, and the system will have to identify them and add them to the program's Safe Zone before allowing the program to use them. Shells already have to identify paths in order to solve path masks, so this is probably not so hard to implement.
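Since the shell already expands masks like jorge/*.txt into concrete paths, pre-registering those paths is a small extra step. Here's a rough sketch of what a launcher could do; `launch` and the grant-table shape are made up for illustration, and the real work of actually exec'ing the program is elided.

```python
import glob
import os
import tempfile

def launch(program_zone, argv):
    """Hypothetical launcher step: identify path arguments (the shell
    already does this to expand masks) and register them in the
    program's Safe Zone with read access before the program runs."""
    for arg in argv[1:]:
        # glob.glob returns [] for non-matching literals; keep those as-is.
        for path in glob.glob(arg) or [arg]:
            program_zone[path] = {"r"}   # the user asked for these explicitly
    # ...then exec the program with the expanded argv...

# Demo: an 'ls jorge/*.txt'-style invocation against a scratch directory.
d = tempfile.mkdtemp()
open(os.path.join(d, "a.txt"), "w").close()
open(os.path.join(d, "b.txt"), "w").close()

zone = {}
launch(zone, ["ls", os.path.join(d, "*.txt")])
print(sorted(os.path.basename(p) for p in zone))  # ['a.txt', 'b.txt']
```

Whether write or execute access should ever be pre-granted this way is a policy question; read-only is the conservative default in this sketch.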
Finally, there's the recent documents list. Most programs that handle user files provide a short list of recently used files the user can open with a single click. It's a useful feature that should not be affected by these new security measures. I think the best way to handle this is to delegate to the system the task of storing these paths. Programs are allowed to add or remove any file in their Safe Zone to an extended Safe Zone list (a small queue seems appropriate) handled by the system. When the program is executed, the system looks up the list for that program and that user, adding all paths in the list to the program's Safe Zone. This way, the program can directly open the files when the user selects them from the recent documents list.
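The "small queue" idea maps directly onto a bounded deque. As before, the class name, sizes and paths below are invented; the sketch only shows the two operations the system would need: remembering a granted path, and folding the remembered grants back into the Safe Zone at launch.

```python
from collections import deque

class RecentList:
    """Hypothetical system-kept 'extended Safe Zone': a small queue of
    (path, permissions) pairs per program and user, restored at startup."""

    def __init__(self, maxlen=5):
        self.queue = deque(maxlen=maxlen)   # oldest entries fall off the end

    def add(self, path, perms):
        # Called when the program adds a Safe Zone file to its recent list.
        self.queue.append((path, set(perms)))

    def restore(self, safe_zone):
        # Called by the system at program launch: re-grant every
        # remembered path so recent documents open with one click.
        for path, perms in self.queue:
            safe_zone[path] = perms

recent = RecentList(maxlen=3)
recent.add("/home/joe/a.odt", "rw")
recent.add("/home/joe/b.odt", "rw")

zone = {}
recent.restore(zone)
print(sorted(zone["/home/joe/a.odt"]))  # ['r', 'w']
```

The bounded length is doing security work here: an attacker who compromises the program can't use the recent list to accumulate grants to the whole filesystem over time.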

More to come

I think the discussed changes would go far in improving application security, but this is just half of what I have in mind. There's a second part of this discussion coming soon, which will fill several (but not all) of the holes in my theory of improved security. As you may have noticed, it's very user-oriented and severely limits the freedom of user programs within a system. This is something I don't see being implemented on many systems. But if these ideas (or anything like them) ever make it to the public, then I'll be very happy.
Stay tuned for part 2, where I will get more general and you'll see how the bigger picture makes this first half more coherent.



  • If the browser (I mean Firefox...) is not corrupt, I don't see any problem with it being able to use resources as long as they are channelized, i.e. if you have a plugin to display PDF it wouldn't do anything fishy unless you get it from a shady place.

    I'm no browser expert, but if what you said is true about the exploits then the browser should run some form of BSD jail to quarantine itself in case of an attack.
    Alternatively its capability can be made restrictive (in capability-enabled *nix systems)...

    By Blogger Sridhar, at 5/17/2006 7:23 PM  
