Monday, May 2, 2011

GSoC 2011 Project: FreeBSD Path-based file system MAC policy

I've been accepted into Google Summer of Code (2011) once more. In case you're curious, I was also accepted into Google Summer of Code 2008; this summary email to the OpenChange mailing list describes what I accomplished that summer with the OpenChange and KDE projects and my mentor, Brad Hards.

This year I applied to the FreeBSD project and you can find a description of my proposal here.

I've already started talking to my mentor (Pawel Jakub Dawidek) and we're in the brainstorming phase.

There are some interesting obstacles that I will be facing this summer that I am looking forward to overcoming. To begin with, apart from starting to write a small hobby OS for educational purposes a long time ago, I've never really done any kernel-level development. I have read a lot about OS design and implementation principles and have read through kernel sources, and I'm currently reading The Design and Implementation of the FreeBSD Operating System.

Engineering-wise, the first and most challenging obstacle I'm currently facing is that the MAC framework doesn't forward any path metadata to its policy modules. So I'm trying to find a clean and modular way to get that information to the module; once I do, I can match it against the vnodes of the files whose access needs to be restricted.

I will try to keep posting as much as I can as the project develops.

As a bonus, here's a tool I've been using for browsing the FreeBSD sources. It's an LXR-based FreeBSD and Linux Kernel Cross-reference site. It also has the sources for other operating systems such as NetBSD, Darwin, Plan 9 and MINIX.

Monday, December 28, 2009

Preliminary screenshot

I've been working hard on the Utility Layer in order to get the Files utility working.

My goal is to have the Files utility finished for the 0.1 release by the first week of January (and it seems like I should be able to make it).

I just wanted to quickly post a screenshot of where I am so far.
There's a lot to explain as to what's going on behind the scenes because I've added a lot of stuff, so I'll save that for later.

Here's the screenshot:

[screenshot]

Saturday, November 28, 2009

Utilities++

I've gotten a lot done this weekend on getting workplace and utility data synchronized between the client and server on login.

The task was easier than I expected, but still a lot had to be done.

It won't seem like much in this screenshot, but believe me, it was quite a bit of work:

[screenshot]

It might actually seem like less is going on now than in our previous screenshots.

Previously, all of the Utilities and Workplaces that showed up in the GUI were loaded manually.

What I've done this weekend is add a few messages to our client<->server protocol that synchronize workplaces and utilities over the network when a client logs in. Also, the utilities you see now are actual utility plugins, not just made-up utilities.

Another thing here is that the Utility icon is actually part of the utility plugin binary, and Mira core gets it from the plugin at run-time.
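
For anyone wondering how that works: the icon lives in a Qt resource file (.qrc) that gets compiled into the plugin binary, and the core just asks the plugin for it through one of the interfaces. Here's a rough sketch of the idea; the icon() method and the resource path are made up for illustration (and I'm reusing the FilesUtility example class from a previous post), only the Qt resource mechanism itself is the real part:

#include <QIcon>

// A hypothetical addition to one of the plugin interfaces:
//     virtual QIcon icon() const = 0;
//
// In the plugin, the icon comes from a .qrc resource compiled into the plugin
// binary, so it's available as soon as the plugin library is loaded:
QIcon FilesUtility::icon() const
{
    return QIcon(":/files_utility/icon.png");   // made-up resource path
}

// In Mira core, after obtaining the plugin instance, roughly:
//     toolbar->addAction(utility->icon(), utility_name);
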

Also, this weekend I finally got a new laptop and I have a dual-boot setup with Windows 7 on it.

So, my next task is to set up a dev environment around Microsoft Visual C++ Express 2008 and get Mira to compile with it. I tried once a long time ago, but gave up because a lot of code had to be rewritten.

However, this code has already been removed, so I don't expect this to take too long this time around.

Also, all the work I've done on the plugin side has been tested on Linux, but not on Windows, which prevents me from pushing these changes to trunk according to the rules we laid out a while ago in a meeting.

After I'm done getting Mira to work with Visual C++, I'll then work on a simple Files utility and get ready for our 0.1 release :).

I expect us to be able to release 0.1 before mid-January if we keep up this pace.

Wednesday, November 25, 2009

Mira Client Utilities

I just finished laying the framework for Utilities (Plugins) on the Client.

The design is more object-oriented and cleaner than the server's, thanks to QPluginLoader.

This is an example of what a plugin looks like:


#include "UtilityInterface.h"
#include "GuiInterface.h"
#include "NetworkInterface.h"

class FilesUtility : public QObject, public UtilityInterface, public GuiInterface, public NetworkInterface
{
    Q_OBJECT
    Q_INTERFACES(UtilityInterface GuiInterface NetworkInterface)

    public:
        FilesUtility();
        ~FilesUtility();

        // Set up this utility's widgets inside the layout provided by Mira core (GuiInterface).
        bool initialize_gui(QLayout* qt_layout);

        // Handle a message arriving over the network, typically from this
        // utility's counterpart on the server (NetworkInterface).
        bool receive_message(const std::string& workplace_name, const std::string& message);

        // Load this utility's state for the given workspace.
        bool load_workspace(const std::string& workspace_name);
};


Basically you have to implement interfaces in order for Mira core to be able to interact with the Plugin.
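
For example, Mira core can load one of these plugins and get at its interfaces roughly like this. The plugin path and the helper function are made up and error handling is kept minimal, but QPluginLoader and qobject_cast are the actual Qt pieces doing the work:

#include <QObject>
#include <QPluginLoader>
#include <QString>
#include "UtilityInterface.h"

// Sketch of how Mira core could load a utility plugin (path is made up).
UtilityInterface* load_utility_plugin(const QString& path)
{
    QPluginLoader loader(path);
    QObject* plugin = loader.instance();   // loads the library and creates the root object
    if (!plugin)
        return 0;

    // qobject_cast works across plugin boundaries for interfaces declared
    // with Q_DECLARE_INTERFACE and listed in the plugin's Q_INTERFACES.
    return qobject_cast<UtilityInterface*>(plugin);
}
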

There are some interfaces that are going to be required, and others are going to be optional.

With this design it's also really easy for utilities to communicate with each other: two utilities share an interface, one requests the other utility's object, and then it just uses that interface. This isn't implemented yet, but it's fairly easy to do.
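
When it is, it could look something like this using Qt's qobject_cast. The SearchInterface, the mira_core member, and the find_utility() call are all made up for illustration; the mechanism is the point:

#include <QObject>
#include "SearchInterface.h"   // hypothetical interface shared by two utilities

void FilesUtility::use_another_utility()
{
    // Ask Mira core for the other utility's QObject (made-up core API).
    QObject* other = mira_core->find_utility("SomeOtherUtility");

    // If that utility implements the shared interface, just use it directly.
    if (SearchInterface* search = qobject_cast<SearchInterface*>(other))
        search->find_files("*.txt");
}
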

All that's needed now is a good library for plugins to use to interact with Mira core. This will probably be an ever-evolving library, at least until we get to release our first version of Mira.

After I get the library started, I will propose a merge with trunk. But for now you can follow this code in my Launchpad branch.

Sunday, November 8, 2009

Mira Utilities via Plugins

After reading through a lot of material on the web about dynamic libraries (a.k.a. plugins) in C++, I've arrived at a basic design for plugins on the server.

It consists of using the functions provided by the OS (dlopen/dlsym for Unix-based systems including Mac OS X and LoadLibrary/GetProcAddress for MS Windows).

Each plugin will have to provide two functions with C linkage:
- extern "C" void create_utility(WorkPlace* workplace)
- extern "C" void receive_message(const std::string& workplace_name, TcpConnection* tcp_connection, const std::string& message)

The first function (create_utility) is a factory function that takes care of creating new instances of the utility.

The way it should work is that a new instance of the utility class is created for each workplace. The utility class wouldn't have to derive from a common base class, as is normally done with C++ plugins.

The second function (receive_message) is used to receive network messages coming from clients, mostly from its utility counterpart on the client-side.

Keep in mind that this implementation will only be for the server. Qt's QPluginLoader framework will probably be used on the client instead. I didn't want to use it on the server so as not to introduce Qt as a dependency there.

Also, the plugin system on the server doesn't require much flexibility. The current design is rather simple and seems to work quite well so far. The UtilityManager class is less than 50 (real) lines of code.
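
To give an idea of what the loading side looks like, here's a simplified sketch (not the actual UtilityManager code; load_utility() and the typedef names are made up, but the dlopen/dlsym usage and the two entry points are as described above):

#include <dlfcn.h>     // dlopen/dlsym; link with -ldl
#include <iostream>
#include <string>

class WorkPlace;       // from the Mira sources
class TcpConnection;   // from the Mira sources

// Signatures of the two required entry points described above.
typedef void (*create_utility_t)(WorkPlace*);
typedef void (*receive_message_t)(const std::string&, TcpConnection*, const std::string&);

bool load_utility(const std::string& path, WorkPlace* workplace)
{
    void* handle = dlopen(path.c_str(), RTLD_NOW);
    if (!handle) {
        std::cerr << dlerror() << std::endl;
        return false;
    }

    // Resolve the two entry points the plugin has to export with C linkage.
    create_utility_t create_utility =
        reinterpret_cast<create_utility_t>(dlsym(handle, "create_utility"));
    receive_message_t receive_message =
        reinterpret_cast<receive_message_t>(dlsym(handle, "receive_message"));
    if (!create_utility || !receive_message) {
        dlclose(handle);
        return false;
    }

    // Create the utility instance for this workplace; the handle and the
    // receive_message pointer would be kept around for later dispatch.
    create_utility(workplace);
    return true;
}
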

One thing I haven't introduced yet is a way for utilities to communicate with each other internally (within the server). One idea off the top of my head would be to add another function with C linkage for passing internal messages, though I'm not sure how flexible that would be.

Another option would be to use Boost.Signals2.

With this, and some more work on the Mira framework, I should be able to produce some simple utilities rather easily. I'm still gathering a few design issues that will need to be worked out.

Saturday, November 7, 2009

Mira client connects to Server

There isn't much to say about this, so I'll just show some screenshots.

[screenshots]

These two screenshots show some error handling I've added to the network code. Apart from this and a handler for error messages coming from the server, I haven't really added anything else to the network code.

[more screenshots]

The client code was already able to connect to the server, but nothing had been done to make that possible from the GUI. This is just to show the little progress I've made this weekend.

The code is currently in a branch separate from trunk, which Shilpa created for handling network messages on the server. I'm waiting for her to finish up some work on that, and then we're going to merge these changes into trunk.

Saturday, October 31, 2009

Security in the Directory Layer

After brainstorming for a while I came up with an idea for the security stuff in the directory layer.

In the existing code it's referred to as an access list.

Basically, different resources are registered with the Directory layer, and each resource has an access list, which associates a user id with a permission.

Right now the access value is a uint8_t, with the following constants defined to identify what each value represents:


 // Simple bit flags: READ_WRITE_ACCESS == READ_ACCESS | WRITE_ACCESS.
 static const uint8_t  READ_ACCESS        = 0x01;
 static const uint8_t  WRITE_ACCESS       = 0x10;
 static const uint8_t  READ_WRITE_ACCESS  = 0x11;
 static const uint8_t  NO_ACCESS          = 0x00;


What identifies a resource is a simple std::string object. This makes it flexible enough that we can expose this functionality to utility developers, and new resources can be created at run-time without creating conflicts.

However, because of this we will need a naming convention for resource names, which we can enforce by only allowing utility developers to create resource access lists through our API.

The naming convention should probably be something like this:
- For server resources: "server_resource"
- For workspace resources: "workspace_workspacename_resource"
- For workspace resources created by utilities: "utility_workspace_utilityname_workspacename_resource"

For example, the Directory database in trunk has an existing resource called "server_add_new_user", which is used to determine who can add new users to the server.

In this case, read access would only let a user see who has access to the resource, while write access would let them actually use the resource (i.e., add new users to the server).
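
To make that concrete, here's roughly how such a check could look. The AccessLists container, the user id type, and check_access() are made up for this sketch; only the constants and the resource name come from the actual code:

#include <stdint.h>
#include <map>
#include <string>

// Constants from the access-list code above.
static const uint8_t READ_ACCESS  = 0x01;
static const uint8_t WRITE_ACCESS = 0x10;

// resource name -> (user id -> access value); container and user id type are assumptions.
typedef std::map<std::string, std::map<uint32_t, uint8_t> > AccessLists;

bool check_access(const AccessLists& lists, const std::string& resource,
                  uint32_t user_id, uint8_t wanted)
{
    AccessLists::const_iterator res = lists.find(resource);
    if (res == lists.end())
        return false;                               // unknown resource: deny
    std::map<uint32_t, uint8_t>::const_iterator user = res->second.find(user_id);
    if (user == res->second.end())
        return false;                               // user not on the access list: deny
    return (user->second & wanted) == wanted;       // READ_WRITE_ACCESS (0x11) satisfies both
}

// So before actually adding a user, the server would do something like:
//     check_access(lists, "server_add_new_user", requester_id, WRITE_ACCESS)
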

There is one more thing we've talked about: Server and WorkPlace roles. I haven't implemented this yet, but I have a good idea of how to do so (probably sometime next week).

I think the Role code should work more like a script (not literally, of course). All it will do is grant the user it's applied to the right access to the right resources. So if the Server Administrator role is applied to a user, the Role code will give that user read/write access to most server functionality, such as stopping the server (server_stop), restarting the server (server_restart), adding new users (server_add_new_user), etc. This should be easy to implement; we just need to figure out which resources each Role should grant access to.
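
Here's a minimal sketch of what I mean for the Server Administrator case. grant_access() and the surrounding structure are made up; only the resource names and the access constant come from the posts above:

#include <stdint.h>
#include <cstddef>
#include <string>
#include <vector>

static const uint8_t READ_WRITE_ACCESS = 0x11;   // from the access-list constants

// Hypothetical hook into the access-list code; not an actual Mira API.
void grant_access(uint32_t user_id, const std::string& resource, uint8_t access);

void apply_server_administrator_role(uint32_t user_id)
{
    // Resources a Server Administrator should be able to use.
    std::vector<std::string> resources;
    resources.push_back("server_stop");
    resources.push_back("server_restart");
    resources.push_back("server_add_new_user");

    for (std::size_t i = 0; i < resources.size(); ++i)
        grant_access(user_id, resources[i], READ_WRITE_ACCESS);
}
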