Considerations for high-level embedded platforms and desktop programming language

Some time ago I was involved in a software project initiated by a company I have been successfully cooperating with for a number of years. This company specializes in servicing and modernizing military equipment and instruments (mostly aviation), and I was responsible for implementing a piece of software integrated with a hardware platform designed by the company. During one of the catch-up meetings, the project's lead engineer asked me to share my thoughts on the route they could take to start developing software solutions running on embedded platforms as well as on the desktop (two birds with one stone, one may say).

[image: embedded_GUI.png]

These days, picking ONE multi-purpose technology is actually more difficult than it seems, and the fact that web-oriented solutions are getting more and more popular does not make things any easier. That is why I decided to publish my thoughts on this subject.

Background

To fully understand what I was asked for, it is important to know some context. The company's team I have been working with employs a number of highly qualified, seasoned electronics engineers. Unlike the tech departments in most of the companies I have seen so far, the average age of an engineer is quite high and people tend to stay with the same company for more than just a couple of years (working with military and especially aviation equipment is hard; one needs to spend a few years as a junior to know what is going on). The engineers involved in the project had experience with "old school" digital electronics and microcontrollers programmable with low-level languages such as assembler as well as higher-level languages like C/C++ and VHDL (the books I used to learn programming from defined C and C++ as high-level languages, but I do not think that is the case any more now that languages like Scala, C# and Python are around). With this set of tools they had been delivering top-quality solutions for the military for years, although, as the systems they were developing grew more complex, they found themselves in a position where using programmable microcontrollers and low-level hardware communication protocols was just not enough. They needed to master higher-level tools such as operating systems (OS), Ethernet-based protocols, multithreading and GUIs (graphical user interfaces) – things that have been around for years in the desktop development world.

Picking the right technology

In IT, things are never simple – despite what the salesmen say, silver bullets just do not exist. There is a lot of marketing hype around new technologies, while older ones (even if still widely used) lie in dark corners, not mentioned by any of the cool technology “evangelists” (that being said, it seems like IT these days is driven more by marketing and less by engineering concerns and real specialists). Nevertheless, when picking a technology and investing time to master it, you have to proceed with caution. To approach this methodically, I sat down with the lead engineer and we listed the requirements an ideal platform should meet. Here are a few of them:

Support for GUI

The new platform was intended to be used in a wide variety of scenarios, including communication with hardware as well as providing user interaction. There are probably dozens (or more) of technologies meeting these requirements but, based on my personal experience, I have narrowed the set down to three-ish mainstream options:

  • Microsoft .NET (Windows Forms / WPF) / Mono;
  • Qt;
  • Java (Swing / JavaFX).

I am aware that there are more libraries, languages and frameworks, but they are either niche, not mature enough, or I have had no experience with them. Plus, you have to start with something. If we had not found a suitable technology, we would have continued with a longer list.

One may notice that I have not listed any HTML/JS frameworks such as AngularJS, Backbone etc., which are very popular these days. Although it seems possible to use them to provide industry-quality GUIs (HTML/JS components rendered by a browser installed on the embedded OS, displayed on a 7” LCD touch screen etc.), it also seems like a big overhead – performance-wise – and it would force the team to acquire much more knowledge (HTML, stylesheets, JavaScript). Besides, it would solve only part of the problem, as some backend technology would have to be used anyway; even if it were NodeJS, it would be yet another framework to cope with.

It had to be there for “a while”

Considering the character of the company and the team (the time required to train a new member, the average age of the crew – yep, it takes longer to learn new stuff when you are over 40 or 50 years old), this was one of the crucial requirements. Investing months in getting acquainted with a new technology, running tests, integrating with hardware and writing drivers, just to say goodbye after a couple of years and start from scratch again – this risk was not acceptable.

To estimate the maturity of a technology and its adoption, it is a good idea to start with Google Trends.

[image: trends.png – Google Trends comparison of the considered technologies]

It seems like Qt is the big winner. Not only is people's interest in this platform sustained, it is number one if you compare the volume of search queries with the other options. The most likely reason is its wide adoption in the IoT and embedded worlds. You can find Qt-based software in various devices, on-board computers of modern cars, trams, trains etc.

Compared to Qt, the other technologies look rather off-colour. I think the reason is that these technologies were widely adopted in desktop applications, which are losing popularity these days in favor of web frameworks. Most new projects are started with browsers and mobile devices in mind.

Microsoft, on the other hand, is a bit (in)famous for its product deprecation policy. I remember the big hype around XNA, Silverlight and other frameworks that were later deprecated. For companies using these technologies it was a massive cockup. In addition to that, it seems like Microsoft is not willing to compete with Qt in this field. It has been investing in the IoT market recently, but I think it is more of an attempt to popularize its platform amongst students and enthusiasts rather than top industry specialists. Even though Windows Forms and WPF are widely adopted, they both depend on Win32, and I would expect to see problems developing with these platforms in the future.

Java, as opposed to .NET, has always been very serious about backward compatibility. I think this is very important and allows us to assume that Java-based frameworks will not get deprecated in the near future. Furthermore, Java GUI frameworks do not directly depend on any specific OS but rather on the Java VM, which is generally a good thing, although it comes at the price of performance.

Mono looks like it never really kicked off, although it is worth further investigation, as it is an interoperable port of the decent .NET platform.

Interoperable

This is where both Qt and Java shine most. Both of these frameworks allow software engineers to target multiple operating systems and their embedded versions.

.NET and Mono, on the other hand, were defeated. The .NET Framework can be used only on Windows-based platforms (Microsoft's approach to interoperability is like Henry Ford's approach to car color customization was). But what about Mono? It most certainly can be run on a wide variety of systems, but it implements neither Windows Forms nor WPF! So yes, you can develop on multiple platforms with Mono as long as you do not need any GUI. One could use third-party ports and bindings (like Gtk#), but that introduces additional effort and makes things much more complicated (plus relying on a port on top of another port – you get the idea…).

Easy to use

I think that if you are starting up with a completely new technology you have more than enough problems already, hence ease of use is a very important factor. I am aware that simplicity very often comes at the price of limited flexibility, but the goal was not to find the simplest solution – it was rather to find a technology stack that required as few extra steps as possible to get up and running. After running some tests with both Qt and Java Swing on an arbitrary embedded board, I had to admit that Java and its GUI framework – Swing – really shines.

After a few hours I was able to develop Java Swing applications with the NetBeans IDE, run them on my desktop (with Windows 7 installed) and easily deploy them to my embedded, Linux-based board. There were no issues with setting up a debugger (so I could remotely debug code running on the Debian-based embedded board from my Windows desktop station – pretty neat!). Getting Swing components to be displayed on the touch LCD screen was not too hard either (a solution to the only issue I had is described HERE).

Achieving the same with Qt was not as easy. First of all, the de facto standard language for Qt development is C++, which requires some extra steps. You have to use so-called toolchains to develop on one architecture and deploy to another (PC -> ARM etc.). This requirement did not allow me to use Windows as a development and compilation platform. I had to install Linux on a VM and set up the toolchain there. Additionally, I had to compile Qt for my specific embedded board setup. After spending much more time than with Java, I finally got it working and I was pretty happy with the results. I could develop applications with the Qt Creator IDE on my desktop (a VM with Linux installed, which was slightly inconvenient), run them locally, deploy them to the device and debug them remotely.

Based on my experience, I would say that getting up and running with Java Swing is much easier than with Qt (especially if you are not a Linux professional). Having a Java Runtime Environment solves lots of problems for you and makes it easier to develop for multiple platforms (at the price of execution performance, for sure).

Summary

I think I did my best to provide as much rationale as possible – and I know it was very subjective and based on my personal experience and observations.

I would definitely not proceed with .NET or Mono. Although a decent platform, .NET is bound to the Microsoft Windows OS (which is far from being a de facto standard in the world of embedded devices and IoT). Mono, on the other hand, seems to be a bit immature, does not provide an out-of-the-box GUI framework (although there are components like Gtk#) and will always be one step behind .NET. There has been a lot of hype around Xamarin recently, but it is targeted at mobile devices rather than industry-standard embedded platforms.

Java and Swing seemed to be a good choice. Java is very widespread in the industry, it allows developers to target multiple platforms, and there are plenty of books, courses and other materials, including online. Swing, even though quite mature, is not deprecated, and I am convinced that it will be supported in future JRE releases (as I stated before in this post, Java is known for its backward compatibility). And last but not least, getting it up and running on an arbitrary device was much easier compared to Qt.

Qt would also be a very good fit, as it is interoperable, very popular in the industry and very powerful. Preparing a development environment is slightly more complicated, but it pays off when it comes to performance (no VM). The other thing I like about Qt is that it seems to be the only platform being so actively developed. I have a feeling that both Microsoft and Oracle are ignoring desktop GUI developers – a group that could easily be picked up and adopted by the embedded UI world – which is a big shame IMHO. The downside of Qt is that licensing is quite costly. I am not saying it is a deal breaker, but it is definitely something to take into consideration.

As stated before, this whole post is very subjective. All opinions stated are my own and are based on my personal experience and observations. I am most certainly not paid by any of the listed companies – nor their competition – to state any opinions (that would be some easy money, would it not?).


ActiveMQ NMS enlisted in TransactionScope

How do you enlist an ActiveMQ session in the ambient transaction scope? I believe the code below is self-explanatory.

Why do so? Imagine a situation (likely to occur in a SOA + EDA scenario):

  • Service A handles a “PostOrderRequest”;
  • Service A starts a transaction;
  • Service A creates an order in its internal data storage;
  • Service A commits the transaction;
  • Service A publishes an “OrderPosted” event to the ActiveMQ bus – which fails;
  • Service B cannot consume the message.

or

  • Service A creates the order in the DB;
  • Service A publishes the event to ActiveMQ – with success;
  • Service A commits the transaction – which fails (no power, CPU explodes – you name it);
  • Service A restarts;
  • Service B consumes the message (but the order is not there!);

The solution is to enlist the ActiveMQ publisher session in the transaction – the same one being used for database access. Please mind that this will cause the transaction to be promoted to a distributed transaction! There are other options to introduce consistency in messaging scenarios (and to live with eventual inconsistency), but let’s assume that 2PC is our only acceptable solution (which is a very strong assumption).

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Apache.NMS.ActiveMQ;
using System.Transactions;
using Test.DBAccess;

namespace ActiveMQTranScope
{
    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void TestMethod1()
        {
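            //the NetTx connection factory and session enlist ActiveMQ sends in the ambient System.Transactions transaction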
            var esbConnFactory = new NetTxConnectionFactory("failover:(tcp://localhost:61616)?transport.timeout=5000");
            using (var esbConn = esbConnFactory.CreateNetTxConnection("user", "password"))
            {
                esbConn.ClientId = "unit-test";
                esbConn.Start();

                using (var session = esbConn.CreateNetTxSession())
                using (var destination = session.GetQueue("TestTransactionalQueue"))
                using (var publisher = session.CreateProducer(destination))
                using (var db = new MyDbContext("MyConnectionString"))
                {
                    using (var ts = new TransactionScope(TransactionScopeOption.Required))
                    {
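                        //the database insert and the three published messages commit or roll back together;
                        //combining a DB resource and the NMS session promotes this to a distributed (MSDTC) transaction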
                        db.MyEntities.Add(new MyEntity());

                        publisher.Send(session.CreateTextMessage("Message1"));
                        publisher.Send(session.CreateTextMessage("Message2"));
                        publisher.Send(session.CreateTextMessage("Message3"));

                        db.SaveChanges();
                        ts.Complete();
                    }
                }
            }

        }
    }
}

Quartz.NET – remote scheduler without job assembly reference on the client side

I have seen a few tutorials showing how to schedule Quartz.NET jobs on a remote process using a scheduler proxy, and all of them suffered from the same inconvenience – they assumed that the jobs assembly is added as a reference to both the client and the server projects. In my opinion this is a serious architecture smell, because jobs are most certainly part of your business logic (in the scenarios I can think of); it can cause problems with deployment (imagine one server running the Quartz.NET scheduler and a few or dozens of clients) and could cause some security issues (disassembling etc.). I really think that jobs should live in the business logic layer, not visible to clients, which should be able to schedule them using some kind of contract / interface.

Here is how to achieve this.

Job contract – referenced by both the client and the server side. It exposes only the information required to identify a job and prepare its parameters:

//contract for the job - simple class with constant unique job name and some helper method
//used to create parameters for the job instance
public class SomeJobContract
{
	public const string JobName = "SomeApi.SomeJobUniqueName";

	public const string SomeParameterName = "BusinessObjectId";

	public static IDictionary<string, object> BuildJobDetails(int businessObjectId)
	{
		return new Dictionary<string, object>()
		{
			{ SomeParameterName, businessObjectId }
		};
	}
}

Job implementation – its type cannot be resolved on the client side because there is no reference to the implementation assembly:

//implementation of the job which contract is exposed
public class SomeJob : IJob
{
	public void Execute(IJobExecutionContext context)
	{
		//use data passed with the trigger - context.Trigger.JobDataMap instead of context.JobDetail.JobDataMap
		var businessObjectId = (int?)context.Trigger.JobDataMap[SomeApi.SomeJobContract.SomeParameterName];
		
		//... regular job code
	}
}

Server code used to register a specific job (using its type) with the unique identifier exposed in the contract and hence known to the client:

//create job details - use unique job identifier
var preProcessingJob = JobBuilder.Create<SomeJob>()
                .StoreDurably(true)
                .RequestRecovery(true)
                .WithIdentity(SomeApi.SomeJobContract.JobName)
                .Build();

//add a durable job to scheduler without using a trigger
scheduler.AddJob(preProcessingJob, true, true);
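
The scheduler instance used above is not shown in the post. One way to create it and expose it to remote clients is Quartz.NET's built-in remoting exporter – a minimal sketch, assuming the Quartz.NET 2.x synchronous API; the instance name, port and bind name below are my own example values:

using System.Collections.Specialized;
using Quartz;
using Quartz.Impl;

//server side - create the scheduler and expose it over .NET remoting so clients can obtain a proxy to it
var serverProperties = new NameValueCollection
{
    { "quartz.scheduler.instanceName", "RemoteServer" },
    { "quartz.scheduler.exporter.type", "Quartz.Simpl.RemotingSchedulerExporter, Quartz" },
    { "quartz.scheduler.exporter.port", "555" },
    { "quartz.scheduler.exporter.bindName", "QuartzScheduler" },
    { "quartz.scheduler.exporter.channelType", "tcp" }
};

IScheduler scheduler = new StdSchedulerFactory(serverProperties).GetScheduler();
scheduler.Start();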

Client-side code used to schedule the job with the appropriate parameters – it uses only information exposed by the contract assembly:

//create a trigger for the job with specified identifier (we use contract, we have no reference to job implementation on the client side)
var trigger = TriggerBuilder
                .Create()
                .ForJob(SomeApi.SomeJobContract.JobName)
                .UsingJobData(new JobDataMap(SomeApi.SomeJobContract.BuildJobDetails(myBusinessObjectId)))
                .WithSimpleSchedule().StartNow()
                .Build();

schedulerProxy.ScheduleJob(trigger);
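
Similarly, the schedulerProxy above is assumed to be a remoting proxy to the server scheduler. A minimal sketch of obtaining it (the address matches the assumed server settings from the earlier sketch; Quartz.NET 2.x synchronous API):

using System.Collections.Specialized;
using Quartz;
using Quartz.Impl;

//client side - obtain a proxy to the remote scheduler; no reference to the job implementation assembly is needed here
var clientProperties = new NameValueCollection
{
    { "quartz.scheduler.instanceName", "RemoteClient" },
    { "quartz.scheduler.proxy", "true" },
    { "quartz.scheduler.proxy.address", "tcp://localhost:555/QuartzScheduler" }
};

IScheduler schedulerProxy = new StdSchedulerFactory(clientProperties).GetScheduler();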

jQuery AJAX – redirect on unauthorised request

Problem

In the default ASP.NET MVC setup, when you send an AJAX request to an MVC action which returns JSON or a simple type value (boolean / string) and the request is not authenticated (the user has just logged out or the authentication cookie has expired), your jQuery success callback will be fired with the login page HTML content as its data. That happens because the default AuthorizeAttribute redirects unauthenticated requests to the login page, and jQuery follows this redirect and does not provide a simple way of telling what just happened or why the redirect was performed.

Solution

To overcome this inconvenience you can roll a custom AuthorizeAttribute which will redirect standard HTML requests (caused by form posting and browser URL changes) to the login page but return an error code (which can be detected on the jQuery script side) for AJAX requests.

public class AJAXAwareAuthorizeAttribute : AuthorizeAttribute
{
    protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
    {
        if (filterContext.RequestContext.HttpContext.Request.IsAjaxRequest())
            filterContext.Result = new HttpStatusCodeResult((int)System.Net.HttpStatusCode.Forbidden);
        else
            base.HandleUnauthorizedRequest(filterContext);
    }
}

 

Then you can add a global AJAX handler to check for the 403 (Forbidden) code on each AJAX request and redirect, show an error or do anything else when this code is returned from the controller.

$(document).ajaxError(function (event, xhr, settings, error) {
    //when there is an AJAX request and the user is not authenticated -> redirect to the login page
    if (xhr.status == 403) { // 403 - Forbidden
        window.location = '/login';
    }
});
Now your success callback will not fire for unauthenticated requests and the error callback will be triggered instead.

jQuery – how to get the validation framework to work with hidden fields

Problem

jQuery validation does not validate hidden fields by default, even when the appropriate attributes (data-val, data-val-required etc.) are applied.

Solution

You can change that by overriding the jQuery validation defaults. There is an ignore setting, which is set to :hidden by default and causes all hidden elements to be skipped when validation happens.

If you want only specific hidden fields to get validated you can do something like this:

$.validator.setDefaults({
    ignore: ":hidden:not([data-validate-hidden])"
});

 

This will cause all hidden fields to be ignored by the validation framework unless they have the data-validate-hidden attribute.

New open-source project

I’ve recently created a utility class that might be helpful when working with communication data protocols: https://github.com/mkarczewski/BinaryMask.

Activity-based authorization in modular systems

There are some materials on the Web concerning the fact that role-based authorization is probably not the best option when implementing a system's security infrastructure. I find this blog post quite exhaustive: http://lostechies.com/derickbailey/2011/05/24/dont-do-role-based-authorization-checks-do-activity-based-checks/.

So basically you need a component which determines whether user X is authorized to perform action Y. But that is the simplest scenario. In practice you probably need to determine whether user X is authorized to perform action Y on object V. For example, the project manager can change the project schedule but other users cannot. You probably need a service which you could inject into your code in the business logic layer, the application logic layer or even the UI logic layer (for example to hide the “Change project schedule” button). This service could define a few methods like “IsUserAuthorizedToChangeProjectSchedule(IPrincipal user, int projectId)”, but in the end it would declare dozens or even hundreds of methods and would have many reasons to change (the Interface Segregation Principle and the Single Responsibility Principle would be violated). This solution would be very problematic in modular applications, because it would force you to create a single service responsible for implementing the authorization rules of many modules. Of course you could create multiple services, e.g. ISalesAuthorizationService or ICRMAuthorizationService, but I’d like to present a different approach.

Basically, we can abstract any authorization rule request as a pair of an activity name (because it’s activity-based authorization that I’m writing about) and one or more parameters. Based on this assumption, let’s create the IAuthorizationService interface:

public interface IAuthorizationService
{
	/// <summary>
	/// Exceptions:
	/// AccessDeniedException
	/// </summary>
	void Authorize(string action, params object[] parameters);

	bool IsAuthorized(string action, params object[] parameters);
}

This interface should be declared in a core assembly – the assembly which is referenced by all of your application modules, e.g. ERP.Core.Interfaces – so each module can call the Authorize and IsAuthorized methods. The two methods are very similar: the first one will throw an exception when the currently logged-in user cannot perform some activity, and the other will return true if she is authorized and false if she is not.

So we have a single point of authorization, but we want the authorization logic of each module to reside in its module assembly. To make it possible we need to implement IAuthorizationService in a chain-of-responsibility manner, so that, for example, when the Authorize method is invoked with the action parameter equal to “ProjectsModule.ChangeProjectSchedule”, the service dispatches this call to e.g. ERP.ProjectsModule.Security.ProjectsAuthorizationRulesService. To make this mechanism even more flexible, we can register these module-specific services at runtime using IoC. It’s also a good idea to remove the magic-strings issue by placing activity names in module-scoped constant fields.

Here’s a code fragment which shows a Spring.NET based implementation of the mechanisms described above:

PluggableAuthorizationService is the implementation of the IAuthorizationService interface which does the magic. It exposes a public Plugins property which accepts an array of strings, where each string is a fully qualified authorization plugin type name. During initialization this service scans the registered plugin types for public methods decorated with AuthorizationEndpointAttribute. When the IsAuthorized method of this service is called, it searches the cache for methods with the matching activity name and executes them in the correct order (based on the Order property of AuthorizationEndpointAttribute).

public class PluggableAuthorizationService : IAuthorizationService, IApplicationContextAware
{
	private bool _initialized = false;
	private object _syncLock = new object();
	private Dictionary<string, List<Tuple<Type, int, MethodInfo>>> _actions = new Dictionary<string, List<Tuple<Type, int, MethodInfo>>>();

	public string[] Plugins { get; set; }

	public IApplicationContext ApplicationContext { get; set; }

	public void Authorize(string action, params object[] parameters)
	{
		if (!IsAuthorized(action, parameters))
			throw new AccessDeniedException();
	}

	public bool IsAuthorized(string action, params object[] parameters)
	{
		EnsureIsInitialized();

		if (!_actions.ContainsKey(action))
			throw new Exception("Authorization endpoint '" + action + "' not found");

		List<Tuple<Type, int, MethodInfo>> method = _actions[action];

		if (!method.Any())
			return false;

		foreach (var methodValidation in method)
		{
			var plugin = ApplicationContext.GetObjectsOfType(methodValidation.Item1)
				.Values
				.OfType<object>()
				.Single();

			bool result = false;

			if (methodValidation.Item3.GetParameters().Count() == 1)
				result = (bool)methodValidation.Item3.Invoke(plugin, new object[] { action });
			else
				result = (bool)methodValidation.Item3.Invoke(plugin, new object[] { action, parameters ?? new object[] { } });

			if (!result)
				return false;
		}
		return true;
	}

	private void EnsureIsInitialized()
	{
		if (!_initialized)
		{
			lock (_syncLock)
			{
				if (!_initialized)
				{
					Initialize();
					_initialized = true;
				}
			}
		}
	}

	private void Initialize()
	{
		if (Plugins == null) throw new Exception("Object property not initialized - Plugins");

		var types = Plugins
			.Select(x => Type.GetType(x));

		foreach (var type in types)
		{
			var allMethods = type.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly);
			foreach (var potentialEndpoint in allMethods)
			{
				var endpointAttrs =
					potentialEndpoint.GetCustomAttributes(false)
					.OfType<AuthorizationEndpointAttribute>();

				foreach (var endpointAttr in endpointAttrs)
				{
					if (!(potentialEndpoint.ReturnType == typeof(bool) &&
						!potentialEndpoint.IsStatic &&
						potentialEndpoint.GetParameters().Count() > 0 &&
						potentialEndpoint.GetParameters()[0].ParameterType == typeof(string) &&
						potentialEndpoint.IsPublic))
					{                            
						continue;
					}

					if (!_actions.ContainsKey(endpointAttr.Action))
						_actions[endpointAttr.Action] = new List<Tuple<Type, int, MethodInfo>>();

					_actions[endpointAttr.Action].Add(new Tuple<Type, int, MethodInfo>(type, endpointAttr.Order, potentialEndpoint));
				}
			}
		}

		//sort the cached endpoints by Order - OrderBy returns a new sequence, so the result must be assigned back
		foreach (var key in _actions.Keys.ToList())
			_actions[key] = _actions[key].OrderBy(x => x.Item2).ToList();
	}
}
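
The AuthorizationEndpointAttribute itself is not shown in the post. A minimal sketch, assuming only what its usage below implies (an Action name, an Order and multiple attributes per method), could look like this:

using System;

//marks a public bool method as an authorization endpoint for a given activity name;
//Order controls the sequence in which endpoints registered for the same activity are evaluated
[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class AuthorizationEndpointAttribute : Attribute
{
	public string Action { get; set; }

	public int Order { get; set; }
}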

A Spring.NET XML configuration file fragment which shows how the authorization plugins are registered:

<object name="authorizationService" type="ERP.Base.Security.PluggableAuthorizationService, ERP.Base" scope="application">
	<property name="Plugins">
	  <list>
		<value>ERP.Projects.Application.Services.ProjectsAuthorizationPlugin, ERP.Projects</value>
		<value>ERP.Partners.Application.Services.PartnersAuthorizationPlugin, ERP.Partners</value>
		<value>ERP.Tasks.Application.Services.TasksAuthorizationPlugin, ERP.Tasks</value>
		<value>ERP.Documentation.Application.Services.DocsAuthorizationPlugin, ERP.Documentation</value>
		<value>ERP.Messaging.Application.Services.MessagesAuthorizationPlugin, ERP.Messaging</value>
	  </list>
	</property>
</object>

Module activity names constants class (referenced below as ModuleActionNames):

public static class ModuleActionNames
{
	public const string Calendar_List = "Projects.Calendar.List";

	public const string Projects_List = "Projects.Projects.List";
	public const string Project_Create = "Projects.Project.Create";
	public const string ProjectData_Read = "Projects.Project.Data.Read";
	public const string ProjectData_Edit = "Projects.Project.Data.Edit";
	public const string Project_ChangeStatus = "Projects.Project.ChangeStatus";
	public const string ProjectSchedule_Read = "Projects.Project.Schedule.Read";
	public const string ProjectSchedule_Edit = "Projects.Project.Schedule.Edit";
	public const string ProjectCollaboration_Read = "Projects.Project.Collaboration.Read";
	public const string ProjectCollaboration_Edit = "Projects.Project.Collaboration.Edit";

	public const string Reports_Run = "Projects.Reports.Run";
}

Projects module authorization rules logic:

public class ProjectsAuthorizationPlugin
    {
        public IProjectsQuery ProjectsQuery { get; set; }

        private ERPPrincipal Principal
        {
            get { return Thread.CurrentPrincipal as ERPPrincipal; }
        }

        [AuthorizationEndpoint(Action = ModuleActionNames.Calendar_List)]
        [AuthorizationEndpoint(Action = ModuleActionNames.Projects_List)]
        [AuthorizationEndpoint(Action = ModuleActionNames.Reports_Run, Order = 1)]
        [AuthorizationEndpoint(Action = ModuleActionNames.Project_Create, Order = 1)]
        [AuthorizationEndpoint(Action = ModuleActionNames.ProjectData_Read, Order = 1)]
        [AuthorizationEndpoint(Action = ModuleActionNames.ProjectData_Edit, Order = 1)]
        [AuthorizationEndpoint(Action = ModuleActionNames.ProjectSchedule_Read, Order = 1)]
        [AuthorizationEndpoint(Action = ModuleActionNames.ProjectSchedule_Edit, Order = 1)]
        [AuthorizationEndpoint(Action = ModuleActionNames.ProjectCollaboration_Read, Order = 1)]
        [AuthorizationEndpoint(Action = ModuleActionNames.ProjectCollaboration_Edit, Order = 1)]
        [AuthorizationEndpoint(Action = ModuleActionNames.Project_ChangeStatus, Order = 1)]
        public bool VerifyBasePermission(string action, params object[] dummy)
        {
            if (Principal == null) return false;
            return true;
        }

        [AuthorizationEndpoint(Action = ModuleActionNames.Reports_Run, Order = 2)]
        public bool VerifyReportPermissions(string action, params object[] dummy)
        {
            return Principal.IsInRole(UserRole.Administrator);
        }

        [AuthorizationEndpoint(Action = ModuleActionNames.ProjectData_Read, Order = 2)]
        [AuthorizationEndpoint(Action = ModuleActionNames.ProjectData_Edit, Order = 2)]
        [AuthorizationEndpoint(Action = ModuleActionNames.ProjectSchedule_Read, Order = 2)]
        [AuthorizationEndpoint(Action = ModuleActionNames.ProjectSchedule_Edit, Order = 2)]
        [AuthorizationEndpoint(Action = ModuleActionNames.ProjectCollaboration_Read, Order = 2)]
        [AuthorizationEndpoint(Action = ModuleActionNames.ProjectCollaboration_Edit, Order = 2)]
        [AuthorizationEndpoint(Action = ModuleActionNames.Project_ChangeStatus, Order = 2)]
        public bool VerifyPerProjectPermission(string action, params object[] prm)
        {
            if (Principal == null) return false;
            if (Principal.IsInRole(UserRole.Administrator))
                return true;

            int projectId = Convert.ToInt32(prm[0]);

            var project = ProjectsQuery.Query(projectId);
            if (project.ProjectManagerId == Principal.UserId)
                return true;

            var permissions = ProjectsQuery.QueryPermissions(projectId, Principal.UserId);

            switch (action)
            {
                case ModuleActionNames.ProjectData_Read:
                    return permissions.Any(x => x.Permission == ProjectPermission.ReadProjectData);
                case ModuleActionNames.ProjectData_Edit:
                    return permissions.Any(x => x.Permission == ProjectPermission.EditProjectData);
                case ModuleActionNames.ProjectCollaboration_Read:
                    return permissions.Any(x => x.Permission == ProjectPermission.ReadProjectCollaboration);
                case ModuleActionNames.ProjectCollaboration_Edit:
                    return permissions.Any(x => x.Permission == ProjectPermission.EditProjectCollaboration);
                case ModuleActionNames.ProjectSchedule_Read:
                    return permissions.Any(x => x.Permission == ProjectPermission.ReadProjectSchedule);
                case ModuleActionNames.ProjectSchedule_Edit:
                    return permissions.Any(x => x.Permission == ProjectPermission.EditProjectSchedule);
                case ModuleActionNames.Project_ChangeStatus:
                    return false;
                default:
                    return false;
            }
        }

        [AuthorizationEndpoint(Action = ModuleActionNames.Project_Create, Order = 2)]
        public bool VerifyCanCreateProjects(string action)
        {
            if (Principal == null) return false;

            return Principal.IsInRole(UserRole.ProjectManager);
        }
    }

Authorization service usage examples:

if (AuthorizationService.IsAuthorized(ERP.Projects.Interfaces.Application.ModuleActionNames.Projects_List))
        projectsModuleItem.ChildItems.Add(new MenuItem("My projects", String.Empty, String.Empty, "~/Projects/Default.aspx"));

and

public partial class Default : ERPPageBase
{
	protected int SelectedProjectId { get; set; }

	public override void Authorize()
	{
		AuthorizationService.Authorize(ModuleActionNames.ProjectSchedule_Edit, SelectedProjectId);
	}

	protected void Page_Load(object sender, EventArgs e)
	{
		DoSomePageLogic();
	}
}

The mechanism described above requires some consistency in the order of parameters passed to the authorization methods, but it is possible to modify it to use key-value collections or to match parameters by their names. Despite this disadvantage, I find this approach to work really nicely in mid-size to large, modular systems.