Electron main process debugging with Visual Studio Code

It turns out that debugging Electron’s main process is not trivial. I did not manage to make it work with node-inspector. Surprisingly, it turned out to be relatively easy to set up an end-to-end Electron development environment with Visual Studio Code. All you really need to do is create the default “launch.json” file using the cog icon on the debugging tab. With the default settings you should be able to attach to a running Electron app, but you cannot use the “Launch” profile with its default content. You have to specify the electron.exe location in “runtimeExecutable” – just like on the screenshot below.

[Screenshot: launch.json with runtimeExecutable pointing at electron.exe]

Once you have done that, you will be able to start your Electron app straight from the debugging tab in your Code window.

So when you start debugging using the “Launch” profile (not “Attach” nor “Attach to Process”), you will notice that your application is running…

[Screenshot: the Electron app started from the VS Code debugger]

… and your breakpoints are being hit too.

[Screenshot: a breakpoint being hit in main.js]

This makes Visual Studio Code a pretty neat IDE for building electron.atom.io powered desktop applications.

You can find the full launch.json file I have used with my setup (the electron quick-start sample app) below:

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Launch",
      "type": "node",
      "request": "launch",
      "program": "${workspaceRoot}\\main.js",
      "stopOnEntry": false,
      "args": [],
      "cwd": "${workspaceRoot}",
      "preLaunchTask": null,
      "runtimeExecutable": "${workspaceRoot}\\node_modules\\electron\\dist\\electron.exe",
      "runtimeArgs": [],
      "env": {},
      "externalConsole": false,
      "sourceMaps": false,
      "outDir": null,
      "port": 5858,
      "address": "localhost"
    },
    {
      "name": "Attach",
      "type": "node",
      "request": "attach",
      "port": 5858,
      "address": "localhost",
      "restart": false,
      "sourceMaps": false,
      "outDir": null,
      "localRoot": "${workspaceRoot}",
      "remoteRoot": null
    },
    {
      "name": "Attach to Process",
      "type": "node",
      "request": "attach",
      "processId": "${command.PickProcess}",
      "port": 5858,
      "sourceMaps": false,
      "outDir": null
    }
  ]
}
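
One remark about the two “Attach” profiles: attaching only works if the Electron process has been started with its debug port open. Assuming an Electron version from that era (they spoke the legacy V8 debugger protocol on port 5858), the app can be started from its folder with:

electron --debug=5858 .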

Considerations for high-level embedded platforms and desktop programming language

Some time ago I was involved in a software project funded by a company I have been successfully cooperating with for a number of years. This company specializes in servicing and modernizing military equipment and instruments (mostly aviation), and I was responsible for implementing a piece of software integrated with a hardware platform designed by this company. During one of the catch-up meetings I was asked by the project’s lead engineer to share my thoughts on the route they could take to start developing software solutions running on embedded platforms as well as on desktops (two birds with one stone, one may say).

[Image: an embedded device GUI]

These days, picking ONE multi-purpose technology is actually more difficult than it seems, and the fact that web-oriented solutions are getting more and more popular does not make things any easier. That is why I decided to publish my thoughts on the subject.

Background

To fully understand what I was asked for, it is important to know some context. The company I have been working with hires a number of highly qualified, seasoned electronics engineers. Unlike tech departments in most of the companies I have seen so far, the average age of an engineer there is quite high, and people tend to work in the same company for more than just a couple of years (working with military equipment, and aviation especially, is hard – one needs to spend a few years as a junior to know what is going on). Engineers involved in the project had experience with “old school” digital electronics and microcontrollers programmable with low-level languages such as assembler, plus higher-level languages like C/C++ and VHDL (the books I used to learn programming from defined C and C++ as high-level languages, but I do not think that is the case any more, with languages like Scala, C# and Python around). With this set of tools they had been delivering top-quality solutions for the military for years. However, as the systems they were developing grew more complex, they found themselves in a position where programmable microcontrollers and low-level hardware communication protocols were just not enough. They needed to master higher-level tools such as OSes (Operating Systems), Ethernet protocols, multithreading and GUIs (Graphical User Interfaces) – things that have been around for years in the desktop development world.

Picking the right technology

In IT, things are never simple – despite what the salesmen say, silver bullets just do not exist. There is a lot of marketing hype around new technologies, while older ones (even if still widely used) lie in dank corners, not mentioned by any of the cool technology “evangelists” (that being said, it seems like IT these days is driven more by marketing and less by engineering concerns and real specialists). Before picking a technology and investing time to master it, you have to proceed with caution. To approach this methodically, I sat down with the lead engineer and we listed the requirements an ideal platform should meet. Here are a few of them:

Support for GUI

The new platform was intended to be used in a wide variety of scenarios, including communication with hardware as well as user interaction. There are probably dozens (or more) of technologies meeting these requirements but, based on my personal experience, I narrowed the set down to three-ish mainstream options:

  • Microsoft .NET (Windows Forms / WPF) / Mono;
  • Qt;
  • Java (Swing / JavaFX).

I am aware that there are more libraries, languages and frameworks, but they are either niche, not mature enough, or I have had no experience with them. Plus, you have to start with something. If we had not found a suitable technology, we would have continued with a longer list.

One may notice that I have not listed any HTML/JS frameworks such as AngularJS, Backbone etc., which are very popular these days. Although it seems possible to use them to provide industry-quality GUIs (HTML/JS components rendered by a browser installed on the embedded OS, displayed on a 7” LCD touch screen etc.), it also seems like a big overhead – performance-wise – and it would force the team to acquire much more knowledge (HTML, stylesheets, JavaScript). Besides, it would solve only part of the problem, as some backend technology would have to be used anyway; even if it were NodeJS, it would be yet another framework to cope with.

It had to be there for “a while”

Considering the character of the company and the team (the time required to train a new member, the average age of the crew – yep, it takes longer to learn new stuff when you are over 40 or 50 years old), this was one of the crucial requirements. Investing months in getting acquainted with a new technology, running tests, integrating with hardware and writing drivers, just to say good-bye after a couple of years and start from scratch again – this risk was not acceptable.

To estimate the maturity of a technology and its adoption, it is a good idea to start with Google Trends.

[Chart: Google Trends – search interest in the compared technologies over time]

It seems like Qt is the big winner. Not only is people’s interest in this platform sustained, it is number one if you compare search query volumes with the other options. The most likely reason is its wide adoption in the IoT and embedded worlds. You can find Qt-based software in various devices, and in the on-board computers of modern cars, trams, trains etc.

Compared to Qt, the other technologies look rather off-colour. I think the reason is that they were mostly adopted in desktop applications, which have fallen out of favor these days as web frameworks took over. Most new projects are started with browsers and mobile devices in mind.

Microsoft, on the other hand, is a bit (in)famous for its product deprecation policy. I remember the big hype around XNA, Silverlight and other frameworks that were later deprecated. For companies using these technologies it was a massive cockup. In addition, it seems like Microsoft is not willing to compete with Qt in this field. It has been investing in the IoT market recently, but I think it is more an attempt to popularize its platform amongst students and enthusiasts rather than top industry specialists. Even though Windows Forms and WPF are widely adopted, they both depend on Win32, and I would expect to see problems developing with these platforms in the future.

Java, as opposed to .NET, has always been very serious about backward compatibility. I think this is very important and allows us to assume that Java-based frameworks will not get deprecated in the near future. Furthermore, Java GUI frameworks do not directly depend on any specific OS but rather on the Java VM, which is generally a good thing, although it comes at the price of performance.

Mono looks like it never really kicked off, although it is worth further investigation, as it is an interoperable port of the decent .NET platform.

Interoperable

This is where both Qt and Java shine most. Both of these frameworks allow software engineers to target multiple operating systems and their embedded versions.

.NET and Mono, on the other hand, were defeated here. The .NET Framework can be used only on Windows-based platforms (Microsoft’s approach to interoperability is like Henry Ford’s approach to car color choice – any platform you want, as long as it is Windows). But what about Mono? It most certainly can be run on a wide variety of systems, but it does not implement Windows Forms nor WPF! So yes, you can develop on multiple platforms with Mono – as long as you do not need any GUI. One could use third-party ports and bindings (like Gtk#), but that introduces additional effort and makes things much more complicated (plus you end up relying on a port on top of another port – you get the idea…).

Easy to use

I think that if you are starting up with a completely new technology you have more than enough problems already, hence ease of use is a very important factor. I am aware that simplicity very often comes at the price of limited flexibility, but the goal was not to find the simplest solution – it was rather to find the technology stack that required as few extra steps as possible to get up and running. After running some tests with both Qt and Java Swing on an arbitrary embedded board, I had to admit that Java and its GUI framework – Swing – really shine.

After a few hours I was able to develop Java Swing applications with the NetBeans IDE, run them on my desktop (with Windows 7 installed) and easily deploy them to my embedded, Linux-based board. There were no issues with setting up a debugger (so I could remotely debug code running on the Debian-based embedded board from my Windows desktop station – pretty neat!). Getting Swing components displayed on a touch LCD screen was not too hard either (a solution to the only issue I had is described HERE).

Achieving the same with Qt was not as easy. First of all, the de facto standard for developing with Qt is C++, which requires some extra steps. You have to use so-called toolchains to develop on one architecture and deploy to another (PC -> ARM etc.). This requirement did not allow me to use Windows as the development and compilation platform. I had to install Linux on a VM and set up the toolchain there. Additionally, I had to compile Qt for my specific embedded board setup. After spending much more time than with Java, I finally got it working and I was pretty happy with the results. I could develop applications with the Qt Creator IDE on my desktop (a VM with Linux installed, which was slightly inconvenient), run them locally, deploy them to the device and debug them remotely.

Based on my experience, I would say that getting up and running with Java Swing is much easier than with Qt (especially if you are not a Linux professional). Having a Java Runtime Environment solves lots of problems for you and makes it easier to develop for multiple platforms (at the price of execution performance, for sure).

Summary

I think I did my best to provide as much rationale as possible – and I know it was very subjective and based on my personal experience and observations.

I would definitely not proceed with .NET nor Mono. Although a decent platform, .NET is bound to the Microsoft Windows OS (which is far from being a de facto standard in the world of embedded devices and IoT). Mono, on the other hand, seems to be a bit immature, does not provide an out-of-the-box GUI framework (although there are components like Gtk#) and will always be one step behind .NET. There has been a lot of hype around Xamarin recently, but it is targeted at mobile devices rather than industry-standard embedded platforms.

Java and Swing seemed to be a good choice. Java is very widespread in the industry, it allows developers to target multiple platforms, and there are plenty of books, courses and other materials, including on-line ones. Swing, even though quite mature, is not deprecated, and I am convinced that it will be supported in future JRE releases (as I stated before in this post, Java is known for its backward compatibility). And last but not least, getting it up and running on an arbitrary device was much easier compared to Qt.

Qt would also be a very good fit, as it is interoperable, very popular in the industry and very powerful. Preparing a development environment is slightly more complicated, but it pays off when it comes to performance (no VM). The other thing I like about Qt is that it seems to be the only platform being so actively developed. I have a feeling that both Microsoft and Oracle are ignoring desktop GUI developers – whose skills could easily be picked up and adapted by embedded UI devs – which is a big shame IMHO. The downside of Qt is that licensing is quite costly. I am not saying it is a deal breaker, but it is definitely something to take into consideration.

As stated before, this whole post is very subjective. All opinions are my own and are based on my personal experience and observations. I am most certainly not paid by any of the listed companies – nor their competition – to state any opinions (that would be some easy money, would it not?).

ActiveMQ NMS enlisted in TransactionScope

How do you enlist an ActiveMQ session in the ambient transaction scope? I believe the code below is self-explanatory.

Why do so? Imagine a situation (likely to occur in a SOA + EDA scenario):

  • Service A handles a “PostOrderRequest”;
  • Service A starts a transaction;
  • Service A creates an order in its internal data storage;
  • Service A commits the transaction;
  • Service A publishes an “OrderPosted” event to the ActiveMQ bus – which fails;
  • Service B cannot consume the message.

or

  • Service A creates the order in the DB;
  • Service A publishes the event to the ActiveMQ – with success;
  • Service A commits the transaction – which fails (no power, CPU explodes – you name it);
  • Service A restarts;
  • Service B consumes the message (but the order is not there!);

The solution is to enlist the ActiveMQ publisher session in the transaction – the same one being used for database access. Please mind that this will promote the transaction to a distributed transaction! There are other options for introducing consistency in messaging scenarios (and for living with eventual inconsistency), but let’s assume that 2PC is our only acceptable solution (which is a very strong assumption).

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Apache.NMS.ActiveMQ;
using System.Transactions;
using Test.DBAccess;

namespace ActiveMQTranScope
{
    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void TestMethod1()
        {
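            //the NetTx* variants of the NMS connection/session are the ones able to enlist in System.Transactions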
            var esbConnFactory = new NetTxConnectionFactory("failover:(tcp://localhost:61616)?transport.timeout=5000");
            using (var esbConn = esbConnFactory.CreateNetTxConnection("user", "password"))
            {
                esbConn.ClientId = "unit-test";
                esbConn.Start();

                using (var session = esbConn.CreateNetTxSession())
                using (var destination = session.GetQueue("TestTransactionalQueue"))
                using (var publisher = session.CreateProducer(destination))
                using (var db = new MyDbContext("MyConnectionString"))
                {
                    using (var ts = new TransactionScope(TransactionScopeOption.Required))
                    {
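                        //both the EF context and the NMS session enlist in this ambient transaction,
                        //which promotes it to a distributed (MSDTC) transaction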
                        db.MyEntities.Add(new MyEntity());

                        publisher.Send(session.CreateTextMessage("Message1"));
                        publisher.Send(session.CreateTextMessage("Message2"));
                        publisher.Send(session.CreateTextMessage("Message3"));

                        db.SaveChanges();
                        ts.Complete();
                    }
                }
            }

        }
    }
}

Data mining opportunities for small and mid companies

The following article is also published on my new website.

There is a famous anecdote about research done by Wal-Mart into how an upcoming hurricane impacts their customers’ purchases (http://www.nytimes.com/2004/11/14/business/yourmoney/what-walmart-knows-about-customers-habits.html).

We didn’t know in the past that strawberry Pop-Tarts increase in sales, like seven times their normal sales rate, ahead of a hurricane

Ms. Dillman

Wal-Mart is a real giant. It is not surprising that the marketing guys working there spend big bucks on data mining – so why would smaller companies bother?

The answer is: because they can afford it, they can benefit from it, and there are the right tools out there and people who know how to use them.

To be fair, that is what I love about IT – it allows small businesses to grow, and it allows new companies to pop up in areas unreachable to them just a few years ago. Think how IT technologies have enabled companies around the globe to reach the “long tail” (https://en.wikipedia.org/wiki/Long_tail).

The other reason for small and mid-sized companies to consider data mining is that the data is already there. In most cases you do not have to run expensive research or surveys. As more and more operations are handled by computers, there is relatively little effort involved in getting this data ready for mining. In addition, storage is really cheap, which makes data acquisition even cheaper and more accessible. In fact, I have seen very small companies collecting gigabytes (sometimes more) of data and not appreciating the great opportunities.

And the tools! There are many decent tools out there – some of them available for FREE (the amazing Weka platform by the University of Waikato in New Zealand – http://www.cs.waikato.ac.nz/ml/weka/ – or R – https://www.r-project.org).

If you represent a small to mid-sized enterprise and you think you can benefit from data mining – give me a buzz.

Quartz.NET – remote scheduler without job assembly reference on the client side

I have seen a few tutorials showing how to schedule Quartz.NET jobs in a remote process using a scheduler proxy, and all of them suffered the same inconvenience – they assumed that the jobs assembly is referenced by both the client and the server projects. In my opinion this is a serious architecture smell, because jobs are most certainly part of your business logic (in the scenarios I can think of); it can cause problems with deployment (imagine one server running the Quartz.NET scheduler and a few – or dozens of – clients) and could cause some security issues (disassembling etc.). I really think that jobs should live in the business logic, not visible to clients, which should be able to schedule them using some kind of contract / interface.

Here is how to achieve this.

Job contract – referenced by both the client and the server side. It exposes only the information required to identify a job and prepare its parameters:

//contract for the job - simple class with constant unique job name and some helper method
//used to create parameters for the job instance
public class SomeJobContract
{
	public const string JobName = "SomeApi.SomeJobUniqueName";

	public const string SomeParameterName = "BusinessObjectId";

	public static IDictionary<string, object> BuildJobDetails(int businessObjectId)
	{
		return new Dictionary<string, object>()
		{
			{ SomeParameterName, businessObjectId }
		};
	}
}

Job implementation – its type cannot be resolved on the client side because the client has no reference to the implementation assembly:

//implementation of the job which contract is exposed
public class SomeJob : IJob
{
	public void Execute(IJobExecutionContext context)
	{
		//use data passed with the trigger - context.Trigger.JobDataMap instead of context.JobDetail.JobDataMap
		var businessObjectId = (int?)context.Trigger.JobDataMap[SomeApi.SomeJobContract.SomeParameterName];
		
		//... regular job code
	}
}

Server-side code used to register the job (using its type) under the unique identifier exposed in the contract and hence known to the client:

//create job details - use unique job identifier
var preProcessingJob = JobBuilder.Create<SomeJob>()
                .StoreDurably(true)
                .RequestRecovery(true)
                .WithIdentity(SomeApi.SomeJobContract.JobName)
                .Build();

//add a durable job to scheduler without using a trigger
scheduler.AddJob(preProcessingJob, true, true);

Client-side code used to schedule the job with appropriate parameters – it uses only information exposed by the contract assembly:

//create a trigger for the job with specified identifier (we use contract, we have no reference to job implementation on the client side)
var trigger = TriggerBuilder
                .Create()
                .ForJob(SomeApi.SomeJobContract.JobName)
                .UsingJobData(new JobDataMap(SomeApi.SomeJobContract.BuildJobDetails(myBusinessObjectId)))
                .WithSimpleSchedule().StartNow()
                .Build();

schedulerProxy.ScheduleJob(trigger);
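
For completeness – the schedulerProxy used above has to come from somewhere. Quartz.NET can create a remoting proxy from plain configuration properties; below is a minimal sketch (the instance name and remoting address are assumptions matching my setup – adjust them to yours):

//client side - obtain a proxy to the remote scheduler (no job assembly reference needed)
//requires: using Quartz.Impl; using System.Collections.Specialized;
var properties = new NameValueCollection();
properties["quartz.scheduler.instanceName"] = "RemoteServer"; //must match the server scheduler name
properties["quartz.scheduler.proxy"] = "true";
properties["quartz.scheduler.proxy.address"] = "tcp://localhost:555/QuartzScheduler"; //assumed endpoint

var schedulerFactory = new StdSchedulerFactory(properties);
var schedulerProxy = schedulerFactory.GetScheduler();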

jQuery AJAX – redirect on unauthorised request

Problem

In the default ASP.NET MVC setup, when you send an AJAX request to an MVC action which returns JSON or a simple type value (boolean / string) and the request is not authenticated (the user has just logged out or the authentication cookie has expired), your jQuery success callback will be fired with the login page HTML as data. That happens because the default AuthorizeAttribute redirects unauthenticated requests to the login page, and jQuery follows this redirect and does not provide a simple way of telling what just happened nor why the redirect was performed.

Solution

To overcome this inconvenience you can roll a custom AuthorizeAttribute which will redirect standard HTML requests (caused by form posts and browser URL changes) to the login page, but return an error code (which can be detected on the jQuery side) for AJAX requests.

public class AJAXAwareAuthorizeAttribute : AuthorizeAttribute
{
    protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
    {
        //for AJAX requests return a status code instead of redirecting to the login page
        if (filterContext.RequestContext.HttpContext.Request.IsAjaxRequest())
            filterContext.Result = new HttpStatusCodeResult((int)System.Net.HttpStatusCode.Forbidden);
        else
            base.HandleUnauthorizedRequest(filterContext);
    }
}
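
To put the attribute to work, use it in place of the built-in one on your controllers or actions. A hypothetical JSON-returning action (GetOrdersForCurrentUser is a placeholder for your own code):

[AJAXAwareAuthorize]
public ActionResult CurrentUserOrders()
{
    //an unauthenticated AJAX call now receives a 403 instead of the login page HTML
    return Json(GetOrdersForCurrentUser(), JsonRequestBehavior.AllowGet);
}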


Then you can add a global AJAX error handler to check for the 403 (Forbidden) status code and redirect / show an error / do anything else when this code is returned from the controller.

$(document).ajaxError(function (event, xhr, settings, error) {
    //when there is an AJAX request and the user is not authenticated -> redirect to the login page
    if (xhr.status == 403) { // 403 - Forbidden
        window.location = '/login';
    }
});
Now your success callback will not fire for unauthenticated requests and the error callback will be triggered instead.

jQuery – how to get the validation framework to work with hidden fields

Problem

jQuery validation does not validate hidden fields by default, even when appropriate attributes (data-val, data-val-required etc.) are applied.

Solution

You can change that by overriding the jQuery validation defaults. There is an ignore option which is set to :hidden, causing all hidden elements to be skipped when validation happens.

If you want only specific hidden fields to get validated you can do something like this:

$.validator.setDefaults({
        ignore: ":hidden:not([data-validate-hidden])"
    });


This will cause all hidden fields to be ignored by the validation framework unless they have the data-validate-hidden attribute (e.g. <input type="hidden" data-validate-hidden data-val="true" … />).

NetBeans – Remote Java ARM debugging with GUI

There is a known issue related to NetBeans remotely debugging a Java application with GUI (Swing/JavaFX) running on embedded Linux (Raspberry Pi, BeagleBone Black etc.).

When one tries to run / debug an application with GUI components, a java.awt.HeadlessException with the message “No X11 DISPLAY variable was set, but this program performed an operation which requires it” is thrown.

Long story short, the DISPLAY environment variable is empty and the remotely run program does not know the display it is supposed to appear on – hence the error.

For my setup the problem was solved by a small modification to the ANT build XML script. I found the “-copy-to-remote-platform” target, copied it to Notepad++ and slightly modified it by prefixing the remote run commands with export DISPLAY=:0; (the lines that were highlighted in the original post).

Then I pasted the modified ANT target into the build.xml file of my project to override the original (you cannot just edit the …-impl.xml file because your changes will be lost after each NetBeans restart or configuration change).

Voila! Now I can run and debug my application remotely with the GUI displayed on my BeagleBone Black 7” LCD display – pretty neat. One could also modify my code to point at a remote X server (like Xming) as the display, e.g. by exporting DISPLAY=<desktop-ip>:0 instead.

<!-- Remote ARM linux deploy overwrite -->
    <target name="-copy-to-remote-platform">
      <macrodef name="runwithpasswd" uri="http://www.netbeans.org/ns/j2se-project/remote-platform/1">
            <attribute name="additionaljvmargs" default=""/>
            <sequential>
                <sshexec host="${remote.platform.host}" port="${remote.platform.port}" username="${remote.platform.user}" password="${remote.platform.password}" trust="true" command="mkdir -p '${remote.dist.dir}'"/>
                <scp todir="${remote.platform.user}@${remote.platform.host}:${remote.dist.dir}" port="${remote.platform.port}" password="${remote.platform.password}" trust="true">
                    <fileset dir="${dist.dir}"/>
                </scp>
                <antcall target="profile-rp-calibrate-passwd"/>
                <sshexec host="${remote.platform.host}" port="${remote.platform.port}" username="${remote.platform.user}" password="${remote.platform.password}" trust="true" usepty="true"
                    command="export DISPLAY=:0; cd '${remote.project.dir}'; ${remote.platform.exec.prefix}'${remote.java.executable}' @{additionaljvmargs} -Dfile.encoding=${runtime.encoding} ${run.jvmargs} ${run.jvmargs.ide} -jar ${remote.dist.jar} ${application.args}"/>
            </sequential>
        </macrodef>
        <macrodef name="runwithkey" uri="http://www.netbeans.org/ns/j2se-project/remote-platform/1">
            <attribute name="additionaljvmargs" default=""/>
            <sequential>
                <fail unless="remote.platform.keyfile">Must set remote.platform.keyfile</fail>
                <sshexec host="${remote.platform.host}" port="${remote.platform.port}" username="${remote.platform.user}" keyfile="${remote.platform.keyfile}" passphrase="${remote.platform.passphrase}" trust="true" command="mkdir -p '${remote.dist.dir}'"/>
                <scp todir="${remote.platform.user}@${remote.platform.host}:${remote.dist.dir}" port="${remote.platform.port}" keyfile="${remote.platform.keyfile}" passphrase="${remote.platform.passphrase}" trust="true">
                    <fileset dir="${dist.dir}"/>
                </scp>
                <antcall target="profile-rp-calibrate-key"/>
                <sshexec host="${remote.platform.host}" port="${remote.platform.port}" username="${remote.platform.user}" keyfile="${remote.platform.keyfile}" passphrase="${remote.platform.passphrase}" trust="true" usepty="true"
                    command="export DISPLAY=:0; cd '${remote.project.dir}'; ${remote.platform.exec.prefix}'${remote.java.executable}' @{additionaljvmargs} -Dfile.encoding=${runtime.encoding} ${run.jvmargs} ${run.jvmargs.ide} -jar ${remote.dist.jar} ${application.args}"/>
            </sequential>
        </macrodef>
    </target>

Offline Pessimistic Lock in Entity Framework (or any other ORM)

I’ve always found it surprising that so few projects are started with data access concurrency in mind. I’ve heard many discussions about new fancy frameworks and UI controls the teams were about to use, but the possibility of concurrent access to users’ data didn’t appear to be a concern in their minds.

When you think about it, it seems quite logical. People have a natural tendency to avoid problems they haven’t encountered directly. There are very few people with the attitude of challenging obstacles not lying directly in their path. And after all, most of the questions managers tend to ask are like “when will you finish this use case” or “how much will this cost”. Eventually, the more technically aware PMs ask for iPad or recent web browser support. I’ve never heard a manager ask, for example, “what will happen when user A edits an invoice while user B is issuing a warehouse document?”.

The fact that software tool providers do not mention concurrency handling (among other things) in the context of using their tools doesn’t help at all. There are still many people who acquire their knowledge mostly from Microsoft marketing hype and don’t ask themselves this kind of tough question. It is also very significant that Microsoft’s flagship ORM product – Entity Framework – implements out of the box only a very limited optimistic concurrency mechanism, which is not enough for many real-world systems.

To shed some light on the concurrency issue, I decided to show a real-life example of the Offline Pessimistic Lock pattern I implemented in my recent project.

The Offline Pessimistic Lock is a very useful pattern. It allows you to ensure that, for example, only one user is altering an object’s data. In my example this happens by creating a separate lock object for the whole document. This object contains information about the type of the locked object (i.e. the domain entity), its key and the user who holds the lock.

The LockManager class is responsible for acquiring, releasing and ensuring these locks. The lock objects are stored in the WriteLocks table:

CREATE TABLE [dbo].[WriteLocks](
	[OBJECT_TYPE] [nvarchar](255) NOT NULL,
	[OBJECT_ID] [nvarchar](50) NOT NULL,
	[ACCOUNT_ID] [int] NOT NULL,
	[ACQ_TIMESTAMP] [datetime] NOT NULL,
 CONSTRAINT [PK_WriteLocks] PRIMARY KEY CLUSTERED 
(
	[OBJECT_TYPE] ASC,
	[OBJECT_ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

The LockManager implementation follows (UserInfo, CoreEntitiesBuilder, WriteLocks and Res are project-specific types):

using System;
using System.Data.SqlClient;
using System.Linq;
using System.Windows.Forms;

namespace ATMS.Core.Concurrency
{
    public class LockManager
    {
        /// <summary>
        /// Utility method for showing information about existing lock on object
        /// </summary>
        public void ShowNotification(Control parent)
        {
            MessageBox.Show(Res.Lock_ObjectBeingLocked, string.Empty, MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
        }

        /// <summary>
        /// Ensures that lock on the object already exists
        /// </summary>
        /// <typeparam name="TObj">Entity type</typeparam>
        /// <param name="id">Entity key</param>
        /// <returns>Returns true if lock acquired by current user exists; false otherwise</returns>
        public bool EnsureLock<TObj>(object id)
        {
            var user = UserInfo.Current;

            if (user == null)
                throw new InvalidOperationException("User not logged in - can not release write lock");

            var typeStr = typeof(TObj).FullName;
            var idStr = id.ToString();

            using (var ctx = CoreEntitiesBuilder.Build())
            {
                var exist = ctx.WriteLocks.FirstOrDefault(x => x.OBJECT_TYPE == typeStr && x.OBJECT_ID == idStr && x.ACCOUNT_ID == user.AccountId);
                return exist != null;
            }
        }

        /// <summary>
        /// Removes lock acquired on the object hence it is available for locking by other users
        /// </summary>
        /// <typeparam name="TObj">Entity type</typeparam>
        /// <param name="id">Entity key</param>
        /// <returns>Returns true if there was a lock to release; false otherwise</returns>
        public bool ReleaseLock<TObj>(object id)
        {
            var user = UserInfo.Current;

            if (user == null)
                throw new InvalidOperationException("User not logged in - can not release write lock");

            var typeStr = typeof(TObj).FullName;
            var idStr = id.ToString();

            using (var ctx = CoreEntitiesBuilder.Build())
            {
                var exist = ctx.WriteLocks.FirstOrDefault(x => x.OBJECT_TYPE == typeStr && x.OBJECT_ID == idStr);

                if (exist == null)
                    return true;

                //only the owner of the lock is allowed to release it
                if (exist.ACCOUNT_ID == user.AccountId)
                {
                    ctx.WriteLocks.Remove(exist);
                    ctx.SaveChanges();
                    return true;
                }

                return false;
            }
        }

        /// <summary>
        /// Removes a batch of locks acquired on objects
        /// </summary>
        /// <typeparam name="TObj">Entity type</typeparam>
        /// <param name="ids">Entity keys</param>
        /// <returns>Returns true if all locks were successfully released; false otherwise</returns>
        public bool ReleaseLocks<TObj>(object[] ids)
        {
            var user = UserInfo.Current;

            if (user == null)
                throw new InvalidOperationException("User not logged in - can not release write lock");

            var typeStr = typeof(TObj).FullName;
            var idStr = ids.Select(x => x.ToString());

            using (var ctx = CoreEntitiesBuilder.Build())
            {
                var exist = ctx.WriteLocks.Where(x => x.OBJECT_TYPE == typeStr && idStr.Contains(x.OBJECT_ID)).ToArray();

                var canRemove = exist.Where(x => x.ACCOUNT_ID == user.AccountId).ToArray();
                var canNotRemove = exist.Where(x => x.ACCOUNT_ID != user.AccountId).ToArray();

                ctx.WriteLocks.RemoveRange(canRemove);
                ctx.SaveChanges();

                return !canNotRemove.Any();
            }
        }

        /// <summary>
        /// Acquire locks on list of objects
        /// </summary>
        /// <typeparam name="TObj">Entity type</typeparam>
        /// <param name="ids">Entity keys</param>
        /// <returns>Returns true when all locks where successfully acquired; false otherwise</returns>
        public bool AcquireLocks<TObj>(object[] ids)
        {
            var user = UserInfo.Current;

            if (user == null)
                throw new InvalidOperationException("User not logged in - can not acquire write lock");

            var typeStr = typeof(TObj).FullName;
            var idStr = ids.Select(x => x.ToString());
            var result = false;

            using (var ctx = CoreEntitiesBuilder.Build())
            {
                var existing = ctx.WriteLocks.Where(x => x.OBJECT_TYPE == typeStr && idStr.Contains(x.OBJECT_ID)).ToArray();

                if (!existing.Any() ||
                    existing.All(x => x.ACCOUNT_ID == user.AccountId))
                {
                    foreach (var ex in existing)
                        ex.ACQ_TIMESTAMP = DateTime.Now;

                    var notExisting = idStr
                        .Where(x => !existing.Any(e => e.OBJECT_ID == x))
                        .Select(x => new WriteLocks()
                        {
                            ACCOUNT_ID = user.AccountId,
                            ACQ_TIMESTAMP = DateTime.Now,
                            OBJECT_ID = x,
                            OBJECT_TYPE = typeStr
                        });

                    try
                    {
                        ctx.WriteLocks.AddRange(notExisting);
                        ctx.SaveChanges();

                        result = true;
                    }
                    catch (SqlException) //another user inserted the lock first (PK violation); with EF6 this may surface wrapped in DbUpdateException
                    {
                        result = false;
                    }
                }
                else
                    result = false;
            }

            return result;
        }

        /// <summary>
        /// Acquire lock on a single object
        /// </summary>
        /// <typeparam name="TObj">Entity type</typeparam>
        /// <param name="id">Entity key</param>
        /// <returns>Returns true when lock was acquired successfully; false otherwise</returns>
        public bool AcquireLock<TObj>(object id)
        {
            var user = UserInfo.Current;

            if (user == null)
                throw new InvalidOperationException("User not logged in - can not acquire write lock");

            var typeStr = typeof(TObj).FullName;
            var idStr = id.ToString();
            var result = false;

            using (var ctx = CoreEntitiesBuilder.Build())
            {
                var exist = ctx.WriteLocks.FirstOrDefault(x => x.OBJECT_TYPE == typeStr && x.OBJECT_ID == idStr);

                if (exist != null && exist.ACCOUNT_ID == user.AccountId)
                {
                    exist.ACQ_TIMESTAMP = DateTime.Now;
                    ctx.SaveChanges();

                    result = true;
                }
                else if (exist != null && exist.ACCOUNT_ID != user.AccountId)
                {
                    result = false;
                }
                else
                {
                    try
                    {
                        ctx.WriteLocks.Add(new WriteLocks()
                        {
                            ACCOUNT_ID = user.AccountId,
                            ACQ_TIMESTAMP = DateTime.Now,
                            OBJECT_ID = idStr,
                            OBJECT_TYPE = typeStr
                        });
                        ctx.SaveChanges();

                        result = true;
                    }
                    catch (SqlException) //another user inserted the lock first (PK violation); with EF6 this may surface wrapped in DbUpdateException
                    {
                        result = false;
                    }
                }
            }

            return result;
        }
    }
}

Here are some examples of LockManager class usage:

private void btnSave_Click(object sender, EventArgs e)
{
	this.WithDbErrorHandling(
		() =>
		{
			if (!_lockMan.EnsureLock<rcr_CVDocument>(_currentDocument.ID))
			{
				_lockMan.ShowNotification(this);
				
				// ... 
			}
			else
				SaveDocumentData(_currentDocument.ID);
			
			// .... update UI controls etc.
		});
}

private void btnDelete_Click(object sender, EventArgs e)
{
	if (!_lockMan.AcquireLock<rcr_CVDocument>(_currentDocument.ID))
	{
		_lockMan.ShowNotification(this);
		return;
	}

	try
	{
		_currentDocument.TO_DELETE = true; //real delete handled by backend worker
		_entities.SaveChanges();
	}
	finally
	{
		_lockMan.ReleaseLock<rcr_CVDocument>(_currentDocument.ID);
	}
}
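
One practical remark: locks abandoned by crashed or disconnected clients would stay in the WriteLocks table forever. The ACQ_TIMESTAMP column makes a cleanup routine possible. Below is a minimal sketch of a method that could be added to LockManager (the cleanup itself and the timeout policy are my assumptions – they were not part of the original implementation):

//removes locks older than the given timeout; intended to be run periodically by a background worker
public void ReleaseStaleLocks(TimeSpan timeout)
{
    //compute the threshold outside the query so EF can translate the comparison to SQL
    var threshold = DateTime.Now - timeout;

    using (var ctx = CoreEntitiesBuilder.Build())
    {
        var stale = ctx.WriteLocks.Where(x => x.ACQ_TIMESTAMP < threshold).ToArray();

        ctx.WriteLocks.RemoveRange(stale);
        ctx.SaveChanges();
    }
}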

For further reading I suggest Martin Fowler’s “Patterns of Enterprise Application Architecture”, which describes the Pessimistic Offline Lock pattern (and its alternatives) in detail.

Cheap web-cam monitoring in the Cloud with raspberry pi, ASP.NET MVC and Marionette.js

I’ve recently seen a few articles about video streaming with a Raspberry Pi using a node.js streaming server and the ffmpeg utility. It’s funny how easily you can create your own live video stream with open-source tools and a cheap mini-computer. But there are some problems with this approach. The highest resolution I was able to capture, encode and live-stream was 160×120. That is too low to recognize people or plate numbers in the picture. There are also some network issues that make things harder, like maximum throughput and security, and it requires you to have a public IP.

All these issues made me wonder if it wouldn’t be better to capture fewer frames (one per second, or even one per few seconds) but with higher quality – maybe even in HD, which is not a problem for today’s webcams. If you have a bunch of time-stamped HD images you will be able to identify people or even read the plate numbers of passing cars etc. To overcome the network issues I decided to give OpenStack Object Storage (hosted by Oktawave) a try.

The first thing I had to do was plug a webcam into my Raspberry Pi board and get it connected to my access point.

[Photo: the Raspberry Pi with the webcam attached]

Then I had to install two utilities from the Linux repositories. The first one was fswebcam, used for capturing the webcam image, and the other one was imagemagick, used for comparing pictures in order to avoid uploading duplicated images (when nothing happens in front of the camera, there is no need to use cloud storage for which you pay).

Then I wrote a simple Python script (available at my github) which generally does the following:

  1. authenticate against open-stack object storage,
  2. create directory for new snapshot (/camera-name/year/month/day/hour/),
  3. take a snapshot and compare it with previous one,
  4. upload picture to storage if it is different enough from the previous one,
  5. go to step 2.

The result of running this script on my Raspberry Pi was visible in the administration panel provided by Oktawave.

[Screenshot: the uploaded snapshots listed in the Oktawave administration panel]

Using the administration panel was not the most convenient way to browse my captured photos, so I decided to use ASP.NET MVC 5, Bootstrap and Marionette.js to create a simple browser for my data. The whole solution is also available on my github.

The app is very straightforward and looks as in the picture below.

[Screenshot: the web app browsing the captured snapshots]

This is just a PoC, but it’s amazing how easily you can create something fun and useful with today’s toys.