Posted by: sbelus | 22/01/2016

Quartz.NET – How to fire trigger NOW


In one of my projects I have the Quartz.NET job scheduler with a job scheduled once per day (at night). Essentially it is an SQL script fired from C# code in a loop, several times with different parameters. Unfortunately, it sometimes fails.

When it fails I need to fix it and most likely launch it again. The problem is that I can't just change the cron expression for the job and restart the scheduler. The scheduler runs as a Windows Service on a remote server. Eventually the job will fire the next day, but sometimes I need to run it as soon as possible. The only solution for me is to modify the data in the Quartz SQL tables.


I found a column NEXT_FIRE_TIME in the QRTZ_TRIGGERS table, which is a bigint and determines the next fire time (what a surprise 🙂 ) of a trigger, expressed in ticks. The date can be retrieved with a simple SQL query (thanks to this answer). I can list all triggers and their next fire times as a human-readable datetime (Download script)


As Quartz.NET periodically queries NEXT_FIRE_TIME before it runs the job, the real problem is putting the proper value there. And here comes another helpful answer.

Converting a date and time to ticks, with some modifications, looks like this: (Download script)

DECLARE @ticksPerDay BIGINT = 864000000000 -- ticks per day; DO NOT CHANGE
DECLARE @triggerName VARCHAR(300)
DECLARE @date DATETIME

--####### set the source date value here ########
SET @date = GETUTCDATE() -- put a UTC date here
SET @triggerName = 'triggerName'

DECLARE @date2 DATETIME2 = @date
DECLARE @dateBinary BINARY(9) = CAST(REVERSE(CAST(@date2 AS BINARY(9))) AS BINARY(9))
DECLARE @days BIGINT = CAST(SUBSTRING(@dateBinary, 1, 3) AS BIGINT)
DECLARE @time BIGINT = CAST(SUBSTRING(@dateBinary, 4, 5) AS BIGINT)
DECLARE @nextFireTime BIGINT = @days * @ticksPerDay + @time

SELECT @date AS [DateTime]
    ,@nextFireTime AS [Ticks]
    ,CAST(@nextFireTime / 864000000000.0 - 693595.0 AS DATETIME) + (GETDATE() - GETUTCDATE()) AS [CheckDate] -- local time (for SQL Server)

UPDATE QRTZ_TRIGGERS
SET NEXT_FIRE_TIME = @nextFireTime
WHERE TRIGGER_NAME = @triggerName
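The tick arithmetic can be sanity-checked outside SQL Server. A minimal Python sketch, assuming .NET-style ticks (100 ns units counted from 0001-01-01 UTC), which is how Quartz.NET stores NEXT_FIRE_TIME:

```python
from datetime import datetime, timedelta

TICKS_PER_DAY = 864_000_000_000   # matches @ticksPerDay in the SQL script
DOTNET_EPOCH = datetime(1, 1, 1)  # .NET ticks count from 0001-01-01

def to_ticks(dt: datetime) -> int:
    """Convert a UTC datetime to .NET ticks (100 ns units)."""
    delta = dt - DOTNET_EPOCH
    return delta.days * TICKS_PER_DAY + delta.seconds * 10_000_000 + delta.microseconds * 10

def from_ticks(ticks: int) -> datetime:
    """Convert .NET ticks back to a datetime."""
    return DOTNET_EPOCH + timedelta(microseconds=ticks // 10)

# The CheckDate formula above uses 693595: the number of whole days
# between 0001-01-01 (the ticks epoch) and 1900-01-01 (SQL Server's
# datetime epoch).
assert (datetime(1900, 1, 1) - DOTNET_EPOCH).days == 693595
```

Running `to_ticks(GETUTCDATE()-equivalent)` gives the same bigint the SQL script computes, so you can double-check a value before writing it into the table.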

I hope you find it useful.

PS Sorry for the external links to the SQL scripts. I need to think about moving this blog to another server.

Posted by: sbelus | 28/03/2014

Serialization – testing performance

Lately I was looking for an optimal solution for object serialization in .NET. Optimal means:

  • small size of serialized data
  • fast serialization & deserialization of 50,000 objects
  • serialized data should be human-readable

I found some .NET implementations of JSON & YAML serialization, which I tested:

Before I set up the test procedure I ran some pre-tests, and YamlSerializer didn't show its good side. It was the slowest, and for some objects it threw an exception while the other serializers handled the same objects fine. It was probably a configuration issue; however, I didn't spend additional time solving it once I saw that the others were better.

Test procedure

For the tests I used an entity with a nested list of another entity. As a whole it was not a big object; however, it contained ~50 string fields, 15 decimals, 10 booleans, 6 DateTimes and 6 integers. All the results were compared to the standard XmlSerializer.
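The shape of such a benchmark is simple. A hypothetical Python sketch of the same measurement pattern (the stdlib json module stands in for the .NET serializers, and the `Order` entity with its few fields is made up for illustration, far smaller than the ~90-field entity described above):

```python
import json
import time
from dataclasses import dataclass, asdict, field
from datetime import datetime

@dataclass
class Order:
    # a made-up stand-in entity mixing the field types from the real test
    name: str = "sample"
    price: float = 9.99
    active: bool = True
    created: str = field(default_factory=lambda: datetime(2014, 3, 28).isoformat())

def measure(objs, rounds):
    """Return (payload size in bytes, serialize seconds, deserialize seconds)."""
    dicts = [asdict(o) for o in objs]
    start = time.perf_counter()
    for _ in range(rounds):
        payload = json.dumps(dicts)          # serialization leg
    ser = time.perf_counter() - start
    start = time.perf_counter()
    for _ in range(rounds):
        json.loads(payload)                  # deserialization leg
    de = time.perf_counter() - start
    return len(payload), ser, de

size, ser, de = measure([Order() for _ in range(100)], rounds=10)
```

Timing serialization and deserialization separately, as here, is what makes results like "fast to write, slow to read" (YamlDotNet below) visible at all.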

Size of serialized data

The following graph shows the total size of the serialized data:


As you can see, JSON & YAML serialization produce very similar sizes. That is not a surprise if we look at the format of both standards. All sizes are almost 50% (in this case) smaller compared to XmlSerializer.

Time performance

The first test (1000 rounds of serialization) showed that only three of the serializers could be used in the bigger test.


The time of YamlDotNet deserialization is totally unacceptable. It took over 20 seconds, while the others did their work in milliseconds (XmlSerializer, Json.NET & ServiceStack) or took about 3 seconds (JsonUtilities).

For 50,000 rounds of serialization JsonUtilities took over 2 minutes, so it is not shown in the chart below:


As you can see, the other serializers were far better than JsonUtilities (CodeFluent Runtime Client). There is also no big difference between the commercial serializer (ServiceStack) and the MIT-licensed Json.NET. ServiceStack is faster, but in this case the difference is less than one second when serializing and a little above one second when deserializing. XmlSerializer is also fast; however, it does not meet my size goal.


Posted by: sbelus | 20/03/2014

Algorithmics – time to check your skills

Work, work, work… It's been a while since my last blog post. I've been involved in three projects since then. Today I will present a loose subject: algorithmics.

In daily work, when business and functionality goals must meet client requirements, there is not much algorithmics in dedicated projects. Of course it depends on what exactly you are doing and what you are responsible for, but you must admit that as a software developer you face a lot of tedious work. No one else will do it for us; in fact, you are the one who needs to do it.
Sometimes it is good to stop and do something new, something that impresses you. And here it comes (old but great): ProjectEuler.net. You can find there hundreds of algorithmic problems to solve. You can do it for your own satisfaction or to compete with others and gain some awards.


Most Project Euler problems need programming, while others need only a piece of paper and a pen… and thinking (of course). The result is the only goal. However, that does not stop you from thinking about the solution and doing it in several ways, for instance: think about a parallel algorithm and check whether it is faster or not (why not?). At the end you can compare your solution with other users'. There are dedicated forums for each problem, where you can find many solutions in various languages. It's great fun. Try it, it doesn't hurt 🙂
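To give a taste: the site's first and best-known problem asks for the sum of all the multiples of 3 or 5 below 1000, which in Python is nearly a one-liner:

```python
# Project Euler, problem 1: sum of the natural numbers below 1000
# that are multiples of 3 or 5.
total = sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0)
print(total)  # → 233168
```

Later problems are much harder, of course, and brute force quickly stops working, which is exactly where the fun begins.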

Posted by: sbelus | 14/02/2013

CodeFluent Entities – In The Real Project

Now it’s time to face with a real project and how it works with CodeFluent Entities. My experiences concerns a project with around 50 entities, so it is not so big, but enough to collect issues and think what could be done better before next, maybe bigger, project.

Version Control System

The goal was to keep the sources ready to compile and run for everyone, including those who don't know anything about CodeFluent Entities. That meant keeping everything CodeFluent generates in the version control system.
First it was Microsoft Visual SourceSafe 2005 (I know it is an old tool, but it was not my decision to use it). There were a few problems with that:

  • CodeFluent recreates every already generated file from the model. That means every file is changed. However, producers don't check out files. The developer needs to do it himself, find manually what has changed, and commit it to source control.
  • CodeFluent adds a time stamp to (almost) every generated file. That leads to conflicts when two developers run producers independently.
  • There is also a .NET runtime version added, which can differ depending on the Windows version, the service packs installed, and even the language packs installed on the system.

I think the first problem was the biggest we faced in this matter, because the developer could miss a changed file and commits might not compile. To avoid this situation every developer had to regenerate the model after getting the latest version. So the goal was not achieved.

The solution was to move the sources to another version control system: Git. The main reason was that Git compares files by their content, so a file only shows up as changed when its content differs from the last commit. Yes, but the time stamps are different every time we generate the model. This option can be turned off in the CodeFluent configuration file (defaultProducerProductionFlags="RemoveDates"). What about the .NET runtime version? We used a simple patch producer (also from the CodeFluent package) that simply removed the runtime version (using a regex) from every file at the end of generation. (EDIT: Since 1 March 2013, CodeFluent Entities (build 702) provides the RemoveDiffs production flag, which also removes the runtime version from generated files. Now the patch producer is redundant.)

Own tracking properties

Our other goal was to use our own login system and keep track of who changed an entity and when. CodeFluent provides a standard mechanism to track changes, but there is no possibility to change the user name; it is in the format "ComputerName\WindowsLoggedInUser". The solution was CodeFluent's aspect functionality. An aspect can, in a very simple way, manipulate the whole project (entities, their properties, methods, etc.). For example, we can add some properties to all entities. On build, such an aspect is executed before producing; that is how we added additional properties to (almost) all entities. The other problem was to find the place where the properties would be filled with data (the user name) from the application's global context.

The first try was the OnBeforeSave rule. It worked fine as long as you always saved objects using the Save() method, but OnBeforeSave is not always executed, especially when you try to save a modified collection of objects. As we use a WCF architecture, we knew that each object we need to save gets serialized. That is why we use the OnSerializing rule to fill in all the data. This also works fine for collections.
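The idea of stamping audit data during serialization can be sketched in Python. This is only an analogue of the approach, not the CodeFluent API; the `CURRENT_USER` global and the `Entity` class are hypothetical stand-ins for the application's login context and the generated entities:

```python
import json

CURRENT_USER = "jdoe"  # hypothetical application-global login context

class Entity:
    """A plain entity whose audit field stays empty until serialization."""
    def __init__(self, name):
        self.name = name
        self.modified_by = None

class AuditingEncoder(json.JSONEncoder):
    """Fills the audit field just before each object goes over the wire,
    so single objects and whole collections are covered alike."""
    def default(self, obj):
        if isinstance(obj, Entity):
            data = dict(obj.__dict__)
            data["modified_by"] = CURRENT_USER  # stamped at serialization time
            return data
        return super().default(obj)

# A collection serializes through the same hook as a single object:
payload = json.dumps([Entity("a"), Entity("b")], cls=AuditingEncoder)
```

Hooking serialization rather than save is exactly what makes the collection case work: every object that leaves the process passes through the encoder, no matter which method sent it.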

Minimize transferred data 

The next goal was to minimize the transferred data. This matters most when loading a collection, because when an entity is connected with other entities, those will most likely be loaded along with the collection. The problem is that this is done one by one, not as a whole (a single query). This is very inefficient, especially when the collection is big. A good solution was to use the views functionality. We can simply define a view that contains only the data we need on the list, and it takes only one SQL query. The view can be created very simply: you just say which properties from the entity and connected entities you need. It is also possible to create your own view query (a so-called "raw view"), where you can use any SQL query.

Note that this approach is good only if you are not editing the whole collection in-line (i.e. in a grid).

XML-Synchronizer issues

For the developer it is good to edit the XML data as fast as needed; however, this is difficult due to the CodeFluent XML synchronizer, which checks integrity between all the XML parts. It is still not perfect, so it may cause problems when the project is loaded in Visual Studio. As SoftFluent support says on their forum, "Modifying a .CFP part file when the project is loaded is not the recommended way of changing the model". For the developer that is an issue, because the synchronizer may change your XML files and you won't even see the difference in the editor until you build the model or reopen Visual Studio. For example, there is the modelNullable attribute on the viewProperty node; it can just disappear when some XML part is saved. It is also quite possible that the synchronizer will compress complex SQL queries into one line of code! I don't think I need to describe how painful this can be when you need to edit it.


What I wrote is not all of the issues we had in the real project; these were the most time-consuming to solve. I bet you will also find some issues during your development. CodeFluent Entities is not perfect; personally I submitted at least 5 bugs. However, they have very good support and they fix bugs very quickly. They are also ready to help and give you a workaround if possible. Finally, I can recommend CodeFluent Entities for software development. Despite all the issues we had, it really can simplify life during development.

In conclusion:

  • It is a good idea to use Git as the version control system
  • To avoid conflicts and the time spent resolving them, it is better to turn off adding dates to generated files
  • For the same reason it is good to also remove the runtime version (i.e. by using the Patch producer) (redundant since 1 March 2013 – see the edit note above)
  • Use aspects to automatically create properties, methods, etc. in all entities
  • Use views when possible to minimize transferred data
  • The XML synchronizer is not perfect, so it is not recommended to edit XML parts when the project is loaded.

If you have any comments, please leave them below. Thanks.

In the previous post I described the vision of CodeFluent Entities. Now I will try to show how it works.

We have two options at the beginning: start a new project or import from an existing database. I will focus on the first option.

Let's say we just want to create a very simple Windows application without caring about the UI. A small movie database would be fine. I created only a database project, a class library for business entities and a Windows application project. Let's define the entities as in the picture (I used the designer to create this simple model):


The model XML also looks very simple:


All those actions took me 10 minutes (including configuring the CodeFluent model). After code generation we get:

  • a database schema together with views and stored procedures to load, save (with update), delete and load all entities by type
  • business entities with properties and methods that interact with the database procedures
  • collection entities with methods like LoadAll, SaveAll, etc.

I bet you won’t do those things faster than 10 minutes by your self.

Now it is possible to create the UI ourselves and just use those generated parts. When you create a desktop application you may also want to use WCF services. It is just as simple as the steps above: we just need to add a new sub-producer, the Smart Client producer, which will generate client entities (into a separate project) and WCF services (one per entity). We can also use the CodeFluent built-in service to host those services; it can work either as a console application or a Windows service. Configuration files can also be generated by a simple Template producer that uses a pattern for configuration files.

As you can see, CodeFluent Entities is a very powerful tool. Can it be used for real projects? In my opinion, yes. In the next post I will describe what issues you can face in a bigger project.

What is CodeFluent Entities? As the vendor says: "CodeFluent Entities is a unique product integrated into Visual Studio 2008 SP1 / 2010 / 2012 which allows developers to generate database scripts (e.g. T-SQL, PL/SQL, MySQL, Pg/SQL), code (e.g. C#, VB), web services (e.g. WCF, JSON/REST) and UIs (e.g. Windows 8, ASP.NET, MVC, SharePoint, WPF)". What is CodeFluent not? It is definitely not an ORM system, but let's start from the beginning.

In general this technology should help us create code faster, as it generates files with code, database scripts, schemas and more. CodeFluent Entities provides about 20 producers that can generate every part of the application. For example: the SQL producer generates the database schema and scripts; the C# BOM producer generates C# code with business entities (defined by the user) together with methods that interact with the database entries; the Smart Client producer generates proxy entities and WCF contracts. As you can see, you can build a whole application with minimum effort. All you need to do is use the designer to create entities and add some additional logic. You can also edit the XML-based files outside the designer, which, in daily work, can be more comfortable for software developers.


In my opinion the designer is very useful only at the beginning of the project, when there are not many entities. When the project becomes larger, the visualisation doesn't look nice and using the designer doesn't have many benefits. Of course the authors predicted that issue, and they offer so-called "surfaces" that allow you to group entities to keep some logic in the design. I suppose that after a while you will probably end up editing the XML files yourself. It can be faster once you get to know the structure and attributes well.

Code generation is continuous. That means you can edit the model (entities and logic), then generate files and add your custom code. After that you can edit the model again and generate files again and again. It's important to make corrections in the model rather than in the generated files, because after another generation the files will be overwritten by those generated from the new model.

The theory looks great. How does it work in a real project? I will try to describe my impressions in the next post.

Posted by: sbelus | 25/03/2012

Let me introduce myself :)


This is my first blog entry, so I would like to introduce myself.

I’m a Software Developer – Specialist currently working at Infover S.A. located in Kielce, Poland. I have over 5 years of experience in .NET technologies such as: .NET framework 2.0, 3.5 and 4.0, ASP.NET 2.0 and Winforms as well. Previously I worked at Volvo (Wroclaw, Poland), where I was working for over 4,5 years including internship program in Summer 2007.

Why I’m writing this blog? Well… during looking for a new job in my home-city I realized that the only reference to my IT knowledge is in my CV. I want to change it, so that is why I’m writing it.

In the coming weeks I plan to publish posts about programming in practice using CodeFluent Entities and Git (source control). So please stay tuned. I hope you'll enjoy it! Feel free to comment on my posts.