A few words about automation

For some time now, I have been hearing a lot about automation, and more and more often I hear the statement: we should automate everything. In my opinion, you can automate a lot, but not everything. So far, the human factor is still needed. Maybe in the near future there will be advanced artificial intelligence that allows people to take on more creative tasks at work than they have today. And here comes the question: what is a more creative task?

Automation can be talked about in two ways. The first is automating tasks through bots or scripts that perform the right tasks automatically. The second is running automated tests, which is related to the first because it also eliminates human interference from the process.

Automation, like every field, can bring a lot of good and save time and money, but you can also sink a lot of time and money into it without getting any meaningful benefits.
Let’s be honest: automation is not cheap. Done in an appropriate way, however, it will make testing more effective.

Many times I have encountered a situation where, when moving to a new environment, the existing automated tests did not work or could not even be started. This was due to two reasons. First, the code had not been maintained for a long time, by which I mean half a year or often even a year; after such a period, nobody is able to say anything about it. Second, the test code was written in a very chaotic way, without using any pattern for creating automated tests. What does this really mean? Often it means that nobody has any idea how the tests work and why they give these results and not others. A lot of time and energy is needed to maintain automated tests created without a design pattern, and it usually ends in rewriting the entire test architecture again. Recording tests and playing them back every once in a while is also a very bad option. In my opinion, it is not a solution: by the next run, this type of recorded test script will be useless.
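One pattern worth naming here is Page Object, which hides page details behind intent-revealing methods. A minimal sketch in plain Ruby (the `LoginPage` class, its field names, and the driver interface are all hypothetical for this illustration; a real suite would sit on top of a driver such as Selenium or Capybara):

```ruby
# Hypothetical Page Object: wraps one page of the application so that
# tests call intent-revealing methods instead of raw selectors.
class LoginPage
  def initialize(driver)
    @driver = driver # any object responding to fill_in / click
  end

  def login_as(user, password)
    @driver.fill_in('username', user)
    @driver.fill_in('password', password)
    @driver.click('log-in')
  end
end

# A test then reads as a scenario:
#   LoginPage.new(driver).login_as('alice', 'secret')
```

When the login form changes, only `LoginPage` is updated, not every test that logs in.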

Several times I have also met the situation where the test automation technology stack differed from the development stack. Such a solution required additional infrastructure, which meant costs and an additional environment to maintain. It also causes trouble for developers, especially when the product code is created iteratively.

At this point it is also worth mentioning CI – continuous integration, one of the foundations of automation. Above all, it gives quick feedback that a particular feature does not work. The sooner we find out that something is broken, the faster we will fix the problem; otherwise there may simply not be enough time for a “quick fix”. It also saves a lot of nerves on release day, when everyone is praying that all tests will turn green.

But the reports sometimes say something different from reality. Keep in mind that the product of automation is the report, and each one should be reviewed just as carefully. It is important to provide a single source of information about the reports, because people tend to check one source and omit the others. Besides, I have also met the assumption that we only have a problem when the tests fail, and only then do we take care of a repair. This is not quite true: tests may pass while the functionality still does not work and nobody knows why. In that case the tests are badly written and need to be thoroughly reviewed.

If automation test engineers only write UI tests, we should consider very carefully whether we are doing automation well. It also depends on the type of product we are dealing with, but there are some problems with this approach. First of all, UI tests are the most expensive to build and maintain, and UI testing is never as fast and reliable as testing the service layer and the database layer. Automation can be made much more interesting and effective without losing its essence. It may be better to write scripts, tooling, or even a bot that informs you about the progress of various activities; that will save a few hours during each deployment and make it less painful.

OK, but test automation is suited to checking repetitive tasks; that is where it is most useful. If we want to check the look and feel of a new feature, the best idea is to give it to a customer or to people who have had nothing to do with it before. Then we get fresh feedback.

If all tests pass 100% of the time, we probably do not test as much as we should. Boundary conditions should be matched in such a way that some of them pass and some of them certainly do not; a permanent 100% pass rate is a warning sign.
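The idea of deliberately placing cases on both sides of a boundary can be sketched in Ruby (the `valid_age?` function and its 0–120 range are invented for this illustration):

```ruby
# Hypothetical validator with explicit boundary conditions.
def valid_age?(age)
  age.is_a?(Integer) && age >= 0 && age <= 120
end

# Cases sit just inside and just outside each boundary.
# The outside values MUST be rejected - if they "pass",
# the test itself is broken.
[[0, true], [120, true], [-1, false], [121, false]].each do |age, expected|
  raise "boundary broken for #{age}" unless valid_age?(age) == expected
end
```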

A few final words about automation. Test automation is irreplaceable, but you have to be careful to do it with reason. The customer does not pay for the tests, but for the product: good quality and timely delivery at a reasonable price.

Multilayer architecture

For some time I have been thinking about application architecture: considering the pros and cons, and trying to redesign the concepts of how to develop an application and the architecture it uses. However, after gathering more and more information, I came to one simple conclusion: I am trying to reinvent the wheel, which I don’t really want to do. Many web systems work in a multi-layered architecture. What does that really mean, and what is it for? Regardless of whether we are building an internet portal, a booking system, a document circulation system for a company, or electronic banking, a part of the elements remains very similar, but never identical.

Why do we really need IT system architecture? It is often the case that as the complexity of an IT system increases, the appearance of additional components creates a need to separate them and to make it easy to swap one component for another. This forces the use of specific protocols and declarations of abstract classes rather than direct references to the fields and methods of a class.

Single-layer architecture
Usually, it is a single application that does not require any external communication to perform its tasks.

Two-layer architecture
Client–server or master–slave. These are usually two programs located on one machine or scattered geographically. The service provider is the superior program, and one or more programs use its services: for example, a web server, an email server, or an application server.

Multilayer architecture
This model consists of more than two layers: the user interface, data storage, and so on are separated into several layers that can be developed and scaled independently. Such a division facilitates their maintenance, and the development of one layer does not affect the others.


Description of the specific elements in the layers:

  • Frontend
    • User Interface – implements the logic of handling the presentation layer for the user
    • Gateway – invokes and provides services to external systems
    • Admin Interface – implements the logic of handling the presentation layer for the administrator
  • Backend
    • Message service (Commons) – supports common logic
    • Scheduler – responsible for periodically running processes
    • Security Management – permissions and authentication management
    • Process Management – allows running long-term processes that need to be maintained and to interact with the user
    • Domain Specific Components – responsible for persisting and accessing objects
  • Database – relational database

Advantages and disadvantages of layered architecture:

Advantages:

  • No specific technology or vendor platform is needed to create a multi-layer architecture
  • Testability – it is very easy to create tests for each layer in an automated testing environment such as JUnit
  • Separation of layers makes it easy to map functional requirements to system modules
  • Enhanced security
  • Layered architecture enables updating only the application servers, not all the clients, when we want to modify only the business logic
  • Hidden database structure

Disadvantages:

  • Communication between layers can be complex
  • Performance – additional tools are required to measure and report it
  • Difficult to implement and maintain

That’s why I decided that the Energy Billing System will be based on a multi-layered architecture. There is one more aspect: I will be creating an application based on this architecture for the first time, so I will make a lot of mistakes, which is very good. Why? Because I will be able to learn by solving them later.


Maven #02 ~ pom.xml

pom.xml is the main element of our project. All of the project’s configuration, including its dependencies, is defined in this file, for example:

  • a way of building a project
  • testing
  • running a project
  • generating documentation
  • creating a release


After generating the project, the pom.xml file looks as follows:

It looks very raw: only groupId, artifactId, modelVersion, and version. This information is not very important for us at this step. The first change we will make will be adding the JUnit library.
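For illustration, a freshly generated quickstart pom.xml with the JUnit dependency added looks roughly like this (the group and artifact names below are placeholders, and the exact versions depend on the archetype used):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- placeholder coordinates: use your own group and artifact ids -->
  <groupId>com.example</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0-SNAPSHOT</version>

  <dependencies>
    <!-- the JUnit library added as the first change -->
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```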

Now we can compile our project:

As we can see, the project builds successfully. For now, we should look at three commands:

  • mvn – this command runs Maven with the proper pom.xml file
  • clean – deletes the target catalog and old versions of files
  • compile – runs the Java compiler

Maven #01 ~ What is Maven?

Apache Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project’s build, reporting, and documentation from a central piece of information.

Maven is primarily used for Java-based projects, but it can also be used to manage projects in other programming languages like C# and Ruby. Many integrated development environments provide plugins for Maven. Typical tasks of a build tool are compiling source code, running tests, and packaging the results into jar files. Maven can also perform related activities, for example creating websites, uploading build results, or generating reports.

Automating the process of creating the initial project structure for a Java application is very important: doing it manually may take much more time, and there is also the possibility of making typos while typing commands in the console, which happens very frequently.

A little bit of history.
Originally, Maven was designed to simplify the build process in the Jakarta Turbine project.

What for?
Maven can be used for:

  • handling dependencies, including transitive dependencies
  • easy selection of tasks from the command line
  • versioning and tagging code
  • a large number of plugins for simple and complex tasks
  • integration with all IDEs, such as Eclipse, NetBeans, or IntelliJ
  • the ability to manage a number of project modules at a time
  • support for:
    • unit tests
    • integration tests
    • load tests
    • performance tests
  • application builds
  • generating documentation
  • releases
  • mailing list management

Maven works according to the Convention over Configuration pattern, which means that configuration is required only for those components that are nonstandard or that the user wants to adapt to their needs; moreover, every configuration change can be found in a single file – pom.xml.
Developers can build a Maven project without needing to understand how the individual plugins work.

I use several systems, from Linux and Windows to Mac OS X machines. For this post, I will use Mac OS X.
Open a terminal and type: brew install maven

If we already have Maven installed, the terminal should show:

After installing Maven, we should check that everything works properly; for this purpose, run Maven from the command line and check the versions of what was installed:

As you can see, the installation is not difficult at all. Installing Maven on Linux and Windows looks somewhat different.

Before we turn to creating the first project, we need to discuss a few terms, namely:

  • artifact
  • group
  • version

An artifact is a name that identifies the project within its group. A group organizes the namespace in which the artifact lives.

The group, the artifact, and the version together uniquely identify a library that we want to use. There are also things called a plugin and an archetype. A plugin is a special Java class which performs the appropriate actions on the basis of the given configuration, for example packing compiled classes into a jar file or generating documentation. An archetype is a specific plugin used to create the initial project: it contains in its structure a mapping of the directory tree of a new project, plus a pom.xml file containing basic project data.

Generating a project
To generate a Maven project, we have two options: from the console or from an IDE.
Creating a Maven project looks as follows:

With this command Maven generates a Java project:

This last step can take some time, depending on how much RAM you have (these days it’s 16 GB).
The structure of the generated project looks as follows:

We now have a complete Maven project structure with Java source code. Maven created an App.java class, which is just a simple “Hello World” program. We can also see AppTest.java, which is a simple test class. In the root catalog, we can find our pom.xml file:

To compile the Java source code, we need to trigger mvn compile in the root project directory. This command runs through all the lifecycle phases needed to compile the project.

To run the test phases instead of a full build, we just need to trigger: mvn test

To clean the project and remove all generated files from the ./target directory, we can use: mvn clean


  1. Apache Maven Project
  2. MVN Repository
  3. IntelliJ IDEA Maven
  4. Supported tags and respective Dockerfile links
  5. Jakarta Project
  6. Five things you didn’t know about Maven by IBM
  7. Introduction to Apache Maven 2
  8. Deployment of artifacts with FTP
  9. Maven Wrapper GitHub repo

Get Noticed 2017

An interesting idea has spread in the Polish software development world, recently known as “Get Noticed” (“Daj się poznać” in Polish). It’s all about developing an open-source project and blogging to share knowledge about it. Last year I also participated in this event, but for lack of time I didn’t finish my project. I hope that this year I will be able to complete my project as planned.

What does this mean for me, and especially for others?

For me, it’s a time to challenge myself with creating something that solves one simple problem, and for others, something that makes life easier. Another thing I would like to achieve is to return to writing. A few years ago, when I was 17 years old, I started my first blog. It was not easy for me, so after a few posts I closed it. Back then I didn’t know the value of sharing knowledge. Currently, I approach things a little differently.

I don’t like to guess how much I will have to pay for my bills in a given quarter or half of the year. On the other hand, no one likes to pay bills, especially high ones, which nowadays is not easy. Imagine that a high electricity or water bill suddenly arrives. Actually, we don’t need to imagine, because it usually does. So how can we be prepared? How can we reduce consumption? How can we compare consumption with previous billing periods?

The solution is an application that allows users to check the status of their energy usage at any time and to compare how much energy has been consumed in each month.

I think such an application can be helpful and will solve one basic problem, namely notifying the user of the charges as of the present day. Unfortunately, what we do with this information is up to us: will we start to save and manage our resources wisely, or will we be wasteful?

In this project, I want to use Java, Spring, and the R language for data visualization, and also Python. There will probably be an Android application, but for now I want to concentrate on the core application.

2016 -retrospection

2016 was a very strange year for me. A lot happened at the turn of the year, both good and bad. I met many great people and visited several countries, mainly in central and western Europe. I worked with many people of different characters and approaches to life.

Last year taught me a lot: it showed me what I’m doing well, and it showed me what I don’t really know.

I started programming in Ruby and Rails, and I still code at night.
Last year passed very quickly, and it is already 2017.

For the new year, I don’t really have any resolutions, just a few goals that I want to achieve, starting from the personal sphere and personal development and ending with the professional sphere.

2017 will be a good year. I assure you.

Hamming distance

Today’s post will be about the Hamming exercise. This exercise can be found on Exercism.io. The first two tasks were very simple and I will not bother anybody with them, but the Hamming exercise was a very nice way to learn something new. About this exercise: “… put in two lengths of DNA and calculate their hamming distance.”
The following is the README file with the basic instructions for the exercise:

Write a program that can calculate the Hamming difference between two DNA strands.

A mutation is simply a mistake that occurs during the creation or copying of a nucleic acid, in particular DNA. Because nucleic acids are vital to cellular functions, mutations tend to cause a ripple effect throughout the cell. Although mutations are technically mistakes, a very rare mutation may equip the cell with a beneficial attribute. In fact, the macro effects of evolution are attributable to the accumulated result of beneficial microscopic mutations over many generations.

The simplest and most common type of nucleic acid mutation is a point mutation, which replaces one base with another at a single nucleotide.

By counting the number of differences between two homologous DNA strands taken from different genomes with a common ancestor, we get a measure of the minimum number of point mutations that could have occurred on the evolutionary path between the two strands.

This is called the 'Hamming distance'.

It is found by comparing two DNA strands and counting how many of the nucleotides are different from their equivalent in the other string.

GAGCCTACTAACGGGAT
CATCGTAATGACGGCCT
^ ^ ^  ^ ^    ^^

The Hamming distance between these two DNA strands is 7.

In my solution, I first split the DNA strands into arrays using an each_shortest_strand method, but I realized that this was a bad idea. In addition, I had to iterate to the end of string1 while ensuring that I was comparing against an existing base pair in string2.
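The idea that finally worked can be sketched in a few lines of Ruby (the `Hamming` class name follows the Exercism convention; this is a reconstruction, not the exact submitted code): walk both strands in step and count the positions that differ.

```ruby
# Minimal sketch of a Hamming distance calculation:
# pair up both strands position by position and count mismatches.
class Hamming
  def self.compute(strand1, strand2)
    raise ArgumentError, 'strands must be of equal length' unless strand1.length == strand2.length

    strand1.chars.zip(strand2.chars).count { |a, b| a != b }
  end
end

# Hamming.compute('GAGCCTACTAACGGGAT', 'CATCGTAATGACGGCCT') # => 7
```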


100 Days of Code – second edition

On 2016/04/10 I started the 100 days of code challenge. It was not easy, and I almost made it. Almost means that one day was left without a commit; the good news is that 99 days had at least one commit. It was a very busy time for me. I learned a lot, especially how to code in Python. But this one day meant that the whole challenge does not count, so from today I start the next, second edition of the 100 days of code challenge. The rules are the same.


As we can see, the picture above is a screenshot from my GitHub account taken today.

Simple scaffold application in Rails

In the last few hours, I was creating an application that could work in the near future as a blog. But before that, it is good to make a nice structure as the base of the application. So this post is about how to create a blog using scaffolding. Scaffolding is a quick way to generate some of the most important pieces of an application. If I want to create models, views, and controllers for a new resource in a single operation, scaffolding is a great way to make the job quick and easy.

Creating a new Rails app is very easy. To do that, I only need to type:

rails new scaffold_app

The whole structure of the app looks as follows:

To create the MVC components needed to post comments in the scaffold app directory, we need to use the scaffold generator:

rails generate scaffold post title:string body:text

And this generator creates the posts resource:
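Among the generated files is a migration under db/migrate that defines the posts table; in the Rails version of that time it looks roughly like this (a sketch, not the literal generator output):

```ruby
# Sketch of the generated migration for the posts table (Rails 4.x style).
class CreatePosts < ActiveRecord::Migration
  def change
    create_table :posts do |t|
      t.string :title   # maps to title:string from the generator call
      t.text :body      # maps to body:text

      t.timestamps null: false  # created_at / updated_at columns
    end
  end
end
```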

rails generate scaffold comment post_id:integer body:text

The following creates the posts and comments tables in the database:

rake db:migrate

Rails uses rake commands to run migrations:

rake routes lists all the URLs currently used in the application.



After running the rails server, we can see that some very simple post and comment forms were created. A good idea is to add a user with a unique id (integer type), a public name (string type), and an email address (string type) that will be used as the username. The id, name, and email attributes are columns in the users table in the database.

Each post a user creates should be associated with that user:

rails generate scaffold Micropost content:string user_id:integer
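To actually wire up that association, the models would declare it explicitly (a Rails-style sketch; the user_id column generated above serves as the foreign key):

```ruby
# Sketch of the model associations (Rails / Active Record style).
class User < ActiveRecord::Base
  has_many :microposts  # a user owns the posts they create
end

class Micropost < ActiveRecord::Base
  belongs_to :user      # resolved through the user_id column
end
```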

So, for now, we should have a scaffold application with:

  • comments
  • posts
  • users
  • each of them added to the database


FATAL: database files are incompatible with server

A few days ago, I was struggling with a (for me, as a beginner) strange problem with the PostgreSQL database while generating an application skeleton. Running rails new app --database=postgresql and after that rake db:migrate or rake db:create caused the following problem:

I also restarted the database, but the problem still appeared, so I started looking for an answer. First, I checked whether my database was running with a simple check:

ps auxww | grep postgres

This simple command should display all running processes connected with postgres, but of course, in my case, the output was totally different. PostgreSQL was not running at all:

andrzejdubaj     19418   0,0  0,0  2442020    896 s003  S+   12:43     0:00.00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn postgres

I had PostgreSQL data files from a previous version of PostgreSQL, and they were not compatible with the current one. One solution was to delete all the data and get a fresh database: basically, to wipe the data from PostgreSQL completely, including the users and data files.

rm -rf /usr/local/var/postgres && initdb /usr/local/var/postgres -E utf8

After that, I ran the following from my Rails application to get set up again:

rake db:setup

rake db:migrate

The next step was to check whether my changes worked as intended:

The first process displayed here is the master server process. The command shown for it is the same one given when it was launched. The next two are background worker processes launched automatically by the master process. Each of the remaining processes is a server process handling one client connection.

So, as we can see, this simple solution resolved the problem with the database on version 9.5.