Short introduction to git

This note can be used as a cheat sheet. Below is a brief description of the most often used Git commands.

Basics:

  • Working directory
    • Working directory | working tree
      The directory in which we make changes. It contains the checked-out latest version of the current branch (the current version, HEAD) plus the changes that we have made.
  • Index
    • Staging area
      Contains the changes that will be committed as a new version in the local repository. Changes from the working directory are added to the index with git add.
  • Local repository
    • A local copy of a repository. All operations are performed on the local repository. If we want to publish our changes, we push them to a remote repository using git push.
  • Version
    • File version saved in repository | Commit
      We can combine versions with git merge, so a version may have more than one parent. git checkout is used to change the files in the working directory to the files of a given version.
  • Branch
    • A linearly ordered set of vertices in the version graph. Usually there is a current branch, whose pointer moves together with HEAD when executing git commit, git reset, etc.
  • Remote Branch
    • Branch in the remote repository.
  • Tracking branch
    • A branch in the local repository that tracks a remote branch.
  • HEAD
    • Points to the current version in the local repository.
  • ORIG_HEAD
    • The previous value of HEAD, set before HEAD-changing operations such as:
      • git commit
      • git merge
      • git pull
      • git checkout
      • git reset, etc.
  • Master
    • The main branch; after creating a new repository, this is the current branch. A typical Git workflow is to create a new branch for every major task, work in that branch, and once the functionality is ready, merge the branch back into master (a sketch of this workflow follows the list).
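
A minimal sketch of that workflow (the branch name feature-login is hypothetical):

  git checkout -b feature-login   # create a task branch and switch to it
  # ... edit files, git add, git commit ...
  git checkout master             # return to the main branch
  git merge feature-login         # merge the finished work into master
  git branch -d feature-login     # optionally delete the merged branch pointer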


Index operations:

Changes in the working directory are not automatically carried over to the new version in the local repository when we run git commit; only changes recorded in the index are committed.
Below are a few basic commands, probably all the commands we need for everyday work with Git:

  • git add .
    • Adds changes from all files in the working directory and its subdirectories to the index. Files that were not part of the current version are added as new.
  • git status
    • Displays information about which changes to the working directory have been staged in the index and which have not.
  • git rm file
    • Deletes the file from the working directory and the index.
  • git mv file1 file2
    • Renames the file file1 to file2 and records the change in the index.
  • git reset file
    • The reverse of git add. Resets the file in the index to its state in the current version, which removes the changes staged with git add (an example session follows this list).
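
A short example session (the file name notes.txt is hypothetical):

  git add .            # stage all changes in the working directory
  git status           # show what is staged and what is not
  git reset notes.txt  # unstage notes.txt; the edits stay in the working directory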


Version operations:

  • git commit -m "description"
    • Creates a new version in the local repository; it differs from the current version by the changes recorded in the index. After executing this command, the newly created version becomes the current one: HEAD and the current branch point to the new version, and the index is identical to it (no staged changes).
  • git commit -a -m "description"
    • This is the most common form of the commit. The -a option causes the changes to all files in the working directory that already existed in the current version to be added to the index before the commit is made.
  • git log
    • Shows the version history leading to the current version, i.e. the vertices of the version graph reachable from the current version by always following the parent links. This command has a number of options that let you specify exactly which versions to display.
  • git diff
    • Displays the differences between the index and the working directory (an example follows this list).
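
A minimal example (the commit message is hypothetical):

  git commit -m "add login form"   # commit the staged changes
  git log --oneline                # compact history reachable from HEAD
  git diff                         # unstaged changes; 'git diff --staged' shows staged ones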

Branch operations:

  • git branch
    • Displays the branches in the local repository.
  • git branch -d  name
    • Deletes the branch name. This removes only the name pointer itself, not the version the branch points to; that version may still be reachable from other branches.
  • git checkout name 
    • Changes the current branch to name, if name is a branch name. HEAD is also set to the version indicated by name.
  • git merge name1 name2 name3
    • Creates a new version by merging the versions pointed to by branches name1, name2, and name3 into the current version. Usually we want to merge only one branch, but several can be merged at a time (see the example below).
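
A minimal example (the branch names are hypothetical):

  git branch                      # list local branches
  git checkout master             # make master the current branch
  git merge feature-a             # merge one branch into master
  git merge feature-b feature-c   # or merge several at once (an "octopus" merge)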

Remote repository:

  • git remote
    • Displays the list of remote repositories.
  • git push
    • Writes changes from the local tracking branches to the corresponding remote branches in the default remote repository (usually origin).
  • git pull
    • Takes changes from the corresponding remote branch and applies them to the local branch that tracks it, attempting to merge the changes automatically.
  • git fetch
    • Works like git pull, with the difference that it does not automatically merge the downloaded changes into the local branches (see the example below).
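
A minimal example:

  git remote -v            # list remotes with their URLs
  git fetch origin         # download remote changes without touching local branches
  git pull                 # fetch and then merge into the current tracking branch
  git push origin master   # publish the local master branch to origin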

Reference:

  1. Pro Git
  2. Git Commit Murder
  3. Learn Git in a Month of Lunches
  4. Git Essentials
  5. Version Control with Git: Powerful tools and techniques for collaborative software development
  6. Professional Git

Login form using Spring MVC part 2


LoginService:
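
The original listing is not preserved here; a minimal sketch of what the interface could look like (the method name is an assumption):

  public interface LoginService {
      boolean validateUser(String username, String password);
  }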

LoginServiceImpl class:
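
Again a hedged sketch, assuming the UserDao from part 1 and plain-text passwords for illustration only:

  import org.springframework.beans.factory.annotation.Autowired;
  import org.springframework.stereotype.Service;

  @Service
  public class LoginServiceImpl implements LoginService {

      @Autowired
      private UserDao userDao;   // hypothetical DAO, see part 1

      @Override
      public boolean validateUser(String username, String password) {
          User user = userDao.findByUsername(username);
          // plain-text comparison, for illustration only
          return user != null && password.equals(user.getPassword());
      }
  }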

In the LoginController we have to modify the validation logic to use the service class for validating the user. The LoginController class:
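
A sketch of such a controller; the request mapping and view names are assumptions:

  import org.springframework.beans.factory.annotation.Autowired;
  import org.springframework.stereotype.Controller;
  import org.springframework.ui.Model;
  import org.springframework.web.bind.annotation.RequestMapping;
  import org.springframework.web.bind.annotation.RequestMethod;
  import org.springframework.web.bind.annotation.RequestParam;

  @Controller
  public class LoginController {

      @Autowired
      private LoginService loginService;

      @RequestMapping(value = "/login", method = RequestMethod.POST)
      public String login(@RequestParam("username") String username,
                          @RequestParam("password") String password,
                          Model model) {
          if (loginService.validateUser(username, password)) {
              return "userPanel";   // hypothetical view name
          }
          model.addAttribute("error", "Invalid username or password");
          return "login";           // back to the login form
      }
  }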

To create the full login page we should add the Hibernate and MySQL driver dependencies to the pom.xml file. Another thing is to add an applicationContext.xml file for initializing the Spring and Hibernate related components, for example the bean definitions for the MySQL dataSource and the sessionFactory that will be used in the DAO classes.

There are also a few changes in web.xml:

Login form using Spring MVC part 1


For the Energy Billing System I want to add a login screen that will be displayed before the user can access the user panel. The user can enter a username and password and click the submit button to log in. For this purpose, I created a simple database in MySQL.

Create a table called users by using the following SQL:
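
The original listing is not preserved; a minimal sketch of such a table, where the column names and sizes are assumptions:

  CREATE TABLE users (
    id INT NOT NULL AUTO_INCREMENT,
    username VARCHAR(45) NOT NULL,
    password VARCHAR(45) NOT NULL,
    PRIMARY KEY (id)
  );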

Check the information about the table we created:
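
One way to do this in MySQL:

  DESCRIBE users;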

Add two records to the users table:
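
For instance (the values are hypothetical):

  INSERT INTO users (username, password) VALUES ('user1', 'secret1');
  INSERT INTO users (username, password) VALUES ('user2', 'secret2');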

Now we should have two records in the database. The next step is to create a model class. This entity is mapped to the “users” table.
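
A hedged sketch of the entity, assuming the columns from the SQL above:

  import javax.persistence.Entity;
  import javax.persistence.GeneratedValue;
  import javax.persistence.GenerationType;
  import javax.persistence.Id;
  import javax.persistence.Table;

  @Entity
  @Table(name = "users")
  public class User {

      @Id
      @GeneratedValue(strategy = GenerationType.IDENTITY)
      private int id;

      private String username;
      private String password;

      public String getUsername() { return username; }
      public String getPassword() { return password; }
      // remaining getters and setters omitted for brevity
  }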

Create the DAO interface:
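
A minimal sketch (the method name is an assumption):

  public interface UserDao {
      User findByUsername(String username);
  }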

For now, this is only a small part of the login implementation. In the next post, I will present the complete implementation.



Initial controller, view and dispatcher servlet configuration


For the last few days, I have not had time to work on the project. Now there is a simple page displayed as the initial project page.

Added the Spring Framework dependencies to the pom.xml file:

Simple view:

Controller:
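
The original listing is not preserved; a minimal sketch of the controller, using the package named in a later post (the view name is an assumption):

  package com.billingsystem.controller;

  import org.springframework.stereotype.Controller;
  import org.springframework.web.bind.annotation.RequestMapping;

  @Controller
  public class HelloController {

      @RequestMapping("/")
      public String index() {
          return "index";   // resolved to the simple view by the view resolver
      }
  }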

Created dispatcher servlet configuration:

The picture above presents the basic appearance of the page that appears after running the changes. For now, there are not many changes, but it is the necessary groundwork for making new changes and working on the application.



Selenium Grid

Selenium Grid is a tool that allows running multiple tests across different operating systems and browsers in parallel, at the same time. The Grid architecture contains a hub, which runs on a single machine and only coordinates test execution; the tests themselves are performed on different machines called nodes.
This approach obviously has its advantages: it speeds up test execution, and the application under test is tested simultaneously in several environments, giving additional feedback.

Selenium Grid Architecture

The grid architecture is very simple. First, we need a hub, and only one hub in a grid. This is our starting point, and it is the hub that we load tests into. From there, tests are distributed to the nodes.

Nodes are the instances that execute the tests loaded from the hub. There can be one or more nodes, depending on our configuration and, of course, our needs.

To install Selenium Grid on a local machine we need to do two things: first, install the Java JDK, and second, download the Selenium Server JAR file from the SeleniumHQ webpage.

After downloading all the needed files, let's start with the hub configuration. To start the Selenium Grid hub on a local machine, open a console, go to the directory where the Selenium Server JAR file is located, and start it in the hub role:
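
For example (using the JAR version shown later in this post):

  java -jar selenium-server-standalone-3.3.1.jar -role hub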

After that, we can open a browser and go to http://192.168.0.2:4444/grid/console; for now, we should get an empty console, as below:

and config:


For now, there is not a lot in this configuration. The next step is to configure nodes. To configure a node on localhost: java -jar selenium-server-standalone-3.3.1.jar -role node -hub http://192.168.0.2:4444/grid/register/

And now we can see our first node:

To override the default node configuration, we can register a second node as follows:

Listing below:
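
The original listing is not preserved; a hedged sketch of such a registration, where the port and browser settings are assumptions:

  java -jar selenium-server-standalone-3.3.1.jar -role node \
       -hub http://192.168.0.2:4444/grid/register/ \
       -port 5556 -browser browserName=firefox,maxInstances=3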

After refreshing the console, we should see the second node with its custom configuration:

How to write tests for Selenium Grid?

Designing tests for the grid is not as complicated as you might think, but first, of course, we must make some changes in our code. It is very important to import the DesiredCapabilities package. This allows us to use the DesiredCapabilities object.

Define a browser and initialize a DesiredCapabilities object with the firefox method:

Declare requirements for a specific platform and browser version:
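
A minimal sketch of both steps; the /wd/hub endpoint is the standard hub URL, and the class name GridExample is hypothetical:

  import java.net.URL;
  import org.openqa.selenium.Platform;
  import org.openqa.selenium.WebDriver;
  import org.openqa.selenium.remote.DesiredCapabilities;
  import org.openqa.selenium.remote.RemoteWebDriver;

  public class GridExample {
      public static void main(String[] args) throws Exception {
          // initialize capabilities with the firefox() factory method
          DesiredCapabilities capabilities = DesiredCapabilities.firefox();
          // declare requirements for a specific platform and browser version
          capabilities.setPlatform(Platform.MAC);
          capabilities.setVersion("52.0.2");
          // point the driver at the hub
          WebDriver driver = new RemoteWebDriver(
                  new URL("http://192.168.0.2:4444/wd/hub"), capabilities);
          driver.quit();
      }
  }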


For this configuration, we will run our tests on OS X with the Firefox 52.0.2 browser.

Below is the configuration for all platforms:
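
Presumably a single changed line, such as:

  capabilities.setPlatform(Platform.ANY);   // do not require a specific platform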



Packages in java

Java allows grouping classes and organizing them into packages. Packages are a very simple and useful mechanism to organize code, easily locate files, reuse code, use libraries, and save time. In real life, there are situations where we try to create two classes with the same name, which leads to namespace collisions. Packages also help to prevent that kind of problem.

How to use a package

A class can use all of the classes from its own package and all of the public classes from packages other than its own. We can get access to a public class from another package in two ways:

  • Call it by its fully qualified name: com.somePackage.MyClass myClass = new com.somePackage.MyClass();

However, this is not a very practical way. An easier and much faster way is to use the import keyword, which allows more convenient use of a class from another package.

Of course, we can also import all classes from a package by adding *, for example:
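
Reusing the hypothetical com.somePackage from above:

  import com.somePackage.MyClass;   // a single class
  import com.somePackage.*;         // all classes from the package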

Adding a class to the package

To add a class to a package, we must first put the package name at the beginning of the source file, for example:
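
Again with the hypothetical package name:

  package com.somePackage;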

If we don't add a package name, the class will be placed in the default package, which doesn't have a name.

The listing below shows that HelloController.java belongs to com.billingsystem.controller. Therefore the file HelloController.java must be stored in com/billingsystem/controller:
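
The original listing is not preserved; a minimal sketch of its relevant part:

  package com.billingsystem.controller;

  public class HelloController {
      // class body omitted
  }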

Docker ~01 Docker and Raspberry Pi

Today I tried to install Docker on my Raspberry Pi 3 board to check how it works, just for testing purposes. And you know what? It works very nicely. First of all, I installed Raspbian Jessie Lite on an SD card. At this point, I logged in to the fresh installation and updated the system. After updating the list of repositories, I upgraded the whole Raspberry Pi:
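
Presumably the standard pair of commands:

  sudo apt-get update        # refresh the list of repositories
  sudo apt-get upgrade -y    # upgrade the whole system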

The next step is to install Docker directly from the Docker website, built for the ARM architecture.
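
A sketch using Docker's convenience script, assuming this is the installation method the post refers to:

  curl -sSL https://get.docker.com | sh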

This process will take a few minutes. After the installation finishes, we can see that the latest version of Docker is installed and running on the ARM architecture:

The installation process suggests using the user pi and adding it to the docker group. This will allow us to run the docker command without using sudo:
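
For example:

  sudo usermod -aG docker pi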

To check that the Docker daemon is running we can use: sudo docker info

As you can see, the Docker daemon is running. But after running the classic sudo docker run hello-world, Docker can't run the hello-world image:

The container didn't start. This problem appears due to the incompatibility between the two architectures: ARM and x86. The best way to solve this problem is to find a Docker image that can run on the ARM architecture. So how can we find such an image?

The best way is to find an image on Docker Hub, searching for ARM or RPI.

Another way is to use docker build to create our own Docker images for the Raspberry Pi. To create a simple Dockerfile, we make a build directory and place in it a Dockerfile, which in my case contains:
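
The original listing is not preserved; a hedged sketch, where the base image and the Java package name are assumptions:

  # ARM-compatible Raspbian base image (assumption)
  FROM resin/rpi-raspbian:jessie

  # install Java 8 (the package name is an assumption)
  RUN apt-get update \
   && apt-get install -y --no-install-recommends oracle-java8-jdk \
   && rm -rf /var/lib/apt/lists/*

  # running the container with no arguments prints java's usage options
  ENTRYPOINT ["java"]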

To run the build: sudo docker build -t /rpi-java8 .
The build will take a while. After it finishes, we can check the result by using sudo docker images, which will list:

To run the created Docker image we can use sudo docker run -it /rpi-java8. This displays all the options for java, because I didn't specify any arguments. After adding arguments, like java -version, we can see:

Using Travis CI

Getting started with Travis CI is pretty easy. First, we should create a .travis.yml file in the root project directory. The second step is to set up the hooks between GitHub and Travis.

In my Travis configuration, I used two operating systems: Windows and Linux. This configuration runs each build in an isolated Google Compute Engine virtual machine that offers a vanilla build environment for every build. That gives a clean slate and clear output for our tests, which run in an environment built from scratch every time.

Of course, this is just an example of the initial Travis setup that I currently use for my pet project. In the near future it will be extended with more components, such as a data store, environment variables, and APT sources.

Travis will automatically create a build matrix with each Java version. In that case, all tests will be run for every combination of the three.
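
A hedged sketch of such a .travis.yml; the exact JDK identifiers are assumptions, and the OS entries are omitted here:

  language: java
  jdk:
    - openjdk7
    - openjdk8
    - oraclejdk8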

Eclipse IDE

Eclipse is an integrated development environment, a free platform widely used among Java developers but also for PHP and C++, although that last niche is dominated by Visual Studio.

Why am I writing about the Eclipse IDE? I'm writing about it because I have started to develop my project seriously. It is 2017, but the IntelliJ IDEA Community edition does not allow creating certain JEE components or using the Spring Framework. That's why I started looking for another solution with support for Spring, and I chose Eclipse Neon. A few years ago I had quite a bit of contact with this environment. It's very powerful.

Installation is very simple. In the first step, we download a release from the eclipse.org project: find the Downloads section and download Eclipse. I downloaded the Java Enterprise Edition (JEE) package. Installing such a package consists of extracting the *.zip archive to an appropriate folder; where, exactly, depends primarily on our preferences and the operating system.

After the first start-up, a location window should appear in which we choose the workspace. This is the location where our projects will be stored. The default perspective is already ready for programming and for creating the first project. We can also modify it with a simple drag-and-drop mechanism, dragging selected items to a place convenient for us, which allows partial personalization of the IDE perspective view.

On the left we can see the Package Explorer; its structure resembles a tree. On the right side there is the Outline, a quick preview of the class in the form of a sketch of its fields and methods, along with colored markers indicating the access specifiers. At the bottom there is a view showing the errors, warnings, and problems that appear in the project. By default, the Console also appears there after launching the application.

A few words about automation

For some time, I have been hearing a lot about automation, and more and more often the statement: we should automate everything. In my opinion, you can automate a lot, but not everything; so far, the human factor is still needed. Maybe in the near future there will be advanced artificial intelligence that allows people to take on more creative tasks at work than they do today. And here comes the question: what is a more creative task?

Automation can be talked about in two ways. The first is automating tasks through bots or appropriate scripts that perform the right tasks automatically. The second approach is running automated tests, which is related to the first point because it also involves eliminating human intervention from the process.

Automation, like every field, can bring a lot of good and save time and money, but you can also put a lot of time and money into it and not get any meaningful benefits.
Let's be honest: automation is not cheap. Applied in an appropriate way, however, it will make testing more effective.

Many times I have encountered a situation where, when moving to a new environment, the existing automated tests did not work or could not even be started. This was due to two reasons. First, the code had not been maintained for a long time, by which I mean half a year or often even a year; no one is able to say anything about code after that period of time. Second, the test code was written in a very chaotic way, without using any pattern for creating automated tests. What does this really mean? Often it means that nobody has any idea how the tests work and why they give these and not other results. A lot of time and energy is needed to maintain automated tests created without a design pattern, and it usually ends with rewriting the entire test architecture again. A very bad option is also recording tests and replaying them once in some period of time. In my opinion, this is not a solution: by the next run, recorded test scripts of this kind will be useless.

Several times I have also met the situation where the automation technology stack was different from the development technology stack. Such a solution required preparing additional infrastructure, which meant costs and an additional environment to maintain. It also causes trouble for developers, especially when the product code is created iteratively. At this point it is also worth mentioning CI, continuous integration, which is one of the foundations of automation. Above all, it gives quick feedback that a particular feature does not work. The sooner we find out that something is not working, the faster we will fix the problem; otherwise, there may simply not be enough time for a “quick fix”. It also saves us a lot of nerves on release day, when everyone is praying that all tests will turn green. But the reports sometimes say something different from what is actually true. Keep in mind that the product of automation is the report, and each report should be reviewed just as carefully. It is important to provide one source of information about the reports; people tend to check one source while the others are omitted. Besides, I have also met the belief that we have a problem only when the tests fail, and that only then do we take care of a repair. This is not quite true: tests may pass while the functionality still does not work and no one knows why. In that case, the tests are badly written and need to be thoroughly reviewed.

If automation test engineers only write UI tests, we should consider very carefully whether we are doing automation well. It also depends on the type of product we are dealing with, but there are some problems with this approach. First of all, UI tests are the most expensive to build and maintain. UI testing is never as fast or as reliable as testing the service layer and the database layer. Automation can be made much more interesting and effective, so as not to lose its essence. It may be better to write scripts, tooling, or even a bot to inform you about the progress of various activities; that will save a few hours during each deployment and make it less painful.

OK, but test automation is best suited to checking repetitive tasks; that is when it is most useful. If we want to check the look, feel, and overall taste of a new feature, the best idea is to give it to customers or to people who have previously had nothing to do with it. Then we get fresh feedback.

If all tests pass 100%, we probably do not test as much as we should. Boundary conditions should be matched in such a way that some of them pass and some certainly do not; a 100% pass rate should not be the goal.

A few additional words about automation. Test automation is irreplaceable, but you have to be careful to apply it with reason. The customer does not pay for the tests, but for the product: good quality and timely delivery at a reasonable price.