Tag Archives: Azure

Support for Uniface in the cloud: a DevOps project

For the last few months we have been working towards adding cloud providers to the Product Availability Matrix (PAM). This project is known internally as Cloud Phase 1 and has proven to be, on the whole, a DevOps project.

DevOps

For us to add support for a platform there are several things we must do, the most important of which is to test it, on every platform, for every build, to make sure it works. The framework we use to test Uniface is a custom-built application with the imaginative name of RT (Regression Test); it contains tests targeted at proving Uniface functionality, and those tests have been built up or extended as new functionality is added, enhanced or maintained.

Up until the cloud project, the process of building and testing Uniface (and this is a very simplistic description) was to:

  • Create a Uniface installation by collecting the compiled objects from various build output locations (we have both 3GL and Uniface Script)
  • Compile the RT application and tests using the newly created version
  • Run the test suite
  • Analyze the output for failures and, if successful:
    • Create the installable distribution (E-Dist or Patch)

The testing and building were done on pre-configured (virtual) machines with databases and other 3rd party applications already installed.

Adding a new platform (or a new version of an existing platform) to our support matrix could mean manually creating a whole new machine, from scratch, to represent that platform.

To extend support to cloud platforms, we have some new dimensions to consider:

  • The test platform needs to be decoupled from the build machine, as we need to build in-house and test in the cloud
  • Tests need to run on the same platform (e.g. CentOS) but with different providers (Azure, AWS, …)
  • Support needs to be added for constantly updated Relational Database Service (RDS) type databases
  • The environment needs to be scalable with the ability to run multiple test runs in parallel
  • It has to be easily maintainable

As we are going to be supporting the platforms on various cloud providers, we decided to use DevOps methodologies and the tools most common for this type of work. The process, for each provider and platform, now looks like this:

  • Template machine images are created at regular intervals using Packer. Ansible is used to script the installation of the base packages that are always required
  • Test pipelines are controlled using Jenkins
  • Machine instances (based on the pre-created Packer image) and other cloud resources (like networks and storage) are created and destroyed using Terraform
  • Ansible is used to install Uniface from the distribution media and, if needed, overlay the patch we are testing
  • The RT application is installed using rsync and Ansible
  • RT is then executed one test suite at a time with Ansible dynamically configuring the environment
  • Docker containers are used to make available any 3rd party software and services we need for individual tests, and they are only started if the current test needs them. Examples of containers we have made available to the test framework are a mail server, a proxy server, a web server and an LDAP server
  • Assets such as log files are returned to the Jenkins server from the cloud-based virtual machine using rsync
  • The results from Jenkins and the cloud tests are combined along with the results from our standard internal test run to give an overview of all the test results.
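As a rough illustration of how these steps chain together for one provider and platform, here is a minimal Python sketch that shells out to the same tools. The directory, inventory, host and playbook names are assumptions for illustration, not our actual scripts.

```python
# Minimal sketch of one cloud test cycle: provision, deploy, run, collect, destroy.
# The terraform, ansible-playbook and rsync invocations are real CLIs; every
# file, host and playbook name here is hypothetical.
import subprocess

def run(cmd, cwd=None):
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

def test_cycle(platform_dir):
    try:
        # Create instances, networks and storage from the pre-built Packer image
        run(["terraform", "init"], cwd=platform_dir)
        run(["terraform", "apply", "-auto-approve"], cwd=platform_dir)
        # Install Uniface from the distribution media and overlay the patch under test
        run(["ansible-playbook", "-i", "inventory", "install_uniface.yml"])
        # Push the RT application to the test host and run the suites
        run(["rsync", "-az", "rt/", "testhost:rt/"])
        run(["ansible-playbook", "-i", "inventory", "run_rt.yml"])
        # Pull logs back for the Jenkins job to analyze
        run(["rsync", "-az", "testhost:rt/logs/", "results/"])
    finally:
        # Always tear the cloud resources down again
        run(["terraform", "destroy", "-auto-approve"], cwd=platform_dir)

if __name__ == "__main__":
    test_cycle("terraform/centos7-azure")
```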

As quite large chunks of the processing are executed repeatedly (e.g. configure and run a test), we have grouped the steps together and wrapped them with make.

As most of the platforms go through the same process, we have also been able to parameterize each step. This should mean that adding a new platform or database to test on, once the distribution becomes available, “could” be as simple as adding a new configuration file.
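To make that idea concrete, here is a hedged sketch of what such a per-platform configuration and the code that consumes it might look like; the file name, sections, keys and values are hypothetical, not our actual format.

```python
# Hypothetical per-platform configuration consumed by the test driver.
# Adding a platform would mean adding one more file like this, for example
# config/centos7_azure.cfg:
#
#   [platform]
#   provider = azure
#   image    = rt-centos7-latest
#
#   [database]
#   type = postgres
#   port = 5432
#
from configparser import ConfigParser

def load_platform(path):
    """Read one platform definition; each pipeline step is parameterized from it."""
    cfg = ConfigParser()
    cfg.read(path)
    return {
        "provider": cfg.get("platform", "provider"),
        "image": cfg.get("platform", "image"),
        "db_type": cfg.get("database", "type"),
        "db_port": cfg.getint("database", "port"),
    }

if __name__ == "__main__":
    print(load_platform("config/centos7_azure.cfg"))
```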

The result of Phase 1 of the Cloud project is that the Product Availability Matrix has been extended to include new platforms and databases. Internally we also have the benefit of having a much more scalable and extendable testing framework.

The new platforms added to the PAM in 9.7.04 and 10.2.02 by the cloud project:

(Image: Uniface Deployment-in-Cloud)

In this initial phase, we have been concentrating on the Linux platforms; next (in Phase 2) we will be working on Windows and MS SQL Server.

During this process, I have learnt a lot about our test framework and the tests it runs. Much of the work we have undertaken has simply been a case of lifting what we already have and scripting its execution. That has not been the case for everything; there have been some challenges. An example of something that has been more complex than expected is testing LDAP. The existing environment used a single installation of LDAP for every platform being tested. Tests would connect to this server and use it to check the functionality. As the tests both read and write, we could only allow one test to be active at a time; other tests and platforms had to wait until the LDAP server was released and became available again before continuing. With the cloud framework, we have an isolated instance of the service for each test that needs it.
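A hedged sketch of how that isolation can work: each test that needs LDAP gets its own short-lived container, started just before the test and removed straight afterwards. The image name, port and helper names below are placeholders for illustration.

```python
# Sketch: give each test its own throwaway LDAP server in a Docker container.
# The image name is a placeholder; any directory-server image would do.
import subprocess
import uuid
from contextlib import contextmanager

@contextmanager
def ephemeral_ldap(image="example/openldap:latest", port=3899):
    name = f"rt-ldap-{uuid.uuid4().hex[:8]}"
    # --rm removes the container as soon as it stops, so nothing lingers between tests
    subprocess.run(
        ["docker", "run", "-d", "--rm", "--name", name, "-p", f"{port}:389", image],
        check=True,
    )
    try:
        yield f"ldap://localhost:{port}"
    finally:
        subprocess.run(["docker", "stop", name], check=True)

# Usage inside a test: read and write freely, no other test shares this server.
# with ephemeral_ldap() as url:
#     run_ldap_test(url)
```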

The project to bring cloud support to Uniface has been an interesting one. As well as allowing us to add new platforms and providers to our support matrix, it has also allowed us to be more scalable and flexible when testing Uniface.

 

Red Hat/OpenShift – Finding the silver lining

I have been in the Uniface business for more than twenty years. I experienced the GUI baby steps of Uniface 6 at around the same time Windows 95 saw the light of day, and I could keep up with the new features presented in each new version of Uniface that was released. So, with regard to Uniface, I can proudly say that although I may look like a monkey, I am an old monkey. I know a lot of tricks.

The world is changing at a fast pace and it is necessary to keep my bag of tricks up to date. With Uniface moving in the direction of supporting cloud features, I feel it is necessary to do a bit of homework to prepare myself for this mind shift.

As a first step, I joined a few colleagues at the Red Hat OpenShift Roadshow that was held in Amsterdam. With many similar cloud-technology events currently taking place and with Uniface being so strong in supporting multiple platforms, it seemed like a good idea to search for the silver lining at the Red Hat event.

(Image: Red Hat Cloud Blog)

Why Red Hat?

Red Hat is just one of the many platforms that Uniface supports. It is a leading enterprise Linux platform and is supported on both Amazon Web Services and Microsoft Azure, currently the preferred providers for Uniface cloud support. In addition, it is open, reliable, secure and flexible for customers who have business-critical systems.

How does Red Hat align with the goal of supporting multiple platforms for Uniface?

At Uniface we are not in the business of putting one platform in front of another. We want the client to make the decisions about the technologies that are going to be used, and we want to fit in with them. Red Hat is just one of the platforms that both we and the cloud providers support. What makes us strong is the fact that we can confirm Red Hat as one of the many platforms on our list that we can tick off.

What benefits does this bring?

As a result of our work, we now have the infrastructure in place to verify and test Uniface on cloud platforms, enabling us to tick the box that Uniface is supported. This means customers do not need to make changes to their application source code, because they can deploy to Red Hat, as well as other platforms in the cloud, in the same way as they would deploy to on-premise operating systems.

What is OpenShift?

Before we can understand what OpenShift is, we first need to understand a few other terms (briefly, of course).

  • Infrastructure as a Service (IaaS)

A provider runs computers on demand with specified configurations; this is an alternative to racking and stacking your own hardware. You specify the amount of RAM, CPU and disk space, plus the operating system, and the provider starts up a machine that meets these specifications within minutes.

  • Software as a Service (SaaS)

SaaS requires zero or very little maintenance or setup. You just sign up for a cloud-based service and it is available for you to use. A simple example of SaaS is Gmail.

  • Platform as a Service (PaaS)

This falls between IaaS and SaaS, and is currently targeted at application developers. With PaaS, all the necessary pieces of your application are spun up on a server from either the command line or a web interface. These pieces can be applications and databases.

This is where OpenShift starts to play a role. OpenShift provides the command-line and web interface for the developer to spin up everything. With one command, all the necessary networking and server installs are done and a Git repository is created. OpenShift administrators update the operating system, manage the network and do other admin work so that the developer can focus on writing code. The interface also allows the user to scale their application and do some performance tuning.
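As a rough, hedged illustration of that "one command" experience, the sketch below drives the OpenShift oc client from Python; the project name and Git repository are placeholders, not a real application.

```python
# Sketch: developer-side deployment with the OpenShift CLI (oc).
# oc new-app pulls the source from Git, builds an image and wires up the
# deployment and service; oc expose adds an external route.
# The project and repository names are placeholders.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

run(["oc", "new-project", "demo-app"])
run(["oc", "new-app", "https://github.com/example/demo-app.git"])
run(["oc", "expose", "service/demo-app"])
```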

What does this mean for Uniface?

The strategy of Uniface has always been to support multiple platforms, databases and so on. Internally, we are currently using Ansible as part of our build processes rather than OpenShift, but we are always investigating new ways to improve our processes and we try not to focus on specific technologies or tools. Therefore, from a DevOps point of view, I do see that OpenShift could play a part for us.

By making use of Infrastructure as Code, we can spin up multiple environments in the cloud to assist our build and verification processes. In our case, our application(s) are our tests, and we can now run them in parallel. We are also able to research new platforms without investing in new physical infrastructure. This is a microservices approach, which is the magic of the cloud.

I see OpenShift as a possible tool that can be used by our users. It is very powerful and useful, and could be used to deploy applications into cloud environments and to scale them up or down as required.

Every cloud has a silver lining. The new silver lining is the fact that the cloud removes so many restrictions. With new tools released every day, it is important to stay informed so that we can be as open-minded as the cloud.

Picking up on the latest and greatest on Microsoft’s Azure Platform

I recently attended Microsoft's Tech Summit, held at Amsterdam's RAI convention centre. For those of you who know me, my computing background is on the other side of the spectrum, with predominantly UNIX and Linux derivatives. This was my first Microsoft event ever, so it was with great anticipation and some uncertainty that I attended the keynote.

From the word go it was clear that Microsoft is heavily invested in cloud technologies, with customer stories from the Dutch Railways (Nederlandse Spoorwegen), who use Azure's Big Data platform to predict when train components are about to fail, before they actually do and cause unnecessary disruptions. Abel Wang then guided us through a demo using Azure that predicted crime hotspots in certain areas around Seattle. Very impressive, all of it.

The main reason, however, for attending the conference was to pick up on the latest and greatest on Microsoft's Azure Platform. Microsoft Azure holds second place in the cloud provider arena but experienced the biggest growth of all the players over the last year. Here at Uniface we already use Azure daily; the goal was to see if there were ways to better utilise Azure's IaaS and PaaS offerings.

From all the Azure and Application Development sessions, I learned a lot more about Azure's PaaS offerings. In the 'Protect your business with Azure' session it was evident that Microsoft is fully committed to security and availability. By far the most interesting session was 'Building Serverless Applications with Azure Functions': it demonstrated how simple it is to run a basic event-driven application without investing any time in infrastructure or PaaS offerings.
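To give a flavour of what that looks like (a minimal sketch, not the code shown in the session), an HTTP-triggered Azure Function in Python can be as small as the snippet below; it assumes the standard function.json HTTP binding that the Functions tooling generates for you.

```python
# Minimal HTTP-triggered Azure Function (Python programming model with function.json).
# Azure runs it on demand inside a Function App; there is no server for the
# developer to provision or manage.
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # "name" is an optional query-string parameter, purely for illustration
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```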

All in all, the Tech Summit was a great success; I learnt a lot and will be applying that knowledge to the workloads we run in Azure.