Tag Archives: AWS

Support for Uniface in the cloud: a DevOps project

For the last few months we have been working towards adding cloud providers to the Product Availability Matrix (PAM). This project is known internally as Cloud Phase 1 and has proven to be, on the whole, a DevOps project.


For us to add support for a platform there are several things that we must do – the most important of which is to test Uniface, on every platform, for every build, to make sure it works. The framework we use to test Uniface is a custom-built application with the imaginative name of RT (Regression Test); it contains tests targeted at proving Uniface functionality, and the suite has been built up over time as new functionality is added, enhanced or maintained.

Up until the cloud project, the process of building and testing Uniface (and this is a very simplistic description) was to:

  • Create a Uniface installation by collecting the compiled objects from various build output locations (we have both 3GL and Uniface Script)
  • Compile the RT application and tests using the newly created version
  • Run the test suite
  • Analyze the output for failures and, if the run is successful
    • Create the installable distribution (E-Dist or Patch)

The testing and building was completed on pre-configured (virtual) machines with databases and other 3rd party applications already installed.

To add a new platform (or a new version of an existing platform) to our support matrix could mean manually creating a whole new machine, from scratch, to represent that platform.

To extend support onto cloud platforms, we have some new dimensions to consider:

  • The test platform needs to be decoupled from the build machine, as we build in-house and test in the cloud
  • Tests need to run on the same platform (i.e. CentOS) but in different providers (Azure, AWS, …)
  • Support for constantly updating Relational Database Service (RDS) type databases needs to be added
  • The environment needs to be scalable with the ability to run multiple test runs in parallel
  • It has to be easily maintainable

As we are going to be supporting the platforms on various cloud providers, we decided to use DevOps methodologies and the tools most common for this type of work. The process, for each provider and platform, now looks like this:

  • Template machine images are created at regular intervals using Packer. Ansible is used to script the installation of the base packages that are always required
  • Test pipelines are controlled using Jenkins
  • Machine instances (based on the pre-created packer image) and other cloud resources (like networks and storage) are created and destroyed using Terraform
  • Ansible is used to install Uniface from the distribution media and, if needed, overlay the patch we are testing
  • The RT application is installed using rsync and Ansible
  • RT is then executed one test suite at a time with Ansible dynamically configuring the environment
  • Docker containers are used to make available any 3rd party software and services we need for individual tests; they are only started if the current test needs them. Examples of containers we have made available to the test framework are a mail server, proxy server, web server and LDAP server
  • Assets such as log files are returned to the Jenkins server from the cloud based virtual machine using rsync
  • The results from Jenkins and the cloud tests are combined along with the results from our standard internal test run to give an overview of all the test results.
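
The on-demand container step can be sketched as a small decision function. The service names and the test metadata shape are invented here for illustration; the real framework drives this from Ansible.

```javascript
// Illustrative sketch: start only the service containers a test declares
// it needs. Service names and test metadata are hypothetical.
const available = ['mail', 'proxy', 'web', 'ldap'];

function containersToStart(test) {
  // Ignore requested services we have no container image for.
  return (test.needs || []).filter((svc) => available.includes(svc));
}

function dockerCommands(test) {
  // One throwaway container per test keeps each test isolated.
  return containersToStart(test).map(
    (svc) => `docker run -d --rm --name ${test.name}-${svc} ${svc}-image`
  );
}
```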

As quite large chunks of the processing are executed repeatedly (e.g. configure and run a test) we have grouped the steps together and wrapped them with make.

As most of the platforms go through the same process, we have also been able to parameterize each step. This should mean that adding a new platform or database to test on, once the distribution becomes available, “could” be as simple as adding a new configuration file.
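
The parameterization idea can be sketched as follows. The config fields, template names and playbook names are all invented for illustration; only the principle (one config object drives every step) reflects the actual setup.

```javascript
// Sketch of step parameterization: each platform is a small config object,
// and every pipeline step is rendered from it. All names are hypothetical.
function renderSteps(platform) {
  return [
    `packer build templates/${platform.os}.json`,
    `terraform apply -var provider=${platform.provider} -var image=${platform.os}`,
    `ansible-playbook install_uniface.yml -e version=${platform.unifaceVersion}`,
  ];
}

// Adding a new platform is then just a new config entry:
const centosOnAws = { os: 'centos7', provider: 'aws', unifaceVersion: '10.2.02' };
```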

The result of Phase 1 of the Cloud project is that the Product Availability Matrix has been extended to include new platforms and databases. Internally we also have the benefit of having a much more scalable and extendable testing framework.

The new platforms added to the PAM in 9.7.04 and 10.2.02 by the cloud project:

Uniface Deployment-in-Cloud

In this initial phase, we have been concentrating on the Linux platforms; next (in Phase 2) we will be working on Windows and MS SQL Server.

During this process, I have learnt a lot about our test framework and the tests it runs. Much of the work we have undertaken has simply been a case of lifting what we already have and scripting its execution. This has not been the case for everything, though; there have been some challenges. An example of something that proved more complex than expected is testing LDAP. The existing environment used a single LDAP installation for every platform being tested; tests would connect to this server and use it to check the functionality. As the tests both read and write, we could only allow one test to be active at a time; other tests and platforms had to wait until the LDAP server was released and became available before continuing. With the cloud framework, we have an isolated instance of the service for each test that needs it.
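
The difference can be illustrated with a small sketch: with one shared LDAP server, tests must be chained one after another, while per-test instances can all run at once. The test and server names here are hypothetical.

```javascript
// Old model: every test locks the single shared server in turn,
// so the runs are forced into a serial chain.
function runSerialized(tests, sharedServer) {
  return tests.reduce(
    (chain, t) => chain.then((done) => t(sharedServer).then((r) => [...done, r])),
    Promise.resolve([])
  );
}

// Cloud model: each test gets its own isolated service instance,
// so all tests can run concurrently.
function runIsolated(tests) {
  return Promise.all(tests.map((t, i) => t(`ldap-instance-${i}`)));
}
```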

The project to bring cloud support to Uniface has been an interesting one. As well as allowing us to add new platforms and providers onto our supported matrix, it has also allowed us to be more scalable and flexible when testing Uniface.


Red Hat/OpenShift – Finding the silver lining

I have been in the Uniface business for more than twenty years. I experienced the GUI baby steps of Uniface 6 at around the same time Windows 95 saw the light of day. I could keep up with the new features presented in each new version of Uniface that was released. So, with regards to Uniface, I can proudly say that although I may look like a monkey, I am an old monkey: I know a lot of tricks.

The world is changing at a fast pace and it is necessary to keep my bag of tricks up to date. With Uniface moving in the direction of supporting cloud features, I feel that it is necessary to do a bit of homework to prepare myself for this mind shift.

As a first step, I joined a few colleagues at the Red Hat OpenShift Roadshow that was held in Amsterdam. With many similar cloud-technology related events currently taking place, and with Uniface being so strong in supporting multiple platforms, it seemed like a good idea to search for the silver lining at the Red Hat event.


Why Red Hat?

Red Hat is just one of multiple platforms that Uniface supports. It is a leading enterprise Linux platform, and it is supported on both Amazon Web Services and Microsoft Azure – currently the preferred providers for Uniface cloud support. It is also open, reliable, secure and flexible for customers with business-critical systems.

How does Red Hat align with the goal of supporting multiple platforms for Uniface?

At Uniface we are not in the business of putting one platform in front of another. We want the client to make the decisions about the technologies to be used, and we want to fit in with them. Red Hat is just one of the platforms that we, as well as the cloud providers, support. What makes us strong is that Red Hat is one more platform on our list that we can tick off.

What benefits does this bring?

As a result of our work, we now have the infrastructure in place to verify and test Uniface on cloud platforms, enabling us to tick the box that Uniface is supported. This means customers do not need to make changes to their application source code, because we can deploy to Red Hat, as well as other platforms in the cloud, in the same way as if we were deploying to on-premises operating systems.

What is OpenShift?

Before we understand what OpenShift is, we first need to understand a few other terms (in short of course).

  • Infrastructure as a Service (IaaS)

A provider runs computers on demand with specified configurations, as an alternative to racking and stacking your own hardware. You specify the amount of RAM, CPU, disk space and the operating system, and the provider starts up a machine that meets these specifications within minutes.

  • Software as a Service (SaaS)

Requires zero or very little maintenance or setup: you just sign up for a cloud-based service and it is available for you to use. A simple example of SaaS is Gmail.

  • Platform as a Service (PaaS)

This falls between IaaS and SaaS and is currently targeted at application developers. With PaaS, all the necessary pieces of your application (applications and databases, for example) are spun up on a server from either the command line or a web interface.

This is where OpenShift starts to play a role. OpenShift provides the command line/web interface for the developer to spin up everything. From one command, all the necessary networking and server installs are done and a Git repository is created. OpenShift administrators will update the operating system, manage the network and do other admin work so that the developer can focus on writing code. The interface also allows the user to scale their application and do some performance tuning.

What does this mean for Uniface?

The strategy of Uniface has always been to support multiple platforms/databases etc. Internally, we are currently using Ansible as part of our build processes rather than OpenShift, but we are always investigating new ways to improve our processes and we try not to focus on specific technologies or tools. Therefore, from a DevOps point of view, I do see that OpenShift could play a part for us.

By making use of Infrastructure as Code, we can spin up multiple processes in the cloud to assist us in our build and verification processes. In our case, our application(s) are our tests, and we can now run them in parallel. We are also able to research new platforms without investing in new physical infrastructure. This is a microservices approach, which is the magic of the cloud.

I see OpenShift as a possible tool that can be used by our users. It is very powerful and useful and could be used to deploy applications into cloud environments, and to scale or contract them as required.

Every cloud has a silver lining. The new silver lining is that the cloud removes so many restrictions. With new tools released every day, it is important to stay informed so that we can be as open minded as the cloud.

Experimenting with the AWS S3 API

Last month I uploaded a community sample that showed how to call an Amazon Web Services RESTful API, in particular for their S3 storage service.  That sample is contained within a single form, and is accompanied by some simple instructions and notes on assumptions made etc.  I used a form component type, and constructed it to use operations for the actual API calls, so that it would be easy to understand, and easy to modify to a service component, for wider usage.

The next thing I wanted to try was to provide the same functionality from a DSP. Initially this could have meant replacing the Windows GUI layer with an HTML5-based layer in a DSP. However, DSPs make the Uniface JavaScript API available, and thus there is an opportunity to try out the AWS SDK for JavaScript (in the Browser). Information is available at http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/browser-intro.html .

The main advantage of using this SDK is that it becomes possible to avoid a lot of low-level coding against the RESTful API. If you study the form sample I mentioned earlier, you will see a lot of code to build canonical requests and then to sign them securely. All of this is buried inside the various SDKs that AWS provides. This was worth a try!

As it turned out, coding the JavaScript to list the bucket contents and to download and upload files was relatively easy. In particular, the feature to generate signed URLs for downloading files is very handy. In fact most of the buttons on the sample DSP have browser-side JavaScript which calls the AWS SDK without much reference to the Uniface JavaScript API. This just means that in some circumstances you might not need to use DSPs at all; but if your use case does involve exchanging information with back-end processes, then this sample should be of interest. One such use case is saving S3 files on the back-end server, so a JavaScript activate is done to send the signed URL to a DSP operation, which completes the download. In any case, it is tidy to keep the JavaScript code in the Uniface repository as much as possible.
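
As a hedged illustration (not the sample's actual code), browser-side usage along these lines is possible with the AWS SDK for JavaScript v2. The formatRows helper is invented here for display purposes, and the bucket name is an assumption.

```javascript
// Helper (invented for this sketch) that turns an S3 listObjects response
// into simple rows for display.
function formatRows(data) {
  return (data.Contents || []).map((o) => ({
    key: o.Key,
    sizeKb: Math.round(o.Size / 1024),
  }));
}

// In the browser, with the SDK script loaded (AWS is its global object),
// listing and generating a signed download URL look roughly like this:
//
//   var s3 = new AWS.S3();
//   s3.listObjects({ Bucket: 'my-sample-bucket' }, function (err, data) {
//     if (!err) render(formatRows(data));
//   });
//   var url = s3.getSignedUrl('getObject', { Bucket: 'my-sample-bucket', Key: key });
```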

So … although the JavaScript coding turned out easy enough, the challenge turned out to be how to authenticate the SDK calls. In the form sample I used the AWS Access Key ID and a Secret Access Key to sign requests. These were quarantined from the form source code, and from the runtime user (who shouldn’t have access to the Uniface debugger), by storing the sensitive data in assignment file logicals. Not the ultimate form of protection, but adequate for my sample. The JavaScript SDK requires access to these artifacts, and since it runs in the browser, it exposes them to all users. To slightly obscure these private values, I placed them in a separate JavaScript file, which is not referred to in the HTML but is dynamically loaded by Uniface with this statement: $webinfo(“JAVASCRIPT”) = “../js/aws_config.js” . Of course, you can read the variable contents with any browser debugger, so this DSP sample comes with a similar caveat to the AWS recommendations.

The options for supplying credentials to the AWS JavaScript SDK are described here: http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/browser-configuring.html . For my sample I effectively supplied hard-coded credentials for an IAM user that has read-only permissions. Real applications will want a more secure method. I was going to evaluate AWS Cognito, but it is not yet available in my region. Another option to investigate is Temporary Security Credentials, via the AWS Security Token Service. Further discussion of authenticating credentials is beyond the scope of this blog / sample.

One final security configuration had to be made, because the sample is running within a browser, which is likely to be enforcing CORS.  This is best explained in the documentation at http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/browser-configuring.html#Cross-Origin_Resource_Sharing__CORS_ .

To summarise, Uniface developers have a choice when integrating with AWS. They can use the RESTful APIs for lower-level control in a wider set of situations, or they can use the JavaScript SDK for easier integration when using the Uniface JavaScript API.