All posts by James Rodger

A Recap of the First Uniface 10 Workshop

Last week I had the pleasure of running the very first Uniface 10 Workshop, the first time anyone outside Uniface has got their hands on the new IDE being introduced in Uniface 10.

Our aim for the session was to trial-run the workshop that we’d be using at this year’s user events. Having a large group of customers in the room was also too good an opportunity to miss, so we were keen to use the day to gather as many first impressions, thoughts and ideas as possible.

In all we had 22 people from the Netherlands, Belgium, England and Germany, representing 11 different companies. This included a number of unscheduled gate-crashers, making for a very busy classroom!

After some scene-setting presentations from Adrian Gosbell, Henk van der Veer and Erik Mulder from Ergos (http://www.ergos.nl/), everyone dived into the workshop exercises, exploring the new IDE whilst building a small Uniface web application. We had plenty of Uniface lab staff on hand to answer questions (thanks guys!) and before long everyone was well on their way.

Since Uniface queries were being fielded by a crack team of our own developers, I was free to wander around and discuss things with anyone who made eye contact. Of particular interest to me was discussing how the new project concepts in Uniface 10 can aid people’s development processes and release procedures. I love a good release procedure.

To wrap things up we split into groups to discuss specific areas of the IDE. Everyone was able to pick an area, such as projects, navigation or the script editors, for a focused discussion, with the chance to put questions and opinions directly to the Uniface 10 development team.

I’d like to thank everyone who made the trip to Amsterdam. The feedback around Uniface 10 has already been very positive and invaluable. We certainly have a lot of bedtime reading to get through.

My main objective for the session was to ensure that the exercises and the workshop environment technically worked and were enjoyable to go through. On this front we’ve been able to make some tweaks here and there ahead of the user events. So many thanks again to all the willing guinea pigs.

Big Data with SAP HANA

I’ve been interested in large-scale computing ever since I was introduced to it at the University of Southampton, where the Computer Science department was heavily involved in Data Mining and Grid Computing research, which naturally influenced the courses on offer and what the lecturers liked to talk about. My dissertation looked at how these techniques could be applied to protein folding research, which generated far more data than could be handled effectively with the technologies available at the time. We are producing more data at an ever-increasing rate each day, so these challenges are becoming relevant to more and more businesses and sectors. Of course, things have moved on in recent years: the buzzwords today are Cloud Computing and Big Data, evolutions of those older concepts that I’ve followed with great interest.

Given this, I was happy to have the chance recently to do some research into SAP’s in-memory database solution HANA (High-performance Analytic Appliance, a somewhat tortured acronym). Clearly it’s a complex platform and I’ve only scratched the surface of what it’s capable of. It’s tricky to sum up, but to quote from a SAP HANA white paper (http://www.saphana.com/blogs/experts/2012/02/01/latest-sap-hana-whitepaper-reveals-more-than-just-technical-details):

“The SAP HANA® database is an in-memory database that combines transactional data processing, analytical data processing, and application logic processing functionality in memory. SAP HANA removes the limits of traditional database architecture that have severely constrained how business applications can be developed to support real-time business.”

This blog post is a good place to start if you would like to know more about SAP HANA: http://www.saphana.com/community/blogs/blog/2012/09/07/sap-hana-for-beginners

One of the benefits that really jumped out at me was the ability to query for complex aggregated data in real time. With a standard database it can take too much time to fetch, for example, the balance of a financial ledger, so the job is left to the application layer, which runs batch jobs to calculate these figures and store them in additional tables. HANA allows you to delegate this task back to the database layer. We can therefore simplify our data models, we don’t have to write specific application code to handle the task, and our totals are truly “real time” because we no longer need nightly batch jobs to update them. This leaves our application free to focus on what it should be doing: implementing business logic, not working around the limitations of the database platform.
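To make that concrete, here is a minimal ProcScript sketch of delegating the aggregation to the database using Uniface’s sql statement. The table, column, key value and path are invented for the example, and I’m assuming the usual behaviour where the result of a SELECT comes back in $result:

variables
  String vBalance
endvariables

  ;-Let the database compute the aggregate on the fly, rather than
  ;-maintaining a batch-updated totals table in the application layer
  sql "SELECT SUM(AMOUNT) FROM LEDGER_ENTRY WHERE LEDGER_ID = '1'", "$DEF"
  if ($status >= 0)
    vBalance = $result  ;-the up-to-the-moment ledger balance
  endif

With HANA behind that path, the SUM is computed in memory at query time, which is what makes it feasible to run this on demand instead of overnight.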

Also of note is that HANA provides a standard SQL interface over ODBC, which means that existing applications can start to use its functionality with a minimum of migration effort. In businesses utilising HANA, Uniface would be able to play a crucial role in tying together the various applications and technologies they might have. One use case might be a Uniface interface that pumps real-time data into HANA from other, disparate applications. Since a HANA database is likely to be structured differently to a traditional database, Uniface’s powerful ability to transform and map data would be invaluable to the process.

It’s fairly simple to try out HANA. SAP provides a very handy 30-day developer trial on CloudShare (http://scn.sap.com/docs/DOC-28191). Once you’re signed up it’s just a case of logging into CloudShare and starting up the VM environment. You’re then free to do whatever you like with the servers. In my case I uploaded a copy of Uniface, ran the ODBC configuration utility to set up my driver settings and had a play around. Incidentally, I can highly recommend taking an initial snapshot of your VMs so that you can revert to it in case you completely break the servers (like I did).
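For reference, once an ODBC data source for HANA exists, pointing Uniface at it is a matter of assignment file settings. The sketch below is from memory rather than from the trial environment, so treat the driver version, DSN name and credentials as illustrative placeholders:

[DRIVER_SETTINGS]
$ODB U5.0

[PATHS]
$HANA = $ODB:HANADSN|myuser|mypassword
$DEF  = $HANA

Redirecting $DEF means existing models keep working unchanged; only the assignment file knows the data now lives in HANA.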

Dependency Injection in Uniface

Hi, my name is James Rodger and I’ve spent the last 8 years working on Uniface applications as a Uniface consultant. I really enjoy the challenge of writing enterprise software so I thought I would tackle a nice light issue for my first blog post.

One of the areas of software development that I’ve been trying to become more familiar with is software design patterns. These describe techniques for addressing common development challenges and are hugely helpful in designing good software. Sadly, there seems to be a perception that patterns require an object-oriented programming language, so we don’t see much discussion of them in the Uniface world. While it’s true that some patterns don’t translate well into a non-OO language, many apply to architectures written in any type of language. It’s also true that Uniface-specific features mean we don’t need a lot of these patterns, since we get much of their functionality for free, but there are still patterns worth considering.

I’m going to talk briefly about one pattern which addresses a common problem in a lot of applications and can easily be applied to Uniface: Dependency Injection (DI). It is an example of an inversion-of-control pattern, which essentially states that things should be configured rather than configure themselves. In Java or PHP we might say that an object has a dependency on another if it uses the ‘new’ keyword; in Uniface we can say that a component is dependent on another if it uses the ‘newinstance’ statement (let’s assume for the moment that we always create instances this way, as opposed to using activate, which is a discussion for another time). If a component creates its own dependencies then we can never separate them: they can’t be unit tested in isolation or easily swapped out for an alternate implementation.

Let’s consider an example. Here we have some code which calls a service, DATA_SVC, for some data.

variables
  Handle vDataHandle
  String vSomeData
endvariables

  newinstance "DATA_SVC", vDataHandle
  vDataHandle->getSomeData("1", vSomeData)

DATA_SVC.getSomeData is implemented as follows.

;-----------------------------------------------------
operation getSomeData
;-----------------------------------------------------
params
  String pDataId : IN
  String pData   : OUT
endparams

variables
  Handle vConfigHandle
  String vOfficeCode
endvariables

  newinstance "CONFIG_SVC", vConfigHandle
  vConfigHandle->getOfficeCode(vOfficeCode)

  DATA_CD.DATA/init   = pDataId
  OFFICE_CD.DATA/init = vOfficeCode
  retrieve/e "DATA"

  pData = DATA_VALUE.DATA

  return 0

end ;-getSomeData

We create a new instance of CONFIG_SVC in order to look up some additional information, then use this together with the pDataId argument to fetch some data and return it in pData.

There are a number of issues with this approach:

  1. If we want to change the service that we use for fetching configuration data (CONFIG_SVC) then we need to alter the newinstance statement in every service that uses it.
  2. We can’t unit test DATA_SVC without also having to test CONFIG_SVC. In other words, we can’t use a mock implementation of CONFIG_SVC.

There is one other issue here: we’re not really considering the life cycle of the DATA_SVC service. The CONFIG_SVC instance we create only stays alive for the length of this operation; it would perhaps make more sense to create the CONFIG_SVC instance in the init operation and keep the handle in a component variable.
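For that to compile, $configHandle$ needs declaring as a component variable of type handle. A minimal sketch of the declaration (I’m assuming the standard componentvariables block; the name matches the code below):

componentvariables
  Handle $configHandle$
endcomponentvariables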

;-----------------------------------------------------
operation init
;-----------------------------------------------------

  ;-Create an instance of the service we'll be using
  newinstance "CONFIG_SVC", $configHandle$

end ;-init

Now let’s suppose that we need to support three different configuration methods: database, file and in-memory. We might alter the init operation to look like this.

;-----------------------------------------------------
operation init
;-----------------------------------------------------

  ;-Create an instance of the service we'll be using
  selectcase $logical("CONFIG_PROVIDER")
    case "DB"
      newinstance "CONFIG_DB", $configHandle$
    case "FILE"
      newinstance "CONFIG_FILE", $configHandle$
    case "MEMORY"
      newinstance "CONFIG_MEMORY", $configHandle$
  endselectcase

end ;-init

Again we can see some potential problems with this approach:

  1. If we need to support another configuration method we need to add more cases to our selectcase. In fact we’ll have to add a case to every service using these configuration providers.
  2. We still have the issue that we can’t test DATA_SVC in isolation. We could add a “TEST” case, but this introduces code used only for unit testing into our business logic, which should be avoided.

So let’s try to fix some of these problems using Dependency Injection. There are a lot of DI frameworks for other languages out there, so the temptation might be to write something similar. However, it’s important to remember that DI is a concept before it’s a framework, so we’re free to implement it in whatever way works best for our application.

The key is to try and remove all the ‘newinstance’ statements from DATA_SVC so that it isn’t responsible for setting itself up any more. I’m going to move this logic out of DATA_SVC and into another service which is going to be purely responsible for creating and configuring instances for us.

;-----------------------------------------------------
operation getDataServiceInstance
;-----------------------------------------------------

params
  Handle pDataHandle : OUT
endparams

variables
  Handle vConfigHandle
endvariables

  ;-Create a config service based on the logical CONFIG_PROVIDER
  selectcase $logical("CONFIG_PROVIDER")
    case "DB"
      newinstance "CONFIG_DB", vConfigHandle
    case "FILE"
      newinstance "CONFIG_FILE", vConfigHandle
    case "MEMORY"
      newinstance "CONFIG_MEMORY", vConfigHandle
  endselectcase

  ;-Create a new instance of DATA_SVC
  newinstance "DATA_SVC", pDataHandle

  ;-Setup DATA_SVC by injecting the configuration service we created
  pDataHandle->setup(vConfigHandle)

  return 0

end ;-getDataServiceInstance

This is then invoked by the component that wants to use DATA_SVC. Note that the newinstance has been replaced with a call to our factory service.

variables
  Handle vDataHandle
  String vSomeData
endvariables

  ;-Get a set-up and configured instance of DATA_SVC
  activate "FACTORY_SVC".getDataServiceInstance(vDataHandle)

  ;-Finally use DATA_SVC to fetch the data we need
  vDataHandle->getSomeData("1", vSomeData)

And here are the improved DATA_SVC operations (note that the init operation is now gone because there is nothing for it to set up):

;-----------------------------------------------------
operation setup
;-----------------------------------------------------
params
  Handle pConfigHandle : IN
endparams

  ;-Assign injected handle
  $configHandle$ = pConfigHandle

end ;-setup

;-----------------------------------------------------
operation getSomeData
;-----------------------------------------------------
params
  String pDataId : IN
  String pData   : OUT
endparams

variables
  String vOfficeCode
endvariables

  $configHandle$->getOfficeCode(vOfficeCode)

  DATA_CD.DATA/init   = pDataId
  OFFICE_CD.DATA/init = vOfficeCode
  retrieve/e "DATA"

  pData = DATA_VALUE.DATA

  return 0

end ;-getSomeData

We can see from the new DATA_SVC operations that all the plumbing code has been removed, since this is now handled elsewhere. This allows the code in DATA_SVC to concentrate on doing its job, and it should be easier to read and maintain as a result. In this example all we’ve really done is move the logic out of DATA_SVC and into a dedicated “Factory” service which is purely responsible for creating and configuring what the OO world would call the object graph. In our case that is an instance of a service with all its dependent services created, set up, injected and ready to go.

We also now have the ability to add new configuration services without altering the services which consume them in any way. We can add a case to our Factory service, and that’s the only place we need to make the change. Swapping out the Factory for a unit testing framework also allows us to inject mock configuration services so that we can truly test DATA_SVC in isolation, as sketched below.
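To make that concrete, here is a minimal sketch of a mock. CONFIG_MOCK is a hypothetical component invented for this example; it implements the same getOfficeCode operation as the real providers but simply returns a fixed value:

;-----------------------------------------------------
operation getOfficeCode
;-----------------------------------------------------
params
  String pOfficeCode : OUT
endparams

  ;-Always return a fixed office code so the test is deterministic
  pOfficeCode = "TEST01"

  return 0

end ;-getOfficeCode

A unit test can then bypass the Factory entirely and wire the mock in by hand:

variables
  Handle vConfigHandle
  Handle vDataHandle
  String vSomeData
endvariables

  ;-Inject the mock instead of a real configuration provider
  newinstance "CONFIG_MOCK", vConfigHandle
  newinstance "DATA_SVC", vDataHandle
  vDataHandle->setup(vConfigHandle)

  ;-Exercise DATA_SVC in isolation and check the result
  vDataHandle->getSomeData("1", vSomeData)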

Hopefully this has given a flavour of the sort of design pattern that can easily be applied to a Uniface application. As with all these things, a great many people will have been doing this instinctively for years, but there is value in being able to recognise common patterns when using them and in using a common vocabulary when discussing them, if only because it allows you to read the wealth of software literature out there and immediately apply it to your Uniface coding.