Category Archives: Blog


This Week: Uniface Hosts Inaugural Global Distributors and Resellers Conference 2014

Uniface’s Inaugural Global Distributors and Resellers Conference, Entitled “ABC = Always Be Closing”

Uniface is looking forward to its inaugural conference, which brings together its leading distributors and resellers from across the globe in Amsterdam this week.

Uniface is pleased to offer its distributors and resellers the chance to attend the inaugural conference, which covers a range of commercial and technical topics over three intense days of sessions. These sessions are all aimed at giving maximum knowledge to the key players in our global distributor and reseller network, along with the opportunity to increase awareness and understanding of our products and services and to develop enhanced partnerships within our partner ecosystem. The event also aims to deepen our understanding of distributors’ and resellers’ requirements by providing opportunities for feedback on the range of benefits offered.

We would like to take this opportunity to welcome this year’s attendees: Acenet Oy, Finland; ACRUX, Mexico; COMPUAMERICA C.A., Venezuela; Freelance Consultant, Italy; Icignus Tecnologia, Brazil; IT IS, The Netherlands; Labinf Sistemi, Italy; ONE1, Israel; Shanghai Yungoal Info Tech Co., Ltd, China; Sogeti, The Netherlands; TaKT, Japan; Techshire, India; Wizrom Software, Romania; XEE Tech – Mobilne Aplikacije d.o.o., Croatia.


Automated Security Analysis for Uniface Web Applications

Guest contributor, Job Jonkergouw, Uniface Intern

Last February I started my internship at Uniface. In need of a research project for my Master’s in Software Engineering, I tried my luck at the Uniface headquarters in Amsterdam, which offered a subject that was both challenging and socially relevant: the security of web applications.

Security is a hot issue in today’s IT landscape, as news of stolen user databases and hacked websites regularly hits the headlines. Traditionally, developers react by implementing countermeasures such as firewalls and SSL, but according to experts this is not enough: “security features do not equal secure features” (see Howard & LeBlanc, Writing Secure Code). Software has to be written with security in the minds and hearts of the developers.

In an attempt to ensure code security, models like Microsoft’s SDL and OWASP’s SAMM recommend various steps during development. These include security requirements specification, architecture review, threat modeling and other practices. Another important guideline is security code review. However, done by humans this can be tedious and requires a high level of expertise, which is why many developers opt for something quicker.

Automated code review will be familiar to anyone who has used Word’s spell checker or a sophisticated IDE such as Eclipse. For the purpose of security analysis, automated tools can check each line of code for dangerous function calls, iffy comments or unchecked input and output. This is commonly known as static security analysis, in contrast with a technique called dynamic security analysis: emulating actual attacks on the web application. Also known as pen testing, it is commonly executed by sending HTTP requests containing dangerous payloads such as SQL injection or cross-site scripting.
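At its simplest, a static analyzer is little more than pattern matching over source lines. The sketch below is purely illustrative (the rules and function names are invented; real tools such as the ones tested later ship far richer rule engines):

```python
import re

# Hypothetical rules: each regex for a risky construct maps to a warning.
RULES = {
    r'executeQuery\s*\(\s*".*"\s*\+': "possible SQL injection (string-concatenated query)",
    r'innerHTML\s*=': "possible XSS (unescaped HTML assignment)",
    r'(?i)TODO.*security': "iffy comment mentioning security",
}

def scan(source: str) -> list[tuple[int, str]]:
    """Flag each line that matches a known-dangerous pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, warning))
    return findings

sample = 'stmt.executeQuery("SELECT * FROM users WHERE id=" + userId);'
print(scan(sample))
```

This also shows why static tools produce false positives: a pattern match proves nothing about whether the flagged line is actually exploitable.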

The objective of my research project was to gauge the difference between using dynamic and static security analysis for Uniface web applications. To test this empirically, I designed an experimental website that contained several exploitable vulnerabilities. Several tools — both dynamic and static — were then tested by their ability to find each of these exploits.

The first objective was to identify the security analysis tools to be used. Some of the popular products, such as IBM’s AppScan and HP’s WebInspect, require thousands of dollars in licensing fees, making them impractical for my studies, while others don’t support the technologies used by the Uniface framework. Another issue is that more and more commercial products are being offered as Software-as-a-Service (SaaS) in the cloud. While this makes it easier for the vendor to manage licenses, it can be detrimental for developers who would rather not upload their source code to a third party or have a testable web application deployed live on the web.

Although these considerations scrapped many of the popular solutions from my list, there were still enough tools left to experiment with, most of them open source. Making the final cut were five static analysis tools – FindBugs, LAPSE+, VCG, Veracode and Yasca – and five dynamic analysis tools – IronWASP, N-Stalker, Wapiti, w3af and ZAP.

The test environment was developed quickly using the Uniface Development Framework. During this step, I injected several vulnerabilities by removing a few important lines of proc code and twisting the properties of some of the widgets. These included accessing other users’ pages by modifying the user ID in the URL, and unrestricted file uploading. As these were mainly behavioral issues, these exploits were detectable only with dynamic analysis, since no static tool can read proc code.

I made other modifications at the Java source code level on the web server. These removed important sanitization checks that normally prevent dangerous attacks such as SQL injection and cross-site scripting. The notable difference is that Java code is well understood by many static analysis tools.
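The removed checks followed the two classic defenses: escaping output against cross-site scripting, and parameterizing queries against SQL injection. The actual code was Java; this is a hedged Python sketch of the same idea (table, data and function names are invented for illustration):

```python
import html
import sqlite3

def render_comment(comment: str) -> str:
    # Escape HTML metacharacters so an injected <script> tag renders as text.
    return "<p>" + html.escape(comment) + "</p>"

def find_user(conn: sqlite3.Connection, user_id: str):
    # Parameterized query: the driver keeps data out of the SQL grammar,
    # so a payload like "1 OR 1=1" cannot alter the statement.
    return conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id TEXT, name TEXT)")
conn.execute("INSERT INTO users VALUES ('1', 'alice')")

print(render_comment("<script>alert(1)</script>"))
print(find_user(conn, "1 OR 1=1"))  # the payload matches no row
```

Deleting either safeguard reintroduces exactly the kind of vulnerability the static tools were expected to spot.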


Job blog1


The resulting website containing the vulnerabilities is shown above. Each tool was tested on its rate of discovery and the number of false positives. The latter number was much higher for most static tools, but this was expected based on prior research and on theoretical grounds. The number of vulnerabilities each tool found varied widely, as can be seen in the graphs below. Some vulnerabilities were hard to find at all (such as path traversal, which requires guessing the right file name). But this is perhaps due to Uniface’s nature of being hard to scan, which also makes things harder for actual attackers. A more detailed discussion of the results can be found in my final thesis [link].

Job blog2

Although the results contained few surprises, the internship gave me a great time at the Uniface development department, which proved to be both helpful and educational.

In just a few months’ time I was able to learn a new development language, build an application and carry out the work for my thesis, thanks to the working environment and the colleagues who helped me overcome every big hurdle. For this, my gratitude.

HTML Widget with a Java Applet: How to stop security warnings

Security warnings hinder the end user when starting a Java applet in the Uniface HTML widget. This document provides a step-by-step guide on how to stop the security warnings and even block them with a so-called “rule set”.

Security warnings the old way

According to Oracle’s documentation, the end user will in almost all cases be presented with a warning when starting a Java applet in the browser for the first time. Even the lowest possible security setting in the Java console explains:

Medium – All applications are allowed to run with security prompts.


The list of exceptions in the Java console also shows that you can be prompted with a security warning:

Image 1

By switching the cache ON in the Java console, the warning is displayed only once. After this, the application runs without warnings and can even be restarted.

Image 2

Other options

Keep the security settings in the Java console on High, which by default blocks the applet completely.

Image 3

Add the URL to the list of exceptions:

Image 4

In my case this was:


Including the page name!

This means that security is not compromised and the warning is shown only once when the cache is on.

Rule set and no warnings at all

As explained earlier, you can run a Java applet without security warnings by using a rule set; however, the applet must be signed for this, and a so-called deployment rule set JAR file must be added. In the following places you can find some documentation. In the next chapters I describe a step-by-step process to get the Java applet running in a Uniface HTML widget without warnings.
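For orientation: a deployment rule set is an XML file, ruleset.xml, that is packed and signed into a DeploymentRuleSet.jar. A minimal sketch of its shape, based on Oracle’s documentation (the location below is an example, not a real address; a catch-all rule keeps default behavior for everything else):

```xml
<ruleset version="1.0+">
  <rule>
    <!-- Applets from this location run without the security prompt. -->
    <id location="http://applets.example.org" />
    <action permission="run" />
  </rule>
  <rule>
    <!-- All other locations: normal Java security behavior. -->
    <id />
    <action permission="default" />
  </rule>
</ruleset>
```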

How to stop the security warnings for a known applet

In the following chapters I will take a step-by-step approach to make it possible to run a known applet in the Uniface HTML widget without bothering the end user with security warnings. In this sample, the applet JAR file is on the end user’s computer, as is the HTML file referring to this applet. Of course, the file:/// location can be replaced by a server location such as http://

This small manual on how to get the “rule set” working is based on Oracle’s DynamicTree sample. You can find this sample at the following address:

Download the zip file with all the bits and pieces you need:

Before you start, make sure that your PATH variable includes the Java bin folder; otherwise the command lines shown in the steps won’t work.

Used command line tools

Command     Description
jar         Creates a JAR archive
keytool     Creates a keystore and a certificate
jarsigner   Signs a JAR archive with a keystore file


Download the step-by-step document


Do we need a JSON data type?

I recently read a few articles raving about how good PostgreSQL is.  One article in particular explained how great it is that they have a JSON data type.  I wondered exactly what that would mean for developers, and whether Uniface needs one too.

The PostgreSQL documentation states that JSON data can be stored just fine in a text data type, but that a specific data type for JSON adds specific validation for JSON strings.  The documentation then adds that there are related support functions available.  Indeed there are JSON operators and functions that massage data between JSON strings and table rows and columns.  Suppose you have a use case that could exploit these functions: should you use them?  The simple answer for a Uniface developer is “of course not”.

Looking at those JSON support functions I would suggest that you can write Uniface functions / local proc modules to manipulate and transform data in similar ways.  Uniface Structs and the new 9.6.04 structToJson and jsonToStruct statements are particularly helpful for this.  So, provided that there is no extreme performance advantage in doing such manipulation on a DB server, it would not be a good idea to tie your application to a specific DB vendor, and lose that DBMS independence that Uniface gives you.  Bear in mind that there is no JSON data type in the current SQL Standard from 2011, and the major RDBMS vendors have not found a need to add such a non-standard extension.
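As a sketch of the idea (in Python rather than proc code; the data is invented), the same row-to-JSON massaging that the DB-server functions offer can live in the application, keeping the database vendor out of it:

```python
import json

# Rows as an application-side structure (in Uniface this would be a Struct).
rows = [
    {"id": 1, "name": "alice"},
    {"id": 2, "name": "bob"},
]

# Serialize table rows to a JSON string (the structToJson-like direction).
payload = json.dumps(rows)

# Parse a JSON string back into rows (the jsonToStruct-like direction).
restored = json.loads(payload)

assert restored == rows
print(payload)
```

Because the transformation happens in application code, the table behind it can stay a plain text column in any DBMS.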

Since we do have JSON manipulation tools, there is another consideration, based on our experiences with XML.  How do we validate the meaning of data transported by JSON?  With the xmlstream data type (and supporting proc statements) we have DTDs.  With our Structs transformations we have XML schema validation support.  With Uniface entities, we have the full support of the application model.

What is missing is a JSON Schema mechanism.  Thus I would suggest that if there is no supporting validation mechanism, there is no point in having a specific data type for JSON.

That situation may change in the future.  There are Internet Engineering Task Force (IETF) drafts available for a JSON Schema standard.  If you want to anticipate this future standard, you can use an online tool to generate a JSON Schema from a sample JSON data stream:

At this time, to use this draft JSON Schema, you will need to write a validation module yourself.  However, you may be able to validate the data based on the Uniface Application Model.  After loading the Struct with the jsonToStruct statement, you may want to prepare the Struct for use with the structToComponent statement.  Since 9.6.05+X501, structToComponent supports a /firetriggers command option, which causes the Pre Save Occurrence and Post Save Occurrence triggers to be fired, allowing you to do further occurrence-based validation or manipulation.  Of course, the entities you use can be dummy entities created for this purpose, modelled or not.  This avoids the need to reconnect with the database.
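Such a validation module can start small. Here is a stdlib-only Python sketch that checks required keys and primitive types; it implements a simplified, hypothetical subset of the IETF draft dialect, not the full JSON Schema specification:

```python
import json

def validate(instance: dict, schema: dict) -> list[str]:
    """Check required keys and primitive types; return a list of errors."""
    errors = []
    for key in schema.get("required", []):
        if key not in instance:
            errors.append(f"missing required property: {key}")
    type_map = {"string": str, "number": (int, float), "boolean": bool}
    for key, prop in schema.get("properties", {}).items():
        if key in instance and not isinstance(instance[key], type_map[prop["type"]]):
            errors.append(f"property {key} is not of type {prop['type']}")
    return errors

schema = {
    "required": ["name"],
    "properties": {"name": {"type": "string"}, "age": {"type": "number"}},
}

data = json.loads('{"name": "alice", "age": "young"}')
print(validate(data, schema))
```

An empty error list means the data may be handed on to structToComponent-style processing; a non-empty one can be reported before any occurrence-based validation runs.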

Hopefully we now have enough tools to deal with JSON data, without the need for a new data type.


Part 2: The threat of the Start-up and how traditional development teams can look to fight back

By Clive Howard, Principal Analyst and Bola Rotibi, Research Director, Creative Intellect Consulting

Part 2 (read part 1 here)

Appreciate the skills, knowledge and assets that you have

Once an organisation, however large, adopts a culture in which the development and IT teams believe that things can be done quickly but still to high standards of quality and compliance, it can compete against its smaller challengers. This cultural shift can be difficult, with resistance often coming from those entrenched in the old ways. Some may fear that the new processes will make them redundant. This is why organisations have to tailor new processes to their strengths, and considerations such as governance have to be taken into account.

For example, when moving to Agile it is important not to be too fanatical about a certain methodology such as Scrum. The best Agile environments are those where the approach is tweaked to suit the organisation’s skills, needs and concerns. A start-up does not have to worry about large legacy investments with years of domain knowledge built around them. An enterprise most likely will and so that knowledge (people) needs to be retained. Equally some projects may still require a more Waterfall style approach due to the nature and scale of the systems involved. Enterprises therefore need new processes that embody Agile execution practices, but they must be sensible and balanced in their application.

Don’t forget operations

Agile will help developers add new features more quickly but it is only part of the overall process. Moving to CI and CD processes will create a development and operations environment that allows reliable and stable software to be released quickly. Embracing the concept of DevOps (the removal of artificial barriers between operations and development teams and finding a new working relationship that benefits the entire software process) will reduce the friction between the development and operations teams and so help to get new releases into production more quickly.

In addition, development teams need to make sure that speed does not sacrifice quality. Something that start-ups have learned is the importance of testing; the growth in popularity of Unit Testing and Test Driven Development (TDD) has been fuelled by this. Enterprises need to make sure that they have the necessary testing tools, capabilities and culture in place – something that has been lagging within enterprise development teams. By making testing a constant within the development process, they can increase the quality of code. In traditional Waterfall environments the test phase was often squeezed, so in reality quality, and software stability, were sacrificed.

All that glitters is not gold

Finally there is the question of technology. Start-ups have become synonymous with new technologies such as PHP, Ruby on Rails, Django and a host of other platforms, frameworks and services. They tend to gravitate towards these because they believe that they allow them to work more quickly and so focus more time on concerns such as the User Experience of the product. In reality some of these are immature and result in more time being spent firefighting than working on making the product better. Enterprises often deal in legacy software and far larger usage requirements than many start-ups have to deal with initially. A MySQL database may work great with a certain amount of data, but as Facebook discovered, at scale it can pose challenges. So don’t throw out the Oracle or IBM database just yet.

That does not mean that technology is not an issue in the enterprise. With applications now needing to be deployed to an ever-increasing number of platforms and devices, the underlying technology choices will impact speed of delivery. Having a solution that places as much logic as possible into a single codebase utilising a common language, skillset and tools will have great time and cost saving benefits. As many organisations are constantly discovering, having to maintain multiple codebases in different languages and tools that effectively do the same thing is increasingly time and cost intensive. Therefore approaches such as hybrid mobile development or model driven development will reap rewards, especially over time.