Tag Archives: security

Uniface’s use of Authenticode

In this blog post I discuss how Uniface uses Authenticode to sign Uniface executables on the Windows platform. First, a word on the merits of signing your executables. Code signing is essentially taking an executable, calculating a checksum of it, and attaching that checksum to the executable in a cryptographically secure way. This way any customer can be assured of the integrity of the code they download from Uniface’s download server: it has not been tampered with, nor was it altered while in transit. Using a public-private key certificate from a reputable vendor adds the assurance that the code you downloaded really originated from Uniface.
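To illustrate the principle, here is a minimal TypeScript sketch of hash-and-sign code signing using Node’s built-in crypto module. It only demonstrates the idea; Authenticode itself embeds the signature inside the PE file format, and the key pair and file name below are stand-ins, not Uniface’s actual setup.

```typescript
import { createSign, createVerify, generateKeyPairSync } from "node:crypto";
import { readFileSync } from "node:fs";

// Stand-in key pair; a real publisher signs with the private key of
// a CA-issued code signing certificate.
const { privateKey, publicKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

// Publisher side: hash the executable and sign the digest.
function signExecutable(path: string): Buffer {
  const signer = createSign("sha256");
  signer.update(readFileSync(path));
  return signer.sign(privateKey);
}

// Customer side: recompute the hash and verify it against the signature.
function verifyExecutable(path: string, signature: Buffer): boolean {
  const verifier = createVerify("sha256");
  verifier.update(readFileSync(path));
  return verifier.verify(publicKey, signature);
}

const sig = signExecutable("uniface.exe");         // hypothetical file name
console.log(verifyExecutable("uniface.exe", sig)); // false if the file was changed after signing
```

Any change to the file after signing makes the verification fail, which is exactly the tamper-evidence property described above.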

Designing code signing requires you to take a step back and revisit your existing processes to identify potential issues. Any claims about the integrity of the code can only be upheld if you manage your code signing keys in a defined way. We have a defined process governing where keys reside, who has access to them, and what those with access are allowed to do. Keys reside in a physically secured location, with access controlled and limited to a small group of people in the company. Only these people can get their hands on the Uniface code signing keys, and only for a limited set of defined purposes. Strict logging is in place so that key usage can be reviewed from audit logs.

The Uniface build factory consists of machines that take source code from a version control system and run the build scripts to produce signed executables. The code signing is run directly from our makefiles. We use a set of ‘internal’ certificates when building Uniface. Machines that are part of the Uniface build factory have access to the ‘internal’ certificate, and physical access to the build machines is limited. Thus only Windows executables produced in an official build can be signed using this ‘internal’ certificate. The certificate is only valid within the Uniface development lab; outside it, a machine with Windows installed would lack the Uniface development lab ‘root’ certificate, which is needed to build the trust chain required to validate executables signed with the ‘internal’ certificate. Once we package a Uniface q-patch, patch, service pack or distribution, we also sign these deliverables. This effectively seals the package and protects its contents from unauthorized modification.

We also timestamp all files, which means every signed file carries an official counter signature with a timestamp. Should there be an irregularity that forces us to withdraw our software, we can do so by revoking our certificate. This comes in two flavours: either we fully revoke the certificate, or we revoke it from a certain cut-off timestamp. When the certificate is fully revoked, all files signed with it become invalid and hence can no longer be trusted. If the exact moment the irregularity occurred is known, we can revoke the certificate from that moment in time. All files signed after that moment become invalid; files signed before it remain valid.
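A minimal sketch of how such a cut-off check could work, assuming each file carries a trusted, countersigned timestamp; the dates and file names are invented for illustration:

```typescript
// Each signed file carries a trusted, countersigned timestamp.
interface SignedFile {
  name: string;
  signedAt: Date;
}

// If the certificate is revoked from a cut-off moment, only files
// signed before that moment remain valid.
function isTrusted(file: SignedFile, revokedFrom: Date | null): boolean {
  if (revokedFrom === null) return true; // certificate not revoked at all
  return file.signedAt < revokedFrom;    // signed before the cut-off: still valid
}

// Hypothetical incident time and files, for illustration only.
const revokedFrom = new Date("2017-03-01T00:00:00Z");
console.log(isTrusted({ name: "uniface.exe", signedAt: new Date("2017-01-15") }, revokedFrom)); // true
console.log(isTrusted({ name: "patch.exe", signedAt: new Date("2017-04-02") }, revokedFrom));   // false
```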

When we decide that a package is ready for shipping to our customers, we go through a process of re-signing that package with our ‘external’ certificate as part of the publication process. We check every file in the package to see whether it was signed using the Uniface ‘internal’ certificate; if so, it is re-signed using our ‘external’ certificate. This ‘external’ certificate was obtained from a reputable vendor, and the public key of that vendor’s root certificate is present in every Windows distribution. Hence, using public-private key cryptography, your Windows installation can check that the files we signed in a Uniface distribution have not been modified since we signed them, and that the software actually comes from us. So the next time you install Uniface, you can be sure the software is fine.
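The publication flow can be sketched as follows; signedWith() and resign() are hypothetical helpers standing in for a real signing tool such as Windows’ signtool, so this only illustrates the decision logic, not our actual tooling:

```typescript
interface PackageFile {
  path: string;
  signerCN: string | null; // common name of the signing certificate, if any
}

function signedWith(file: PackageFile, cn: string): boolean {
  return file.signerCN === cn;
}

function resign(file: PackageFile, cn: string): void {
  file.signerCN = cn; // a real implementation would invoke the signing tool here
}

// Publication step: swap the internal signature for the external one.
function publish(files: PackageFile[]): void {
  for (const file of files) {
    if (signedWith(file, "Uniface Internal")) {
      resign(file, "Uniface External");
    }
  }
}
```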

What does the cloud bring to application development?

Following our line of thought of keeping up with technology, I had the privilege and pleasure of joining a diverse group of Uniface engineers at the Google Cloud Next event in Amsterdam. As mentioned earlier, Uniface is at the leading edge of application technology, so in that respect we participate by learning about the newest trends. We do this for the cloud as well, with great partners like Google, by taking in the technological highlights and diving deeper into examples like Spanner and App Maker. All this to drive momentum and spark innovation at Uniface.

Next Amsterdam, being such a large event, consisted of several tracks with different areas of focus, all around the cloud: visionary, strategic and technical tracks, plus experimental breakout sessions, covering everything from business to technology and innovation.

I attended several sessions and had a look at the experimental/technical campground as presented by Google and some of its technology partners at the conference.

The most striking thing I realized while at the event was that the cloud is moving in everywhere: into application development, deployment and innovation.

So, in that sense, cloud is becoming a game changer in application development. What do I mean by that? Well, in general, we are used to waves of technologies and application architectures like mainframe, client/server, Static Web, Dynamic Web, mobile apps, and now the cloud.

The cloud is reshaping the way we think about software, whether through containerization, microservices, new ways of developing applications, or exploiting the data produced by the usage of applications; all in all, it is taking software to a new level. One could say software is being changed in several dimensions.

Think about security, which used to appear to be something for the experts only and nowadays reshapes the way we think about software. Some of the thinking around security today involves using user behaviour as an additional way to authenticate us. Wow! Nice. Although it does also imply that user behaviour is something you need to consider.

Well, you may think, “but there is a lot of data that now needs to be processed for that”, and “what about the structure of such data?” Well, have you seen all the developments around big data and high-performing databases that the cloud is enabling? OK, I give you that… but then how can I, as a developer, make use of that data? APIs are the answer. An old and beautiful concept that is now being embedded in software development, as collaboration with others is a must. Your software needs to be easy to interface with, and as such it must provide a clear and easy API for others to use. Better still, software in the cloud must have APIs; this is becoming a de facto standard, and otherwise you are out (by the simple fact that adoption will be hard, if not impossible, with all the competition around).
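As a toy illustration of “a clear and easy API”, here is a minimal TypeScript HTTP endpoint using only Node’s built-in http module; the endpoint name and payload are invented:

```typescript
import { createServer } from "node:http";

// A tiny JSON API with one endpoint; anything else returns 404.
const server = createServer((req, res) => {
  if (req.method === "GET" && req.url === "/api/status") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ service: "demo", status: "ok" }));
  } else {
    res.writeHead(404, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "not found" }));
  }
});

server.listen(8080, () => console.log("API listening on :8080"));
```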

The areas where the cloud initially appeared to have the most impact were whether an application ran on bare metal or in a virtualized environment, reshaping and componentizing the hardware and the different layers of software. This, too, affects application development, as we also need to think about the components/containers we can use or enable others to use. Consider frameworks for this and make the necessary provisions in your application architecture.

Also of utmost interest were the innovation presentations that took place in the plenary, breakout and campground sessions. It was amazing to see how creativity is being applied to develop the next technological step around the cloud; think of a natural language support API and its applicability across the artificial intelligence spectrum, which nowadays is within our reach; it is in our hands (literally) with our phones and tablets.

What also amazed us was to see the synergy between our approach to application development and new trends like App Maker.

Whether you use the cloud to deploy your applications, execute in the cloud, or innovate there, the cloud is here to stay.

All in all, the value proposition around the cloud is to think not only of what the cloud can do for you, but what you can do in the cloud too.

Picking up on the latest and greatest on Microsoft’s Azure Platform

I recently attended Microsoft’s Tech Summit, held at Amsterdam’s RAI convention centre. For those of you who know me, my computing background is on the other side of the spectrum, predominantly UNIX and Linux derivatives. This was my first Microsoft event ever, so it was with great anticipation and some uncertainty that I attended the keynote.

From the word go it was clear that Microsoft is heavily invested in cloud technologies, with customer stories from the Dutch Railways (Nederlandse Spoorwegen), who use Azure’s big data platform to predict when train components are about to fail, before they do and cause unnecessary disruptions. Abel Wang then guided us through a demo using Azure to predict crime hotspots in certain areas around Seattle. Very impressive, all of it.

The main reason for attending the conference, however, was to pick up on the latest and greatest on Microsoft’s Azure platform. Microsoft Azure holds second place in the cloud provider arena but experienced the biggest growth of all players over the last year. Here at Uniface we already use Azure daily; the goal was to see if there were ways to better utilise Azure’s IaaS and PaaS offerings.

From all the Azure and application development sessions I learned a lot more about Azure’s PaaS offerings. In the ‘Protect your business with Azure’ session it was evident that Microsoft is fully committed to security and availability. By far the most interesting session was ‘Building Serverless Applications with Azure Functions’. It demonstrated how simple it is to run a basic event-driven application without investing any time in infrastructure or PaaS offerings.
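To give a flavour of what that looks like, here is a minimal HTTP-triggered Azure Function in TypeScript, using the Node.js v4 programming model of the @azure/functions package; it is a generic sketch, not the demo from the session:

```typescript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";

// An HTTP-triggered function: no servers to manage, just the handler.
export async function hello(
  request: HttpRequest,
  context: InvocationContext
): Promise<HttpResponseInit> {
  const name = request.query.get("name") ?? "world";
  context.log(`Handling request for ${name}`);
  return { status: 200, body: `Hello, ${name}!` };
}

// Register the function and its trigger with the Functions runtime.
app.http("hello", {
  methods: ["GET"],
  authLevel: "anonymous",
  handler: hello,
});
```

Deployed to a Function App, the platform provisions, scales and bills per execution, which is what makes the model so attractive for simple event-driven workloads.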

All in all, the Tech Summit was a great success; I learnt a lot and will be applying that knowledge to the workloads we run in Azure.

Keeping up-to-date: Mobile security & Native UI

To catch up on the latest mobile security and native UI trends, the Uniface mobile development team recently attended the appDevcon conference: a conference by app developers, for app developers, targeting developers for Apple iOS, Google Android, Windows, Web, TV and IoT devices across multiple tracks.

Going in, we were especially interested in two main topics: smartphone security and sharing code between web and native apps.

Mobile security

The mobile security presentations were given by Daniel Zucker, a software engineering manager at Google, and Jan-Felix Schmakeit, an Android engineer also at Google. In their, in my view, impressive presentation, they confirmed what I already thought: securing mobile phones is not something you do after you have designed and developed your apps. It is a key area of app development to consider in advance.

Securing mobile phones starts with a good understanding of the architecture of at least the Android and iOS platforms. How are they built up? For example, as Android is based on the Linux kernel, you get all the Linux security artefacts, such as process isolation, SELinux, verified boot and cryptography. While some security services provided to mobile apps are platform specific, others are platform independent. An example of the first is the new Android permissions model, which has become more transparent to users, as they now get permission requests ‘in context’. An example of a platform-independent security artefact is certificate validation, which, done incorrectly, would still make your app vulnerable.
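Getting that last point right is mostly about not disabling the checks the platform already performs. A minimal Node/TypeScript sketch of a correctly validated TLS connection (the host name is a placeholder):

```typescript
import * as https from "node:https";
import * as tls from "node:tls";

// api.example.com is a placeholder host.
const options: https.RequestOptions = {
  host: "api.example.com",
  port: 443,
  path: "/",
  method: "GET",
  // These are the defaults; spelling them out because overriding them is
  // exactly the "incorrect certificate validation" mistake:
  // rejectUnauthorized: false skips chain validation entirely, and a
  // checkServerIdentity that always returns undefined skips hostname checks.
  rejectUnauthorized: true,
  checkServerIdentity: tls.checkServerIdentity,
};

https
  .get(options, (res) => {
    console.log(`Certificate chain and hostname verified; status ${res.statusCode}`);
    res.resume(); // drain the response
  })
  .on("error", (err) => {
    // Certificate errors surface here instead of being silently ignored.
    console.error(`Connection rejected: ${err.message}`);
  });
```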


Native UI

Sharing code between native and web apps promised to be an interesting session. Some context: mobile users tend to spend significantly more time in native UI-enriched apps than in web apps, while web apps attract more unique visitors than native apps, as web apps are more widely accessible across different devices.

The best way to share code between native and web apps is simply to write as much of them as possible in the same code. Of course! But how do you do that? In this session the solution was to write fully native apps using a mix of NativeScript (an open-source framework for developing apps on the iOS and Android platforms) and AngularJS (a JavaScript-based open-source front-end web application framework). These apps are built using platform-agnostic programming languages such as JavaScript or TypeScript, yet they result in fully native apps that use the same APIs as if they were developed in Xcode or Android Studio. That is quite interesting! So using JavaScript you can develop fully native apps. That sounds like music to my ears.
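For a feel of the approach, here is a minimal NativeScript-with-Angular component sketch in TypeScript; the component and its labels are invented for illustration. The template uses NativeScript UI elements rather than HTML tags, and those render as real native widgets on iOS and Android:

```typescript
import { Component } from "@angular/core";

// A minimal NativeScript component written with Angular.
// StackLayout, Label and Button are NativeScript UI elements that
// map to native widgets on each platform.
@Component({
  selector: "greeting",
  template: `
    <StackLayout>
      <Label [text]="message"></Label>
      <Button text="Tap me" (tap)="onTap()"></Button>
    </StackLayout>
  `,
})
export class GreetingComponent {
  message = "Hello from shared TypeScript code";

  onTap(): void {
    // The same component class could back a web UI with an HTML
    // template, sharing the logic between native and web targets.
    this.message = "Rendered natively, written in TypeScript";
  }
}
```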

Looking at this trend, it promises a lot. The mobile community seems to be putting a lot of effort into simplifying the creation of fully native, enriched apps using plain JavaScript and HTML5 functionality. Until now, we have supported our users in creating native/hybrid apps with fully native functionality through our Dynamic Server Page (DSP) technology. As we look into ways to enrich this technology further, we will follow developments in this area, as it is fully in line with our philosophy of sharing code between applications (client-server, web and mobile apps) and supporting rapid application development, which saves our users time and resources in developing and maintaining fully enriched and cool applications.

There May be Trouble Ahead for Mobile Commerce

By Clive Howard, Principal Analyst, Creative Intellect Consulting

With the Christmas holidays just ahead of us there will undoubtedly be new figures showing that e-commerce has once again generated more revenue and accounted for more of the holiday spending than ever before. The same figures will also probably show a rise in the amount that was spent via mobile. M-commerce (mobile e-commerce) has been steadily rising for a number of years and most predictions are that it will continue to do so.

However, recent data points to a tapering off of growth in M-commerce over the next few years. This could be significant, with growth rates barely rising at all by 2017 and stagnating at around 30+% of total e-commerce spend. The key questions for many organisations, however, should be why, and how they can potentially buck this trend.

A number of recent surveys by eMarketer show that there are two key reasons why people are growing reluctant to buy online using mobile devices. The first is security and the second is the user experience of mobile transactions.

Consumers don’t have faith in the phone…

The security issue is a significant one and is only becoming more so. With recent high-profile breaches such as those of iCloud (although not Apple’s fault) and Target stores in the US, consumers are becoming increasingly aware of and concerned about security online. This is accentuated when it comes to smartphones. They are right to express concern, as the last year has seen a 167% increase in mobile malware, with McAfee estimating 200 threats per minute.

The Android operating system, which is installed on 80% of all smartphones shipped, has been especially susceptible to such attacks. Some figures put “rooted” Android devices at 20% of the total. A rooted device gives rogue agents potential access to data stored on the device, along with the ability to manipulate data entered into the device or passed to and from it.

Google is addressing these issues and Android is not the only mobile operating system to suffer from security challenges. For example, Apple’s much touted fingerprint recognition sensor was cracked shortly after launch.

Technology has responded to try to address these concerns. For example, mobile wallets (such as ApplePay, Google Wallet and PayPal) try to avoid the need to input and store payment data on the device. Apple’s latest ApplePay mobile wallet does not store credit card data either on the phone or in the cloud. Instead, an alternate version of the credit card information is stored in a secure element on the phone.

If there is a breach, such as a lost device, the payment information can be cancelled without cancelling the credit card itself. Mobile wallets that also work in conjunction with physical payment mechanisms such as Near Field Communication (NFC), where you only need to put your mobile device in close proximity to the payment terminal, are widely seen as the future of mobile payment.
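The underlying idea is tokenization: the device holds a revocable stand-in for the card, never the card itself. A toy TypeScript sketch of the concept, with a hypothetical issuer-side vault (this is not Apple’s actual scheme):

```typescript
import { randomUUID } from "node:crypto";

interface CardDetails { pan: string; expiry: string; }

// Issuer-side vault: the real card number never leaves here.
const vault = new Map<string, CardDetails>();

// Enrolment: exchange the real card for a device-bound token.
function tokenize(card: CardDetails): string {
  const token = randomUUID(); // stands in for the device account number
  vault.set(token, card);
  return token; // only this value is stored in the secure element
}

// If the device is lost, revoke the token; the card itself survives.
function revokeToken(token: string): void {
  vault.delete(token);
}

// Payment: the merchant only sees the token; the issuer resolves it.
function authorize(token: string, amountCents: number): boolean {
  const card = vault.get(token);
  if (!card) return false; // revoked or unknown token
  console.log(`Charging ${amountCents} cents to card ending ${card.pan.slice(-4)}`);
  return true;
}

const token = tokenize({ pan: "4111111111111111", expiry: "12/27" });
authorize(token, 1999);  // succeeds via the token
revokeToken(token);      // lost phone: kill the token only
authorize(token, 1999);  // now declined; the physical card still works
```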

When they don’t even trust the technology leaders in mobile payment…who do they trust?

The challenge is that data shows people do not trust most of the mobile wallet providers. Apple, Google and even PayPal are trusted by only approximately 20+% of consumers, according to eMarketer. So while the technology addresses the security challenges, consumers are less likely to put their faith in it.

Surprisingly, this issue cannot be put down to age. People in the 25-35 bracket are more likely to try a mobile wallet than those in the 16-24 age group. Some of this might be down to available income, but younger generations often show greater awareness of security issues and are therefore more likely to take precautions, such as not using mobile wallets at all.

When asked who they would trust with regard to mobile wallets, almost 80% of consumers answer banks or credit card companies. These organisations are now starting to enter this market. Development teams should therefore start looking into mobile wallets, particularly those provided by banks, as a method for taking payment on mobile devices.

Mobile payments are simply too hard

After security, the next significant issue deterring users is the experience of paying via mobile. This is especially relevant to smartphones. In a recent survey, IBM found that while consumers use smartphones to research purchases far more than tablets, the reverse is true when it comes to actually making a purchase. A clear reason for this is the difference in form factors; in other words, tablets are larger devices than smartphones.

I’m sure we have all tried entering data on a smartphone and have probably found it difficult. The small screen size, often combined with a touch keyboard, makes entering any significant amount of data awkward. When it comes to making payments in the traditional way, using a credit card, a lot of data needs entering: not just the card information, which includes the long card number, but also address information. In total, this process often involves a combination of character entry and drop-down list selections.

With websites this can be particularly problematic where they do not correctly fit within the confines of a small screen. Surprisingly, even some websites optimised for mobile do not fit all screens. Then there can be issues with the underlying code, especially JavaScript, which can sometimes not work as expected in mobile browsers. In a Jumio survey, 23% of people reported having had a transaction fail to go through on mobile, which points to technical problems of some kind affecting mobile transactions.

This is not surprising, as there are a large number of different devices available running many different versions of operating systems. In many cases these operating systems have been customised by handset manufacturers or telco networks. In addition, new devices and operating system updates become available very regularly. The result is a very large number of potential environments. Testing all of them is near impossible, and many organisations only test on the most popular according to market statistics.

Such problems with the experience of making a payment lead customers to lose trust, which in turn raises even greater concerns over security.