Tag Archives: development

SCRUM to strategically beat the competition, as pit stops do in F1

Congratulations to Max Verstappen on winning the Malaysian Grand Prix last weekend. You see, strategy pays off when everything falls into place.


So, my drive 😉 is to show how applying SCRUM in your business strategy can help you win the race too.

So, in F1 the pit stop, besides being a masterly synchronized ballet of disciplined execution and expertise, is used strategically by the team to win the race. How? The number of pit stops depends on the desired lap time, gauged against fuel consumption, tire wear, and undercutting (overtaking a car by timing your pit stop before or after theirs). With the above in mind, the team decides to use a certain number of pit stops, or to add one more, in order to win.

In SCRUM terms, the sprints are the perfectly synchronized production of software, which can be used strategically to deliver value to our customers, whether we deliver features gradually or change the order of delivery to maximize business value.

Here at Uniface, we are busy taking SCRUM to the next level, where alignment between business and IT is essential to make a difference. We must be aligned to adapt to change and therefore better serve our customers. In that context, we already have a track record: we have been using SCRUM for more than 9 years and have made the necessary improvements to the processes ourselves.

As an example, we have even invented our own ceremony to facilitate alignment among teams, called a Sprint Pitch (already a 3-year-old ceremony for us).

To stress why aligning the business with IT is important, I want to lean on the analogy from the F1 championships; I was inspired to use it while watching a Red Bull documentary, “The history of the pit stop”, during my last flight.

You know the thrill of changing tires and refueling the car in the shortest amount of time possible?

In the early days, the pit stop was just a pause that took up to a minute; there was no changing of tires. That came in the 1970s, when an unplanned pit stop to change tires would take 3 to 5 minutes. In the early 1980s, Gordon Murray turned them into strategic pit stops, considering the car's weight and tire degradation and seeing how all of that influenced lap times. At that moment another race began: the one to bring the pit stop's time down to the minimum, in order to use the pit stop more strategically and make the time a pit stop takes negligible.

Well, it is no surprise that reaching the shortest time took analysis, collaboration, and improvements, to get to changing the tires (or better, the entire wheel sets), refueling the car, and cooling the car's engine in just under 2 seconds. Bear in mind that it takes a crew of 18 to 20 highly skilled individuals to handle a pit stop.

You may wonder how we do that with SCRUM at Uniface, but first, time for a pit stop … (to be continued)!

Uniface’s use of Authenticode

In this blog post I discuss how Uniface uses Authenticode for signing Uniface executables on the Windows platform. First, a word on the merits of signing your executables. Code signing is nothing more than calculating a checksum of an executable and attaching that checksum to the executable in a cryptographically secure way. This way any customer can be assured of the integrity of the code they download from Uniface's download server: it has not been tampered with, nor was it altered while in transit. The use of a public-private key certificate from a reputable vendor adds the advantage that you can rest assured the code you downloaded really originated from Uniface.
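
To make that concrete, here is a minimal sketch of how a customer could check a downloaded executable, assuming signtool.exe from the Windows SDK is on the PATH; the file path is illustrative.

```python
# Hedged sketch: verifying the Authenticode signature of a downloaded file.
# Assumes signtool.exe (Windows SDK) is on the PATH; the path is illustrative.
import subprocess

def verify_signature(path: str) -> bool:
    """Return True if the file carries a valid Authenticode signature."""
    # /pa selects the default Authenticode verification policy;
    # /v prints details such as the signer chain and the timestamp.
    result = subprocess.run(
        ["signtool", "verify", "/pa", "/v", path],
        capture_output=True, text=True,
    )
    print(result.stdout)
    return result.returncode == 0

if verify_signature(r"C:\Downloads\uniface_setup.exe"):
    print("Intact, and really signed by the publisher it names.")
```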

Designing code signing requires you to take a step back and revisit your existing processes to identify potential issues. Any statement on the integrity of the code can only be satisfied if you manage your code signing keys in a defined way. We have a defined process governing where keys reside, who has access to them, and what people with access can do. Keys reside in a physically secured location, with access controlled and limited to a small group of people in the company. Only these people can get their hands on the Uniface code signing keys, and only for a limited set of defined purposes. Strict logging is in place so that key usage can be reviewed from audit logs.

The Uniface build factory consists of machines that take source code from a version control system and run the build scripts to produce signed executables. The code signing is run directly from our makefiles. We use a set of ‘internal’ certificates when building Uniface. Machines that are part of the Uniface build factory have access to the ‘internal’ certificate, and physical access to the build machines is limited. Only Windows executables that were produced in an official build can thus be signed using this ‘internal’ certificate. The certificate is only valid within the Uniface development lab: outside the lab, a machine with Windows installed would lack the Uniface development lab ‘root’ certificate, which is needed to build the trust chain required to validate executables signed with the ‘internal’ certificate. Once we package a Uniface q-patch, patch, service pack or distribution, we also sign these deliverables. This effectively seals the package and protects its contents from unauthorized modifications.
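
As a hedged sketch of what such a build-time signing step could look like (the certificate path, password handling and timestamp URL are all illustrative, not our actual setup):

```python
# Sketch of a signing step as a build script might invoke it.
# Certificate path, password handling and timestamp URL are illustrative.
import subprocess

INTERNAL_CERT = r"D:\certs\uniface_internal.pfx"  # hypothetical PFX location
TIMESTAMP_URL = "http://timestamp.example.com"    # hypothetical RFC 3161 server

def sign(path: str, password: str) -> None:
    subprocess.run(
        ["signtool", "sign",
         "/f", INTERNAL_CERT,   # certificate to sign with
         "/p", password,        # its password
         "/fd", "SHA256",       # file digest algorithm
         "/tr", TIMESTAMP_URL,  # timestamp counter-signature (see below)
         "/td", "SHA256",       # timestamp digest algorithm
         path],
        check=True,             # fail the build if signing fails
    )

sign(r"build\out\uniface.exe", password="***")
```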

We also timestamp all files, which means every signed file also carries an official counter-signature with a timestamp. Should there be an irregularity forcing us to withdraw our software, we can do so by revoking our certificate. This comes in two flavours: either we fully revoke a certificate, or we revoke the certificate from a certain cut-off timestamp. When the certificate is fully revoked, all files signed with it become invalid and hence can no longer be trusted. If the exact moment in time when the irregularity occurred is known, we can revoke the certificate from that moment in time. This results in all files signed after that moment becoming invalid, while files signed before it remain valid.

When we decide that a package is ready for shipping to our customers, we go through a process of re-signing that package with our ‘external’ certificate. This is done as part of the publication process. We check every file in the package to see if it was signed using the Uniface ‘internal’ certificate; if so, it is re-signed using our ‘external’ certificate. This ‘external’ certificate was obtained from a reputable vendor, and the public key of that vendor's root certificate is present in every Windows distribution. Hence, using public-private key encryption, your Windows installation can check that the files we signed in our Uniface distribution have not been modified since we signed them, and that the software is actually from us. So the next time you install Uniface, you can be sure the software is fine.
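
A minimal sketch of such a re-signing pass; the issuer string, certificate path and password handling are hypothetical, and parsing signtool's verbose output is just one possible way to identify the signing certificate:

```python
# Hedged sketch of the re-signing pass over a package.
import pathlib
import subprocess

INTERNAL_ISSUER = "Uniface Internal CA"           # hypothetical issuer name
EXTERNAL_CERT = r"D:\certs\uniface_external.pfx"  # hypothetical PFX location

def signed_by_internal(path: pathlib.Path) -> bool:
    # 'signtool verify /v' prints the signer chain; look for the internal issuer.
    result = subprocess.run(
        ["signtool", "verify", "/v", "/pa", str(path)],
        capture_output=True, text=True,
    )
    return INTERNAL_ISSUER in result.stdout

def resign_package(package_dir: str, password: str) -> None:
    for exe in pathlib.Path(package_dir).rglob("*.exe"):
        if signed_by_internal(exe):
            # Without /as, signtool replaces the existing signature.
            subprocess.run(
                ["signtool", "sign", "/f", EXTERNAL_CERT, "/p", password,
                 "/fd", "SHA256", str(exe)],
                check=True,
            )
```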

Technology Highlights Google Cloud Next 2017

Google Cloud Next 2017 was the largest Google developer and IT gathering in Amsterdam, exploring the latest developments in cloud technology. It was a chance to engage with the foremost minds leading the cloud revolution and learn how the modern enterprise is benefiting from the latest in cloud technology in unprecedented ways. As usual, for us it was one more way to keep up with technology.

We saw some very interesting new innovations (Spanner and App Maker, to name two) and how they relate to application development in the cloud.

Given below are the other highlights of the technologies talked about during the event:

1) Microservices & Kubernetes:

Microservices – an architectural style that structures an application as a collection of loosely coupled services which implement business capabilities. It enables the continuous delivery/deployment of large, complex applications and lets an organization evolve its technology stack and develop and deploy faster. It is an evolution of software development and deployment that embraces DevOps and containers and breaks applications down into smaller individual components.

The emerging combination of microservice architectures, Docker containers, programmable infrastructure, cloud, and modern Continuous Delivery (CD) techniques has enabled a true paradigm shift for delivering business value through software development.

The combination of microservices and containers promotes a totally different vision of how services change application development.
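
As a minimal illustration of the style, here is a hypothetical single-capability service, sketched in Python with Flask; the business capability (a price lookup), the data and the port are illustrative. Because it owns one function and nothing else, it can be built, deployed and scaled independently, for example in its own Docker container.

```python
# Hypothetical single-capability microservice: one business function, nothing else.
from flask import Flask, jsonify

app = Flask(__name__)

PRICES = {"widget": 9.99, "gadget": 24.50}  # stand-in for the service's own data store

@app.route("/prices/<product>")
def get_price(product):
    if product not in PRICES:
        return jsonify(error="unknown product"), 404
    return jsonify(product=product, price=PRICES[product])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```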

Kubernetes – an open-source system for automating deployment, scaling and management of containerized applications that was originally designed by Google and donated to the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It supports a range of container tools, including Docker.
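
To give a feel for what “automating deployment, scaling and management” looks like from code, a hedged sketch using the official Kubernetes Python client; the cluster connection, the namespace and the deployment name (“webshop”) are assumptions for illustration.

```python
# Hedged sketch: scaling a containerized application via the Kubernetes API.
from kubernetes import client, config

config.load_kube_config()  # reads your local kubeconfig to reach the cluster

# Scale an existing deployment (hypothetical name "webshop") to 5 replicas.
apps = client.AppsV1Api()
apps.patch_namespaced_deployment_scale(
    name="webshop", namespace="default",
    body={"spec": {"replicas": 5}},
)

# List the pods Kubernetes is now managing for us.
core = client.CoreV1Api()
for pod in core.list_namespaced_pod("default").items:
    print(pod.metadata.name, pod.status.phase)
```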


2) Choosing the right compute option in a cloud project: a decision tree

The goal is to understand the trade-offs and decide which models are the best fit for your systems, as well as how the models map to GCP services: Compute Engine, Container Engine, App Engine and Cloud Functions.

Compute Engine is Infrastructure-as-a-Service. Developers have to create and configure their own virtual machine instances. It gives them more flexibility and generally costs much less than App Engine. The drawback is that developers have to manage their app and virtual machines themselves.

Container Engine is a level above Compute Engine: it is a cluster of several Compute Engine instances which can be centrally managed, with Kubernetes doing the orchestration.

App Engine is a Platform-as-a-Service. It means that the developer can simply deploy their code, and the platform does everything else for them.

Cloud Functions is a serverless computing service, the next level up from App Engine in terms of abstraction. It allows developers to deploy bite-size pieces of code that execute in response to different events, which may include HTTP requests, changes in Cloud Storage, etc.
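
To make those “bite-size pieces of code” concrete, here is a minimal, hedged sketch of the HTTP-function model, written in Python for consistency with the other sketches; the function name is illustrative, and the request object the platform supplies follows the Flask Request API.

```python
# Hedged sketch of an HTTP-triggered function: the platform routes each
# HTTP request to this single function and handles servers, scaling and
# availability for you.
def hello_http(request):
    """'request' is a Flask Request object supplied by the platform."""
    name = request.args.get("name", "world")
    return f"Hello, {name}!"
```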


3) Big data – Big data refers to data that would typically be too expensive to store, manage, and analyze using traditional (relational and/or monolithic) database systems. Usually, such systems are cost-inefficient because of their inflexibility for storing unstructured data (such as images, text, and video), accommodating “high-velocity” (real-time) data, or scaling to support very large (petabyte-scale) data volumes. There are new approaches to managing and processing big data, including Apache Hadoop and NoSQL database systems. However, those options often prove complex to deploy, manage, and use in an on-premises situation.

Cloud computing offers access to data storage, processing, and analytics on a more scalable, flexible, cost-effective, and even secure basis than can be achieved with an on-premises deployment.
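
As one hedged illustration from a developer's point of view: running an analytical query on BigQuery, GCP's managed analytics service, without provisioning any infrastructure. The project, dataset and table names are illustrative.

```python
# Hedged sketch: an analytical query on a managed, petabyte-scale service.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # illustrative project id
query = """
    SELECT device, COUNT(*) AS events
    FROM `my-project.telemetry.events`
    GROUP BY device
    ORDER BY events DESC
    LIMIT 10
"""
rows = client.query(query).result()  # submits the job and waits for results
for row in rows:
    print(row.device, row.events)
```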

What does the cloud bring to application development?

Following our line of thought of keeping up with technology, I had the privilege and pleasure of joining a diverse group of Uniface engineers who participated in the Google Cloud Next event in Amsterdam. As mentioned earlier, Uniface is at the leading edge of application technology, so in that respect we participate by learning about the newest trends. We do this also for the cloud, with great partners like Google, by taking in the technological highlights and diving deeper into some examples like Spanner and App Maker. All this to drive momentum and to spark innovation at Uniface.

Next Amsterdam, being such a nice and big event, consisted of several tracks with different areas of focus, all around the cloud: visionary, strategic and technical tracks, besides the experimental breakout sessions, covering everything from business to technology and innovation.

I attended several sessions and had a look at the experimental/technical campground as presented by Google and some of its technology partners at the conference.

The most striking thing I realized while at the event was that the cloud is moving everywhere: from application development to deployment and innovation.

So, in that sense, cloud is becoming a game changer in application development. What do I mean by that? Well, in general, we are used to waves of technologies and application architectures like mainframe, client/server, Static Web, Dynamic Web, mobile apps, and now the cloud.

The cloud is reshaping the way we think about software, whether that is containerization, microservices, new ways of developing applications, or exploiting the data produced by the usage of applications; all in all, taking software to a new level. Actually, one could say software is being changed in several dimensions.


Think about security, which used to seem like something for the experts and nowadays reshapes the way we think about software. Some of the thinking around security today involves user behaviour as an additional way to authenticate us. Wow! Nice. Although it does also imply user behaviour is something you need to consider.


Well, you may think “but there is a lot of data that now needs to be processed for that”, and “what about the structure of such data?” Well, have you seen all the developments around big data and the high-performing databases which the cloud is enabling? OK, I give you that… but then how can I, as a developer, make use of that data? Well, APIs are the answer. An old and beautiful concept that is now being embedded in software development, as collaboration with others is a must. Your software needs to be easy to interface with, and as such it must provide a clear and easy API for others to use. Better still, software in the cloud must have APIs; it has become a de facto standard, and otherwise you are out (by the simple fact that adoption will be hard, if not impossible, with all the competition around).
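
To show how low that barrier has become for a developer, a hedged sketch of consuming such a cloud API over HTTP; the endpoint, token and JSON fields are hypothetical.

```python
# Hedged sketch: consuming data exposed through a cloud service's REST API.
import requests

response = requests.get(
    "https://api.example.com/v1/usage-metrics",  # hypothetical endpoint
    headers={"Authorization": "Bearer <token>"}, # hypothetical credential
    params={"since": "2017-06-01"},
    timeout=10,
)
response.raise_for_status()
for metric in response.json()["metrics"]:        # hypothetical payload shape
    print(metric["name"], metric["value"])
```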


The area where the cloud initially appeared to have the most impact was whether the application executed on bare metal or in a virtualized environment, reshaping and componentizing the hardware and the different layers of software. This, too, is something that affects application development, as we also need to think about the components/containers we can use or enable others to use. Consider frameworks for it and make the necessary provisions in your application architecture.


Also of utmost interest were the innovation presentations that took place in plenary, breakout and campground sessions. It was amazing to see how creativity is being applied to develop the next technological step around the cloud; think about the natural language API and its applicability across the artificial intelligence spectrum, which nowadays is within our reach; it is in our hands (literally) with our phones and tablets.

What also amazed us was to see the synergy between our approach to application development and new trends like App Maker.

Whether you use the cloud to deploy your applications, to execute in the cloud, or to innovate, the cloud is here to stay.

All in all, the value proposition around the cloud is to think not only of what the cloud can do for you, but what you can do in the cloud too.

Cloud Spanner and Application Development

I recently attended Google Cloud Next Amsterdam, a one-day conference covering services of the Google Cloud Platform (GCP). According to Gartner, Google takes the third spot in the public cloud space, with Amazon Web Services and Azure taking first and second place respectively. Amongst the plethora of GCP offerings (technological highlights), I was interested in Cloud Spanner, one of the newer PaaS offerings, which could prove an interesting addition to Uniface's list of supported RDBMSs. It is also of interest how it applies to application development.

Cloud Spanner is a fully managed relational database which can scale globally; Google claims that Cloud Spanner is able to scale to thousands of servers and to handle the biggest transactional workloads. Cloud Spanner joins the ranks of CockroachDB, Clustrix, VoltDB, MemSQL, NuoDB and Trafodion, which are coined ‘NewSQL’ databases. NewSQL databases are databases which still offer the traditional Atomicity, Consistency, Isolation and Durability (ACID) guarantees, with the massive scalability often associated with NoSQL databases.


Being a distributed database, Cloud Spanner can distribute your data over the nodes that you have decided to use in your setup; Google calls this splitting. Data may be split by rows or by load: for example, if Spanner detects that a certain set of rows is used more frequently, it is able to split those rows over multiple nodes.

Cloud Spanner offers a standard variety of SQL data types: BOOL, INT64, FLOAT64, STRING, BYTES, DATE, TIMESTAMP and ARRAY. Various client libraries are already available and come in Java, Python, Go, Node.js, Ruby and PHP flavours.
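
For a feel of the developer experience, a minimal sketch using the Python client library (google-cloud-spanner); the project, instance, database and table names are illustrative.

```python
# Hedged sketch: querying Cloud Spanner with the Python client library.
from google.cloud import spanner

client = spanner.Client(project="my-project")  # illustrative project id
database = client.instance("test-instance").database("example-db")

# Reads run against a consistent snapshot; Spanner handles the distribution
# of the data across nodes (the "splits" mentioned above) transparently.
with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        "SELECT SingerId, FirstName, LastName FROM Singers"
    )
    for row in rows:
        print(row)
```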

Cloud Spanner looks like a game-changing distributed database, and I'm sure that at Uniface we will be taking it for a test drive to demonstrate its capabilities.

Next to all this, there was also exciting news in other areas, like App Maker.