Tag Archives: Testing

Support for Uniface in the cloud: a DevOps project

For the last few months we have been working towards adding cloud providers to the Product Availability Matrix (PAM). This project is known internally as Cloud Phase 1 and has proven to be, on the whole, a DevOps project.

DevOps

For us to add support for a platform there are several things that we must do – the most important of which is to test Uniface on that platform, for every build, to make sure it works. The framework we use to test Uniface is a custom-built application with the imaginative name of RT (Regression Test), which contains tests targeted at proving Uniface functionality. These tests have been built up and extended as functionality is added, enhanced or maintained.

Up until the cloud project, the process of building and testing Uniface (and this is a very simplistic description) was to:

  • Create a Uniface installation by collecting the compiled objects from various build output locations (we have both 3GL and Uniface Script)
  • Compile the RT application and tests using the newly created version
  • Run the test suite
  • Analyze the output for failures and, if successful:
    • Create the installable distribution (E-Dist or Patch)

Building and testing were carried out on pre-configured (virtual) machines with databases and other third-party applications already installed.

Adding a new platform (or a new version of an existing platform) to our support matrix could mean manually creating a whole new machine, from scratch, to represent that platform.

To extend support to cloud platforms, we have some new dimensions to consider:

  • The test platform needs to be decoupled from the build machine, as we build in-house and test in the cloud
  • Tests need to run on the same platform (i.e. CentOS) but in different providers (Azure, AWS, …)
  • Support for constantly updating Relational Database Service (RDS) type databases needs to be added
  • The environment needs to be scalable with the ability to run multiple test runs in parallel
  • It has to be easily maintainable

As we are going to be supporting the platforms on various cloud providers, we decided to use DevOps methodologies and the tools most common for this type of work. The process, for each provider and platform, now looks like this:

  • Template machine images are created at regular intervals using Packer. Ansible is used to script the installation of the base packages that are always required
  • Test pipelines are controlled using Jenkins
  • Machine instances (based on the pre-built Packer image) and other cloud resources (such as networks and storage) are created and destroyed using Terraform; a minimal sketch of this step follows the list
  • Ansible is used to install Uniface from the distribution media and, if needed, overlay the patch we are testing
  • The RT application is installed using rsync and Ansible
  • RT is then executed one test suite at a time with Ansible dynamically configuring the environment
  • Docker containers are used to provide any third-party software and services needed by individual tests, and they are only started if the current test needs them. Examples of containers we have made available to the test framework are a mail server, a proxy server, a web server and an LDAP server
  • Assets such as log files are returned to the Jenkins server from the cloud-based virtual machine using rsync
  • The results from Jenkins and the cloud tests are combined with the results from our standard internal test run to give an overview of all the test results.
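To give a flavour of what this looks like in practice, here is a minimal Terraform sketch of the instance-creation step mentioned above. It is illustrative only; the provider, image naming convention, region and instance size are assumptions, not our actual configuration.

```hcl
# Illustrative Terraform sketch: create a short-lived test instance from a
# pre-built Packer image. AWS is used as the example provider; all names,
# filters and sizes below are hypothetical.
provider "aws" {
  region = "eu-west-1" # example region
}

# Find the most recent image produced by the Packer/Ansible template build.
data "aws_ami" "rt_base" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["rt-centos-base-*"] # hypothetical image naming convention
  }
}

# One instance per test run; it is destroyed again when the run finishes.
resource "aws_instance" "rt_runner" {
  ami           = data.aws_ami.rt_base.id
  instance_type = "t3.large" # example size

  tags = {
    Name = "rt-test-runner"
  }
}
```

A pipeline would typically run the apply at the start of a test run and the destroy at the end, so nothing is left running in between.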

As quite large chunks of the processing are executed repeatedly (e.g. configuring and running a test), we have grouped the steps together and wrapped them with make.
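As an indication of what that wrapping looks like (a hypothetical sketch, not our actual Makefile; the target names, playbooks and variables are invented for illustration):

```makefile
# Hypothetical sketch of grouping the repeated steps behind make targets.
# Target names, playbooks and variables are illustrative. Recipe lines must
# be indented with tabs.

PLATFORM ?= centos7
PROVIDER ?= aws
SUITE    ?= all

.PHONY: provision install run-tests collect destroy

provision:        ## create the cloud resources for this run
	terraform -chdir=infra/$(PROVIDER) apply -auto-approve -var="platform=$(PLATFORM)"

install:          ## install Uniface and overlay the patch under test
	ansible-playbook playbooks/install_uniface.yml -e "platform=$(PLATFORM)"

run-tests:        ## configure and run one test suite at a time
	ansible-playbook playbooks/run_rt.yml -e "suite=$(SUITE)"

collect:          ## pull logs and results back to the Jenkins server
	rsync -az test-runner:/opt/rt/logs/ results/$(PLATFORM)/

destroy:          ## tear the environment down again
	terraform -chdir=infra/$(PROVIDER) destroy -auto-approve
```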

As most of the platforms go through the same process, we have also been able to parameterize each step. This should mean that adding a new platform or database to test on, once the distribution becomes available, “could” be as simple as adding a new configuration file.
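For illustration, such a configuration file might look something like the sketch below; the keys and values are hypothetical rather than our real settings.

```yaml
# Hypothetical per-platform configuration sketch. Adding a new platform or
# database combination would ideally mean adding one file like this.
platform:
  name: centos7
  provider: aws
  image_filter: "rt-centos-base-*"   # which Packer image to start from
database:
  type: postgresql
  version: "11"
  managed_service: rds               # use the provider's RDS-style service
uniface:
  version: "10.2.02"
  distribution: /dist/uniface-10.2.02-linux.tar.gz   # illustrative path
tests:
  suites: [dbms, network, proc]      # hypothetical suite names
  parallel_runs: 4
```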

The result of Phase 1 of the Cloud project is that the Product Availability Matrix has been extended to include new platforms and databases. Internally we also have the benefit of having a much more scalable and extendable testing framework.

The new platforms added to the PAM in 9.7.04 and 10.2.02 by the cloud project:

[Image: Uniface Deployment-in-Cloud – the new platforms and databases]

In this initial phase, we have been concentrating on the Linux platforms; next (in Phase 2) we will be working on Windows and MS SQL Server.

During this process, I have learnt a lot about our test framework and the tests it runs. Much of the work we have undertaken has simply been a case of lifting what we already have and scripting its execution. This has not been the case for everything; there have been some challenges. An example of something that has been more complex than expected is testing LDAP. The existing environment used a single installation of LDAP for every platform being tested. Tests would connect to this server and use it to check the functionality. As the tests both read and write, we could only allow one test to be active at a time; other tests and platforms would have to wait until the LDAP server was released and became available before continuing. With the cloud framework, we have an isolated instance of the service for each test that needs it.
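As a sketch of how that isolation can be expressed, an Ansible task can start a throwaway LDAP container just for the test that needs it and remove it afterwards. The image, names, variables and port values below are illustrative, not our actual test configuration.

```yaml
# Illustrative Ansible tasks: give each test its own short-lived LDAP server.
# Image, container names, ports and variables are hypothetical.
- name: Start a private LDAP server for this test
  community.docker.docker_container:
    name: "rt-ldap-{{ test_id }}"
    image: osixia/openldap:1.5.0          # example public OpenLDAP image
    state: started
    published_ports:
      - "{{ ldap_port }}:389"
    env:
      LDAP_DOMAIN: "rt.example.com"

- name: Remove the LDAP server once the test has finished
  community.docker.docker_container:
    name: "rt-ldap-{{ test_id }}"
    state: absent
```

Because each test gets its own instance, read/write tests no longer have to queue for a shared server.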

The project to bring cloud support to Uniface has been an interesting one. As well as allowing us to add new platforms and providers onto our supported matrix, it has also allowed us to be more scalable and flexible when testing Uniface.

 

When thinking Desktop “first” still matters

By Clive Howard, Principal Analyst, Creative Intellect Consulting

A few months back, I registered for Mobile World Congress 2015 in Barcelona. As an Analyst, there is a different registration process to the one used for regular attendees. This is so the organisers can validate that someone is a legitimate industry analyst. As well as entering a significant amount of personal data, additional information such as links to published work and document uploads are also required. Crucially, there are a number of screens to complete the registration and accreditation process. But more to the point, many different types of data must be entered – from single and multiple line text entry to file uploads. Some data (such as hyperlinks) requires cut and pasting.

I’m sure that I could have done this using a mobile phone, but it would have taken a long time, been awkward and irritating, and probably been highly prone to mistakes. In short, I would never have considered doing something like this using my phone. Could I have used a tablet? Without a keyboard and mouse it would have been problematic, especially if the screen is small. Using a tablet-only operating system might also have had its problems in places, such as uploading documents from centrally managed systems. Actually, I did use a tablet, but one connected to a 20-inch monitor, keyboard and mouse and running Windows. In that traditional desktop-looking environment the process was relatively quick and painless.

Rumours of the desktop’s demise are greatly exaggerated

It is not just complex data entry scenarios such as this that challenge mobile devices. Increasingly I see people attach keyboards to their tablets and even phones. Once one moves beyond writing a Tweet or a one-line email, many mobile devices start to become a pain to use. The reality of our lives, especially at work, is that we often have to enter data into complex processes. Mobile can be an excellent complement, but not a replacement. This is why we see so many mobile business apps providing only a tiny subset of the functionality found in the desktop alternative; or they are apps that extend desktop application capabilities rather than replicate or replace them.

One vendor known for their mobile first mantra recently showed off a preview version of one of its best-known applications. This upgrade has been redesigned from the ground up. When I asked if it worked on mobile, the answer was no; they added (quite rightly) that no one is going to use this application on a mobile device. These situations made me think about how, over the last couple of years, we have heard relentlessly about designing “mobile first”. As developers we should build for mobile and then expand out to the desktop. The clear implication has been that the desktop’s days are over.

This is very far from the truth. Not only will people continue to support the vast number of legacy desktop applications, they will also continue to build new ones. Essentially, there will continue to be applications that are inherently “desktop first”. This statement should not be taken to mean that desktop application development remains business as usual. A new desktop application may still spawn mobile apps and need to support multiple operating systems and form factors. It may even need to engage with the Internet of Things.

The days of building just for the desktop, safe in the knowledge that all users will be running the same PC environment (down to the keyboard style and monitor size), are gone in many, if not the majority of, cases. Remember that a desktop application may still be a browser-based application, but one that works best on a desktop. And with the growth of devices such as hybrid laptop/tablet combinations, a desktop application could still have to work on a smaller screen that has touch capabilities.

It’s the desktop, but not as we know it

This means that architects, developers and designers need to modernise. Architects will need to design modern Service-Oriented Architectures (SOA) that both expose and consume APIs (Application Programming Interfaces). SOA has been around for some time but has become more complex in recent years. For many years it meant creating a layer of SOAP (Simple Object Access Protocol) Web Services that your in-house development teams would consume. Now it is likely to mean RESTful services utilising JSON (JavaScript Object Notation) formatted data, potentially consumed by developers outside of your organisation. API management, security, discovery, introspection and versioning will all be critical considerations.

Developers will equally need to become familiar with working against web service APIs instead of the more traditional approach where application code talked directly to a database. They will also need to be able to create APIs for others to consume. Pulling applications together from a disparate collection of microservices (some hosted in the cloud) will become de rigueur. If they do not have skills that span different development platforms, then they will at least need an appreciation of them. One of the problems with mobile development inside the enterprise has been developers building SOAP Web Services without knowing how difficult these are to consume from iOS apps. Different developer communities will need to engage with one another far more than they have done in the past.
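To make the shift concrete, consuming a RESTful JSON service from application code typically looks something like the following minimal Python sketch; the endpoint and fields are hypothetical and not tied to any particular product.

```python
# Minimal sketch: read a resource from a RESTful JSON API instead of querying
# a database directly. The base URL and response fields are hypothetical.
import json
import urllib.request

BASE_URL = "https://api.example.com/v1"


def get_customer(customer_id: str) -> dict:
    """Fetch one customer resource and return the parsed JSON payload."""
    url = f"{BASE_URL}/customers/{customer_id}"
    request = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read().decode("utf-8"))


if __name__ == "__main__":
    customer = get_customer("42")
    print(customer.get("name"), customer.get("status"))
```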

Those who work with the data layer will not be spared change. Big Data will affect the way in which some data is stored, managed and queried, while NoSQL data stores will become more commonplace. The burden placed on data stores by more requests coming from more places will require highly optimised data access operations. The difference between data that is read frequently but rarely changed and data that needs to be updated will be highly significant. We are seeing this with banking apps, where certain data such as a customer’s balance is handled differently from data involved in transactions. Data caching, perhaps in the cloud, is a popular mechanism for handling the read-only data.
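As a simple illustration of that split, read-mostly data can be served from a short-lived cache while writes always go to the system of record. The Python sketch below uses hypothetical stand-in functions rather than a real banking implementation.

```python
# Sketch: handle read-mostly data (e.g. a displayed balance) differently from
# transactional writes. The storage functions are hypothetical placeholders.
import time

_CACHE = {}                # account_id -> (balance, fetched_at)
CACHE_TTL_SECONDS = 30     # a displayed balance may be slightly stale


def read_balance_from_store(account_id: str) -> float:
    """Stand-in for the expensive, authoritative read."""
    return 100.0  # placeholder value


def get_balance(account_id: str) -> float:
    """Read path: serve from the cache while the entry is still fresh."""
    cached = _CACHE.get(account_id)
    now = time.time()
    if cached and now - cached[1] < CACHE_TTL_SECONDS:
        return cached[0]
    balance = read_balance_from_store(account_id)
    _CACHE[account_id] = (balance, now)
    return balance


def post_transaction(account_id: str, amount: float) -> None:
    """Write path: always hit the system of record, then drop the cached copy."""
    # ... perform the transaction against the authoritative store ...
    _CACHE.pop(account_id, None)
```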

Continuation of the Testing challenge

Testing will need to take into account the new architecture, design paradigms and potential end user scenarios. Test methodologies and tools will need to adapt and change to do this. The application stack is becoming increasingly complex. A time delay experienced in the application UI may be the result of a microservice deep in the system’s backend. Testing therefore needs to cover the whole stack – a long-standing challenge for many tools on the market – and architects and developers will need to make sure that failures in third-party services are handled gracefully. One major vendor had a significant outage of a new Cloud product within the first few days of launch because of a dependency on a third-party service whose failure they had not accounted for.

Part 2: The threat of the Start-up and how traditional development teams can look to fight back

By Clive Howard, Principal Analyst and Bola Rotibi, Research Director, Creative Intellect Consulting

Part 2 (read part 1 here)

Appreciate the skills, knowledge and assets that you have

Once an organisation, however large, adopts a culture in which the development and IT teams believe that things can be done quickly while still meeting high standards of quality and compliance, it can compete against its smaller challengers. This cultural shift can be difficult, with resistance often coming from those entrenched in the old ways. Some may fear that the new processes will make them redundant. This is why organisations have to tailor new processes to their strengths, and considerations such as governance have to be taken into account.

For example, when moving to Agile it is important not to be too fanatical about a particular methodology such as Scrum. The best Agile environments are those where the approach is tweaked to suit the organisation’s skills, needs and concerns. A start-up does not have to worry about large legacy investments with years of domain knowledge built around them. An enterprise most likely will, and so that knowledge (and the people who hold it) needs to be retained. Equally, some projects may still require a more Waterfall-style approach due to the nature and scale of the systems involved. Enterprises therefore need new processes that embody Agile execution practices, but they must be sensible and balanced in their application.

Don’t forget operations

Agile will help developers add new features more quickly, but it is only part of the overall process. Moving to continuous integration (CI) and continuous delivery (CD) processes will create a development and operations environment that allows reliable and stable software to be released quickly. Embracing the concept of DevOps (the removal of artificial barriers between operations and development teams and finding a new working relationship that benefits the entire software process) will reduce the friction between the development and operations teams and so help to get new releases into production more quickly.

In addition, development teams need to make sure that speed does not sacrifice quality. Something that start-ups have learned is the importance of testing. The growth in popularity of Unit Testing and Test Driven Development (TDD) has been fuelled by this. Enterprises need to make sure that they have the necessary testing tools, capabilities and culture in place – something that has been lagging within enterprise development teams. By making testing a constant within the development process they can increase the quality of code. In traditional Waterfall environments the test phase was often squeezed, and so in reality quality and software stability were sacrificed.
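As a reminder of how small the per-test investment is, here is a generic unit-test sketch in the TDD spirit (Python with pytest; the function under test is a hypothetical example, not tied to any product):

```python
# TDD-style sketch: the tests describe the expected behaviour and run on every
# build. The function under test is a hypothetical example.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_discount_is_applied():
    assert apply_discount(200.0, 25) == 150.0


def test_invalid_percentage_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```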

All that glitters is not gold

Finally, there is the question of technology. Start-ups have become synonymous with newer technologies such as PHP, Ruby on Rails, Django and a host of other platforms, frameworks and services. They tend to gravitate towards these because they believe they allow them to work more quickly and so focus more time on concerns such as the User Experience of the product. In reality, some of these are immature and result in more time being spent firefighting than on making the product better. Enterprises often deal with legacy software and far larger usage requirements than many start-ups have to deal with initially. A MySQL database may work great with a certain amount of data but, as Facebook discovered, at scale it can pose challenges. So don’t throw out the Oracle or the IBM database just yet.

That does not mean that technology is not an issue in the enterprise. With applications now needing to be deployed to an ever-increasing number of platforms and devices, the underlying technology choices will affect the speed of delivery. A solution that places as much logic as possible into a single codebase, utilising a common language, skill set and tools, will bring great time and cost savings. As many organisations are constantly discovering, maintaining multiple codebases in different languages and tools that effectively do the same thing is increasingly time and cost intensive. Therefore approaches such as hybrid mobile development or model-driven development will reap rewards, especially over time.

Part 2: Secure software delivery: who really owns it?

Guest contributors, Bola Rotibi and Ian Murphy from analyst firm Creative Intellect Consulting

Read Part 1 of this blog here.

Fighting back: An important role for Quality Assurance (QA) and testing

Inside the IT department, many see QA as being responsible for catching problems. As a result, QA teams are often seen as the least prone to errors in the way that they work, but they are certainly not blameless. What is needed is a more integrated approach to the way that testing and Quality Assurance are carried out, and processes where those teams can contribute to the discussion on better security and on reducing the bug count earlier.

One of the weaknesses in QA and testing has been the “last minute” approach that many organizations adopt. When projects are running late, QA is the first thing to get squeezed in terms of time. There is also a significant cost associated with QA and testing, with test tools often seen as too expensive and too hard and complex to understand and implement. However, the costs of poorly developed software must surely outweigh the costs of getting it right? Besides which, the costs have fallen rapidly with new players on the scene and the availability of on-demand, elastic delivery models such as Cloud and Software as a Service (SaaS) offerings. There are also vendors with tools and services that look to improve and simplify the testing process across heterogeneous infrastructure and application platforms. A security perspective added to the testing strategy of such solutions will do much to address the security holes that result from the complex environments many applications now operate across.

Clearly, the earlier software security is addressed the better. Addressing security from the outset should have a significant impact on security quality further downstream. Strategies that promote continuous testing throughout the process, especially for security issues, will help strengthen the overall quality goals. Improving the foundations is a good basis for secure software delivery. Team review processes and handover policies also serve as good security checkpoints, as does the build and source configuration management process, where continuous integration can be employed.

A better process for securing software is required: 10 guiding points

To minimise the risk of insecure software entering the enterprise software stack, there is a need to rethink how the software development process works. This is not just about the developer but about the entire business: architects, developers, operations, security and even users.

  1. Board level commitment: No enterprise wide process can be effective without board level commitment. Any security breach resulting in the loss of money, data or intellectual property (IP) will raise questions as to governance. This is a role owned by the board and every C-level executive must engage and understand the risks.
  2. Secure enterprise: At the heart of any secure software process is an enterprise wide secure by design philosophy. This requires an understanding of how software will work and a solid security framework for data and access. Responsibility for this lies with architects, security and operations.
  3. Processes: A lack of proper process is the wide open door a hacker is looking for. Testing, version control, change control, staging, patch management – these are all essential processes that have to be implemented to create a secure environment.
  4. Encryption: All data inside the enterprise should be encrypted irrespective of where it is stored. Encryption of data in transit is also a requirement. Software must be able to ensure that when it uses data, it does not then store it unencrypted, even in temporary files.
  5. Secure architecture: Software architects are responsible for making sure that all software is designed securely. Validate all technical and functional requirements and ensure that the software specification for the developers is clear and unambiguous.
  6. Unit tests: This is an area that has grown substantially over recent years but one where more needs to be done. Security teams need to engage fully with developers and software testing teams to identify new risks and design tests for them. The ideal scenario would be for the security team to have their own developers focused on creating unit tests that are then deployed internally.
  7. Maintain coding skills: One of the reasons that software is poorly written is a failure to maintain coding skills. This is often caused by the introduction of new tools and technologies and the failure to establish proper training for developers. Providing “how-to” books or computer based training (CBT) is not enough. Developers need training and it should be part of an ongoing investment and quality improvement programme.
  8. Test, test, test: When a project runs late, software testing is often reduced or abandoned in favour of “user testing”. The problem with this is that proper test programmes look at a much wider range of scenarios than users. In addition, once software is in use, even as a test version, it rarely gets completely withdrawn and fixed properly. Instead it gets intermittent and incomplete patching which creates loopholes for hackers.
  9. Implementation: As software delivery timescales reduce, there is pressure on operations to reduce their acceptance testing of applications. Unless operations are part of the test process, unstable applications will end up in production exposing security risk. This is an area that has always been a bottleneck and poorly planned attempts to reduce that bottleneck increase the risk to enterprise data.
  10. Help desk should be part of software development: This is an area that is rarely dealt with properly. Most help desk interaction is about bug and issue reporting with a small amount of new feature work. Help desk has a view across all enterprise software and has an operational and user view of risk. Using that data to reduce the risk of making repetitive mistakes will improve software quality.

The above is not an exhaustive list of steps that can be taken to harden the software security process. However, it does provide a framework against which an enterprise can assess where it has weaknesses.

Are there differences between small and large software code providers?

A worrying issue voiced by small consultancies attending the CIC secure development forum was that one of the biggest challenges to software security comes from the fact that almost everyone expects someone else to tackle and deal with the concerns and requirements.

There is clearly a divide between what developers in smaller firms can expect to achieve from a secure software perspective and what those within larger teams and larger enterprise organizations can.

A small consultancy that does not specialize in software security can only expect to focus on what it can do within the context of its remit if clients are not willing to pay for the “bank” grade security they might desire or their marketing messages all too often claim. Such firms rarely use any tools or processes over and above the normal mechanics of their job or that of their competitors. This doesn’t mean they are entirely bereft of secure software principles or education. On the contrary, a number of these firms discuss security processes and issues regularly (quarterly for the most part) and review policies annually.  That said, some expressed a desire to do more to provide better education and training within a realistic and pragmatic framework for continual improvements. Current governance models are loosely based on internal standards that developers are encouraged to follow. Quality is checked and maintained through peer reviews. But it is not enough.

The development teams within large organizations are not without such challenges either, because whilst code reviews are carried out, they are not always up to standard. Worse still is the lack of knowledge, often found within the development team, of tools that exist to help support the delivery of secure software applications across a broad spectrum of deployment environments. Nor is this helped by the fact that equipping all the developers in a large team with software security tools such as static analyzers can be costly and time consuming to implement and to train for.

The communication hurdle

The communication challenge is no less significant for larger enterprise organizations, with the resources to employ and assign dedicated software security specialists, than it is for smaller ones without the expertise. At the heart of the communication issue is language and process: the language used by software security roles is not couched in terms recognizable to the software development and delivery teams. Nor is it clearly couched in terms of business risk or business issues that would allow a level of prioritisation to be applied. The process issue is a lack of insight into the workflows across the application lifecycle where intervention can have the most significant impact.

There is often a need to translate the security concepts within the context of the development process before a software security risk or vulnerability can be addressed sufficiently. For too many, the disconnect between the language of the security experts and those responsible for, involved in or governed by the development process, is a significant barrier.

Because I have to…not because I want to

Whilst organizations within the financial sector generally tend to put more effort into secure software strategies, and are for the most part open to addressing and fixing the issues, they are driven to do so by regulations and the regulatory bodies. Or, as one organization so succinctly stated: “If we didn’t have to comply with the PCI standards, we wouldn’t be as far along in our capabilities for addressing software security and implementing the necessary prevention measures and checks as we are.”

Accountability for all

Software security is not all about the code. The way an application is architected, the choice of platforms, tools and even methodology all affect the way the code is written. Despite this, we still resort to blaming the developer when it all goes wrong. The main reason for this is that the other roles, tools and elements of the development process are often invisible to the software owner or end-user client. This makes the developer an easy target for derision.

For a short period in the late 1980s and early 1990s we tried to make each developer a one-person team (analyst, developer, DBA, QA). As a result, placing the majority of the blame on the developer had some limited relevance, but this is not the case today. As we can see, ownership of blame when it comes to secure software delivery lies in many quarters and for many reasons. Developers are far from being the weakest link in the secure software chain of defense.

There are many layers that can be used to improve the security of software and the application lifecycle process, one of which is to go beyond code development and look to empowering developers to become more involved in, and accountable for, secure software strategies. They need to be better educated in the language of security vulnerability and the processes that lead to it, as well as being more aware of the operational and business consequences. They need support in their endeavours.

Good quality code is a must. There is still a much-needed focus on well-developed, quality code to combat the occurrence of security breaches. But as long as securing software is dealt with as a separate channel and a siloed function, abstracted from the wider development and delivery workflow and laid solely at the door of the developer, delivering secure software will remain a challenge that is hard to overcome.

Ultimately, businesses determine the risks they can accept. The job of the IT organization is to provide the language that can detail those risks in terms that business owners understand, i.e. predominantly financial and/or economic terms, both of which underpin competitive goals and growth aims. Only then will the business owner understand what the risks truly equate to, and thereby act and provide support accordingly. Understanding the security vulnerabilities within the development and delivery process from a business risk perspective will be a crucial first step in that direction.

The 6 Most Common Problems in Software Development

Today I would like to take a look at problem solving. Now what is a problem? According to the dictionary, a problem is “a matter or situation regarded as unwelcome or harmful and needing to be dealt with and overcome”. Fast.

What I will do below is discuss the 6 most common problems in software development – organized by the stages of a Software Development Life Cycle – and, obviously, tell you how Uniface helps, in a very simple way.

Problem #1: Requirements Gathering

Garbage in, garbage out… Any design is only as good as the completeness and correctness of the requirements. If the requirements are not good, the project will fail. Because people will hate the result.

Uniface helps as it is a very agile development environment; it helps developers and end users to quickly develop a prototype that further clarifies the correctness of the design. And then revise the design and quickly build the next prototype. And so on. Prototyping with Uniface is straightforward and simple.

Problem #2: Planning & Estimation

Very often, estimations of the cost and duration of projects are too optimistic, resulting in overspending and a much longer time to market. And frustration from management, something you don’t want.

Uniface helps as it simplifies app development by separating the design from the actual technical implementation(s). Building an app becomes simple, hence everything else about it becomes simpler as well. Simple.

Problem #3: Development

Moving targets, feature creep: it does happen. Requirements do change. I am not one of those religious nut-bags who will tell you “thou shalt not change the requirements, ever”. Change happens. Change is upon us. Full stop.

Uniface helps as many objects are defined in one unique place, which makes Uniface very versatile and a Uniface app very easy to change. Uniface is made for change. Because it’s so simple…

Problem #4: Testing

Bug-free software doesn’t exist. Learn to live with it…YOLO

Uniface helps as it needs 7 times less code than Java. Less code, fewer bugs. Simple.

Problem #5: Collaboration

Project Management and multi-user development are key processes that need a lot of attention.

Uniface helps, as there are no multi-user and collaboration issues. Uniface is multi-user. By design. Simple.

Problem #6: Deployment

One more thing: Uniface runs on many platforms too – from mobile to mainframe – and you can actually change from one of those platforms to another just like that. Without any redevelopment. It’s really that simple.

There’s just one simple decision you now need to make for yourself: build your next app with Uniface.