
When thinking Desktop “first” still matters

By Clive Howard, Principal Analyst, Creative Intellect Consulting

A few months back, I registered for Mobile World Congress 2015 in Barcelona. As an analyst, I went through a different registration process to the one used for regular attendees, so that the organisers can validate that someone is a legitimate industry analyst. As well as a significant amount of personal data, additional information such as links to published work and document uploads is required. Crucially, there are a number of screens to complete in the registration and accreditation process. More to the point, many different types of data must be entered – from single and multi-line text entry to file uploads. Some data (such as hyperlinks) requires cutting and pasting.

I’m sure that I could have done this using a mobile phone, but it would have taken a long time, been awkward and irritating, and probably been highly prone to mistakes. In short, I would never have considered doing something like this on my phone. Could I have used a tablet? Without a keyboard and mouse it would have been problematic, especially on a small screen. Using a tablet-only operating system might also have had its problems in places, such as uploading documents from centrally managed systems. Actually, I did use a tablet – but one connected to a 20-inch monitor, keyboard and mouse, and running Windows. In that traditional desktop-looking environment the process was relatively quick and painless.

Rumours of the desktop’s demise are greatly exaggerated

It is not just complex data entry scenarios such as this that challenge mobile devices. Increasingly I see people attach keyboards to their tablets and even phones. Once one moves beyond writing a tweet or a one-line email, many mobile devices become a pain to use. The reality of our lives, especially at work, is that we often have to enter data into complex processes. Mobile can be an excellent complement, but not a replacement. This is why we see so many mobile business apps providing only a tiny subset of the functionality found in the desktop alternative; or apps that extend desktop application capabilities rather than replicate or replace them.

One vendor known for their mobile-first mantra recently showed off a preview version of one of its best-known applications. This upgrade has been redesigned from the ground up. When I asked if it worked on mobile, the answer was no; they added (quite rightly) that no one is going to use this application on a mobile device. These situations made me think about how, over the last couple of years, we have heard relentlessly about designing “mobile first”. As developers we should build for mobile and then expand out to the desktop. The clear implication has been that the desktop’s days are over.

This is very far from the truth. Not only will people continue to support the vast number of legacy desktop applications, they will definitely be building new ones. Essentially, there will continue to be applications that are inherently “desktop first”. This statement should not be taken to mean that desktop application development remains business as usual. A new desktop application may still spawn mobile apps and need to support multiple operating systems and form factors. It may even need to engage with the Internet of Things.

The days of building just for the desktop, safe in the knowledge that all users will be running the same PC environment (down to the keyboard style and monitor size), are gone in many if not the majority of cases. Remember that a desktop application may still be a browser-based application, but one that works best on a desktop. And with the growth of devices such as hybrid laptop/tablet combinations, a desktop application could still have to work on a smaller screen that has touch capabilities.

It’s the desktop, but not as we know it

This means that architects, developers and designers need to modernise. Architects will need to design modern Service Oriented Architectures (SOA) that both expose and consume APIs (Application Programming Interfaces). SOA has been around for some time but has become more complex in recent years. For many years it meant creating a layer of SOAP (Simple Object Access Protocol) Web Services that your in-house development teams would consume. Now it is likely to mean RESTful services utilising JSON (JavaScript Object Notation) formatted data, potentially consumed by developers outside of your organisation. API management, security, discovery, introspection and versioning will all be critical considerations.
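To make the contrast concrete, here is a minimal sketch of the kind of versioned, JSON-formatted response a RESTful service might expose in place of a verbose SOAP envelope. The `customer` resource, its field names and the endpoint path are all invented for illustration, not taken from any real API.

```python
import json

# A hypothetical, versioned REST endpoint such as GET /api/v1/customers/42
# would return a compact JSON body like this, rather than a SOAP XML
# envelope. Putting the version in the path is one common versioning strategy.
def customer_response(customer_id):
    resource = {
        "id": customer_id,
        "name": "Acme Ltd",
        "links": {"self": "/api/v1/customers/%d" % customer_id},
    }
    return json.dumps(resource)

print(customer_response(42))
```

The JSON body is self-describing and trivially parsed from any platform, including iOS, which is part of why REST/JSON has displaced SOAP for externally consumed APIs.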

Developers will equally need to become familiar with working against web service APIs instead of the more traditional approach where application code talked directly to a database. They will also need to be able to create APIs for others to consume. Pulling applications together from a disparate collection of microservices (some hosted in the cloud) will become de rigueur. If developers do not have skills that span different development platforms, they will at least need an appreciation of them. One of the problems with mobile development inside the enterprise has been developers building SOAP Web Services without knowing how difficult these are to consume from iOS apps. Different developer communities will need to engage with one another far more than they have done in the past.
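As a small sketch of that shift (the response body and field names are hypothetical; no real service is called), the client code binds to the JSON contract of a service rather than to database tables:

```python
import json
from dataclasses import dataclass

@dataclass
class Customer:
    id: int
    name: str

# Instead of SELECTing from a customers table, the client parses the JSON
# body a (hypothetical) service returned. The contract is the API's JSON
# shape, not the database schema behind it.
def parse_customer(body):
    data = json.loads(body)
    return Customer(id=data["id"], name=data["name"])

sample_body = '{"id": 7, "name": "Acme Ltd"}'
customer = parse_customer(sample_body)
print(customer.name)
```

The point of the pattern is that the service owner can change storage technology entirely without breaking this client, as long as the JSON contract holds.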

Those who work with the data layer will not be spared change. Big Data will affect the way in which some data is stored, managed and queried, while NoSQL data stores will become more commonplace. The burden placed on data stores by more requests coming from more places will require highly optimised data access operations. The difference between data that is read frequently but rarely changed and data that needs to be modified will be highly significant. We are seeing this with banking apps, where certain data such as a customer’s balance is handled differently from the data involved in transactions. Data caching, perhaps in the cloud, is a popular mechanism for handling the read-only data.
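The read-mostly pattern described above can be sketched as a simple read-through cache. The balance loader, the key names and the TTL below are all illustrative assumptions, not a real banking implementation:

```python
import time

# A minimal read-through cache for read-mostly data (e.g. an account
# balance shown in a banking app). On a miss the loader hits the backing
# store; subsequent reads within the TTL are served from memory.
class ReadCache:
    def __init__(self, loader, ttl_seconds=60):
        self.loader = loader
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry and entry[1] > now:
            return entry[0]               # cache hit
        value = self.loader(key)          # cache miss: query the data store
        self.store[key] = (value, now + self.ttl)
        return value

calls = []
def load_balance(account_id):
    calls.append(account_id)
    return 1250.00

cache = ReadCache(load_balance, ttl_seconds=60)
cache.get("acct-1")
cache.get("acct-1")
print(len(calls))  # the backing store was only queried once
```

Writes (transactions) would bypass or invalidate this cache, which is exactly the read/write split the paragraph describes.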

Continuation of the Testing challenge

Testing will need to take into account the new architecture, design paradigms and potential end-user scenarios. Test methodologies and tools will need to adapt and change to do this. The application stack is becoming increasingly complex. A time delay experienced in the application UI may be the result of a microservice deep in the system’s backend. Testing therefore needs to cover the whole stack – a long-standing challenge for many tools on the market – and architects and developers will need to make sure that failures in third-party services are managed gracefully. One major vendor had a significant outage of a new cloud product within the first few days of launch due to a dependency on a third-party service whose failure they had not accounted for.
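Managing a third-party failure gracefully can be as simple as wrapping the call and degrading to a safe default. A minimal sketch, with an invented exchange-rate service standing in for the third-party dependency:

```python
# Hypothetical third-party call that is currently failing.
def fetch_exchange_rate():
    raise TimeoutError("third-party rate service unavailable")

# Wrapper: the caller gets a stale/default value and the application keeps
# working, instead of a failure deep in the backend taking down the UI.
# In a real system the except branch would also log and raise an alert.
def exchange_rate_with_fallback(default=1.0):
    try:
        return fetch_exchange_rate()
    except (TimeoutError, ConnectionError):
        return default

print(exchange_rate_with_fallback())
```

Tests for this wrapper deliberately simulate the outage, which is precisely the whole-stack, failure-injection style of testing the paragraph argues for.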

Part 2: The threat of the Start-up and how traditional development teams can look to fight back

By Clive Howard, Principal Analyst and Bola Rotibi, Research Director, Creative Intellect Consulting

Part 2 (read part 1 here)

Appreciate the skills, knowledge and assets that you have

Once an organisation, however large, adopts a culture in which the development and IT teams believe that things can be done quickly but still to high standards of quality and compliance, then it can compete against its smaller challengers. This cultural shift can be difficult, with resistance often coming from those entrenched in the old ways. Some may fear that the new processes will make them redundant. This is why organisations have to tailor new processes to their strengths, and considerations such as governance have to be taken into account.

For example, when moving to Agile it is important not to be too fanatical about a particular methodology such as Scrum. The best Agile environments are those where the approach is tweaked to suit the organisation’s skills, needs and concerns. A start-up does not have to worry about large legacy investments with years of domain knowledge built around them. An enterprise most likely will, and so that knowledge (and the people who hold it) needs to be retained. Equally, some projects may still require a more Waterfall-style approach due to the nature and scale of the systems involved. Enterprises therefore need new processes that embody Agile execution practices, but they must be sensible and balanced in their application.

Don’t forget operations

Agile will help developers add new features more quickly, but it is only part of the overall process. Moving to Continuous Integration (CI) and Continuous Delivery (CD) processes will create a development and operations environment that allows reliable and stable software to be released quickly. Embracing the concept of DevOps (the removal of artificial barriers between operations and development teams and finding a new working relationship that benefits the entire software process) will reduce the friction between the development and operations teams and so help to get new releases into production more quickly.

In addition, development teams need to make sure that speed does not sacrifice quality. Something that start-ups have learned is the importance of testing; the growth in popularity of Unit Testing and Test Driven Development (TDD) has been fuelled by this. Enterprises need to make sure that they have the necessary testing tools, capabilities and culture in place – something that has been lagging within enterprise development teams. By making testing a constant within the development process they can increase the quality of code. In traditional Waterfall environments the test phase was often squeezed, and so in reality quality and software stability were sacrificed.
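As a minimal illustration of that discipline, using Python's built-in `unittest` module (the discount function is an invented example), the tests live alongside the code and run on every build, so the test phase can never be squeezed away:

```python
import unittest

# A deliberately small function under test. Writing the tests first
# (TDD) would start from the two cases below and drive this design.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite; in a real project a CI server would invoke this on
# every commit rather than it running inline.
unittest.main(exit=False, argv=["discount_tests"])
```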

All that glitters is not gold

Finally there is the question of technology. Start-ups have become synonymous with newer technologies such as PHP, Ruby on Rails, Django and a host of other platforms, frameworks and services. They tend to gravitate towards these because they believe they allow them to work more quickly and so spend more time on concerns such as the user experience of the product. In reality some of these are immature, and the result is more time spent firefighting than making the product better. Enterprises often deal in legacy software and far larger usage requirements than many start-ups face initially. A MySQL database may work great with a certain amount of data, but as Facebook discovered, at scale it can pose challenges. So don’t throw out the Oracle or IBM database just yet.

That does not mean that technology is not an issue in the enterprise. With applications now needing to be deployed to an ever-increasing number of platforms and devices, the underlying technology choices will impact speed of delivery. Having a solution that places as much logic as possible into a single codebase, utilising a common language, skillset and tools, will bring great time and cost savings. As many organisations are constantly discovering, having to maintain multiple codebases in different languages and tools that effectively do the same thing is increasingly time and cost intensive. Therefore approaches such as hybrid mobile development or model-driven development will reap rewards, especially over time.

Part 2: Secure software delivery: who really owns it?

Guest contributors, Bola Rotibi and Ian Murphy from analyst firm Creative Intellect Consulting

Read Part 1 of this blog here.

Fighting back: An important role for Quality Assurance (QA) and testing

Inside the IT department, many see QA as being responsible for catching problems. As a result they are often seen as the least prone to errors in the way that they work, but they are certainly not blameless. What is needed is a more integrated approach to the way that testing and Quality Assurance are carried out, and processes where the teams can contribute to the discussion for better security and for reducing the bug count earlier.

One of the weaknesses in QA and testing has been the “last minute” approach that many organizations adopt. When projects are running late, QA is the first thing to get squeezed in terms of time. There is also a significant cost associated with QA and testing, with test tools often seen as too expensive and too complex to understand and implement. However, the costs of poorly developed software must surely outweigh the costs of getting it right? Besides which, the costs have rapidly decreased with new players on the scene and the availability of on-demand, elastic delivery models of Cloud and Software as a Service (SaaS) offerings. There are also vendors with tools and services that look to improve and simplify the testing process across heterogeneous infrastructure and application platforms. A security perspective added to the testing strategy of such solutions will do much to address the security holes that arise from the complex environments many applications now operate across.

Clearly, the earlier software security is addressed the better. Addressing security from the outset should have significant impact on the security quality further downstream. Strategies that look to promote continuous testing throughout the process, especially for security issues, will help strengthen the overall quality goals. Improving the foundations is a good basis for secure software delivery. Team review processes and handover policies also serve as good security check points as does the build and source configuration management process where continuous integration can be employed.

A better process for securing software required:  10 guiding points

To minimise the risk of insecure software entering the enterprise software stack, there is a need to rethink how the software development process works. This is not just about the developer but about the entire business: architects, developers, operations, security and even users.

  1. Board level commitment: No enterprise wide process can be effective without board level commitment. Any security breach resulting in the loss of money, data or intellectual property (IP) will raise questions as to governance. This is a role owned by the board and every C-level executive must engage and understand the risks.
  2. Secure enterprise: At the heart of any secure software process is an enterprise wide secure by design philosophy. This requires an understanding of how software will work and a solid security framework for data and access. Responsibility for this lies with architects, security and operations.
  3. Processes: A lack of proper process is the wide open door a hacker is looking for. Testing, version control, change control, staging, patch management – these are all essential processes that have to be implemented to create a secure environment.
  4. Encryption: All data inside the enterprise should be encrypted irrespective of where it is stored. Encryption of data in transit is also a requirement. Software must be able to ensure that when it uses data, it does not then store it unencrypted, even in temporary files.
  5. Secure architecture: Software architects are responsible for making sure that all software is designed securely. Validate all technical and functional requirements and ensure that the software specification for the developers is clear and unambiguous.
  6. Unit tests: This is an area that has grown substantially over recent years but one where more needs to be done. Security teams need to engage fully with developers and software testing teams to identify new risks and design tests for them. The ideal scenario would be for the security team to have their own developers focused on creating unit tests that are then deployed internally.
  7. Maintain coding skills: One of the reasons that software is poorly written is a failure to maintain coding skills. This is often caused by the introduction of new tools and technologies and the failure to establish proper training for developers. Providing “how-to” books or computer based training (CBT) is not enough. Developers need training and it should be part of an ongoing investment and quality improvement programme.
  8. Test, test, test: When a project runs late, software testing is often reduced or abandoned in favour of “user testing”. The problem with this is that proper test programmes look at a much wider range of scenarios than users. In addition, once software is in use, even as a test version, it rarely gets completely withdrawn and fixed properly. Instead it gets intermittent and incomplete patching which creates loopholes for hackers.
  9. Implementation: As software delivery timescales reduce, there is pressure on operations to reduce their acceptance testing of applications. Unless operations are part of the test process, unstable applications will end up in production exposing security risk. This is an area that has always been a bottleneck and poorly planned attempts to reduce that bottleneck increase the risk to enterprise data.
  10. Help desk should be part of software development: This is an area that is rarely dealt with properly. Most help desk interaction is about bug and issue reporting with a small amount of new feature work. Help desk has a view across all enterprise software and has an operational and user view of risk. Using that data to reduce the risk of making repetitive mistakes will improve software quality.

The above is not an exhaustive list of steps that can be taken to harden the software security process. However, it does provide a framework against which an enterprise can assess where it has weaknesses.
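Several of the points in that framework lend themselves to small, testable code. As one stdlib-only sketch touching points 4 and 6 (the function names and PBKDF2 parameters are illustrative; encrypting data at rest or in transit would use a vetted library such as `cryptography`), secrets that only need to be verified, like passwords, should be stored as salted, iterated hashes and compared in constant time:

```python
import hashlib
import hmac
import os

# Never store a secret in the clear: keep only a random salt and a
# slow, iterated PBKDF2 hash. Iteration count is an illustrative choice.
def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

# Constant-time comparison (hmac.compare_digest) avoids timing attacks.
def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))
print(verify_password("guess", salt, digest))
```

Unit tests for functions like these are exactly the kind of security-team-authored tests point 6 calls for.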

Are there differences between small and large software code providers?

A worrying issue voiced by small consultancies attending the CIC secure development forum was that one of the biggest challenges to software security came from the fact that almost everyone is expecting someone else to tackle and deal with any of the concerns and requirements.

There is clearly a divide between what developers in smaller firms can expect to achieve from a secure software perspective, against those within larger teams and larger enterprise organizations.

A small consultancy that does not specialize in software security can only expect to focus on what it can do within the context of its remit if clients are not willing to pay for the “bank” grade security they might desire, or that their marketing messages all too often claim. Such firms rarely use any tools or processes over and above the normal mechanics of their job or that of their competitors. This doesn’t mean they are entirely bereft of secure software principles or education. On the contrary, a number of these firms discuss security processes and issues regularly (quarterly for the most part) and review policies annually. That said, some expressed a desire to do more to provide better education and training within a realistic and pragmatic framework for continual improvement. Current governance models are loosely based on internal standards that developers are encouraged to follow. Quality is checked and maintained through peer reviews. But it is not enough.

The development teams within large organizations are not without such challenges either because, whilst code reviews are carried out, they are not always up to standard. Worse still is the lack of knowledge often found within the development team of tools that exist to help support the delivery of secure software applications across a broad spectrum of deployment environments. Nor is this helped by the fact that equipping all the developers of a large team with software security tools, such as static analyzers, can be significantly costly and time-consuming to implement and train for.

The communication hurdle

The communication challenge is no smaller for larger enterprise organizations, with the resources to employ and assign dedicated software security specialists, than it is for smaller ones without the expertise. At the heart of the communication issue is language and process: the language used by software security roles is not couched in terms recognizable to the software development and delivery teams. Nor is it clearly couched in terms of business risk or business issues that would allow a level of prioritisation to be applied. The process issue is one of a lack of insight into the workflows across the application lifecycle where intervention can have the most significant impact.

There is often a need to translate the security concepts within the context of the development process before a software security risk or vulnerability can be addressed sufficiently. For too many, the disconnect between the language of the security experts and those responsible for, involved in or governed by the development process, is a significant barrier.

Because I have to…not because I want to

Whilst organizations within the financial sector generally tend to put more effort into secure software strategies and are for the most part open to addressing and fixing the issues, they are driven to do so by regulations and the regulatory bodies. Or, as one organization so succinctly stated: “If we didn’t have to comply with the PCI standards, we wouldn’t be as far along in our capabilities for addressing software security and implementing the necessary prevention measures and checks as we are.”

Accountability for all

Software security is not all about the code. The way an application is architected, the choice of platforms, tools, and even methodology, all impact the way the code is written. Despite this, we still resort to blaming the developer when it all goes wrong. The main reason for this is because the other roles, tools, and elements of the development process are often invisible to the software owner or end-user client. This makes the developer an easy target for derision.

For a short period in the late 1980s and early 1990s we tried to make each developer a one-person team (analyst, developer, DBA, QA). As a result, placing the majority of the blame on the developer had some limited relevance, but this is not the case today. As we can see, ownership of blame when it comes to secure software delivery lies in many quarters and for many reasons. Developers are far from being the weakest link in the secure software chain of defense.

There are many layers that you can use to improve the security of software and the application lifecycle process, one of which is to go beyond code development and look to empowering developers to become more involved in, and accountable for, secure software strategies. They need to be better educated in the language of security vulnerability and the processes that lead to it, as well as being more aware of the operational and business consequences. They need support in their endeavours.

Good quality code is a must. There is still a much-needed focus on well-developed, quality code to combat the occurrence of security breaches. But so long as securing software is dealt with as a separate channel, a siloed function abstracted from the wider development and delivery workflow and laid solely at the door of the developer, delivering secure software will be a challenge hard to overcome.

Ultimately, businesses determine the risks they can accept. The job of the IT organization is to provide the language that can detail those risks in terms that business owners understand, i.e. predominantly financial and economic terms, both of which underpin competitive goals and growth aims. Only then will the business owner understand what the risks truly equate to, and thereby act and support accordingly. Understanding the security vulnerabilities within the development and delivery process from a business risk perspective will be a crucial first step in that direction.

The 6 Most Common Problems in Software Development

Today I would like to take a look at problem solving. Now what is a problem? According to the dictionary, a problem is “a matter or situation regarded as unwelcome or harmful and needing to be dealt with and overcome”. Fast.

What I will do below is discuss the 6 most common problems in software development – organised by stage of the Software Development Life Cycle – and, obviously, tell you how Uniface helps in a very simple way.

Problem #1: Requirements Gathering

Garbage in, garbage out… Any design is as good as the completeness or correctness of the requirements.  If the requirements are not good, the project will fail. Because people will hate the result.

Uniface helps as it is very agile; it helps developers and end-users quickly develop a prototype that further clarifies the correctness of the design, revise the design, quickly build the next prototype, and so on. Prototyping with Uniface is straightforward and simple.

Problem #2: Planning & Estimation

Very often, estimations of the cost and duration of projects are too optimistic, resulting in overspending and a much longer time to market. And frustration from management, something you don’t want.

Uniface helps as it simplifies app development by separating the design from the actual technical implementation(s). Building an app becomes simple, hence everything else about it becomes simpler as well. Simple.

Problem #3: Development

Moving targets, feature creep: it does happen. Requirements do change. I am not one of those zealots who will tell you “thou shalt not change the requirements, ever”. Change happens. Change is upon us. Full stop.

Uniface helps as many objects are defined in one unique place. Which makes Uniface very versatile and a Uniface app very easy to change. Uniface is made for change. Because it’s so simple…

Problem #4: Testing

Bug-free software doesn’t exist. Learn to live with it…YOLO

Uniface helps as it needs 7 times less code than Java. Less code, fewer bugs. Simple.

Problem #5: Collaboration

Project Management and multi-user development are key processes that need a lot of attention.

Uniface helps, as there are no multi-user and collaboration issues. Uniface is multi-user. By design. Simple.

Problem #6: Deployment

One more thing: Uniface runs on many platforms too – from mobile to mainframe – and you can actually change from one of those platforms to another just like that, without any redevelopment. It’s really that simple.

There’s just one simple decision you now need to make for yourself: build your next app with Uniface.

Laying the Groundwork for Solid Foundations: Attributes Supporting Accelerated Delivery

Guest contributor, Bola Rotibi from analyst firm Creative Intellect Consulting

Read Part 1 here
Read Part 2 here

Laying the groundwork for solid foundations: Attributes supporting accelerated delivery

So “how” does one go about laying the foundations for accelerated delivery? Of course it is not always easy when one already has processes and tools in place, since changing established (or, to put it more succinctly, ingrained) habits can be challenging.

Our research was able to determine the core focus areas that must be addressed in order to support, manage and govern the workflow for accelerated or continuous delivery so that it can be successfully executed. These centred on having strong process foundations. This means support for best-practice operational processes such as Application Lifecycle Management (ALM), which addresses the governance of the application delivery process. It also means employing Agile development practices for planning and managing the delivery process and ensuring the delivery of working applications. Collaboration between key stakeholders, particularly client ones, will help guarantee that the outcome is in line with the client’s expectations and needs at the time.

The process foundations require a focus on DevOps to ensure a smooth transition and handover from development to operations. This means having in place systems as well as processes that support the transition and handover, to ensure a level of trust on both sides: trust in the environment that the development team will be deploying the application into, and trust in the code or application change that operations will be receiving from development. This is why we found Continuous Test, Build and Integration vital for ensuring a level of automation and validation and reinforcing the trust circle. It is also why ITIL/ITSM processes proved to be a common starting point for many operational departments looking to address DevOps.

Strong process governance is another core requirement. This is fundamentally about having in place traceability of actions taken, a level of depth to that traceability, and completeness of audits, in order to ensure rollback can occur. The challenge we have found within many organizations is that there is not always sufficient traceability to reconstruct the same environmental state for effective rollback. Last year, three European financial institutions experienced highly public failures during their application updates that later transpired to be down to insufficient traceability.

The other two attributes, Speed Control and Workflow Orchestration, are concerned with controlling the speed of throughput. It is why Agile organization, automation support, tools, application and service knowledge, and integration and interoperability support feature strongly here: they are focused on improving and unifying the flow and throughput of delivery.

In short, the attributes for supporting faster and more frequent releases centre on strong process quality and on automation support and control. Process foundations and process governance together indicate the strength of Process Quality, i.e. how organizations validate, approve, apply and regulate their processes, but also the completeness of traceability of actions, tasks and events. The flip side of the attributes supporting accelerated delivery is the inhibitors (complexity, risk tolerance, culture, mind-set etc.) that have a negative impact on the delivery process.

Ten guiding points for further consideration

The research we conducted identified 10 guide points to establishing an environment geared for Accelerated Delivery:

  • A commitment to Business Agility must be comprehensive
  • Assessment, but in particular “risk” assessment must be part of the decision process
  • Solid processes underpin a strategy for continuous and accelerated delivery
  • Employing processes that help to bridge and improve the interactions between development and operations team (such as ITIL/ITSM) is an important starting point
  • Raise the bar for change and asset management
  • Get a firm handle on the complexity profile within the IT organization
  • Identify any build and integration bottleneck and consider the wider implications
  • Rethink the release process
  • Testing needs to be comprehensive and continually updated
  • Automation and governance not only matter, they are vital steps for success

The full report of our research – “CIC Guide: Continuous Delivery Realization – Enterprise DevOps realities and a path towards Continuous Delivery” – can be found on the CIC website at www.creativeintellectuk.com. The report deals with the question of how accelerated delivery can work in the “Change the organization / Run the organization” dichotomy that pervades most large institutions. For true success it needs end-to-end design and agile implementation.

Is accelerated delivery for your organization?

It would be wrong to suggest that all organizations can move at the same speed when it comes to developing and deploying applications; nor is an improved release cadence right for every application.

A tax application that has to deal with complex tax codes may not suit high levels of automation. It may require more human intervention and staging points to ensure that the deployed application delivers the correct calculation and works in the right way. The consequences of a mistake could be financially disastrous. There may also be very good security reasons for not improving the release cycle of an application process.

Ultimately, the release masters who can truly deliver faster, more often and with greater stability have invested in building an environment where risk is fully understood and effectively balanced against the need for accelerated delivery. The organizations of these champions stand to gain substantial benefits from greater flexibility and adaptability to meet the needs of the business, without risking the collapse of the IT environment and team.