
Make some noise!

In my previous blog post I told you about my worries. I have been thinking about it since, and even talked to a few of you about it. It reminded me of something that happened to me once. A few years ago I was hired by an organisation; I was just one of the Uniface pros. Besides Uniface they used another development platform. I witnessed something very interesting there; let me share it with you.

Both disciplines had their own room. In one room all went well: a dozen men worked in silence and, behind a closed door, achieved their goals on time and within budget. The applications they maintained were very stable and performed as expected. The other room, on the other hand, was very lively; the walls were covered with all kinds of merchandise. The young developers had all kinds of technical issues, played arcade games in their breaks and drank beer after work.

Can you guess in which room the Uniface developers worked? Easy one, I know. If you can guess the next answer, I’ll buy you a beer. Which platform was preferred by the management?

I am afraid this is going to cost me a fortune in beer. We all know the answer. Of course, the other guys did a great job. I am, like most of us, too negative about them and the tooling they use. They were less productive because of the tools they used, not because of who they were. But did they win?

Change the point of view. Let’s say you are the management of a company. The company depends completely on a few Uniface applications, very stable and low on maintenance costs. In the near future you expect major changes in the organisation’s strategy, and the markets are changing rapidly. You need to invest in new applications and/or change the existing ones. Are you going to use Uniface or go for something completely new? Choosing Uniface is the rational choice, isn’t it? Imagine: you have all these experienced guys (sorry ladies, but this is a man’s world…). But you never hear them. Sometimes you wonder if they even exist! How do you know whether they use modern techniques? And what if you need a dozen more of these pros? Where and how can you find them? When you consult google.com you’ll find all kinds of software houses offering support for that other tool, while for Uniface all you find in the top 10 is Uniface itself.

If I had to give advice to this management, my advice would be to choose the other tool, regardless of which tool that is. As a Uniface developer myself, I can tell you this hurts a lot. But it’s just a rational thing. Or isn’t it?

This reminds me of something. Once, in a small village in the Netherlands, the only shop closed down. All the inhabitants did their grocery shopping in the large supermarket in the nearby city. Quite normal; I guess you see this everywhere in the world. The next day an alderman announced in the newspaper that it would be a good idea not only to close the small shop, but also to close the entire village: if the inhabitants loved doing their shopping in the city, why not go and live there? This action did not save the local shop. But what if this one man had managed to create a kind of movement? Let’s say he had managed to motivate some entrepreneurial people. With this small group they could have created new business for the local store: instead of competing with the large supermarket, they could have focused on their own strengths. Sometimes you need the help of a community. Today, the strength is the community!

All successful tools I know have communities. Some very successful tools are even created by their communities! A product community can be a partner or a critic for the company, but they always fight on the same side. But where is the Uniface community? All I see is a great product and a website (uniface.info) with lots of fans. That is not a community! It is something created by Uniface. I want to create a real Uniface community. I truly believe we have the strength to unite and make a difference!

You can either participate or wait behind a closed door (I believe I can hear some melancholic seventies music). Let’s make some noise… Let the world know we are here… In my next blog I will share my ideas and plans with you all. Do you have ideas? Please contact me 🙂

Windows XP – Another Nail in the Coffin

I recently read this article about Chrome 50 stopping support for some older operating systems, and the mention of Windows XP caught my eye. 

From a Uniface perspective, we stopped supporting Windows XP in May 2014. Purely from a technology perspective, it freed us up with regard to choices on MS Visual Studio and even how to implement certain functionality. I’m sure there is still code in the Uniface source that states ‘if Windows XP’…!

Getting out and about, talking to customers, I’ve had a few conversations about Windows XP, mainly in the context of browser support and Internet Explorer 7; on the big WWW, you have pretty well no control over which OS and which browser an end user runs. (Although I do remember this article about an Australian online retailer who was going to add an IE tax to their transactions.)

Something that has come up during conversations is customers who are doing business in China, where there is still a significant amount of Windows XP use. I’m assuming this is related to how easy it was to bypass the MS licensing model, and to the availability of older-specification hardware that might struggle to run a newer version of Windows.

I’m expecting that Chrome dropping support for Windows XP will start to accelerate the move away from it, and I’m guessing some hardware manufacturers will be rubbing their hands in anticipation of a peak in new hardware sales, while the recyclers prepare for more obsolete hardware to be stripped for precious metals.

And on a personal note, it appears I need to buy a new Mac for use at home, as I’m also impacted by Chrome 50 not supporting my version of Mac OS X!

Modelling: Essential Not Optional (Part 2)

By Ian Murphy, Principal Analyst and Bola Rotibi, Research Director, Creative Intellect Consulting

Read Part 1 here.

Complexity is inherent in our IT DNA

One of the goals of IT for decades has been to reduce the complexity of the systems it writes and maintains. There are several reasons for this: users want solutions faster, budgets are shrinking and complexity fuels failure.

Agile development, automation, Cloud computing and DevOps are all helping IT deliver applications faster and at a lower cost. This is positive news for the business. But what about the rising issue of complexity?

Unfortunately, complexity is inherent in the IT systems that are used to run businesses. Stock control needs to be integrated with sales order processing, which in turn is integrated into accounting systems. Call centre teams need access to these same systems to deal with customer queries. Online shops must be able to create new customers, display stock levels, take orders, and pass data to fulfilment systems. These are just some of the very basic systems that companies use.

We are now in a mobile world where applications are required to run in web browsers or be written for multiple operating systems and classes of devices. These devices are not owned exclusively by the business; instead, they are increasingly the property of individuals.

This means that any application deployed on these devices is not just running in the context of a controlled environment; it has to coexist alongside other applications that IT has no knowledge of or control over. The end result is an incredibly complex set of security and performance issues that IT cannot fully know about yet has to write solutions to deal with.

A further complication is that security is a constant challenge. The rise of malware, the ability of hackers to penetrate systems seemingly at will, the risk to corporate data and the surge of compliance requirements are never ending.

Modelling has a new relevance

There is a new relevance for modelling in IT systems. Let’s take the example of an application designed to help an insurance sales team.

The requirement from the sales force is for an application that runs on their tablets and smartphones, is capable of validating user details, and can deliver quotes on the spot that customers can sign up to.

From an IT perspective the operating system is unknown. The local storage and security capabilities of the devices are unknown. The application needs to integrate with customer systems, which means it has to do data validation at the point of entry. Information gathered needs to be risk assessed in order to create a meaningful policy and payment schedule. If there are potential problems, the application needs to be able to pass all the data to an underwriter in order to get a response.

This is just a quick list of potential issues and at every point there will be integration with other systems and the need to pass data around.

A computer model of this system might be very simple to begin with: a mobile device connecting to the customer system, a check for an existing or new customer, data validation, policy risk assessment and then the payment schedule being set.
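To make that flow concrete, here is a minimal sketch of the model’s steps as runnable Python. Everything in it is a hypothetical illustration of the sequence described above, not a real system: the function names (request_quote, assess_risk), the fields (vehicle_value, postcode) and the risk cut-off are all assumptions; the point is the order of the steps and the integration points.

```python
from dataclasses import dataclass

# Hypothetical, in-memory stand-in for the back-end customer system.
KNOWN_CUSTOMERS = {"C-100": {"name": "A. Example"}}

@dataclass
class Quote:
    customer_id: str
    monthly_premium: float   # payment schedule derived from the assessed risk
    referred: bool = False   # True when the case must go to an underwriter

def validate(details: dict) -> list:
    """Data validation at the point of entry."""
    required = ("customer_id", "postcode", "vehicle_value")
    return ["missing field: " + field for field in required if field not in details]

def assess_risk(details: dict) -> float:
    """Crude risk score; a real system would call a rating engine."""
    return 0.02 if details["vehicle_value"] < 20000 else 0.05

def request_quote(details: dict) -> Quote:
    # 1. Mobile device connects to the customer system: existing or new customer?
    customer_id = details.get("customer_id", "NEW")
    if customer_id not in KNOWN_CUSTOMERS:
        KNOWN_CUSTOMERS[customer_id] = {"name": details.get("name", "unknown")}

    # 2. Validate the captured data before anything else happens.
    errors = validate(details)
    if errors:
        raise ValueError("; ".join(errors))

    # 3. Assess the policy risk; problem cases are referred to an underwriter.
    risk = assess_risk(details)
    if risk > 0.04:
        return Quote(customer_id, monthly_premium=0.0, referred=True)

    # 4. Set the payment schedule from the assessed risk.
    return Quote(customer_id, monthly_premium=round(details["vehicle_value"] * risk / 12, 2))

print(request_quote({"customer_id": "C-100", "postcode": "1234AB", "vehicle_value": 15000}))
```

Even at this level of detail, the sketch makes the integration points visible: every call out of request_quote is somewhere the model needs further questions asked.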

This simple model enables key areas to be highlighted for further investigation. For example, does this have to be real-time? What performance is required? Can it be done over 3G or does it need a WiFi link? How long does it take to validate customer details? What happens if an underwriter is needed to make an assessment? How many users can the external gateways support at any point in time?

In short, the model encapsulates the five key points that models in general must deliver in order to be effective. The problem has been abstracted to a mobile device connecting to core systems. Understanding is achieved by all parties because the abstraction is clean and contains just enough detail to see where potential problems could occur. The model is accurate because it describes exactly what is needed and the key steps that are involved. The identification of the external gateway as a bottleneck, and the time required to carry out key tasks, means that predictions can be made. Finally, there has been little to no cost at this point in establishing the model.

This is an overly simple example of a system with limited integration points but it demonstrates how quickly a model can begin to highlight areas of concern and how they can be further addressed. There would be no reason why the data validation couldn’t be modelled in more detail to understand what was being gathered and how it would be validated. The same is true of the process that creates the policy and determines the payments.

Modelling: relevant and crucial for Cloud computing

One of the major impacts on the IT landscape has been the arrival of Cloud computing. Systems may exist in a private cloud, a public cloud or be split over the two in a hybrid cloud.

In all three cases there is a need to understand how an application will be architected to take advantage of the capabilities that Cloud computing offers. Six key questions surrounding any application deployment to the Cloud are:

  • Where will application components sit?
  • Where will data be stored?
  • What is required by data protection and compliance laws?
  • What level of performance and scalability does the Cloud provide?
  • What security and encryption will be used?
  • What cost savings do the different cloud models offer?

Modelling allows companies to begin to address all of these questions. At the very basic level it will show application components and highlight potential integration challenges. For data, it will enable compliance teams to determine whether the company has a legal problem. Security teams can begin to identify what is needed to meet corporate security needs.

Without a model, a lot is taken on trust and people fail to properly identify challenges. Many companies are beginning to realise that there is far more complexity in moving applications to Public and Hybrid Cloud than they would ever have realised. A model would enable them not only to see what was moving, but also to let subject matter experts ask questions about integration and security and suggest what further, more detailed models are required.

Model or be damned

There is no excuse for not modelling IT systems, and in particular software developments. The five points of effectiveness are clear and easy to apply.

The key is in keeping it simple, using models to explore potential challenges and not overcomplicating things. Many organisations will ultimately discover that they don’t need a new model for every application and system because the similarities at the model level are very high. For example, mobile applications share a lot of common elements. Where they differ is at the accuracy and prediction stages.

Those companies that use models will identify problems sooner, reduce cost, and understand complexity. They will also open up opportunities for greater reuse and flexibility. In an age where business agility is paramount, modelling enables a company to deliver what users want, faster and with less risk.

Modelling: Essential Not Optional (Part 1)

By Ian Murphy, Principal Analyst and Bola Rotibi, Research Director, Creative Intellect Consulting

As a relatively new engineering discipline, software development has been looking for a way to improve the quality and cut the cost of what it does. There are good reasons for this: multi-tier computing systems can take hundreds of man-years of effort and cost tens of millions of dollars to build. These systems have multiple points of integration with other pieces of software. Being able to model this complexity helps architects, developers and system engineers to examine the application before they start and make decisions as to how to approach the project.

The use of models in engineering disciplines has been going on for millennia. At a high level, models provide a view that abstracts away unnecessary detail to deliver an uncluttered picture of a solution for the respective stakeholder audiences. Models are used to create a proof of concept that allows architects and engineers to study complex problems before time and money are committed to construction. For example, modelling a bridge over a canyon would enable architects to see how the bridge would look and highlight any issues with the materials used and the terrain.

There are, of course, varying levels of detail that different model layers will then go on to present, depicting all relevant artefacts along with their relationships to, and dependencies on, each other.

You might think, therefore, that the use of modelling inside IT departments would be rife, with significant attention and investment paid to its use. Sadly, for too many organisations this is not the case. Yes, lots of teams will use models and modelling in some aspects of the development and delivery process, but it will not be consistently applied or formally implemented. Despite efforts to drive wider use of modelling in software development, the companies that actually do modelling as a core and formal function of their development and delivery processes are few and far between. So why is that?

Common modelling failures

There are three common failures of modelling that lead to models being dismissed as unusable:

  • The first is that the model offers too simplistic a view for the different stakeholders involved. It doesn’t provide the right level of basic information for the various viewpoints required, with the result that little to no understanding of any problems can be gained from it.
  • The second is that the model is too detailed, making it hard to extract the relevant information for a particular viewpoint easily and quickly, and making it hard to understand problems from a higher perspective. It might seem that when modelling something as complex as a multi-tier CRM product there is no such thing as too much detail, but there is.
  • The third is that models are too incomplete to allow the automatic transformation of the visual representation into executable, working code.

Modelling 101: 5 points of effectiveness  

Ultimately, the main objective of a model is to communicate a design more clearly: allowing stakeholders to see the bigger picture, i.e. the whole system, and to assess different options, costs and risks before embarking on actual construction. To achieve this, there are five key characteristics that a model must convey:

  • Abstraction: A model must allow you to abstract the problem into a simple, meaningful representation. If you look at architectural building models, they use blocks to represent buildings. In an IT sense, the initial model will simply show connections between systems but not the underlying coding detail.
  • Understanding: Having abstracted the problem, what the model must then convey is sufficient information and detail in a way that is easy to understand for the different audiences concerned.
  • Accuracy: If a model is not an accurate representation of the problem then it is useless. An accurate model provides a useful starting point for reaching a common view.
  • Prediction: A model alone cannot necessarily provide all the key details of a problem. In modelling a banking system there would be a requirement to use additional tools to predict workload and throughput. This would have to be done in order to select the right hardware, design the correct network architecture and ensure that the software can scale to the predicted capacity demand. One common failure of many IT systems is that they are under-scaled; if properly modelled, with a clear prediction step, this problem would be reduced (a small illustrative calculation follows this list).
  • Implementation and execution cost: Models need to be cheap to produce and easy to change, especially in comparison to the cost of the product or solution delivered by the model. 
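As a small illustration of the prediction point above, here is the kind of back-of-the-envelope capacity calculation such a step would feed into. The numbers are illustrative assumptions, not measurements; the only real content is Little’s Law (requests in flight = arrival rate × time spent in the system).

```python
# Illustrative prediction step for the banking example; all numbers are assumed.
arrival_rate = 200.0   # requests per second at the expected peak
service_time = 0.25    # seconds an average request spends in the system

# Little's Law: average number of requests in flight = arrival rate * time in system.
in_flight = arrival_rate * service_time            # 50 concurrent requests

per_server = 8         # concurrent requests one application server handles comfortably
headroom = 1.5         # 50% safety margin for bursts and failover

servers_needed = -(-in_flight * headroom // per_server)   # ceiling division
print("about %.0f concurrent requests -> %.0f servers" % (in_flight, servers_needed))
```

Even a calculation this crude forces the question of whether the external gateways and the hosting budget can actually support the predicted demand, which is exactly what the prediction characteristic asks of a model.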

Part 2: Secure software delivery: who really owns it?

Guest contributors: Bola Rotibi and Ian Murphy from analyst firm Creative Intellect Consulting

Read Part 1 of this blog here.

Fighting back: An important role for Quality Assurance (QA) and testing

Inside the IT department, many see QA as being responsible for catching problems. As a result QA teams are often seen as the least prone to errors in the way that they work, but they are certainly not blameless. What is needed are more integrated approaches to the way that testing and Quality Assurance are carried out, and processes where the teams can contribute to the discussion on better security and on reducing the bug count earlier.

One of the weaknesses in QA and testing has been the “last minute” approach that many organizations adopt. When projects are running late, QA is the first thing to get squeezed in terms of time. There is also a significant cost associated with QA and testing, with test tools often seen as too expensive and too hard and complex to understand and implement. However, the costs of poorly developed software must surely outweigh the costs of getting it right? Besides which, the costs have decreased rapidly with new players on the scene and the availability of on-demand, elastic delivery models such as Cloud and Software as a Service (SaaS) offerings. There are also vendors with tools and services that look to improve and simplify the testing process across heterogeneous infrastructure and application platforms. A security perspective added to the testing strategy of such solutions will do much to address the security holes that result from the complex environments many applications now operate across.

Clearly, the earlier software security is addressed the better. Addressing security from the outset should have a significant impact on security quality further downstream. Strategies that promote continuous testing throughout the process, especially for security issues, will help strengthen the overall quality goals. Improving the foundations is a good basis for secure software delivery. Team review processes and handover policies also serve as good security checkpoints, as does the build and source configuration management process, where continuous integration can be employed.
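As one hedged illustration of that kind of checkpoint, the sketch below is a tiny gate that a continuous integration stage could run to fail the build when obvious secrets have been committed to the source tree. The patterns and the src path are assumptions, and a real pipeline would use a dedicated scanner; the sketch only shows where such a check sits in the workflow.

```python
# Minimal CI gate: fail the build when obvious secrets appear in the source tree.
import re
import sys
from pathlib import Path

SUSPICIOUS = [
    re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"AKIA[0-9A-Z]{16}"),   # AWS-style access key id
]

def scan(root="src"):
    root_path = Path(root)
    if not root_path.is_dir():          # nothing to scan in this sketch
        return 0
    findings = 0
    for path in root_path.rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for pattern in SUSPICIOUS:
            for match in pattern.finditer(text):
                print("%s: suspicious content: %s" % (path, match.group(0)[:40]))
                findings += 1
    return findings

if __name__ == "__main__":
    sys.exit(1 if scan() else 0)        # a non-zero exit code fails the CI stage
```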

A better process for securing software is required: 10 guiding points

To minimise the risk of insecure software entering the enterprise software stack, there is a need to rethink how the software development process works. This is not just about the developer but about the entire business: architects, developers, operations, security and even users.

  1. Board level commitment: No enterprise wide process can be effective without board level commitment. Any security breach resulting in the loss of money, data or intellectual property (IP) will raise questions as to governance. This is a role owned by the board and every C-level executive must engage and understand the risks.
  2. Secure enterprise: At the heart of any secure software process is an enterprise wide secure by design philosophy. This requires an understanding of how software will work and a solid security framework for data and access. Responsibility for this lies with architects, security and operations.
  3. Processes: A lack of proper process is the wide open door a hacker is looking for. Testing, version control, change control, staging, patch management – these are all essential processes that have to be implemented to create a secure environment.
  4. Encryption: All data inside the enterprise should be encrypted irrespective of where it is stored. Encryption of data in transit is also a requirement. Software must be able to ensure that when it uses data, it does not then store it unencrypted, even in temporary files (see the test sketch after this list).
  5. Secure architecture: Software architects are responsible for making sure that all software is designed securely. They must validate all technical and functional requirements and ensure that the software specification for the developers is clear and unambiguous.
  6. Unit tests: This is an area that has grown substantially over recent years but one where more needs to be done. Security teams need to engage fully with developers and software testing teams to identify new risks and design tests for them. The ideal scenario would be for the security team to have their own developers focused on creating unit tests that are then deployed internally.
  7. Maintain coding skills: One of the reasons that software is poorly written is a failure to maintain coding skills. This is often caused by the introduction of new tools and technologies and the failure to establish proper training for developers. Providing “how-to” books or computer based training (CBT) is not enough. Developers need training and it should be part of an ongoing investment and quality improvement programme.
  8. Test, test, test: When a project runs late, software testing is often reduced or abandoned in favour of “user testing”. The problem with this is that proper test programmes look at a much wider range of scenarios than users. In addition, once software is in use, even as a test version, it rarely gets completely withdrawn and fixed properly. Instead it gets intermittent and incomplete patching which creates loopholes for hackers.
  9. Implementation: As software delivery timescales reduce, there is pressure on operations to reduce their acceptance testing of applications. Unless operations are part of the test process, unstable applications will end up in production exposing security risk. This is an area that has always been a bottleneck and poorly planned attempts to reduce that bottleneck increase the risk to enterprise data.
  10. Help desk should be part of software development: This is an area that is rarely dealt with properly. Most help desk interaction is about bug and issue reporting with a small amount of new feature work. Help desk has a view across all enterprise software and has an operational and user view of risk. Using that data to reduce the risk of making repetitive mistakes will improve software quality.
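As a small illustration of points 4 and 6, here is the kind of security-focused unit test a security team might contribute: it checks that a helper never writes plaintext to a temporary file. The helper write_working_copy and its API are hypothetical; the encryption primitive (Fernet) comes from the widely used Python cryptography package.

```python
# A sketch of a security-focused unit test (point 6) enforcing the
# "no unencrypted data at rest, even in temporary files" rule (point 4).
# write_working_copy is a hypothetical helper; Fernet comes from the
# Python "cryptography" package (pip install cryptography).
import os
import tempfile
import unittest
from pathlib import Path

from cryptography.fernet import Fernet

def write_working_copy(plaintext: bytes, key: bytes, directory: str) -> Path:
    """Persist a temporary working copy of some data, but only in encrypted form."""
    token = Fernet(key).encrypt(plaintext)
    handle, name = tempfile.mkstemp(dir=directory)
    os.close(handle)
    Path(name).write_bytes(token)
    return Path(name)

class TempFileEncryptionTest(unittest.TestCase):
    def test_temp_file_never_contains_plaintext(self):
        secret = b"account=12345678;sort-code=00-11-22"
        key = Fernet.generate_key()
        with tempfile.TemporaryDirectory() as tmp:
            path = write_working_copy(secret, key, tmp)
            on_disk = path.read_bytes()
            self.assertNotIn(secret, on_disk)                       # plaintext never reaches disk
            self.assertEqual(Fernet(key).decrypt(on_disk), secret)  # but the data is recoverable

if __name__ == "__main__":
    unittest.main()
```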

The above is not an exhaustive list of steps that can be taken to harden the software security process. However, it does provide a framework against which an enterprise can assess where it has weaknesses.

Are there differences between small and large software code providers?

A worrying issue voiced by small consultancies attending the CIC secure development forum was that one of the biggest challenges to software security comes from the fact that almost everyone expects someone else to tackle and deal with the concerns and requirements.

There is clearly a divide between what developers in smaller firms can expect to achieve from a secure software perspective and what those within larger teams and larger enterprise organizations can.

A small consultancy that does not specialize in software security can only expect to focus on what it can do within the context of its remit if clients are not willing to pay for the “bank grade” security they might desire or that their marketing messages all too often claim. Such firms rarely use any tools or processes over and above the normal mechanics of their job or that of their competitors. This doesn’t mean they are entirely bereft of secure software principles or education. On the contrary, a number of these firms discuss security processes and issues regularly (quarterly for the most part) and review policies annually. That said, some expressed a desire to do more to provide better education and training within a realistic and pragmatic framework for continual improvement. Current governance models are loosely based on internal standards that developers are encouraged to follow. Quality is checked and maintained through peer reviews. But it is not enough.

The development teams within large organizations are not without such challenges either, because whilst code reviews are carried out, they are not always up to standard. Worse still is the lack of knowledge often found within the development team of tools that exist to help support the delivery of secure software applications across a broad spectrum of deployment environments. Nor is this helped by the fact that equipping all the developers of a large team with software security tools such as static analyzers can be significantly costly and time-consuming to implement and train people on.

The communication hurdle

There is a communication challenge that is no smaller for larger enterprise organizations, with the resources to employ and assign dedicated software security specialists, than it is for smaller ones without the expertise. At the heart of the communication issue are language and process: the language used by software security roles is not couched in terms recognizable to the software development and delivery teams. Nor is it clearly couched in terms of business risk or business issues that would allow a level of prioritisation to be applied. The process issue is one of a lack of insight into the workflows across the application lifecycle where intervention can have the most significant impact.

There is often a need to translate security concepts into the context of the development process before a software security risk or vulnerability can be addressed sufficiently. For too many, the disconnect between the language of the security experts and those responsible for, involved in or governed by the development process is a significant barrier.

Because I have to…not because I want to

Whilst organizations within the financial sector generally tend to put more effort into secure software strategies, and are for the most part open to addressing and fixing the issues, they are driven to do so by regulations and the regulatory bodies. Or, as one organization so succinctly stated: “If we didn’t have to comply with the PCI standards, we wouldn’t be as far along in our capabilities for addressing software security and implementing the necessary prevention measures and checks as we are.”

Accountability for all

Software security is not all about the code. The way an application is architected, the choice of platforms, tools and even methodology all impact the way the code is written. Despite this, we still resort to blaming the developer when it all goes wrong. The main reason for this is that the other roles, tools and elements of the development process are often invisible to the software owner or end-user client. This makes the developer an easy target for derision.

For a short period in the late 1980s and early 1990s we tried to make each developer a one-person team (analyst, developer, DBA, QA). As a result, placing the majority of the blame on the developer had some limited relevance, but this is not the case today. As we can see, ownership of blame when it comes to secure software delivery lies in many quarters and for many reasons. Developers are far from being the weakest link in the secure software chain of defense.

There are many layers that you can use to improve the security of software and the application lifecycle process, one of which is to go beyond code development and look to empower developers to become more involved in, and accountable for, secure software strategies. They need to be better educated in the language of security vulnerability and the processes that lead to it, as well as being more aware of the operational and business consequences. They need support in their endeavours.

Good quality code is a must. There is still a much-needed focus on well-developed, quality code to combat the occurrence of security breaches. But so long as securing software is dealt with as a separate channel and a siloed function abstracted from the wider development and delivery workflow, and laid solely at the door of the developer, delivering secure software will be a challenge that is hard to overcome.

Ultimately, businesses determine the risks they can accept. The job of the IT organization is to provide the language that can detail those risks in terms that business owners understand, i.e. predominantly financial and/or economic terms, both of which underpin competitive goals and growth aims. Only when it does will the business owner understand what the risks truly equate to, and thereby act and support accordingly. Understanding the security vulnerabilities within the development and delivery process from a business risk perspective will be a crucial first step in that direction.