By Ian Murphy, Principal Analyst and Bola Rotibi, Research Director, Creative Intellect Consulting
As a relatively new engineering discipline, software development has long been looking for ways to improve the quality and cut the cost of what it does. There are good reasons for this: multi-tier computing systems can take hundreds of person-years of effort and cost tens of millions of dollars to build, and they have multiple points of integration with other pieces of software. Being able to model this complexity helps architects, developers and system engineers to examine the application before they start and decide how to approach the project.
The use of models in engineering disciplines has been going on for millennia. At a high level, models provide a view that abstracts away unnecessary detail to deliver an uncluttered picture of a solution for the respective stakeholder audiences. Models are used to create a proof of concept that allows architects and engineers to study complex problems before time and money are committed to construction. For example, modelling a bridge over a canyon would enable architects to see how the bridge would look and highlight any issues with the materials used and the terrain.
Different model layers then present varying levels of detail, depicting all relevant artefacts along with their relationships to, and dependencies on, each other.
You might think, therefore, that the use of modelling inside IT departments would be widespread, with significant attention and investment paid to its use. Sadly, for too many organisations this is not the case. Yes, many teams use models and modelling in some aspects of the development and delivery process, but it is not consistently applied or formally implemented. Despite efforts to drive wider use of modelling in software development, the companies that actually make modelling a core and formal function of their development and delivery processes are few and far between. So why is that?
Common modelling failures
There are three common failures of modelling that lead to models being dismissed as unusable:
- The first is that the model offers too simplistic a view for the different stakeholders involved. It doesn’t provide the right level of basic information for the various viewpoints required with the result that little to no understanding of any problems can be ascertained from it.
- The second is that the model is too detailed, making it hard to abstract the relevant information for a particular viewpoint easily and quickly, and hard to understand problems from a higher perspective. It might seem that when modelling something as complex as a multi-tier CRM product there is no such thing as too much detail, but there is.
- The third is that models are too incomplete to allow automatic transformation of the visual representation into executable, working code.
Modelling 101: 5 points of effectiveness
Ultimately, the main objective of a model is to communicate a design more clearly: allowing stakeholders to see the bigger picture (i.e. the whole system) and to assess different options, costs and risks before embarking on actual construction. To achieve this, there are five key characteristics that a model must convey:
- Abstraction: A model must allow you to abstract the problem into a simple, meaningful representation. Architectural building models, for example, use blocks to represent buildings. In an IT sense, the initial model will simply show connections between systems, not the underlying coding detail.
- Understanding: Having abstracted the problem, what the model must then convey is sufficient information and detail in a way that is easy to understand for the different audiences concerned.
- Accuracy: If a model is not an accurate representation of the problem then it is useless. An accurate model provides a useful starting point for establishing a common view.
- Alert for prediction: A model alone cannot necessarily provide all the key details of a problem. In modelling a banking system, for example, additional tools would be needed to predict workload and throughput in order to select the right hardware, design the correct network architecture and ensure that the software can scale to the predicted capacity demand. A common failure of IT systems is that they are under-scaled; proper modelling with a clear prediction step would reduce this problem.
- Implementation and execution cost: Models need to be cheap to produce and easy to change, especially in comparison to the cost of the product or solution delivered by the model.
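The prediction step described above lends itself to a quick numeric sketch. As a purely hypothetical illustration (the function names, figures and the 70% headroom target below are assumptions, not taken from any real system), applying Little's law to a predicted workload gives a first-cut capacity check before hardware is selected:

```python
import math

# Hypothetical capacity-prediction sketch using Little's law:
# concurrent requests in flight = arrival rate * average service time.

def required_concurrency(arrival_rate_per_s: float, avg_service_time_s: float) -> float:
    """Little's law: L = lambda * W."""
    return arrival_rate_per_s * avg_service_time_s

def servers_needed(arrival_rate_per_s: float, avg_service_time_s: float,
                   worker_threads_per_server: int, headroom: float = 0.7) -> int:
    """Size the server pool so each server runs at no more than
    `headroom` (e.g. 70%) of its thread capacity."""
    concurrency = required_concurrency(arrival_rate_per_s, avg_service_time_s)
    usable_threads = worker_threads_per_server * headroom
    return math.ceil(concurrency / usable_threads)

# Example: 500 requests/s, 200 ms average service time, 32 threads per server.
print(required_concurrency(500, 0.2))   # → 100.0 concurrent requests in flight
print(servers_needed(500, 0.2, 32))     # → 5 servers at 70% utilisation
```

Even a back-of-the-envelope calculation like this, attached to the model, flags the under-scaling failure mode before procurement decisions are locked in.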
Sometimes people ask me how it is possible that the Uniface business is doing so very well and, despite an economic crisis, is growing…
The main reason for this is the Business Unit structure we implemented in April 2009. This new “state of independence” changed a lot for us, especially in the things we focus on:
First, we’re passionate about real innovation. You can only be credible as a supplier if you can deliver great products that add value to your existing customers’ day-to-day business. We listen to what our customers want and make sure those wishes find their way back into the product. Uniface 9.6 is the ultimate proof of this: many people consider it the best and most innovative Uniface version of the last 10 years!
Second is customer focus. We do things differently now that we’re in the Business Unit setup. Our main focus is nurturing and collaborating with our existing customers and VARs. These are the companies and organizations that depend on us and are very loyal to us, and they therefore deserve the best service they can get. So there is an enormous focus in our team on user events, customer roadshows, workshops and so on, all intended to show our customers how to use Uniface in the best way possible. Or, to take it to the business level, to make sure that our customers always get the best ROI out of Uniface.
Innovation and customer focus, that’s why we’re doing well. Simple, isn’t it?
Programming with an appreciation for visceral and emotional human reactions to an application along with the context of usage and interaction is the only way to address practical UX goals competently
Read Part 1
Read Part 2
Read Part 3
In a podcast that I ran with a number of leading industry and market spokespeople, titled “No excuse for a crappy App” (http://www.creativeintellectuk.com/?page_id=1095), we reached a consensus on some of the considerations for avoiding the delivery of a bad app. A number of the points raised I have already covered. All have UX implications which must be addressed:
- Developers need to understand the context of use and the user and develop fit for purpose scenarios
- Too much complexity overburdens the application and can lead to a poor user experience
- Keep it simple
- A lack of process, discipline and standards makes it hard for the development team to deliver a competent user experience
- Balance functionality with user experience, build in fundamentals, ensure communications and enable responsibility across everyone involved
There are so many more attributes that now dictate the user experience – desirability and convenience to name but a few. Ultimately the development team needs to watch more carefully how users are evolving along with their changing needs.
UX or bust
It’s not always easy to financially quantify the returns based on the user experience partly because in a number of cases we, as users, are willing to put up with a lot before we are irrevocably turned off. But the world is fast changing and there are now a lot more choices. With this comes an ever more influential user audience. Users are now more opinionated and disseminate their opinions more widely. With social networks, applications and their artifacts, as well as broad connectivity and wide proliferation of mobile devices, they have the means to make themselves heard faster and louder. Bad and good experiences can now hit many more touch points with the good delivering loyalty and referrals and the bad resulting in lost business.
The likes of Apple and Adobe have long made the user experience central to their core goals and used it to shape the products and services they deliver. In the case of Apple, the rewards from its homage to both design and user experience have delivered a company with an enviable global brand, significant user fan base and a financial standing few others can match. Now more software tooling vendors have begun to recognize the importance of UX. They do so both as a focus for their own tool environments but also to enable their users to better address UX concerns and features within their own applications.
Apple aside, we don’t have to look far to see that UX done well can pay for itself in any number of ways – financially, brand loyalty, referrals and greater productivity. Done badly…well we all have our own personal experiences of the outcome.
So, I guess this is my first blog written at 30,000 feet…
What I love about North America is Gogo Flight, an Internet service which enables you to use Wi-Fi while you are traveling by plane. It’s so efficient to catch up with business instead of reading in-flight magazines. I wish they had this in Europe as well. So I am updating my forecast data, which is conveniently stored in the Cloud.
Speaking about the Cloud…
I am seeing some CEOs of our VARs here this week, and there’s one interesting thing I am noticing more and more. The decision to deploy Uniface applications in the Cloud is very often no longer in the hands of the IT people. Senior management really sees Cloud Computing as an interesting way to save costs or, maybe more importantly, to expand market opportunities. Read more about it in this article on ZDNet.com
Greetings from the skies!
PS: Please let me know what landmark you think this is…
For the first time in its more than 20 years of existence, the 2012 Face to Face autumn conference was held at the Compuware office in Amsterdam. For the attendees the day started very well, with an official barista making excellent latte macchiatos, espressos and so on, so everybody was wide awake for the day. In the technical “potpourri,” which is always the kick-off for the event, Dino Seelig made everybody curious about his implementation of a new HR system. Next time we expect a live demo, Dino! Edu Kornmann showed his work on a Calendar widget built with the famous Uniface Open Widget Interface, and Arjen van Vliet presented on application performance analysis with the TDD tools from Compuware.
After the coffee break, the presentation from Huddie Klein from Formido was much appreciated by the audience, first of all because Huddie was able to present without(!) PowerPoint, and, most importantly, because Huddie showed it is possible to calculate the ROI of functional improvements in your application, something many teams have difficulty with but which is very important in this economic climate.
This also became very clear in the afternoon session, led by Change Management expert Norbert Huijzer. During this session, arguments were brought up for and against modernizing an application. The most frequently heard argument was that it is difficult to calculate the ROI of modernization. This is definitely a topic that will return in future customer sessions. If you want a report on this session and the arguments used, please contact firstname.lastname@example.org
Of course the Uniface Team presented the Uniface 9.6 release with presentations and a workshop. Overall, the response to Uniface 9.6 from the audience was very positive, as at all the other user events these last couple of months. The possibility to build “any” user interface for a Uniface application will certainly be used by many developers in the near future.
The presentation day ended with a sneak-preview demo of Uniface 10 by Henk van der Veer from the Uniface development team. The preview always makes the audience “hungry” to get their hands on the new development environment.
At the “borrel” afterwards, many of the day’s topics were discussed “in depth” again, and the first sessions of the 2013 Spring conference were already planned.