Tag Archives: database

Attending a cloud infrastructure training – A truly AWSome Day in Amsterdam

Last week I attended, along with a few other Uniface software engineers, the AWSome Day Amsterdam event, organized by Amazon Web Services (AWS), the world’s largest provider of cloud infrastructure services (IaaS). The event was a one-day training in Amsterdam delivered by AWS technical instructors. More than 300 (maybe even 400) people attended, so it was very crowded, but also a very well-organized event.

From Uniface, a few people from the cloud, mobile and security teams attended the event, each with their own project in mind.

The interactive training provided us with a lot of information about cloud deployment, security and usage for the web and mobile environments. The focus was on AWS as a provider of cloud infrastructure services. In a nutshell, technical instructors elaborated on the following:

AWS infrastructure with information about the three main services they offer:

  1. Amazon Simple Storage Service (S3) to store objects of up to 5 terabytes in multiple buckets. This service includes advanced lifecycle management tools for your files.
  2. Amazon Elastic Compute Cloud (EC2), which offers virtual servers as you need them. EC2 has advanced security and networking options and tools to manage storage. Also very interesting: you can write your own algorithm to scale up or down to handle changes in requirements or spikes in popularity, reducing costs and improving efficiency.
  3. Amazon Elastic Block Store (EBS), which provides persistent block-level storage volumes that you can attach to a single EC2 instance. Interestingly, EBS volumes persist independently of the running life of an EC2 instance. You can use EBS volumes as primary storage, especially for data that requires frequent updates and for throughput-intensive applications that perform continuous disk scans. EBS is flexible, in the sense that you can easily grow volumes.
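As an aside, the S3 lifecycle management mentioned above is driven by plain, declarative JSON policies. The Python sketch below is illustrative only; the rule ID, prefix and day counts are my own made-up values. It builds a policy that would move objects to Glacier after 90 days and expire them after a year:

```python
import json

# Hypothetical S3 lifecycle policy; rule ID, prefix and day counts
# are illustrative values, not taken from the training material.
lifecycle_policy = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            # After 90 days, transition objects to the Glacier storage class.
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            # After a year, delete them entirely.
            "Expiration": {"Days": 365},
        }
    ]
}

print(json.dumps(lifecycle_policy, indent=2))
```

Such a policy would then be attached to a bucket through the S3 API or console; the point here is only that lifecycle rules are simple data, not code.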

 AWS Event

During the event we discussed extensively the security risks, identity management and access functionalities, but also the usage of different databases (SQL vs NoSQL) together with the cloud services. Interesting topics discussed at the event included concepts such as auto scaling of EC2 instances, load balancing, and management tools such as CloudWatch and AWS Trusted Advisor, which seem very useful for tracking security and cost issues.

Uniface Attending AWS Event

In general, the event has broadened my view on cloud deployment using AWS, but also on other cloud infrastructure services, as the same concepts can be applied to other cloud providers.

It was truly an AWSome Day in Amsterdam!

Where to put your code

As a Uniface developer, I’ve seen a lot of Uniface applications first hand. On more than one occasion I encountered a situation where developers put all their code in the component. This happened for a number of reasons: access to the model or the library was restricted, there wasn’t enough time in the project to do it correctly, or simply unfamiliarity with Uniface. I cannot speak on behalf of project managers or architects, but I can tell you how I code my projects.

The first rule of Uniface is that you do not copy and paste! (Very obvious movie reference!) If you find yourself in a situation that you think you need to copy code: stop! You are probably better off removing the code from its original source, putting it in either the application model or the library, and then reusing it in both the original component and the component where you wanted to paste it.

Single field implementation

Consider the following:

if (HEIGHT.PERSON < 0)
   HEIGHT.PERSON = 0
endif

A person can never have a height that is smaller than 0 meters. Maybe there are people with a negative size, but I have never seen one. So if someone enters a negative value, we reset it to 0. If you were to put this in a component, then you would need to copy and paste it the next time you need it. Remember the first rule? So where would you put it? The most logical place would be in the trigger of the modelled field HEIGHT in the PERSON entity. Creating an entry at entity level and then calling it from the leave field trigger would score equally well. This way the inheritance in Uniface will provide this piece of code on every component you use the field on.
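For readers outside Uniface, the same single-field rule can be sketched in Python (the function name is mine):

```python
def validate_height(height: float) -> float:
    """Clamp a person's height to a minimum of 0, mirroring the
    modelled-field trigger described above."""
    return max(height, 0.0)

# A negative entry is reset to 0; valid values pass through unchanged.
print(validate_height(-1.8))  # → 0.0
print(validate_height(1.8))   # → 1.8
```

The point is the same in either language: put the rule in one place, next to the field it guards, and let every caller reuse it.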

Multiple field implementation

On record level

But what about two fields in the same entity? The formula for the Body Mass Index of a person would be:

BMI.PERSON = WEIGHT.PERSON / (HEIGHT.PERSON * HEIGHT.PERSON)

The content of BMI is calculated by dividing the WEIGHT by the square of a person’s HEIGHT. In order to calculate the BMI we need the values of two different fields in the entity PERSON.

If you thought about putting it in the modelled entity, you’d be correct. I would create an entry that can be reused in (for instance) the value changed triggers of the WEIGHT and HEIGHT fields, or call it from a collection operation if you wanted to update all the BMIs in some type of batch.
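Note that BMI is conventionally defined as weight in kilograms divided by the square of height in meters. A minimal Python sketch of the same calculation (function name and sample values are mine):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight divided by the square of height."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / height_m ** 2

# 80 kg at 1.80 m gives a BMI of roughly 24.7.
print(round(bmi(80.0, 1.80), 1))  # → 24.7
```

Just as in the Uniface entity entry, the calculation lives in one reusable module that both “value changed” paths can call.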

Between entities

Here is a classic. The total amount of an order is calculated by multiplying the price by the number of items in an order line, and then adding that to the total of the order:

forentity "ORDERLINE"
   TOTAL.ORDER += PRICE.ORDERLINE * NUMBERITEMS.ORDERLINE
endfor

The second rule of Uniface (you can actually hear the voice of Brad Pitt, can’t you?) is that you never make a reference to another entity from a modelled entity. If you do, you need to include the referenced entity on every component you use the modelled entity on, or the compiler will keep wagging its finger at you.

So we can’t reference the TOTAL.ORDER field in the triggers of the ORDERLINE entity. The only logical place to put it is in a component. In this case, I would put it in a service that can be called from other locations as well. I can even activate that service in the modelled trigger of the ORDER entity.
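The same order-total logic, sketched outside Uniface in Python (class and function names are mine), makes the service’s job explicit:

```python
from dataclasses import dataclass

@dataclass
class OrderLine:
    price: float
    number_items: int

def order_total(lines: list[OrderLine]) -> float:
    """Sum price * quantity over all order lines, as the service
    described above would do for an order."""
    return sum(line.price * line.number_items for line in lines)

lines = [OrderLine(9.99, 2), OrderLine(4.50, 1)]
print(round(order_total(lines), 2))
```

Because the cross-entity logic sits in one callable unit, any component (or batch job) that needs an order total calls the same code.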

What if it is a non-database entity?

Non-database entities come in two distinct flavors: the modelled ones and the non-modelled ones. An example of a modelled non-database entity is an entity containing a list of buttons with default behavior that you can reuse when creating components. For these particular non-database entities the same rules apply as for the modelled database entities.

Non-modelled entities are created on the fly on a component. In this case there is only one place to put your code: the component level.

And non-database fields?

Non-database fields have the same two flavors. They are either modelled (for instance a button that shows detailed information about a certain record of a modelled entity) or non-modelled. If the non-database field is in the application model, code it there; otherwise code it in the component.

When I mention the component, there are actually three levels at which to place your code: in the triggers of the component, in the triggers of the non-database entity, or in the triggers of the non-database field. Based on the previous rules you should be able to determine the correct position.

There is no entity or field reference

Once more for good measure:

 

if ($status < 0)
   return $status
endif

This code contains no field references and is of a more technical nature. This is an example of the smallest form of error handling in Uniface. If you intend to use it only once, the component is the best place to put it. If you need it in other places, you should move the code to the library and include it where required.
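The equivalent of such a shared error check in Python would be a small helper function reused everywhere, the rough analogue of an Include Proc (names are mine):

```python
def check_status(status: int) -> None:
    """Shared error check, analogous to the include proc above:
    fail loudly on a negative status instead of letting it pass silently."""
    if status < 0:
        raise RuntimeError(f"operation failed with status {status}")

check_status(0)  # a non-negative status passes without complaint
try:
    check_status(-3)
except RuntimeError as exc:
    print(exc)  # → operation failed with status -3
```

As with the Uniface version, the benefit appears the second time you need the check: you include (or import) it instead of copying it.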

Can I use a Global Proc instead of an Include Proc?

I have not used a Global Proc since the introduction of Include Procs. In my mind it is a deprecated feature of Uniface. From a component-based development perspective, Include Procs are better (but that is a story for another time). Besides, using Global Procs for error handling has one drawback: what happens when your Global Proc fails? Where are you going to catch that?

Let’s Summarize

  • Code references exactly one field in one modelled entity – trigger level of the modelled field.
  • Code references more than one field in one modelled entity – trigger level of the modelled entity.
  • Code references more than one modelled entity – in the component, preferably a service.
  • Code references a non-modelled entity – if the non-modelled entity is used more than once, it should be defined as a modelled non-database entity; if it is very specific, the code can only be in the component.
  • Code does not reference a field or an entity – an Include Proc, never a Global Proc; in the component only when it is really specific.

 

 

Using CouchDB with Uniface

I won’t repeat any definitions of what NoSQL databases are, nor a review of any specific products. I’ve read plenty about NoSQL databases and I think that the general view of developers is that it is one more tool in the arsenal of application development. I generally believe that you should choose the right tool for the job at hand.

So, you may one day get a task where the advantages of using a NoSQL database outweigh the disadvantages. Can you use such a database with Uniface, and if so, how? The answer definitely depends on the specific NoSQL database product, as there is large variation between their APIs and data structures. For that reason I will just describe my experiences prototyping with CouchDB from Apache. Be aware that this is slightly different from Couchbase, which appears to be a commercialised offshoot of what Apache took on board as an open source project. For brevity, I refer you to the website for information about CouchDB’s characteristics: https://couchdb.apache.org/

The major characteristic of CouchDB is that the documents stored in the database are in JSON format. While investigating another project, I stumbled upon a convenient source of JSON-formatted documents that I could store in my CouchDB database. I hope that you aren’t offended by simple Chuck Norris jokes. It is a unique genre that not all will enjoy, but it served my purposes adequately. Thus, in studying my prototype, you can imagine how you would handle more business-related data.

I have provided a sample form in the community samples section of Uniface.info. All you need to do, besides compiling that one form, is download and install CouchDB from the link provided earlier. I downloaded the Windows version. I manually created the “cnjokes” database using CouchDB’s Futon utility, which is installed along with CouchDB. I also manually defined the design document “vcnjokes”; more about that later.

The top part of the COUCHTEST01 form is really a “utility” area, where you can manually enter URIs and run requests against the “cnjokes” database. These requests use the UHTTP internal Uniface component. The way the CouchDB API is structured gives you a very RESTful web service interface, though there are some comments within their online tutorials on how RESTful CouchDB really is. The results of the calls are available in the message frame. You can press the GET button without adding anything to the URI and you will see some global information about the “cnjokes” database. Overall, this “utility” is not as flexible as CouchDB’s own Futon utility, but it might be helpful during further Uniface development.

The 4 buttons, with their accompanying entity and fields, provide the real prototype, effectively demonstrating a CRUD lifecycle for managing CouchDB documents. The UHTTP component is used to obtain a CN joke in JSON format from an internet website, and then to interact with the localhost CouchDB server. The document format is deliberately unchanged between the external joke website and CouchDB. However, you could manipulate the JSON before storing it in CouchDB if required, using Structs. Note that I have used $uuid as the basis for assigning a document ID.
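To give a feel for how little is involved, here is a hedged Python sketch of building the document-store request the form sends through UHTTP. CouchDB’s default port 5984 is assumed, the function name is mine, and the request is only constructed, not sent:

```python
import json
import uuid

COUCH_URL = "http://localhost:5984"  # assumed default CouchDB port
DB_NAME = "cnjokes"

def put_document_request(doc: dict) -> tuple[str, str, bytes]:
    """Build (method, url, body) for storing a document in CouchDB,
    using a generated UUID as the document ID, like the form's use
    of $uuid. Actually sending it (e.g. with urllib.request) is
    left out of this sketch."""
    doc_id = uuid.uuid4().hex
    url = f"{COUCH_URL}/{DB_NAME}/{doc_id}"
    return "PUT", url, json.dumps(doc).encode("utf-8")

method, url, body = put_document_request(
    {"joke": "…", "categories": ["nerdy"]}
)
print(method, url)
```

A PUT to `/<database>/<doc_id>` with a JSON body is the whole “create” step of the CRUD cycle; read, update and delete are GET, PUT (with a revision ID) and DELETE against the same URL shape.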

The other 3 buttons query the CouchDB database using views. Ad hoc queries are not possible in CouchDB. The 3 views are defined in a single “design document” called “vcnjokes”. The source for that design document is provided with the sample download, as comments in the COUCHTEST01 form.

  • Button “Get all current jokes from CouchDB” uses view “joke_by_jokeid”. All jokes are retrieved, and sorted by joke_ID, but only a few columns are selected. It cannot be edited as the revision ID is not available. Note that escaped quotation symbols in the data are displayed as quotations.
  • Button “Get all nerdy jokes from CouchDB” uses view “nerdy_joke”. The jokes list is filtered to those that have a category of “nerdy”. This list also cannot be edited.
  • Button “Get all current data from CouchDB for edit” uses view “all”. This view references all of the document and so all fields, including revision ID, are available. Thus editing can be done, and when stored, the new revision number is updated. Note that escaped quotation symbols appear as stored, for ease of updating.
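A CouchDB design document is itself a JSON document whose views contain JavaScript map functions. As a sketch only (the field names inside the map functions are my guesses, not the sample’s actual source), a “vcnjokes”-style design document could be assembled like this in Python:

```python
import json

# Illustrative design document; the joke field names are assumptions.
design_doc = {
    "_id": "_design/vcnjokes",
    "views": {
        # All jokes, keyed (and therefore sorted) by joke ID.
        "joke_by_jokeid": {
            "map": "function(doc) { emit(doc.joke_ID, doc.joke); }"
        },
        # Only jokes carrying the 'nerdy' category.
        "nerdy_joke": {
            "map": "function(doc) {"
                   " if (doc.categories.indexOf('nerdy') >= 0)"
                   " emit(doc.joke_ID, doc.joke); }"
        },
        # The whole document, including the revision ID, so edits work.
        "all": {
            "map": "function(doc) { emit(doc._id, doc); }"
        },
    },
}

print(json.dumps(design_doc)[:60])
```

Storing this document (a PUT to `/cnjokes/_design/vcnjokes`) is what makes the three views queryable.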

When preparing the JSON for display in a Uniface form, it is certainly necessary to use Structs to manipulate it into the component data structure. In fact, the choice of the external data structure of the form entities is quite arbitrary. CouchDB has no fixed schema, so you can never be sure that an external application won’t add data that renders your entities and fields obsolete. All I could do was generate a useful number of jokes and observe that some of them have one category with a value of “nerdy”. However, I can see that the category is defined as a JSON array, and so I make sure to set $tags->jsonClass = “array” before converting the Struct to a JSON string. This is what led to the one-to-many relationship between CNJOKE and JCATEGORY. With my CouchDB data set, I verified my schema by manually adding several extra categories to some jokes, using structure editor commands to add and remove occurrences (tool tip text will assist you).
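The effect of setting $tags->jsonClass = “array” can be mimicked in Python: force the category to serialize as a JSON array even when there is only a single value (the function name is mine):

```python
import json

def to_couch_json(joke: str, categories) -> str:
    """Serialize a joke document, forcing 'categories' to a JSON array
    even for a single value -- the Python analogue of setting
    $tags->jsonClass = "array" on the Struct in Uniface."""
    if not isinstance(categories, list):
        categories = [categories]
    return json.dumps({"joke": joke, "categories": categories})

print(to_couch_json("example", "nerdy"))
# → {"joke": "example", "categories": ["nerdy"]}
```

Without that step, a one-category joke would round-trip as a plain string, and consumers expecting an array would break.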

Hopefully this prototype demonstrates how modern features in Uniface allow integration with other modern software systems.

 

Do we need a JSON data type?

I recently read a few articles raving about how good PostgreSQL is.  One article in particular explained how great it is that they have a JSON data type.  I wondered exactly what that would mean for developers, and whether Uniface needs one too.

The PostgreSQL documentation states that JSON data can be stored just fine in a text data type, but that a specific data type for JSON adds specific validation for JSON strings.  The documentation then adds that there are related support functions available.  Indeed there are JSON operators and functions that massage data between JSON strings and table rows and columns.  Suppose that you have a use case to exploit these functions, should you use them?  The simple answer for a Uniface developer is “of course not”.

Looking at those JSON support functions I would suggest that you can write Uniface functions / local proc modules to manipulate and transform data in similar ways.  Uniface Structs and the new 9.6.04 structToJson and jsonToStruct statements are particularly helpful for this.  So, provided that there is no extreme performance advantage in doing such manipulation on a DB server, it would not be a good idea to tie your application to a specific DB vendor, and lose that DBMS independence that Uniface gives you.  Bear in mind that there is no JSON data type in the current SQL Standard from 2011, and the major RDBMS vendors have not found a need to add such a non-standard extension.

Since we do have JSON manipulation tools, there is another consideration, based on our experiences with XML.  How do we validate the meaning of data transported by JSON?  With the xmlstream data type (and supporting proc statements) we have DTDs.  With our Structs transformations we have XML schema validation support.  With Uniface entities, we have the full support of the application model.

What is missing is a JSON Schema mechanism.  Thus I would suggest that if there is no supporting validation mechanism, there is no point in having a specific data type for JSON.

That situation may change in the future.  There are Internet Engineering Task Force (IETF) drafts available for a JSON Schema standard.  If you want to anticipate this future standard, you can use the online tool at http://www.jsonschema.net to generate a JSON Schema from a sample JSON data stream.

At this time, to use this draft JSON Schema, you will need to write a validation module yourself.  However, you may be able to validate the data based on the Uniface application model.  After loading the Struct with the jsonToStruct statement, you may want to prepare the Struct for the structToComponent statement.  Since 9.6.05+X501, structToComponent supports a /firetriggers command option, which causes the Pre Save Occurrence and Post Save Occurrence triggers to be fired, allowing you to do further occurrence-based validation or manipulation.  Of course, the entities you use here can be dummy entities created specifically for validation, modelled or not.  This would avoid the need to reconnect with the database.
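To illustrate what such a hand-written validation module might look like, here is a deliberately tiny Python sketch that checks only the “required” and per-property “type” keywords of a flat, draft-style JSON Schema (a real module would need to cover nesting, arrays, formats and much more):

```python
def validate(instance: dict, schema: dict) -> list[str]:
    """Minimal draft-JSON-Schema-style checks: 'required' keys and
    per-property 'type' for a flat object. Returns a list of errors."""
    type_map = {"string": str, "number": (int, float), "integer": int,
                "boolean": bool, "object": dict, "array": list}
    errors = []
    for key in schema.get("required", []):
        if key not in instance:
            errors.append(f"missing required property: {key}")
    for key, rules in schema.get("properties", {}).items():
        if key in instance and "type" in rules:
            if not isinstance(instance[key], type_map[rules["type"]]):
                errors.append(f"wrong type for property: {key}")
    return errors

schema = {"required": ["joke"],
          "properties": {"joke": {"type": "string"}}}
print(validate({"joke": "example"}, schema))  # → []
print(validate({}, schema))  # → ['missing required property: joke']
```

Even this much gives you the same safety net for JSON that a DTD gives an xmlstream: a declared contract you can check before the data reaches your entities.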

Hopefully we now have enough tools to deal with JSON data, without the need for a new data type.

Legacy: Old technology that frightens developers (part 1)

By Clive Howard, Principal Practitioner Analyst, Creative Intellect Consulting

To developers, the term legacy is often a dirty word, meaning old software that is a pain to work with. Ironically, of course, it is the software that developers spend most of their time working with, and developers made it what it is. The question all developers should ask is why legacy software is generally considered to be bad, and what can be done to avoid this situation in future? After all, an application released today will be legacy tomorrow.

Development teams do not set out to create bad software that will become difficult to maintain, support and extend. When allowed by their tyrannical masters, architects and developers put a lot of work in upfront to try and avoid the typical problems of bloat, technical debt and bugs that they fear will happen later. For some reason, over the years these problems seem to have become inevitabilities.

The types of issues that make developers fear working with legacy include: technologies that are no longer fit for purpose; bloated codebases that are impossible to understand; different patterns used to achieve the same outcome; lack of documentation; inexplicable hacks and workarounds; and a lack of consistency, plus many, many more. Most of these have their roots in a combination of design and coding.

Design theory does not always reflect reality

Architects aim to design clean, performant, scalable and extensible applications. Modern applications are complex, involving multiple “layers”, often distributed from a hardware perspective and including third-party and/or existing legacy applications. Different components will frequently be the responsibility of different development teams working in different programming languages and tools.

For some time now, the principle of separation has been applied to try and avoid the tightly coupled client/server applications of the past, which were known to cause many legacy issues. This has gone under many guises: “separation of concerns”, n-tier, Service Oriented Architecture (SOA) and so on. They are all variants of the same concept: the more separated the components of an application are, the more flexible, scalable, extensible and testable that application will be. For developers, having an application made of smaller parts makes it more manageable from a code perspective.

One of the classic scenarios is the interchangeable database idea. An application might start life using one database, but later on need to change to another. The concept of ODBC meant that it was easy to simply change a connection string in code, and provided the new database had the same structure as the previous one, everything would continue without a hitch. The problem has been that what looks good theoretically doesn’t hold up in reality.
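The interchangeable database idea can be sketched with Python’s DB-API, where, as with ODBC, code written against the generic interface only needs a different connect() call to target another database (sqlite3 stands in here, and the table name is mine):

```python
import sqlite3

def count_rows(conn) -> int:
    """Works against any DB-API connection, regardless of vendor,
    as long as the schema matches -- the ODBC promise in miniature."""
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM person")
    return cur.fetchone()[0]

# In theory, switching databases means changing only this line,
# e.g. to another driver's connect() call with a different string.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT)")
conn.execute("INSERT INTO person VALUES ('Ada')")
print(count_rows(conn))  # → 1
```

The theory holds only while everything lives in portable SQL; the moment logic migrates into vendor-specific stored procedures or triggers, the swap stops being a one-line change.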

In the example of changing the database, the reality often was that a number of stored procedures, triggers or functions were included in the database. Changing from one database to another meant porting these, and that in itself can be a significant task. The time, and therefore cost, of such an activity resulted in the old database continuing. Hence today we find so many applications running on unsuitable databases such as Access or FileMaker. A developer then has the frustration of having to work with inherently limiting and non-performant code.

No immunity from separatist design strategies

If we move forward to many of today’s architecture patterns, such as SOA, we still see similar problems. The concept of SOA is that components of an application become loosely coupled, so that different parts of the application are less wedded to one another. Unfortunately, within the separate services and consumers the same problems as outlined above can apply.

Worse than that, many service providers do not version their services. Google Maps will often bring out a new version of its service while clients calling the previous version continue to function. However, many others (social networks take note) do not follow this practice and frequently push out breaking changes to their services. This introduces a whole new problem into legacy applications, whereby developers have to regularly go back into code and update it to work with the changes to the service.