All posts by Eddy Knochs

Experimenting with the AWS S3 API

Last month I uploaded a community sample that showed how to call an Amazon Web Services RESTful API, in particular for their S3 storage service.  That sample is contained within a single form, and is accompanied by some simple instructions and notes on the assumptions made.  I used a form component type and put the actual API calls in operations, so that the sample is easy to understand and easy to convert to a service component for wider use.

The next thing I wanted to try was to provide the same functionality from a DSP.  Initially this could have meant replacing the Windows GUI layer with an HTML5-based layer in a DSP.  However, DSPs make the Uniface JavaScript API available, and thus there is an opportunity to try out the AWS SDK for JavaScript (in the Browser).  Information is available at http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/browser-intro.html .

The main advantage of using this SDK is that it becomes possible to avoid a lot of low-level coding against the RESTful API.  If you study the form sample I mentioned earlier, you will see a lot of code to build canonical requests and then to sign them securely.  All of that is buried inside the various SDKs that AWS provides, so this was worth a try!

As it turned out, coding the JavaScript to list bucket contents and to download and upload files was relatively easy.  In particular, the feature to generate signed URLs for downloading files is very handy.  In fact most of the buttons on the sample DSP have browser-side JavaScript which calls the AWS SDK without much reference to the Uniface JavaScript API.  This just means that in some circumstances you might not need to use DSPs at all, but if your use case does involve exchanging information with back-end processes, then this sample should be of interest.  One such use case is to save S3 files on the back-end server, and so a JavaScript activate is done to send the signed URL to a DSP operation, which completes the download.  In any case, it is tidy to keep the JavaScript code in the Uniface repository as much as possible.
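
To give an idea of how little code is involved, here is a minimal browser-side sketch of the two SDK calls that do most of the work: listing the bucket contents and generating a signed download URL.  The bucket name, key and logging are placeholders rather than the actual values used in the sample.

    // Assumes the AWS SDK for JavaScript (in the Browser) has been loaded,
    // and that credentials and region have been set via AWS.config (see below).
    var s3 = new AWS.S3();

    // List the contents of a bucket (the bucket name is a placeholder).
    s3.listObjects({ Bucket: 'my-sample-bucket' }, function (err, data) {
      if (err) {
        console.log('listObjects failed: ' + err.message);
        return;
      }
      // data.Contents is an array of objects, each with Key, Size, LastModified, ...
      data.Contents.forEach(function (obj) {
        console.log(obj.Key + ' (' + obj.Size + ' bytes)');
      });
    });

    // Generate a pre-signed URL that allows a file to be downloaded for a
    // limited time (60 seconds here) without exposing any credentials.
    var url = s3.getSignedUrl('getObject', {
      Bucket: 'my-sample-bucket',
      Key: 'docs/example.pdf',
      Expires: 60
    });
    // In the sample, a URL like this is passed via a JavaScript activate to a
    // DSP operation so that the back-end can complete the download.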

So … although the JavaScript coding turned out easy enough, the challenge turned out to be how to authenticate the SDK calls.  In the form sample I used the AWS Access Key ID and a Secret Access Key to sign requests.  These were kept away from the form source code, and from the runtime user (who shouldn’t have access to the Uniface debugger), by storing the sensitive data in assignment file logicals.  Not the ultimate form of protection, but adequate for my sample.  The JavaScript SDK requires access to these artifacts, and since it runs in the browser, it exposes them to all users.  To slightly obscure these private values, I placed them in a separate JavaScript file, which is not referred to in the HTML, but dynamically loaded by Uniface with this statement:  $webinfo("JAVASCRIPT") = "../js/aws_config.js" .  Of course you can read the variable contents with any browser debugger.  So this DSP sample comes with a similar caveat to the AWS recommendations.
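
For reference, a file like aws_config.js needs to do little more than configure the SDK.  A minimal sketch (with obviously fake values) might look like this:

    // aws_config.js - loaded dynamically via $webinfo("JAVASCRIPT").
    // WARNING: anything placed here is visible to every browser user.
    AWS.config.update({
      accessKeyId: 'AKIAXXXXXXXXXXXXXXXX',       // IAM user with read-only S3 permissions
      secretAccessKey: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
      region: 'ap-southeast-2'                   // example region
    });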

The options for supplying credentials to the AWS JavaScript API are described here:   http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/browser-configuring.html .  So for my sample I did effectively supply hard-coded credentials for an IAM user that has read-only permissions.  Real applications will want a more secure method.  I was going to evaluate AWS Cognito, but it is not yet available in my region.  Another option to investigate is to use Temporary Security Credentials, via the AWS Security Token Service.  Further discussion on authenticating credentials is beyond the scope of this blog / sample.
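
Even so, to give an idea of what one of those more secure options looks like, this is roughly how the browser SDK would be configured to obtain temporary credentials from Amazon Cognito; the identity pool ID is a placeholder, and the sample does not use this.

    // Instead of embedding an access key and secret, let Cognito hand out
    // temporary, scoped credentials to the browser.
    AWS.config.region = 'us-east-1';   // a region where Cognito is available
    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
      IdentityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000'   // placeholder
    });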

One final security configuration had to be made, because the sample is running within a browser, which is likely to be enforcing CORS.  This is best explained in the documentation at http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/browser-configuring.html#Cross-Origin_Resource_Sharing__CORS_ .

To summarise, Uniface developers have a choice when integrating with AWS.  They can choose the RESTful APIs for lower level control, in a wider set of situations, or they can use the JavaScript SDK for easier integration when using the Uniface JavaScript API.

YOW Connected Mobile and IoT Conference Report

The YOW Connected conference was held in Melbourne on 17th and 18th September (http://connected.yowconference.com.au/). It was a developers’ conference covering Mobile and IoT topics. Since Uniface is adding mobile devices as a deployment option in 9.7, I thought I should find out what problems existing and aspiring mobile application developers are experiencing, and how they are solving them.

Keynote presentations should give the audience a buzz, and this conference had one on the magic of mobile devices, with plenty of Harry Potter analogies. Another keynote was on IoT wearables, which showed many examples of technology failing to make a good partner for fashion.

The technical presentations for smartphones generally fell into iOS and Android camps, i.e. native app development was very important to most attendees. There was a strong belief in maximising the user experience over portability of business applications, and this meant getting the most out of native platform features. Indeed, whilst people figure out what to do with Apple Watches and Galaxy Gear, I can imagine that they need to use native platform features.

Given that there was strong commitment to native mobile development, it came as little surprise that using JavaScript to program the user experience was seen as second-rate technology. However, when an industry heavyweight like Facebook comes along with a new JavaScript library, people start to overlook their general JavaScript prejudice. So this is where I first learnt of the React library http://facebook.github.io/react/index.html . React differentiates itself from other JS libraries with its use of a virtual DOM. Your JS programming updates the virtual DOM, and then these updates are combined to update the real browser DOM. This turns out to perform much better than scanning and updating the browser DOM for each individual update.
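
For those who haven’t seen it, a minimal React component written against the React 0.x API of the time looks something like the sketch below; the component and property names are purely illustrative.

    // A tiny React component: you describe what the UI should look like,
    // React renders it to a virtual DOM and then applies only the minimal
    // set of changes needed to the real browser DOM.
    var HelloMessage = React.createClass({
      render: function () {
        return React.createElement('div', null, 'Hello ' + this.props.name);
      }
    });

    // Re-rendering with new props only patches the DOM nodes that changed.
    React.render(
      React.createElement(HelloMessage, { name: 'YOW Connected' }),
      document.getElementById('container')
    );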

But wait, Facebook has done more … they have also introduced React Native https://facebook.github.io/react-native/ . So, instead of using JS to manipulate HTML for rendering inside a UIWebView container, you use JS to manipulate view structures inside native components. There is still a virtual DOM equivalent, and the actual native component objects are then updated by this library. Facebook initially developed this against the iOS native component model, so you were limited to developing on an Apple platform, but as of about a month ago you can also target the Android native platform.

This is where the development philosophy gets interesting. In the Uniface world we are used to “write once, deploy anywhere”, and there is a suggestion that React Native might support something similar, but in fact their mantra is “learn once, write anywhere”. Even though React Native has adopted standardised layout mechanisms from CSS3 flexbox, they really encourage you to choose native component types that exploit the best of what the native platform offers, i.e. avoiding a lowest-common-denominator user experience, at the expense of full portability and productivity. My personal view is that you would only choose React Native when user experience is clearly a very high priority, and use more proven platform-independent tools, like Cordova, as often as possible. Perhaps this will change as React Native evolves; after all, it is still at release 0.11.0.

There were numerous IoT technical presentations, and I include Virtual Reality and Augmented Reality apps in this category. The presentations covered things like how to make an LED flash on a microprocessor (see https://www.particle.io/prototype for examples) using calls to the Particle Cloud written in JavaScript (groans from the smart device developers, but IoT developers don’t mind). Another presentation covered VR from high-end Oculus products to DIY Google Cardboard, once more using JavaScript to program the API calls. This article shows the basic JavaScript libraries that you can practice with: http://www.sitepoint.com/bringing-vr-to-web-google-cardboard-three-js/ .

I did initially struggle to see why these IoT technologies would be important to business applications, and thus where Uniface might be able to help, but eventually the spirit of adventure in programming comes out in you (OK, call it inner geekiness) and you want to play with it anyway. Besides, we know that other IT technologies have evolved from past experimentation, and so I wasn’t surprised to find myself ordering a Google Cardboard device on the weekend https://www.google.com/get/cardboard/get-cardboard/ . Soon enough, I may be using the Uniface JavaScript API to send data to 3D-rendering JavaScript libraries for display on my Google Cardboard device.

Using CouchDB with Uniface

I won’t repeat any definitions of what NoSQL databases are, nor review any specific products. I’ve read plenty about NoSQL databases, and I think the general view among developers is that they are one more tool in the application development arsenal. I generally believe that you should choose the right tool for the job at hand.

So, you may get a task one day where the advantages of using a NoSQL database outweigh the disadvantages. Can you use such a database with Uniface, and if so, how? The answer definitely depends on the specific NoSQL database product, as their APIs and data structures vary widely between them. For that reason I will just describe my experiences doing some prototyping with CouchDB from Apache. Be aware that this is slightly different from Couchbase, which appears to be a commercialised offshoot of what Apache took on board as an open-source project. For brevity, I refer you to the website for information about CouchDB’s characteristics:   https://couchdb.apache.org/

The major characteristic of CouchDB is that the documents stored in the database are in JSON format. While investigating another project, I stumbled upon a convenient source of JSON-formatted documents that I could store in my CouchDB database. I hope that you aren’t offended by simple Chuck Norris jokes. It is a unique genre that not all will enjoy, but it served my purposes adequately. Thus, in studying my prototype, you could imagine how you would handle more business-related data.
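
To give an idea of the data involved, a stored joke document might look something like the sketch below. The field names here are illustrative rather than copied from the sample; the _id and _rev fields are managed by CouchDB itself.

    // Illustrative only: an example of the kind of JSON document stored in
    // the "cnjokes" database.
    var exampleDoc = {
      "_id": "3d8a1f2e9c0b4a7d8e6f5a4b3c2d1e0f",    // derived from $uuid in the sample
      "_rev": "1-2c8d9e7f6a5b4c3d2e1f0a9b8c7d6e5f",  // revision ID managed by CouchDB
      "id": 42,                                      // joke ID from the source website
      "joke": "Chuck Norris can unit test an entire application with a single assert.",
      "categories": ["nerdy"]                        // a JSON array, even with one value
    };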

I have provided a sample form in the community samples part of Uniface.info. All you need to do, besides compiling that one form, is to download and install CouchDB from the link provided earlier. I downloaded the Windows version. I manually created the “cnjokes” database using the Futon utility that is installed along with CouchDB. I also manually defined the design document “vcnjokes”; more about that later.

The top part of the COUCHTEST01 form is really a “utility” area, where you can manually enter URIs and run requests against the “cnjokes” database. These requests use the UHTTP internal Uniface component. The way the CouchDB API is structured gives you a very RESTful web service interface, though their online tutorials include some comments on just how RESTful CouchDB really is. The results of the calls are available in the message frame. You can press the GET button without adding anything to the URI and you will see some global information about the “cnjokes” database. Overall, this “utility” is not as flexible as the Futon utility provided with CouchDB, but it might be helpful during further Uniface development.

The four buttons, and the accompanying entity and fields, provide the real prototype, effectively demonstrating a CRUD lifecycle for managing CouchDB documents. The UHTTP component is used to obtain a CN joke in JSON format from an internet website, and then to interact with the localhost CouchDB server. The document format is deliberately left unchanged between the external joke website and CouchDB. However, you could manipulate the JSON before storing it in CouchDB if required, using Structs. Note that I have used $uuid as the basis for assigning a document ID.
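
The sample performs these steps with UHTTP, but the underlying HTTP exchange is simple to see. Here is a browser-JavaScript sketch of the same create step, with a placeholder joke URL and document ID; it only illustrates the CouchDB REST calls, not the actual Uniface code.

    // Illustrative sketch of the REST calls that the sample makes via UHTTP.
    var couchUrl = 'http://localhost:5984/cnjokes/';

    // 1. Fetch a joke document (JSON) from an external website (placeholder URL).
    fetch('http://example.com/jokes/random')
      .then(function (response) { return response.text(); })
      .then(function (jokeJson) {
        // 2. Store it unchanged in CouchDB with a PUT to /cnjokes/<docid>.
        //    The sample derives the document ID from $uuid; here we fake one.
        var docId = 'doc-' + Date.now();
        return fetch(couchUrl + docId, {
          method: 'PUT',
          headers: { 'Content-Type': 'application/json' },
          body: jokeJson
        });
      })
      .then(function (response) { return response.json(); })
      .then(function (result) {
        // CouchDB answers with {"ok":true,"id":"...","rev":"1-..."}.
        console.log('Stored document ' + result.id + ' at revision ' + result.rev);
      });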

The other three buttons query the CouchDB database using views. Ad-hoc queries are not possible in CouchDB. The three views are defined in a single “design document” called “vcnjokes”. The source for that design document is provided with the sample download, as comments in the COUCHTEST01 form. A sketch of what such a design document looks like follows the list below.

  • Button “Get all current jokes from CouchDB” uses view “joke_by_jokeid”. All jokes are retrieved and sorted by joke_ID, but only a few columns are selected. The result cannot be edited, as the revision ID is not available. Note that escaped quotation marks in the data are displayed as plain quotation marks.
  • Button “Get all nerdy jokes from CouchDB” uses view “nerdy_joke”. The jokes list is filtered to those that have a category of “nerdy”. This list also cannot be edited.
  • Button “Get all current data from CouchDB for edit” uses view “all”. This view references the whole document, so all fields, including the revision ID, are available. Thus editing can be done, and when the document is stored, the new revision number is updated. Note that escaped quotation marks appear as stored, for ease of updating.
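
For illustration, a design document along these lines could implement the three views. The field names (id, categories) are assumptions based on the joke data, not a copy of the actual “vcnjokes” source shipped with the sample.

    // Hypothetical sketch of a "vcnjokes" design document. CouchDB views are
    // JavaScript map functions stored in the database; each emit() call adds
    // a row, keyed for sorting and filtering.
    var designDoc = {
      "_id": "_design/vcnjokes",
      "views": {
        // A few columns only, keyed (and therefore sorted) by the joke ID.
        "joke_by_jokeid": {
          "map": "function (doc) { emit(doc.id, { joke: doc.joke, categories: doc.categories }); }"
        },
        // Only jokes that carry the 'nerdy' category.
        "nerdy_joke": {
          "map": "function (doc) { if (doc.categories && doc.categories.indexOf('nerdy') >= 0) { emit(doc.id, doc.joke); } }"
        },
        // The whole document, including _rev, so the result can be edited and re-stored.
        "all": {
          "map": "function (doc) { emit(doc._id, doc); }"
        }
      }
    };
    // The views are then queried with GET /cnjokes/_design/vcnjokes/_view/<viewname>.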

When preparing the JSON for display in a Uniface form, it is certainly necessary to use Structs to manipulate it into the component data structure. In fact the choice of the external data structure of the form entities is quite arbitrary. CouchDB has no fixed schema, so you can never be sure that an external application won’t add data that renders your entities and fields obsolete. All I could do was generate a useful number of jokes and observe that some of them have one category with a value of “nerdy”. However, I can see that the category is defined as a JSON array, and so I make sure to set $tags->jsonClass = "array" before converting the Struct to a JSON string. This is what led to the one-to-many relationship between CNJOKE and JCATEGORY. With my CouchDB data set, I verified my schema by manually adding several extra categories to some jokes, using structure editor commands to add and remove occurrences (the tooltip text will assist you).

Hopefully this prototype demonstrates how modern features in Uniface allow integration with other modern software systems.

 

Do we need a JSON data type?

I recently read a few articles raving about how good PostgreSQL is.  One article in particular explained how great it is that they have a JSON data type.  I wondered exactly what that would mean for developers, and whether Uniface needs one too.

The PostgreSQL documentation states that JSON data can be stored just fine in a text data type, but that a specific data type for JSON adds validation of JSON strings.  The documentation then adds that there are related support functions available.  Indeed there are JSON operators and functions that massage data between JSON strings and table rows and columns.  Suppose you have a use case that could exploit these functions: should you use them?  The simple answer for a Uniface developer is “of course not”.

Looking at those JSON support functions, I would suggest that you can write Uniface functions or local proc modules to manipulate and transform data in similar ways.  Uniface Structs and the new 9.6.04 structToJson and jsonToStruct statements are particularly helpful for this.  So, provided that there is no extreme performance advantage in doing such manipulation on a DB server, it would not be a good idea to tie your application to a specific DB vendor and lose the DBMS independence that Uniface gives you.  Bear in mind that there is no JSON data type in the current SQL Standard from 2011, and the major RDBMS vendors have not found a need to add such a non-standard extension.

Since we do have JSON manipulation tools, there is another consideration, based on our experiences with XML.  How do we validate the meaning of data transported by JSON?  With the xmlstream data type (and supporting proc statements) we have DTDs.  With our Structs transformations we have XML schema validation support.  With Uniface entities, we have the full support of the application model.

What is missing is a JSON Schema mechanism.  Thus I would suggest that if there is no supporting validation mechanism, there is no point in having a specific data type for JSON.

That situation may change in the future.  There are Internet Engineering Task Force (IETF) drafts available for a JSON Schema standard.  If you want to anticipate this as a future standard, you can use the online tool at http://www.jsonschema.net to generate a JSON Schema from a sample JSON data stream.
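
As an illustration, a draft-style JSON Schema for a document like the joke example used earlier might look like the sketch below (shown as a JavaScript object literal; the property names are assumptions, not output from the tool above).

    // A hypothetical JSON Schema (per the IETF draft-04) describing a simple
    // document with an id, a joke string, and an array of category strings.
    var jokeSchema = {
      "$schema": "http://json-schema.org/draft-04/schema#",
      "type": "object",
      "properties": {
        "id":         { "type": "integer" },
        "joke":       { "type": "string" },
        "categories": {
          "type": "array",
          "items": { "type": "string" }
        }
      },
      "required": ["id", "joke"]
    };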

At this time, to use this draft JSON Schema, you will need to write a validation module yourself.  However, you may be able to validate the data based on the Uniface Application Model.  After loading the Struct with the jsonToStruct statement, you may want to prepare the Struct for the structToComponent statement.  Since 9.6.05+X501, structToComponent supports a /firetriggers command option, which causes the Pre Save Occurrence and Post Save Occurrence triggers to be fired, thus allowing you to do further occurrence-based validation or manipulation.  Of course the entities that you use can be dummy entities created for this purpose, modelled or not.  This avoids the need to reconnect with the database.

Hopefully we now have enough tools to deal with JSON data, without the need for a new data type.

How Many Monitors?

I’ve been upgrading some PCs at home recently, and one of the topics we’ve been discussing around the family is what kind of monitor(s) would suit everyone’s needs.  It was logical to extend that study to what kind of monitor / display arrangement suits developers.

I’ve come across this kind of discussion in various blogs, and one thing that has struck me is that there is little distinction between home and office requirements.  This is probably down to both the prevalence of contract workers and a shift toward spending some portion of working time at home.  Indeed, with the availability of some interesting hardware devices, even travelling developers, i.e. laptop users, have many options available to them (when not actually travelling).

So, the main choices facing developers are … how many screens do I need, and how big should they be?  It ends up being quite personal, though I’ll try to discuss some influencing factors.  It’s a good idea to have a “home” screen, i.e. where you want to be continually updated with emails or some web content.  You would typically arrange a few application windows to fit the screen.  Hopefully these windows’ co-ordinates can be saved by these applications.  Essentially this is the concept behind Windows 8 live tiles.

For actual development I’d suggest 2 screens, one for source code editing and the other for testing / troubleshooting.  You might justify extra screens if you need two different source code editors (e.g. for polyglot development), or if the deployment environment is complex and possibly deployed over additional servers.  Screen size could be driven by budget, aesthetics or just … how good is your eyesight!  If utilising the screens for gaming as well, then I’m told refresh rate is very important.

Assuming your set-up has one workstation controlling everything, all of your screens will be extending the desktop.  Windows 8 has useful new features to support extended desktops, including extending the wallpaper and the taskbar.  At last, there is somewhere to see those well-stitched panorama photos that you created when you were exploring your new camera’s features.  You can also choose completely different wallpaper on each screen.

If you are lucky enough to have a permanent workstation set up, then you have maximum hardware flexibility, and only your budget is a limiting factor.  The main advantage over laptop users is that you can make sure that all screens have the same resolution.  This is handy if you move windows around between screens as they will always maximise to the correct size.

Laptop users have to make some compromises, as it’s assumed that at some time you will run some of those development applications that would normally exploit multiple screens away from your normal environment, i.e. on a single screen.  Sometimes you can borrow a foreign screen, but it won’t be the same as the home environment.  So here’s a tip that I’ve learnt from hard experience: always drag all windows back to the laptop screen before shutting the application down.  Disconnecting the extra screen whilst your program is still running has led me down futile troubleshooting paths, even re-installing software, all because I thought the program was broken or corrupted, when it was just waiting for input to a secondary window positioned where the second monitor used to display it.