Tag Archives: API

What’s Next in Google Cloud Development?

Amazon Web Services (AWS), Microsoft Azure, and Google Cloud currently dominate the public cloud market when it comes to IaaS (Infrastructure as a Service) and PaaS (Platform as a Service). Although Amazon is still the undisputed leader in the cloud market, Microsoft's and Google's cloud offerings are growing rapidly.

With great interest, and as usual to keep up with technology, representatives from the Uniface development team had already visited events from AWS and Microsoft Azure earlier this year. An opportunity to visit another cloud event, this time from the third-biggest player in the cloud market, came on 21 June at De Kromhouthal in Amsterdam.

Along with us, thousands of others attended this event, which drew overwhelming interest, at an inspiring location in Amsterdam with a beautiful view of the IJ. Google introduced itself as a cloud partner for enterprises and as a good alternative to the Microsoft Office suite. For Google Cloud, data is managed and stored in giant data centers around the world, one of the biggest of which is in the Netherlands.

The event started with keynotes that gave us an overview of the technologies that would be touched on during the rest of the day, including resilient microservices, machine learning APIs, Google Assistant with API.AI, Cloud Functions, Spark and Hadoop, Cloud Bigtable, Cloud Spanner, and cloud security. All these and other (new) technologies were discussed in three different tracks, including customer cases from ING and World Press Photo.

After the keynotes, most of us went to a session about microservices with containers, Kubernetes, and cloud containers. It was an interesting session about creating clusters of load-balanced microservices that are resilient and self-healing. With a group of Compute Engine instances running Kubernetes, an open-source container management system, you can fully automate the deployment, operations, and scaling of containerized applications.
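
As a small taste of that automation, here is a minimal sketch that inspects such a cluster from code, assuming the official @kubernetes/client-node package (pre-1.0 API, which varies by version) and whatever cluster the default kubeconfig points at; the namespace is just an example:

    // Minimal sketch: listing the deployments a Kubernetes cluster manages,
    // using the official @kubernetes/client-node package (pre-1.0 API) and
    // the cluster that the default kubeconfig points at.
    const k8s = require('@kubernetes/client-node');

    const kc = new k8s.KubeConfig();
    kc.loadFromDefault();
    const apps = kc.makeApiClient(k8s.AppsV1Api);

    apps.listNamespacedDeployment('default')
      .then(res => res.body.items.forEach(d =>
        console.log(d.metadata.name, 'ready replicas:', d.status.readyReplicas)))
      .catch(console.error);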

During the rest of the day, we attended sessions about building Firebase apps; building chatbots with machine learning; securing your cloud infrastructure; processing big data with BigQuery; and Google Cloud Spanner, the mission-critical, relational, and scalable application database.

Although all of it was very interesting for us as technology addicts, here are a few things that are worth sharing with you:

  • In general, 99% of vulnerability exploits target flaws for which patches have been available for more than a year. So, keeping your software and infrastructure up to date is one of the best security measures you can take. (Read more here.)
  • With Google BigQuery, you can query a petabyte(!)-sized database in a few minutes using plain SQL queries (see the first sketch after this list).
  • As a machine-learning expert, Google has opened up APIs for computer vision, predictive modeling, natural language understanding, and speech recognition, which you can embed in your smart enterprise applications (also sketched below).
  • Google Cloud has a more flexible approach to billing: you pay for the services you use per minute, not, as competitors require, per hour.
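
To give an impression of how approachable the BigQuery point is, here is a minimal sketch using the @google-cloud/bigquery Node.js client; the project ID is a made-up placeholder, default application credentials are assumed, and the table is one of Google's public sample datasets:

    // Minimal sketch: running a plain SQL query with the BigQuery Node.js client.
    // Assumes default application credentials; the project ID is a placeholder.
    const { BigQuery } = require('@google-cloud/bigquery');

    async function countRows() {
      const bigquery = new BigQuery({ projectId: 'my-sample-project' });
      const [rows] = await bigquery.query({
        query: 'SELECT COUNT(*) AS total FROM `bigquery-public-data.samples.wikipedia`'
      });
      console.log('Row count:', rows[0].total);
    }

    countRows().catch(console.error);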
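
Likewise for the machine-learning APIs: a hedged sketch of label detection with the @google-cloud/vision client, where the image file name is a hypothetical example:

    // Minimal sketch: labeling an image with the Cloud Vision Node.js client.
    // Assumes default application credentials; photo.jpg is a placeholder.
    const vision = require('@google-cloud/vision');

    async function labelImage() {
      const client = new vision.ImageAnnotatorClient();
      const [result] = await client.labelDetection('./photo.jpg');
      result.labelAnnotations.forEach(label =>
        console.log(label.description, label.score));
    }

    labelImage().catch(console.error);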

All in all, it was a great day with plenty of information from the forefront of developing cutting-edge enterprise applications.

What does the cloud bring to application development?

Following our line of thought of keeping up with technology, I had the privilege and pleasure of joining a diverse group of Uniface engineers who participated in the Google Cloud Next event in Amsterdam. As mentioned earlier, Uniface is at the leading edge of application technology, so in that respect we participate by learning about the newest trends. We do this for the cloud as well, with great partners like Google, by picking up the technological highlights and diving deeper into examples like Spanner and App Maker. All this to drive momentum and to spark innovation at Uniface.

Next Amsterdam, being such a nice and big event, consisted of several tracks with different areas of focus, all around the cloud: visionary, strategic, and technical tracks besides the experimental breakout sessions, handling everything from business to technology and innovation.

I attended several sessions and had a look at the experimental/technical campground as presented by Google and some of its technology partners at the conference.

The most outstanding thing I realized while at the event was that the cloud is moving everywhere: from application development to deployment and innovation.

So, in that sense, the cloud is becoming a game changer in application development. What do I mean by that? Well, in general, we are used to waves of technologies and application architectures: mainframe, client/server, static Web, dynamic Web, mobile apps, and now the cloud.

The cloud is reshaping the way we think about software, whether that is containerizing, microservices, contributing to the development of new applications, or exploiting the data produced by the usage of applications; all in all, taking software to a new level. Actually, one could say software is being changed in several dimensions.

Think about security, which once appeared to be something for the experts only and nowadays reshapes the way we think about software. Some of the thinking around security today involves user behaviour as an additional way to authenticate us. Wow! Nice. Although it does also imply that user behaviour is something you need to consider.

Well, you may think “but there is a lot of data that now needs to be processed for that”, and “what about the structure of such data?” Well, have you seen all the developments around big data and high-performing databases that the cloud is enabling? OK, I'll give you that… but then how can I, as a developer, make use of that data? Well, APIs are the answer. An old and beautiful concept that is now being embedded in software development, as collaboration with others is a must. Your software needs to be easy to interface with, and as such it must provide a clear and easy API for others to use. Even better, software in the cloud must have APIs; it is becoming a de facto standard, and otherwise you are out. (By the simple fact that adoption will be hard, if not impossible, with all the competition around.)
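
To make that tangible, here is a minimal sketch of what “a clear and easy API for others to use” might look like, using the Express framework for Node.js; the endpoint and its data are invented purely for illustration:

    // Minimal sketch: exposing a small, well-defined HTTP API with Express.
    // The /api/customers endpoint and its data are hypothetical examples.
    const express = require('express');
    const app = express();

    const customers = [{ id: 1, name: 'Acme' }, { id: 2, name: 'Globex' }];

    // A clear, predictable resource URL that others can build on.
    app.get('/api/customers/:id', (req, res) => {
      const customer = customers.find(c => c.id === Number(req.params.id));
      if (!customer) return res.status(404).json({ error: 'Not found' });
      res.json(customer);
    });

    app.listen(3000, () => console.log('API listening on port 3000'));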

The more common area where the cloud initially appeared to have impact was whether the application was executed on bare metal or in a virtualized environment, reshaping and componentizing the hardware and the different layers of software. This, too, is something that affects application development, as we also need to think about the components/containers we can use or enable others to use. Consider frameworks for this and make the necessary provisions in your application architecture.

Also of utmost interest were the innovation presentations that took place in the plenary, breakout, and campground sessions. It was amazing to see how creativity is being applied to develop the next technological step around the cloud; think about the natural language support API and its applicability across the artificial intelligence spectrum, which nowadays is within our reach. It is in our hands, literally, with our phones and tablets.

What amazed us too was to see the synergy between our approach to application development and new trends like App Maker.

Whether you use the cloud to deploy your applications, to execute in the cloud, or to innovate, the cloud is here to stay.

All in all, the value proposition around the cloud is to think not only of what the cloud can do for you, but also of what you can do in the cloud.

Experimenting with the AWS S3 API

Last month I uploaded a community sample that showed how to call an Amazon Web Services RESTful API, in particular for their S3 storage service.  That sample is contained within a single form and is accompanied by some simple instructions and notes on the assumptions made.  I used a form component type and constructed it to use operations for the actual API calls, so that it would be easy to understand and easy to modify into a service component for wider usage.

The next thing I wanted to try was to provide the same functionality from a DSP.  Initially this could have meant replacing the Windows GUI layer with an HTML5-based layer in a DSP.  However, DSPs make the Uniface JavaScript API available, and thus there is an opportunity to try out the AWS SDK for JavaScript (in the browser).  Information is available at http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/browser-intro.html .

The main advantage of using this SDK is that it becomes possible to avoid a lot of low-level coding against the RESTful API.  If you study the form sample I mentioned earlier, you will see a lot of code to build canonical requests and then to sign them securely.  This is all buried inside the various SDKs that AWS provides.  It was worth a try!

As it turned out, coding the JavaScript to list the bucket contents and to download and upload files was relatively easy.  In particular, the feature to generate signed URLs for downloading files is very handy.  In fact, most of the buttons on the sample DSP have browser-side JavaScript which calls the AWS SDK without much reference to the Uniface JavaScript API.  This just means that in some circumstances you might not need to use DSPs at all, but if your use case does involve exchanging information with back-end processes, then this sample should be of interest.  One such use case is to save S3 files on the back-end server, so a JavaScript activate is done to send the signed URL to a DSP operation, which completes the download.  In any case, it is tidy to keep the JavaScript code in the Uniface repository as much as possible.
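
To give an impression of how little code is involved, here is a sketch along those lines using version 2 of the AWS SDK for JavaScript in the browser; the bucket and key names are placeholders, not the ones from the sample:

    // Sketch of the browser-side calls (AWS SDK for JavaScript v2).
    // Bucket and key names are hypothetical placeholders.
    var s3 = new AWS.S3();

    // List the bucket contents.
    s3.listObjects({ Bucket: 'my-sample-bucket' }, function (err, data) {
      if (err) return console.error(err);
      data.Contents.forEach(function (obj) {
        console.log(obj.Key, obj.Size);
      });
    });

    // Generate a signed download URL, valid for 60 seconds; this is the URL
    // that can then be passed to a DSP operation for the back-end download.
    var url = s3.getSignedUrl('getObject', {
      Bucket: 'my-sample-bucket', Key: 'example.txt', Expires: 60
    });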

So … although the JavaScript coding turned out to be easy enough, the challenge turned out to be how to authenticate the SDK calls.  In the form sample I used the AWS Access Key ID and a Secret Access Key to sign requests.  These were quarantined from the form source code, and from the runtime user (who shouldn’t have access to the Uniface debugger), by storing the sensitive data in assignment file logicals.  Not the ultimate form of protection, but adequate for my sample.  The JavaScript SDK requires access to these artifacts, and since it runs in the browser, it exposes them to all users.  To slightly obscure these private values, I placed them in a separate JavaScript file, which is not referred to in the HTML but is dynamically loaded by Uniface with this statement:  $webinfo(“JAVASCRIPT”) = “../js/aws_config.js” .  Of course, you can read the variable contents with any browser debugger, so this DSP sample comes with a similar caveat to the AWS recommendations.
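
For completeness, the dynamically loaded file contains little more than a sketch like this, with placeholder values:

    // ../js/aws_config.js -- loaded via $webinfo("JAVASCRIPT").
    // Placeholder credentials for illustration only; remember that anyone
    // can read these values with a browser debugger, hence the caveat above.
    AWS.config.update({
      region: 'eu-west-1',
      accessKeyId: 'AKIA-EXAMPLE-KEY-ID',
      secretAccessKey: 'EXAMPLE-SECRET-ACCESS-KEY'
    });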

The options for supplying credentials to the AWS JavaScript API are described here:   http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/browser-configuring.html .  For my sample, I effectively supplied hard-coded credentials for an IAM user that has read-only permissions.  Real applications will want a more secure method.  I was going to evaluate AWS Cognito, but it is not yet available in my region.  Another option to investigate is Temporary Security Credentials, via the AWS Security Token Service.  Further discussion on authenticating credentials is beyond the scope of this blog / sample.

One final security configuration had to be made, because the sample runs within a browser, which is likely to be enforcing CORS (Cross-Origin Resource Sharing).  This is best explained in the documentation at http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/browser-configuring.html#Cross-Origin_Resource_Sharing__CORS_ .
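
For reference, such a CORS rule can also be applied to the bucket programmatically with the same SDK; a minimal sketch with a deliberately permissive, illustration-only rule:

    // Sketch: setting a CORS rule on the bucket (AWS SDK for JavaScript v2).
    // A real application should restrict AllowedOrigins instead of using '*'.
    var s3 = new AWS.S3();
    s3.putBucketCors({
      Bucket: 'my-sample-bucket',
      CORSConfiguration: {
        CORSRules: [{
          AllowedOrigins: ['*'],
          AllowedMethods: ['GET', 'PUT'],
          AllowedHeaders: ['*'],
          MaxAgeSeconds: 3000
        }]
      }
    }, function (err) {
      if (err) console.error('CORS configuration failed:', err);
    });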

To summarise, Uniface developers have a choice when integrating with AWS.  They can choose the RESTful APIs for lower-level control in a wider set of situations, or they can use the JavaScript SDK for easier integration when using the Uniface JavaScript API.

What else can you use Structs for?

There is well-deserved interest in and anticipation of Uniface 9.6; however, I’m still digesting plenty of new things from 9.5.  In particular, I’ve been studying the use of Structs and the data transformation statements that go with them.

You may recall that Structs were invented primarily to support the processing of the complex data types that are used in Web Services.  However, procedural data transformation has much broader application than Web Services.  So even if you don’t plan to integrate with Web Services any time soon, you might have a good reason to use Structs in your legacy applications.

In 9.5 the focus of the data transformation has been on XML and Uniface component data structures.  XML often appears in the payload of SOAP messages used in Web Services, but it is also transported in separate files, sometimes accompanied by XML Schema files.  Some software routine generates these files, and some other routine will read and interpret them.  If you consider that there may be file transfer, polling, or messaging functions that support the transport of these XML files, then you could view them as part of an asynchronous call to a software process.  It just isn’t following a formal API, like Web Services.

So let’s compare two ways of getting the same job done.  Imagine that you have to build an office document from a template of some kind (to provide layout and pre-defined styles) and from data that is maintained by your Uniface application.  It doesn’t matter whether it’s a spreadsheet or a textual document.  I am comparing the use of an API, such as COM objects for Microsoft Office, versus writing the XML file(s) stored within the final office document file (these days, OpenOffice.org and Microsoft Office documents are really zipped archives of other files that contain everything needed to reconstruct the document).
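
As a quick illustration of that “zipped archive” point, here is a sketch in Node.js that peeks at the main XML part inside a .docx file, assuming the jszip package; the file name is a placeholder:

    // Sketch: reading the main XML part out of a .docx archive with jszip.
    // report.docx is a hypothetical placeholder document.
    const fs = require('fs');
    const JSZip = require('jszip');

    JSZip.loadAsync(fs.readFileSync('report.docx'))
      .then(zip => zip.file('word/document.xml').async('string'))
      .then(xml => console.log(xml.substring(0, 200))) // a first peek at the XML
      .catch(console.error);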

API calls may be the safest way to build such documents, as vendors will often publish an API along with promises (usually kept) to keep the API compatible into the future, whilst remaining free to change document structures without notice (and so those structures are often not documented).

However, a series of API calls may be difficult to comprehend, as the vendor usually has to keep their interfaces at a high level of abstraction and quite generic.  The sequence of calls may not match the way that you have to process your own application data.  At least there is a formal error-processing mechanism, even if you find it difficult to understand.  APIs are also potentially limited to certain platforms, or may lack a function that performs the task you require.

XML files have an orderly, structured composition which can be validated.  With some simple viewing tools, it is relatively easy to deduce a document’s data structure.  Prototyping XML file creation is quick, as there is visual feedback on how the results appear.  On the downside, you may have built the document structure apparently correctly, yet it may still contain errors that make the document appear corrupt to the next software routine that processes it.  So testing and debugging may be more difficult.

Making a choice between two such approaches is something developers do routinely, based on many factors.  Now they can also take the new Structs data type and data transformation statements into account.  After building a spreadsheet for a customer recently, I’m considering building a tool to massage my iTunes library using Structs; after all, it’s just organizing your assets via an XML file!