For a definition of Polyglot Programming I refer the reader to Neal Ford's 2006 definition, which in essence says: use the most appropriate language for each specific problem. Modern applications are becoming more complex, which in turn leads to using more than one language within a single application. This avoids stretching the abilities of one language to perform every task, at the cost of requiring an additional skill: knowledge of other programming languages.
In practice, most of us have learnt more than one programming language, and the Polyglot Programming concept also stretches to scripting languages such as shell scripts. By that measure, we have probably been Polyglot Programming since well before 2006. An application driven by DOS .bat (or Unix C shell) scripts that manipulate some simple files, build Uniface batch program command lines, execute them, and then initiate other batch programs would fit the polyglot genre.
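A driver script of that era might look like the sketch below. To be clear, this is an illustration, not a real invocation: the `uniface` executable switches, the assignment file, and the batch program names (`nightly_report`, `cleanup`) are all hypothetical and would depend on your installation, so the built command lines are echoed rather than executed.

```shell
#!/bin/sh
# Hypothetical polyglot driver: the shell does the file housekeeping,
# Uniface does the batch processing.

DATA_DIR="./data"            # hypothetical working directory
RUN_DATE=$(date +%Y%m%d)

# 1. Simple file manipulation handled by the shell.
mkdir -p "$DATA_DIR/archive"

# 2. Build a Uniface batch command line (switch syntax is illustrative only).
UNIFACE_CMD="uniface /asn=batch.asn nightly_report $RUN_DATE"

# 3. Execute it -- echoed here instead of run, since the executable and
#    assignment file are assumptions.
echo "$UNIFACE_CMD"

# 4. Then initiate some other batch program.
FOLLOWUP_CMD="uniface /asn=batch.asn cleanup $RUN_DATE"
echo "$FOLLOWUP_CMD"
```

The pattern is the point: each language does what it is good at, and the script is the glue between them.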
Another legacy example exploits the Uniface URB call-in/call-out architecture through signature definitions. This extends Polyglot Programming across remote system boundaries, so that the choice of the other programming language can also be dictated by the technology of the remote platform (e.g. C# on Windows, C++ on UNIX).
Even though we’ve been comfortably Polyglot Programming for years, the Uniface mantra is to improve productivity through a single development environment with a high level of abstraction in the programming, i.e. develop once, deploy anywhere. As a result, new Uniface functionality often removes the need for Polyglot Programming, e.g. better OS file-handling proc commands, direct support for DBMS stored procedures (the SSP signature implementation type), and so on. The idea is to bring as much coding as possible back into the UDE and the Uniface repository, and so improve application maintainability.