What makes a 5GL?

Impossible, but ...


Comment From time to time vendors in the application development space have claimed 5GL (5th generation language) capabilities. If you think about this for a moment you will realise that this can't be true. The idea of GLs is that each is an abstraction of the former: we had machine code (on which I cut my teeth as a developer); assembler; third generation languages such as COBOL, Java, C++ and so forth; and finally 4GLs (now sometimes referred to as ABLs, or advanced business languages, since 4GL seems to be out of fashion).

You can't get a further level of abstraction than a 4GL so there can't be such a thing as a 5GL, just as you can't abstract further than meta-meta-metadata (which is what high-end repositories provide).

So, if you can't get beyond a 4GL, what can you do to a 4GL to make it a quantum step forward compared to current 4GLs, even if it isn't actually a 5GL?

One possible answer is that you could build a data federation engine into the development environment. This would allow you to create applications that access diverse, heterogeneous data sources. Data federation has typically been thought of as a technology for EII (enterprise information integration), and the premise has been that it is about queries. But a database look-up is, from a theoretical perspective, just a query, even if it is going to be used for a transactional application, so supporting data federation within a development environment makes sense. Indeed, it is easy to see how such an approach could be used for building MDM (master data management) applications, for example.
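To make the idea concrete, here is a minimal sketch of a federated look-up: one logical "query" that joins a relational source with a flat-file source. This is a generic illustration of the principle, not Abbro's mechanism; the table, file and function names are invented for the example.

```python
import csv
import io
import sqlite3

# One "source" is a relational database (an in-memory SQLite table here).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
db.executemany("INSERT INTO customers VALUES (?, ?)",
               [(1, "Acme Ltd"), (2, "Globex")])

# ...the other is a flat file (simulated with an in-memory CSV).
orders_csv = io.StringIO("customer_id,total\n1,250\n1,99\n2,410\n")
orders = list(csv.DictReader(orders_csv))

def federated_orders():
    """Join the two heterogeneous sources as if they were one database."""
    names = {row[0]: row[1]
             for row in db.execute("SELECT id, name FROM customers")}
    return [(names[int(o["customer_id"])], int(o["total"])) for o in orders]

print(federated_orders())
# [('Acme Ltd', 250), ('Acme Ltd', 99), ('Globex', 410)]
```

The point of the exercise is that the calling application never knows (or cares) that the customer names and the order totals live in entirely different kinds of store.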

So, is anybody actually doing this? Well, as you might guess, yes. A small UK company called Abbro Interactive has a tool called Abbro (now in version 3; it has been in use for some seven years) that does exactly this. You can go to the company's website and take a look at the facilities provided, but the key thing to note is that this is a 4GL with extras. In the case of federation, those extras include the ability to reverse engineer existing databases to create database views that can then be merged or joined, and caching capabilities so that reading the same data from these sources does not mean repeated database access.
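The caching behaviour described above, where repeated reads of the same view data do not mean repeated physical database access, can be sketched with a simple memoising wrapper. This is an illustration of the general technique, assuming a cache keyed on the look-up value, not Abbro's actual implementation:

```python
import sqlite3
from functools import lru_cache

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (sku TEXT, price REAL)")
db.execute("INSERT INTO products VALUES ('A1', 9.99)")

calls = {"count": 0}  # track how often the database is actually hit

@lru_cache(maxsize=128)
def view_lookup(sku):
    """Read through a 'view'; repeated reads are served from the cache."""
    calls["count"] += 1
    row = db.execute("SELECT price FROM products WHERE sku = ?",
                     (sku,)).fetchone()
    return row[0] if row else None

view_lookup("A1")
view_lookup("A1")      # second read never touches the database
print(calls["count"])  # 1 -- a single physical access for two reads
```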

The product is based on its own scripting language which is interpreted dynamically at run-time by the Abbro engine. It has to work this way because it is intended that applications will be event-driven, so event exits may occur at any time. Note that this facilitates the deployment of workflow as well as alerts, notifications and so on. It includes the ability to scan documents and populate forms therefrom, support for bar codes (and, at least in theory, RFID tags) and GPS messages so that you can determine the locations of things.
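An event-driven engine of the sort described, where handlers ("event exits") can be attached and fired at any time, might look like this in miniature. The event names and handlers here are hypothetical, chosen only to echo the bar-code and GPS capabilities mentioned above; this is the general dispatch pattern, not Abbro's scripting language:

```python
# A registry mapping event names to lists of handler functions.
handlers = {}

def on(event):
    """Decorator: register a handler (an 'event exit') for an event name."""
    def register(fn):
        handlers.setdefault(event, []).append(fn)
        return fn
    return register

@on("barcode_scanned")
def fill_form(payload):
    return f"form populated with {payload}"

@on("gps_fix")
def record_location(payload):
    return f"location recorded: {payload}"

def dispatch(event, payload):
    """Run every handler registered for the event as it arrives."""
    return [h(payload) for h in handlers.get(event, [])]

print(dispatch("barcode_scanned", "SKU-1234"))
# ['form populated with SKU-1234']
```

Because dispatch is resolved at run-time against whatever is currently registered, new behaviour can be hung off existing events without rebuilding the application, which is the property that makes workflow, alerts and notifications straightforward to layer on.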

As I said: a 4GL with extras.

However, you can't buy Abbro as a product: at present all you can do is have the company build an application for you using Abbro (based on a proof of concept, if required), which should deliver significantly faster and cost less than would normally be the case. Once you've got the application you can customise it, but the underlying logic will be developed for you by Abbro Interactive. The company does have one package built on Abbro, for export documentation, but the tool's potential uses are much broader than this. The company is considering how it might make use of channel partnerships but, however good the product is, we cannot expect to see widespread deployment while Abbro itself remains the only company doing core development work.

Copyright © 2007, IT-Analysis.com

