Column: Stropek as a Service

AI as a Service: AI services in the cloud should make the technology suitable for everyday use

Artificial intelligence (AI) and machine learning (ML) fire the imagination of many SaaS providers. Wouldn't it be nice if we could replace complicated input masks with an easy-to-use bot?

Why do we still have to type in travel expense receipts when a photo, with a smart AI in the background, could do the job? In practice, teams attempting such endeavors run into a lot of problems, above all a lack of relevant development experience. Ready-made AI services such as Microsoft's Cognitive Services promise a remedy: instead of laboriously developing everything from scratch, you get easily consumable web APIs with usage-based pricing. Does that put typical SaaS projects in the fast lane towards the future of AI?

Off-the-shelf AI has limited value

Answering this question begins with the availability of data. Granted, there are AI services such as Microsoft's Text Analytics and Computer Vision or Google's Cloud Vision API that are completely ready-made. To recognize the language of a text with Text Analytics, for example, you need neither training data nor an understanding of machine learning. If you can send a text to a web API, you are good to go. For some applications this may be enough as an introduction (e.g. assigning a support case to a team member who speaks the right language). In most cases, however, it is not. AI and machine learning only deliver real added value once they are adapted to a specific use case.
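
Just how low the entry barrier is can be illustrated with a short sketch of such a language-detection call. The endpoint path, payload shape and response fields below are assumptions based on the Text Analytics v2 REST API and may differ from the current version; region and key are placeholders.

# Minimal sketch: detect the language of a text via the Text Analytics web API.
# Endpoint path, payload and response fields are assumptions based on the v2 REST API;
# consult the current API reference for the exact contract.
import requests

ENDPOINT = "https://westeurope.api.cognitive.microsoft.com/text/analytics/v2.0/languages"  # assumed
SUBSCRIPTION_KEY = "<your-subscription-key>"  # placeholder

def detect_language(text):
    payload = {"documents": [{"id": "1", "text": text}]}
    response = requests.post(
        ENDPOINT,
        json=payload,
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
    )
    response.raise_for_status()
    detected = response.json()["documents"][0]["detectedLanguages"][0]
    return detected["iso6391Name"]  # e.g. "de" or "en"

print(detect_language("Bitte ordnen Sie diesen Support-Fall dem richtigen Team zu."))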

Customizable AI services

If there is no ready-made AI service off the shelf for your scenario, that doesn't mean you have to build everything from scratch with libraries like TensorFlow or the Microsoft Cognitive Toolkit (CNTK). There is a middle ground: customizable AI and ML models that you can train with your own data. Here are two examples from Microsoft's product portfolio:

  • With the Custom Vision Service (currently in preview), images can be tagged according to your own classification logic. Instead of writing the algorithm by hand or creating a deep learning model from scratch, you provide training data in the form of correctly tagged images. This data is used to train a base model provided by Microsoft. The result is a customized model exposed as a web API with which new images can be tagged (prediction). The service even allows you to export the trained model in order to run it locally.
  • The Language Understanding Service (LUIS) helps with processing natural language. When a user formulates a request in natural language, it is not easy to determine the user's intent (e.g. navigate, order a product, book a trip) and any parameters contained in the sentence (entities, e.g. destination, product name, travel date). This ability is indispensable when programming a bot, for example. LUIS solves exactly this problem. Training data is provided in the form of example sentences (utterances) with correct assignments to intents and parameters (Fig. 1). The trained model can be deployed with just a few clicks. The resulting web API can be used directly (see the sketch after Fig. 1) or linked to the Azure Bot Service to develop a bot.

Fig. 1: Microsoft LUIS
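
To give an impression of what the resulting web API delivers, here is a minimal sketch of a prediction call against a trained LUIS model. The endpoint shape and the response fields (topScoringIntent, entities) are assumptions based on the LUIS v2 endpoint; app ID, key and region are placeholders.

# Minimal sketch: send a user utterance to a trained LUIS model and read intent and entities.
# Endpoint shape and response fields are assumptions based on the LUIS v2 endpoint;
# check the current documentation before relying on them.
import requests

REGION = "westeurope"                 # assumed region
APP_ID = "<your-luis-app-id>"         # placeholder
ENDPOINT_KEY = "<your-endpoint-key>"  # placeholder

def analyze(utterance):
    url = f"https://{REGION}.api.cognitive.microsoft.com/luis/v2.0/apps/{APP_ID}"
    response = requests.get(url, params={"q": utterance, "subscription-key": ENDPOINT_KEY})
    response.raise_for_status()
    result = response.json()
    return {
        "intent": result["topScoringIntent"]["intent"],
        "entities": [(e["type"], e["entity"]) for e in result["entities"]],
    }

print(analyze("Book me a trip to Vienna next Friday"))
# Depending on the trained model, something like: {'intent': 'BookTrip', 'entities': [...]}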

Data is worth its weight in gold

These two examples show the fundamentally different approach to “programming” (semi-)finished AI services compared to the classic development of program libraries. Our role as developers is no longer to write the algorithm; we have to take care of the training data. This task is anything but trivial, because the quality of the resulting deep learning model stands or falls with the quality of the training data. If too little data is available, or if the existing training data sets are faulty (e.g. incorrect tagging), of poor quality (e.g. bad image quality, photos that are too similar) or not representative (e.g. sample sentences that no real user would ever use), the result is useless. Moreover, training data alone is not enough: additional data sets are needed to test the models.
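
A simple but effective habit is therefore to hold back part of the labeled data for testing before any training happens. The following sketch shows the idea in plain Python; the utterance/intent pairs are made up purely for illustration.

# Sketch: reserve part of the labeled utterances as a test set before training.
# The sample data is invented purely for illustration.
import random

labeled_utterances = [
    ("Book a flight to Vienna", "BookTrip"),
    ("I need a hotel in Berlin for two nights", "BookTrip"),
    ("Show me my last invoice", "ShowInvoice"),
    ("Order three packs of printer paper", "OrderProduct"),
    # ... in practice, hundreds of representative, correctly labeled examples
]

random.shuffle(labeled_utterances)
split = int(len(labeled_utterances) * 0.8)
training_set = labeled_utterances[:split]  # fed into the training API
test_set = labeled_utterances[split:]      # never used for training, only to measure quality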

Data is the new gold in the world of AI and ML. Even ready-made AI services in the cloud do not change that; on the contrary. A team that wants to enter this world first has to ask itself how it can get the necessary data. This hurdle is what makes it so difficult for start-ups to get started. Established companies either have existing databases or can fall back on an existing community that can be motivated to test AI-based software components such as bots, give feedback and thereby indirectly provide the necessary training data.

Iterative model development

An important aspect in this context is iterative model development. Customizable AI services like those mentioned above contain ready-made components with which you can review real data from your company (e.g. sentences that users have said to a bot or images that have been uploaded for tagging). If you discover classification errors, you can simply add the real data with correct metadata to the training set and thereby improve the AI model step by step (Fig. 2).

Fig. 2: Tagging of an image in the Custom Vision Service
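
This feedback loop can itself be automated. The sketch below outlines the idea: misclassified real-world utterances are collected, corrected by a human reviewer and fed back into the training set. The methods add_example and retrain stand in for whatever training API the chosen service offers and are hypothetical.

# Sketch of the iterative feedback loop: review real utterances, correct mistakes,
# feed them back as training data and retrain. `add_example` and `retrain` are
# hypothetical stand-ins for the training API of the chosen AI service.
def review_and_improve(model, real_utterances, reviewer):
    corrections = []
    for utterance in real_utterances:
        predicted = model.predict(utterance)        # current model output
        corrected = reviewer(utterance, predicted)  # human confirms or fixes the label
        if corrected != predicted:
            corrections.append((utterance, corrected))

    for utterance, intent in corrections:
        model.add_example(utterance, intent)        # hypothetical training API call

    if corrections:
        model.retrain()                             # hypothetical: produces a new model version
    return len(corrections)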

For this iterative approach to work in practice, mechanisms must be in place that make versioning, testing and putting models into production simple and robust. AI services are usually offered serverless: as a development team, you don't have to worry about operating or scaling servers in any way. You can deploy a model with one click, distinguish between test and production environments, rely on built-in version management, export models for archiving in source code management and much more (Fig. 3). Such functions reduce the effort for administration and DevOps processes to a minimum.

Fig. 3: Deployment of a LUIS model
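
Even with such built-in mechanisms, it helps to keep test and production models apart in your own configuration. Here is a minimal sketch, assuming a staging/production split like the one shown in Fig. 3; slot names, version labels and the endpoint pattern are assumptions.

# Sketch: keep test and production model versions apart in configuration.
# Slot names, version labels and the endpoint pattern are assumptions.
from dataclasses import dataclass

@dataclass
class ModelDeployment:
    app_id: str
    version: str  # e.g. "0.3", managed by the service's built-in versioning
    slot: str     # "staging" or "production"

    @property
    def endpoint(self):
        # Hypothetical endpoint pattern; the real URL comes from the AI service.
        staging_flag = "true" if self.slot == "staging" else "false"
        return (f"https://westeurope.api.cognitive.microsoft.com/luis/v2.0/"
                f"apps/{self.app_id}?staging={staging_flag}")

staging = ModelDeployment(app_id="<app-id>", version="0.4", slot="staging")
production = ModelDeployment(app_id="<app-id>", version="0.3", slot="production")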

Metaprogramming via API

Another characteristic of AI services is important for SaaS providers: all functions are not only available interactively via a web UI; exactly the same functions can also be automated via web APIs. When developing a multi-tenant SaaS solution, you often cannot lump all end customers together. Every customer has slightly different requirements: data models differ, workflows are customer-specific, master data is naturally different for each customer, and much more. If you want to offer each SaaS end customer an individual bot, for example, the model must differ from customer to customer so that a high-quality result can be achieved. The training data is different, and in many cases the models also differ structurally.

As a SaaS provider, you can use the APIs of the AI services for metaprogramming. This means you write a program that is not used by the end customer directly, but instead creates another program: in this case an AI model, built with the help of an AI service.
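
A minimal sketch of this idea: for each tenant, a separate model is created, filled with that tenant's training data, trained and published, all through the authoring API of the AI service. The authoring client and its methods are hypothetical placeholders for whatever API the chosen service actually exposes.

# Sketch: per-tenant model creation driven entirely through the AI service's APIs.
# `authoring_client` and its methods are hypothetical placeholders; services such as
# LUIS or the Custom Vision Service expose comparable authoring and training endpoints.
def provision_tenant_models(tenants, authoring_client):
    endpoints = {}
    for tenant in tenants:
        # 1. Create an empty model (an "app" or "project") for this tenant.
        model_id = authoring_client.create_model(name=f"bot-{tenant.id}")

        # 2. Upload tenant-specific training data (intents, utterances, tags, ...).
        for utterance, intent in tenant.training_data:
            authoring_client.add_utterance(model_id, utterance, intent)

        # 3. Train and publish; the result is a tenant-specific prediction web API.
        authoring_client.train(model_id)
        endpoints[tenant.id] = authoring_client.publish(model_id, slot="production")
    return endpoints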

Challenges

It all sounds tempting, doesn't it? AI and ML can be used in any project without any problems, even without relevant prior knowledge and with only a limited budget. Although this statement is fundamentally correct, there are some challenges to master in the details. The first was mentioned briefly above: you need a lot of good-quality training data. In times of the GDPR, this is not only a technical but also a legal hurdle.

The second challenge is the risk of expecting more from the selected AI service than it can offer. As already mentioned, modern AI services give you options for adapting the ready-made models, but you cannot control every aspect. After all, reducing complexity is precisely the strength of these services. Compared to classic SaaS and PaaS services from the cloud, evaluating AI services is much more difficult. Up until now you could compare feature lists; with AI services this is no longer so easy. Suppose you want to develop a SaaS solution in which the recognition of car license plates plays a role. Is Microsoft's Computer Vision service suitable for this? Can you build a good solution with it if you prepare the images for training and real operation accordingly? Would the Google equivalent deliver better results? In my experience, these questions cannot be answered theoretically. You have to build prototypes, or you need help from people who have domain-specific experience with the selected AI services.
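
In practice such a prototype can be quite small: a labeled test set and a thin wrapper around each candidate service are often enough for a first comparison. The sketch below shows the idea; the two recognizer functions stand in for calls to the respective vision APIs and are hypothetical.

# Sketch: compare two candidate AI services against the same labeled test set.
# The recognizer functions passed in stand in for calls to the respective vision APIs
# (e.g. Microsoft's or Google's) and are hypothetical.
def accuracy(recognize, test_images):
    hits = 0
    for image_path, expected_plate in test_images:
        if recognize(image_path) == expected_plate:
            hits += 1
    return hits / len(test_images)

def evaluate(test_images, recognize_with_service_a, recognize_with_service_b):
    return {
        "service_a": accuracy(recognize_with_service_a, test_images),
        "service_b": accuracy(recognize_with_service_b, test_images),
    }

# test_images = [("plates/001.jpg", "W-12345X"), ...]  # labeled, representative samples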

Conclusion

AI and ML projects are often adventures in which vast amounts of money and resources are sunk. Ready-to-use AI services in the cloud, which can be adapted to the respective domain, offer a shortcut in many cases and reduce project risk. However, anyone who thinks that such projects are trivial will be disappointed. Handling the data, automating the accompanying DevOps processes, evaluating the available AI services from various vendors and much more demand a serious engagement with the topic. Otherwise you may get a result quickly, but one that offers no real added value from the user's point of view.