Cloud computing has become increasingly popular among enterprises and IT professionals. Flexibility, efficiency, performance, and innovation are the benefits that draw the most attention in this area. In complex systems that require problem detection and resolution, it is important to pay attention to a fundamental aspect: observability.
In this article, we'll talk a bit about web application monitoring, microservices, and how we use all of this in the BioPass ID cloud-based biometrics platform.
Do you want to understand more? Read on.
What is Cloud Observability?
Cloud observability is a concept widely used in the world of technology that refers to the ability to monitor, analyze, understand, and anticipate the behavior of web applications in general.
This way we can ensure the stability of even the most complex applications on the market, bringing more reliability to the system and improving the user experience.
First of all, let's talk about what Web Applications are and how they are created.
What are Web Applications?
In short, web applications are software that is accessed over the Internet. It does not matter whether the access is through a browser, a smartphone, or any other device.

Web applications are created by IT professionals. Just as in construction, several types of professionals are involved in building a web application and in its life cycle.
The process of creating a web application involves teams of software engineers who design the structure of the application, front-end designers who prepare the pages and layouts, and support analysts who ensure that everything is working as it should.
Web Application Models
There are several software architecture models for building web applications.
- Monolithic
Monolithic is the most traditional model. In it, the application is built as one large, robust block that serves all of the application's functions in a single process. This model has the advantage of being easy to build and maintain.
- Microservices
Another well-known and widely used model is Microservices. In this architectural model, each function of the application is separated into a small, isolated application, also called a Service. Each of these services handles a small part of the application, which is why they are commonly called Microservices.
The Microservices model has the advantage of letting us dynamically scale up and down only the resources that are actually being used by the application. Unlike the monolithic model, where the whole application runs as one large, single service, each microservice can grow and shrink independently to meet unexpected demand.
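To make the idea more concrete, here is a minimal sketch of what one isolated service might look like. It uses Python with the Flask library purely as an illustration; the endpoint names and responses are hypothetical and not part of any real BioPass ID service.

```python
# A minimal sketch of a single microservice, assuming Flask is installed.
# The endpoints and responses are purely illustrative.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/match", methods=["POST"])
def match():
    # In a real biometrics service this would run a matching algorithm;
    # here we just return a placeholder response.
    return jsonify({"matched": False, "score": 0.0})

@app.route("/health")
def health():
    # Health endpoints let orchestrators decide when to scale or restart the service.
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    # Each microservice runs as its own process and can be scaled independently.
    app.run(host="0.0.0.0", port=8080)
```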
The BioPass ID cloud-based biometrics platform uses the Microservices model to deliver Biometrics as a Service to its customers. With this, each of the functionalities delivered by our platform is built as a small, independent, and isolated application.
Each of them can be scaled individually, ensuring that access peaks are handled without affecting other platform features. Low-utilization resources are not scaled unnecessarily, and our core services stay prepared to meet any demand.
Microservices and APIs
In general, microservices are accessed through APIs. The term API stands for Application Programming Interface. We can briefly describe them as the standard form of communication between distinct web applications.
While humans interact with web applications using browser pages, cell phone applications, and the like, in general, when web applications communicate with each other, they use APIs.

You should keep in mind that microservices and APIs are different concepts. While the former is an architectural model for building web applications, the latter is a way for applications to communicate with each other, whether they are microservices or monoliths!
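As a rough illustration of one application talking to another application's API, the sketch below sends an HTTP request to the hypothetical service from the earlier example. The `requests` library, the URL, and the payload are assumptions made only for this example.

```python
# A minimal sketch of calling another application's API over HTTP,
# assuming the `requests` library and a hypothetical service URL.
import requests

# Hypothetical endpoint exposed by the microservice sketched earlier.
SERVICE_URL = "http://localhost:8080/match"

response = requests.post(
    SERVICE_URL,
    json={"template_id": "abc123"},  # illustrative payload only
    timeout=5,
)
response.raise_for_status()
print(response.json())  # prints the decoded JSON response from the service
```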
Observability and Monitoring
With the popularization of the Microservices model in software development, monitoring complex web applications composed of several microservices and APIs, as well as internal and external applications, has become a major challenge.
More than just monitoring the operation of an application, we need to relate the behavior of dozens or even hundreds of microservices and their external resources, such as databases, data storage, and messaging systems.
Pillars of observability
The main pillars of monitoring are metrics, events, logs, and traces.
When we bring them all together, we create an ecosystem that generates observability.
Let's take a closer look at each of them.
Metrics

Metrics are values obtained by observing the processes and resources of an application, and they usually vary over time. The most common metrics we can measure in an application are its CPU usage, RAM usage, and the number of accesses and errors over a period of time.
By observing how these metrics vary over time, it is possible to find a lot of useful information. We can identify peak access times by looking at historical data, as well as find out which microservices are approaching their resource limits and prepare to expand them.
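As a simple illustration, the sketch below samples two of the metrics mentioned above, CPU and RAM usage, over time. It assumes the third-party `psutil` library is available; a real setup would ship these samples to a monitoring backend instead of printing them.

```python
# A minimal sketch of collecting resource metrics over time, assuming the
# third-party `psutil` library is installed.
import time
import psutil

def sample_metrics():
    return {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),   # CPU usage over the last second
        "ram_percent": psutil.virtual_memory().percent,  # RAM usage as a percentage
    }

if __name__ == "__main__":
    for _ in range(5):
        # Each sample becomes one data point in a time series.
        print(sample_metrics())
```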
Events
Events are pieces of information generated by web applications about specific internal processes that occurred at a certain time, and they can be used to report errors, problems, and application business logic.
An application can generate an event, for example, when a user purchases an item in its e-commerce store. With this event saved, we can check the times with the highest sales volume and how they correlate with other metrics in the application.
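A rough sketch of what emitting such an event could look like is shown below. The event fields and the purchase scenario are illustrative assumptions, not a specific event format.

```python
# A minimal sketch of emitting a structured business event; in practice the
# event would go to an event pipeline or monitoring system, not stdout.
import json
import uuid
from datetime import datetime, timezone

def emit_event(name, **attributes):
    event = {
        "id": str(uuid.uuid4()),
        "name": name,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **attributes,
    }
    print(json.dumps(event))

# Hypothetical purchase event, matching the e-commerce example above.
emit_event("purchase_completed", user_id="user-42", item="plan-premium", amount=29.90)
```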
Logs

Logs are information generated by running applications, usually in unstructured text format with a specific date and time. They are present in virtually every application in the world and differ from Events in that they are more generic, being produced by the execution of the application itself rather than by specific events.
Logs are the simplest way to diagnose errors and problems in applications, and it is usually through them that the support analyst begins the work of investigating errors and their causes.
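As an illustration, the sketch below uses Python's standard logging module to produce timestamped log lines, including one with a stack trace, which is usually the kind of record an analyst starts from. The service and message names are made up for the example.

```python
# A minimal sketch of application logging with Python's standard library.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s - %(message)s",
)
logger = logging.getLogger("checkout-service")  # hypothetical service name

logger.info("Processing request for user %s", "user-42")
try:
    raise ConnectionError("database unreachable")
except ConnectionError:
    # exc_info attaches the stack trace, which often reveals the root cause.
    logger.error("Failed to process request", exc_info=True)
```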
Traces

Traces are data obtained by observing the chain of events generated by accessing a resource of a web application. Every access to a web application, whether through an API or a web page, generates a series of transactions that may pass through various internal and external resources of the application.
Each of these steps generates one or more Spans, which carry important information about the transactions performed by the application and can be grouped into a Trace. By looking at Traces, we can identify internal application problems or points of low performance.
Observing the interaction between the Spans of a Trace can show where in the application there are bottlenecks that slow users down. With access to this information, a team of developers can find and solve hard-to-find problems in complex applications.
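To give an idea of how Spans and Traces are produced in code, the sketch below uses the OpenTelemetry Python SDK (discussed later in this article) and prints finished spans to the console. The span names and attributes are illustrative assumptions.

```python
# A minimal sketch of creating a Trace made of nested Spans with the
# OpenTelemetry Python SDK (opentelemetry-sdk); span names are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Export finished spans to the console so the example is self-contained.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example-app")

with tracer.start_as_current_span("handle_request") as request_span:
    request_span.set_attribute("http.route", "/match")
    with tracer.start_as_current_span("query_database"):
        pass  # each step of the transaction becomes a child Span
    with tracer.start_as_current_span("run_matching"):
        pass  # a slow Span here would reveal a performance bottleneck
```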
The correlation and analysis of these four types of data can further improve our ability to identify patterns and solve problems. We can also turn one type into another, for example by converting the rate of log creation into metrics and events.
What about monitoring systems?
Observability allows applications to be effectively analyzed by having them output telemetry data that helps us observe their behavior and correct errors and anomalies.
There are many applications, open and closed source, that specialize in monitoring each of these types of telemetry data. Typically, some of this data can be easily collected and sent to a monitoring system, while other data requires the application to be prepared to send it correctly.

You can use a feature called auto-instrumentation to make it easy to add Traces to pre-existing applications. With this feature, a code library is added to the application and automatically generates traces for its internal processes.
You can also modify the application to enrich the traces with application-specific information, custom metrics, and data correlation. Some of these libraries are open source, while others belong to licensed monitoring platforms.
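As one concrete example of this kind of library, the sketch below adds OpenTelemetry's open source instrumentation for Flask and for the `requests` HTTP client to an existing application. It assumes the opentelemetry-instrumentation-flask and opentelemetry-instrumentation-requests packages are installed and that a tracer provider has already been configured, as in the Traces example above.

```python
# A minimal sketch of instrumenting an existing Flask application, assuming
# the OpenTelemetry instrumentation packages for Flask and requests are
# installed and a tracer provider is already configured.
from flask import Flask
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry.instrumentation.requests import RequestsInstrumentor

app = Flask(__name__)

# Every incoming HTTP request now automatically starts a Span...
FlaskInstrumentor().instrument_app(app)
# ...and every outgoing call made with `requests` becomes a child Span.
RequestsInstrumentor().instrument()

@app.route("/health")
def health():
    return {"status": "ok"}

if __name__ == "__main__":
    app.run(port=8080)
```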

In addition, it is very common for each monitoring system to use its own standards and protocols for collecting, storing, and visualizing each of these types of data. Choosing the most appropriate tool is therefore essential to ensure good monitoring of a web application.
To address the lack of a standard and the rework involved in instrumenting applications, OpenTelemetry was created as a project of the CNCF, famous for incubating large open source projects like Kubernetes, together with Google.
But what is OpenTelemetry?

OpenTelemetry was created by merging two large open source projects: CNCF's OpenTracing, which was focused on creating an API with no dependency on licensed applications for sending data to monitoring systems, and Google's OpenCensus, which was focused on creating automated instrumentation libraries for several programming languages.
With the unification of both projects in OpenTelemetry, the efforts of the community and of several major players in the market came together to create a standard for collecting and sending application telemetry that can be used by everyone.
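As a small illustration of that standard in practice, the sketch below configures an application to export spans over the vendor-neutral OTLP protocol. It assumes the opentelemetry-exporter-otlp package is installed and that an OTLP-compatible collector is listening on the default gRPC port 4317; any backend that speaks OTLP could receive this data.

```python
# A minimal sketch of exporting spans over OTLP, assuming the
# opentelemetry-exporter-otlp package and a local OTLP collector on port 4317.
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

# From here on, any span created with trace.get_tracer(...) is sent to the
# collector, which can forward it to whichever monitoring backend you choose.
```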
Web and Cloud Applications
In this article, we began to explore the world of monitoring web and cloud applications. Effectively monitoring complex web applications made up of many microservices is a challenging task, but it pays off for the users of these applications.
By using industry standards and specialized monitoring systems, the BioPass ID cloud-based biometrics platform can ensure that its applications and microservices work in harmony, with high performance and availability, delivering biometrics as a service to your users.
Interested and want to read more about biometrics in the cloud? Check out its benefits and why you should use it.
See you in my next article!