
Making Logs Meaningful: The Role of Mapped Diagnostic Context

Varun Nair


Logs let you track a program's execution and see how it is faring at any given moment. Based on my experience here at QBurst, let me show you how to produce meaningful logs with proper contextual information in a backend application. This post looks specifically at logging in Java, shows how logs can be made meaningful, and then introduces Mapped Diagnostic Context (MDC) logging.

Take a look at these log excerpts from a website’s backend server that received simultaneous requests from two users to save a user setting:

2020-04-09T15:29:53.910Z DEBUG MyLogger.doFilterInternal(45) - POST HTTP/1.1 /users/settings/save
2020-04-09T15:29:53.922Z INFO  MyService.performAction(181) - Try save user setting
2020-04-09T15:29:53.951Z DEBUG MyInterceptor.log(27) - --> GET http://www.example.com/save/user?id=1832
2020-04-09T15:29:53.990Z DEBUG MyLogger.doFilterInternal(45) - POST HTTP/1.1 /users/settings/save
2020-04-09T15:29:54.009Z INFO  MyService.performAction(181) - Try save user setting
2020-04-09T15:29:54.030Z DEBUG MyInterceptor.log(27) - --> GET http://www.example.com/save/user?id=983
2020-04-09T15:29:56.222Z DEBUG MyInterceptor.log(31) - <-- 200 OK Saved
2020-04-09T15:29:56.235Z INFO  MyService.performAction(201) - Saved user setting
2020-04-09T15:29:56.472Z DEBUG MyInterceptor.log(31) - <-- 200 OK Saved
2020-04-09T15:29:56.481Z INFO  MyService.performAction(201) - Saved user setting

The logs for the user with id 1832 are interleaved above with those of the other user. When there are more users or more logs, identifying the logs of a particular user becomes difficult. We could log the user ids too, and that seemed sufficient. Or so we thought.

2020-04-09T15:29:53.910Z DEBUG MyLogger.doFilterInternal(45) - POST HTTP/1.1 /users/settings/save
2020-04-09T15:29:53.922Z INFO  MyService.performAction(181) - Try save user setting with id: 1832
2020-04-09T15:29:53.951Z DEBUG MyInterceptor.log(27) - --> GET http://www.example.com/save/user?id=1832
2020-04-09T15:29:53.990Z DEBUG MyLogger.doFilterInternal(45) - POST HTTP/1.1 /users/settings/save
2020-04-09T15:29:54.009Z INFO  MyService.performAction(181) - Try save user setting with id: 983
2020-04-09T15:29:54.030Z DEBUG MyInterceptor.log(27) - --> GET http://www.example.com/save/user?id=983
2020-04-09T15:29:56.222Z DEBUG MyInterceptor.log(31) - <-- 200 OK Saved
2020-04-09T15:29:56.235Z INFO  MyService.performAction(201) - Saved user setting with id: 1832
2020-04-09T15:29:56.472Z DEBUG MyInterceptor.log(31) - <-- 200 OK Saved
2020-04-09T15:29:56.481Z INFO  MyService.performAction(201) - Saved user setting with id: 983

The limitation of this solution shows up when we want to distinguish between two log traces of the same user that happened at about the same time:

2020-04-09T15:29:53.910Z DEBUG MyLogger.doFilterInternal(45) - POST HTTP/1.1 /users/settings/save
2020-04-09T15:29:53.922Z INFO  MyService.performAction(181) - Try save user setting with id: 983
2020-04-09T15:29:53.951Z DEBUG MyInterceptor.log(27) - --> GET http://www.example.com/save/user?id=983
2020-04-09T15:29:53.990Z DEBUG MyLogger.doFilterInternal(45) - POST HTTP/1.1 /users/settings/save
2020-04-09T15:29:54.009Z INFO  MyService.performAction(181) - Try save user setting with id: 983
2020-04-09T15:29:54.030Z DEBUG MyInterceptor.log(27) - --> GET http://www.example.com/save/user?id=983
2020-04-09T15:29:56.222Z DEBUG MyInterceptor.log(31) - <-- 200 OK Saved
2020-04-09T15:29:56.235Z INFO  MyService.performAction(201) - Saved user setting with id: 983
2020-04-09T15:29:56.472Z DEBUG MyInterceptor.log(31) - <-- 200 OK Saved
2020-04-09T15:29:56.481Z INFO  MyService.performAction(201) - Saved user setting with id: 983

So there it was: we could no longer distinguish between the two log traces. We had to look at the problem differently. To address it, we thought of adding to the log an identifier corresponding to each HTTP request hitting our server. Such an identifier would distinguish the contexts, but generating it and repeating it by hand in every log statement would make the code verbose.

 

This brought us to Mapped Diagnostic Context (MDC), which does most of the work behind the scenes with little intervention from the developer.   

Let’s see how to set up MDC in a Java Spring Boot project. If you are using Log4j or SLF4J/Logback, you can use:

MDC.put("myProp", "contextValue");

If you are using Log4j2, you can use:

ThreadContext.put("myProp", "contextValue");

Now, retrieve “myProp” in the log pattern using %X{myProp}. For example, you can use this:

%date{yyyy-MM-dd'T'HH:mm:ss.SSSXXX, UTC} [%X{myProp}] %-5level %logger{0}.%M\(%line\) - %msg%n

With MDC, we get the logs below. Note the hash after the timestamp in each line. Yes, this looks scalable.

2020-04-09T15:29:53.910Z [0b644eacd273d4c0] DEBUG MyLogger.doFilterInternal(45) - POST HTTP/1.1 /users/settings/save
2020-04-09T15:29:53.922Z [0b644eacd273d4c0] INFO  MyService.performAction(181) - Try save user setting
2020-04-09T15:29:53.951Z [0b644eacd273d4c0] DEBUG MyInterceptor.log(27) - --> GET http://www.example.com/save/user?id=983
2020-04-09T15:29:53.990Z [3ecc5246ce0dc675] DEBUG MyLogger.doFilterInternal(45) - POST HTTP/1.1 /users/settings/save
2020-04-09T15:29:54.009Z [3ecc5246ce0dc675] INFO  MyService.performAction(181) - Try save user setting
2020-04-09T15:29:54.030Z [3ecc5246ce0dc675] DEBUG MyInterceptor.log(27) - --> GET http://www.example.com/save/user?id=983
2020-04-09T15:29:56.222Z [0b644eacd273d4c0] DEBUG MyInterceptor.log(31) - <-- 200 OK Saved
2020-04-09T15:29:56.235Z [0b644eacd273d4c0] INFO  MyService.performAction(201) - Saved user setting
2020-04-09T15:29:56.472Z [3ecc5246ce0dc675] DEBUG MyInterceptor.log(31) - <-- 200 OK Saved
2020-04-09T15:29:56.481Z [3ecc5246ce0dc675] INFO  MyService.performAction(201) - Saved user setting

Note the unique identifier added to the logs, which ties together the log traces generated from the same request. And it gets even better when we put the user identifier into the MDC context too. Now we have contextual information even in the logs from “MyLogger” and “MyInterceptor”.

2020-04-09T15:29:53.910Z [user=1832, tag=0b644eacd273d4c0] DEBUG MyLogger.doFilterInternal(45) - POST HTTP/1.1 /users/settings/save
2020-04-09T15:29:53.922Z [user=1832, tag=0b644eacd273d4c0] INFO  MyService.performAction(181) - Try save user setting
2020-04-09T15:29:53.951Z [user=1832, tag=0b644eacd273d4c0] DEBUG MyInterceptor.log(27) - --> GET http://www.example.com/save/user?id=1832
2020-04-09T15:29:53.990Z [user=983, tag=3ecc5246ce0dc675] DEBUG MyLogger.doFilterInternal(45) - POST HTTP/1.1 /users/settings/save
2020-04-09T15:29:54.009Z [user=983, tag=3ecc5246ce0dc675] INFO  MyService.performAction(181) - Try save user setting
2020-04-09T15:29:54.030Z [user=983, tag=3ecc5246ce0dc675] DEBUG MyInterceptor.log(27) - --> GET http://www.example.com/save/user?id=983
2020-04-09T15:29:56.222Z [user=1832, tag=0b644eacd273d4c0] DEBUG MyInterceptor.log(31) - <-- 200 OK Saved
2020-04-09T15:29:56.235Z [user=1832, tag=0b644eacd273d4c0] INFO  MyService.performAction(201) - Saved user setting
2020-04-09T15:29:56.472Z [user=983, tag=3ecc5246ce0dc675] DEBUG MyInterceptor.log(31) - <-- 200 OK Saved
2020-04-09T15:29:56.481Z [user=983, tag=3ecc5246ce0dc675] INFO  MyService.performAction(201) - Saved user setting
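
Under the hood, MDC is little more than a per-thread map that the layout consults for %X{...} keys. Here is a minimal plain-Java sketch of that mechanism, using a hypothetical MiniMdc class standing in for SLF4J’s MDC (the key names "user" and "tag" match the log excerpts above):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class MiniMdc {
    // MDC is essentially a per-thread map of context values.
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) { CTX.get().put(key, value); }
    public static String get(String key) { return CTX.get().get(key); }
    public static void clear() { CTX.get().clear(); }

    // What a layout does with %X{user} and %X{tag}: prefix every message.
    public static String format(String message) {
        return "[user=" + get("user") + ", tag=" + get("tag") + "] " + message;
    }

    public static void main(String[] args) {
        // At the start of a request, a filter would set these once...
        put("user", "1832");
        put("tag", UUID.randomUUID().toString().substring(0, 16));
        // ...and every later log line carries them automatically.
        System.out.println(format("Try save user setting"));
        clear(); // done at the end of the request
    }
}
```

Because the map is thread-local, code deep inside the call stack never passes the user id or request tag around explicitly; the logging layer picks them up on its own.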

A few things to note:

  • Logs can be enriched with information regardless of where the actual logging occurred. You needn’t pass around all the contextual information (like the “user” data in the above example).
  • MDC works well in concurrent backend systems too. If you add a property to MDC from a parent Java thread, the property is inherited by all the threads spawned from that parent. You can think of MDC not as a plain ThreadLocal variable, which is confined to a single thread, but as an InheritableThreadLocal variable, which child threads inherit.
  • When dealing with microservices, tracing the logs and monitoring them is complex and vital at the same time. If you are writing services with Spring Boot, you can use Spring Cloud Sleuth, which automatically injects a unique id to a single request or job (called trace-id). A trace id can be split into multiple “spans” if needed. Sleuth has the option to integrate with aggregators like Zipkin.
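
The inheritance behavior described in the second point above can be seen with plain Java’s InheritableThreadLocal, which MDC resembles. A small sketch (hypothetical InheritDemo class and "request-42" value, assuming the child thread is created after the value is set):

```java
public class InheritDemo {
    // MDC behaves like an InheritableThreadLocal: values set in a parent
    // thread are visible in threads the parent spawns afterwards.
    private static final InheritableThreadLocal<String> CONTEXT = new InheritableThreadLocal<>();

    public static String childSees() throws InterruptedException {
        CONTEXT.set("request-42"); // hypothetical request tag
        final String[] seen = new String[1];
        Thread child = new Thread(() -> seen[0] = CONTEXT.get());
        child.start();
        child.join();              // wait for the child to read the value
        return seen[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(childSees()); // prints "request-42"
    }
}
```

The value is copied when the child thread is constructed, which is exactly why thread pools (whose threads are created once, up front) will turn out to be a problem later in this post.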

Now let’s take a more complicated concurrent system built on the Spring Batch framework and check whether MDC works well there.

MDC in a Spring Batch Application

We have a small Spring Batch program that runs in a web application. Its objective is to process a large sequence of documents from a MongoDB database, extract meaningful data in the shortest time possible, and eventually save the inferences back to MongoDB. We had a simple batch processing system set up for this.

Architecture of a Spring Batch processing system.  

In our case, the batch job is multi-threaded and consists of a single step (which is itself multi-threaded). The step has a reader, a processor, and a writer: the reader takes an input, the processor processes it, and the writer handles the output. The processor is also multi-threaded.

So in our org.springframework.batch.core.configuration.annotation.BatchConfigurer implementation, we configured a custom JobLauncher. Here, we populated the context for the whole job, making it available across the step threads, the reader, the processor threads, the writer, the various job listeners, and even the HTTP interceptors:

public JobLauncher getJobLauncher() throws Exception {
    SimpleJobLauncher jobLauncher = new SimpleJobLauncher() {
        @Override
        public JobExecution run(Job job, JobParameters jobParameters)
                throws JobParametersInvalidException, JobExecutionAlreadyRunningException,
                JobRestartException, JobInstanceAlreadyCompleteException {
            MDC.put("user", jobParameters.getString("username"));
            MDC.put("#job", jobParameters.getString("jobId"));
            return super.run(job, jobParameters);
        }
    };
    jobLauncher.setJobRepository(getJobRepository());
    jobLauncher.setTaskExecutor(new SimpleAsyncTaskExecutor());
    jobLauncher.afterPropertiesSet();
    return jobLauncher;
}

This should have worked, but it didn’t. We were baffled, and it took us a while to realize what was happening. The multi-threading here is a Spring-flavored java.util.concurrent.ThreadPoolExecutor, based on Java’s Executor framework. The Executor framework reduces the overhead of creating threads by reusing a thread pool to execute tasks. Because threads are reused rather than spawned fresh, the MDC context is not inherited, so we have to propagate it explicitly. The solution was a TaskDecorator-style wrapper that copies the MDC context into the current Runnable task:

public class ContextAwareExecutorDecorator implements Executor, TaskExecutor {

    private final Executor executor;

    public ContextAwareExecutorDecorator(Executor executor) {
        this.executor = executor;
    }

    @Override
    public void execute(Runnable task) {
        // Capture the caller's MDC context before handing the task to the executor.
        final Map<String, String> callerContextCopy = MDC.getCopyOfContextMap();
        executor.execute(() -> {
            MDC.clear();
            if (callerContextCopy != null) {
                MDC.setContextMap(callerContextCopy);
            }
            try {
                task.run();
            } finally {
                MDC.clear(); // always clean up, even if the task throws
            }
        });
    }
}

And apply the decorator around the SimpleAsyncTaskExecutor:

public JobLauncher getJobLauncher() throws Exception {
    SimpleJobLauncher jobLauncher = new SimpleJobLauncher() {
        @Override
        public JobExecution run(Job job, JobParameters jobParameters)
                throws JobParametersInvalidException, JobExecutionAlreadyRunningException,
                JobRestartException, JobInstanceAlreadyCompleteException {
            MDC.put("user", jobParameters.getString("username"));
            MDC.put("#job", jobParameters.getString("jobId"));
            return super.run(job, jobParameters);
        }
    };
    jobLauncher.setJobRepository(getJobRepository());
    jobLauncher.setTaskExecutor(new ContextAwareExecutorDecorator(new SimpleAsyncTaskExecutor()));
    jobLauncher.afterPropertiesSet();
    return jobLauncher;
}
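
Incidentally, the root cause (a pool’s worker threads are created once and reused, so context set later by the caller never reaches them) can be reproduced with plain JDK classes, no Spring or SLF4J involved. A sketch, with a hypothetical PoolDemo class and a one-thread pool:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolDemo {
    private static final InheritableThreadLocal<String> CONTEXT = new InheritableThreadLocal<>();

    public static String poolSees() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        // Warm up the pool so its worker thread exists *before* the context is set.
        pool.submit(() -> { }).get();
        CONTEXT.set("job-1"); // hypothetical job tag, set in the caller thread
        // The reused worker was created earlier, so it inherited nothing.
        String seen = pool.submit(() -> CONTEXT.get()).get();
        pool.shutdown();
        return seen;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(poolSees()); // prints "null"
    }
}
```

This is exactly why the decorator has to copy the context map onto each task instead of relying on thread inheritance.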

Now the logs are more readable with all the context data when we run our Spring Batch program:  

10:18:00.515 [user=varun, #job=cb7e83f9] INFO  o.s.b.c.l.support.SimpleJobLauncher - Job: [FlowJob: [name=myJob]] launched with the following parameters: [{date=1586666880222, totalRecords=5, prop1=value1, prop2=value2}]
10:18:00.547 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyJobExecutionListener - Starting batch job now...
10:18:00.737 [user=varun, #job=cb7e83f9] INFO  o.s.batch.core.job.SimpleStepHandler - Executing step: [myStepSecured]
10:18:01.543 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyItemProcessor - [#1] Starting process...
10:18:01.560 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyItemProcessor - [#2] Starting process...
10:18:01.673 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyItemProcessor - [#4] Starting process...
10:18:01.673 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyItemProcessor - [#3] Starting process...
10:18:01.674 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyItemProcessor - [#5] Starting process...
10:18:02.285 [user=varun, #job=cb7e83f9] INFO  okhttp3.OkHttpClient - [#2] --> POST http://www.example.com/action
10:18:02.285 [user=varun, #job=cb7e83f9] INFO  okhttp3.OkHttpClient - [#4] --> POST http://www.example.com/action
10:18:02.285 [user=varun, #job=cb7e83f9] INFO  okhttp3.OkHttpClient - [#1] --> POST http://www.example.com/action
10:18:02.285 [user=varun, #job=cb7e83f9] INFO  okhttp3.OkHttpClient - [#5] --> POST http://www.example.com/action
10:18:02.285 [user=varun, #job=cb7e83f9] INFO  okhttp3.OkHttpClient - [#3] --> POST http://www.example.com/action
10:18:04.080 [user=varun, #job=34c52ea8] INFO  o.s.b.c.l.support.SimpleJobLauncher - Job: [FlowJob: [name=myJob]] launched with the following parameters: [{date=1586666880222, totalRecords=5, prop1=value1, prop2=value2}]
10:18:04.111 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyJobExecutionListener - Starting batch job now...
10:18:04.143 [user=varun, #job=34c52ea8] INFO  o.s.batch.core.job.SimpleStepHandler - Executing step: [myStepSecured]
10:18:04.209 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyItemProcessor - [#1] Starting process...
10:18:04.211 [user=varun, #job=34c52ea8] INFO  okhttp3.OkHttpClient - [#1] --> POST http://www.example.com/action
10:18:04.221 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyItemProcessor - [#2] Starting process...
10:18:04.222 [user=varun, #job=34c52ea8] INFO  okhttp3.OkHttpClient - [#2] --> POST http://www.example.com/action
10:18:04.222 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyItemProcessor - [#3] Starting process...
10:18:04.223 [user=varun, #job=34c52ea8] INFO  okhttp3.OkHttpClient - [#3] --> POST http://www.example.com/action
10:18:05.612 [user=varun, #job=cb7e83f9] INFO  okhttp3.OkHttpClient - [#4] <-- 200 OK http://www.example.com/action (3320ms)
10:18:05.634 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyItemProcessor - [#4] Received response in 4s
10:18:05.653 [user=varun, #job=cb7e83f9] INFO  okhttp3.OkHttpClient - [#2] <-- 200 OK http://www.example.com/action (3360ms)
10:18:05.665 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyItemProcessor - [#2] Received response in 4s
10:18:05.706 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyItemProcessor - [#2] Winding up...
10:18:05.706 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyItemProcessor - [#4] Starting process...
10:18:05.708 [user=varun, #job=34c52ea8] INFO  okhttp3.OkHttpClient - [#4] --> POST http://www.example.com/action
10:18:05.765 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyItemProcessor - [#4] Winding up...
10:18:05.767 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyItemProcessor - [#5] Starting process...
10:18:05.769 [user=varun, #job=34c52ea8] INFO  okhttp3.OkHttpClient - [#5] --> POST http://www.example.com/action
10:18:05.784 [user=varun, #job=cb7e83f9] INFO  okhttp3.OkHttpClient - [#3] <-- 200 OK http://www.example.com/action (3493ms)
10:18:05.803 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyItemProcessor - [#3] Received response in 4s
10:18:05.817 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyItemProcessor - [#3] Winding up...
10:18:06.624 [user=varun, #job=34c52ea8] INFO  okhttp3.OkHttpClient - [#1] <-- 200 OK http://www.example.com/action (2411ms)
10:18:06.645 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyItemProcessor - [#1] Received response in 2s
10:18:06.656 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyItemProcessor - [#1] Winding up...
10:18:06.710 [user=varun, #job=cb7e83f9] INFO  okhttp3.OkHttpClient - [#1] <-- 200 OK http://www.example.com/action (4419ms)
10:18:06.743 [user=varun, #job=cb7e83f9] INFO  okhttp3.OkHttpClient - [#5] <-- 200 OK http://www.example.com/action (4452ms)
10:18:06.775 [user=varun, #job=34c52ea8] INFO  okhttp3.OkHttpClient - [#4] <-- 200 OK http://www.example.com/action (1063ms)
10:18:06.778 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyItemProcessor - [#4] Received response in 1s
10:18:06.789 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyItemProcessor - [#4] Winding up...
10:18:06.959 [user=varun, #job=34c52ea8] INFO  okhttp3.OkHttpClient - [#5] <-- 200 OK http://www.example.com/action (1186ms)
10:18:06.964 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyItemProcessor - [#5] Received response in 1s
10:18:06.977 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyItemProcessor - [#5] Winding up...
10:18:07.001 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyItemProcessor - [#1] Received response in 6s
10:18:07.011 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyItemProcessor - [#5] Received response in 6s
10:18:07.023 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyItemProcessor - [#1] Winding up...
10:18:07.025 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyItemProcessor - [#5] Winding up...
10:18:07.159 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyJobExecutionListener - [#] Job finished in 6s
10:18:07.159 [user=varun, #job=cb7e83f9] INFO  c.m.a.batch.MyJobExecutionListener - [#] Latency due to service calls: 24s
10:18:07.307 [user=varun, #job=cb7e83f9] INFO  o.s.b.c.l.support.SimpleJobLauncher - [#] Job: [FlowJob: [name=myJob]] completed with the following parameters: [{date=1586666880222, totalRecords=5, prop1=value1, prop2=value2}] and the following status: [COMPLETED]
10:18:07.744 [user=varun, #job=34c52ea8] INFO  okhttp3.OkHttpClient - [#2] <-- 200 OK http://www.example.com/action (3521ms)
10:18:07.869 [user=varun, #job=34c52ea8] INFO  okhttp3.OkHttpClient - [#3] <-- 200 OK http://www.example.com/action (3645ms)
10:18:08.013 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyItemProcessor - [#2] Received response in 4s
10:18:08.037 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyItemProcessor - [#2] Winding up...
10:18:08.138 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyItemProcessor - [#3] Received response in 4s
10:18:08.160 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyItemProcessor - [#3] Winding up...
10:18:08.250 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyJobExecutionListener - Job finished in 4s
10:18:08.250 [user=varun, #job=34c52ea8] INFO  c.m.a.batch.MyJobExecutionListener - Latency due to service calls: 15s
10:18:08.445 [user=varun, #job=34c52ea8] INFO  o.s.b.c.l.support.SimpleJobLauncher - Job: [FlowJob: [name=myJob]] completed with the following parameters: [{date=1586666880222, totalRecords=5, prop1=value1, prop2=value2}] and the following status: [COMPLETED]

You can see how carrying the business context into the logging framework helps you trace interactions with your program: MDC makes it easy to distinguish interleaved log output from different sources. While we looked at a Java/Spring system here, the idea of MDC has been around for some time in most languages.
