Pulling JSON using a REST API can be a time-consuming process, in some cases taking as long as two or three hours to complete. With the process outlined below, Suresh Pankaj and his team were able to complete the procedure in less than 10 minutes – while improving overall performance.
Suresh is a Senior Consultant for Data Management at Saxony Partners. You can connect with him directly via LinkedIn.
Before we begin, let’s take a moment to define a few key terms.
Application Programming Interface (API) – an API spells out the proper way for a developer to write a program requesting services from an application. An API uses standard HTTP requests to GET, POST, and DELETE data. For an Enterprise Data Warehouse project, we are using a SQL Server Integration Services (SSIS) package to pull data using the application’s API.
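As an illustration, an authenticated GET that pulls JSON might look like the minimal Python sketch below. The endpoint, resource path, and Bearer-token header are assumptions for illustration, not the application’s actual API.

```python
import json
import urllib.request

def build_request(base_url, resource, token):
    """Build an authenticated GET request for a REST resource.
    (Hypothetical URL layout and auth header, for illustration only.)"""
    url = f"{base_url}/{resource}"
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {token}",  # assumed token scheme
            "Accept": "application/json",
        },
        method="GET",
    )

def fetch_json(base_url, resource, token):
    """Execute the request and parse the JSON response body."""
    with urllib.request.urlopen(build_request(base_url, resource, token)) as resp:
        return json.load(resp)
```

In an SSIS package the same call would typically be made from a C# script task, but the request shape – URL, method, and authorization header – is the same.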
Representational State Transfer (REST) – an architectural style that defines a set of constraints for creating web services. (RESTful web services are a popular architecture for interfacing with applications.)
Now that our terms have been defined, let’s take a look at a high-level data flow of the application:
The JSON payload being transmitted can be huge, and packaging it can take a long time to complete. If that’s the case for you, let me help you save some time and improve your performance.
The immediate thought process to optimize the data pull was to create worker threads and parallelize the transmission of the detail-attribute JSON. We worked through a few proofs of concept (POC) to explore the options and came up with the best solution. Below is a summary of each attempt at solving the performance issue.
In our first POC, we used asynchronous programming. Asynchronous programming allows a unit of work to run separately from the primary application thread. Thirty asynchronous calls were made inside a loop. The code would wait for the 30 asynchronous calls to complete, and then loop to the next set of 30 calls.
Below is a snippet of code inside the loop:
…. similarly, 29 more calls…………
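The original snippet is not reproduced here, but the batching pattern it describes – launch 30 calls, wait for all of them, then move to the next 30 – can be sketched in Python with asyncio. The function names and the simulated latency are illustrative stand-ins for the real API calls.

```python
import asyncio

BATCH_SIZE = 30  # number of concurrent calls per iteration, as in the POC

async def fetch_detail(record_id):
    # Stand-in for the real API call; simulates I/O latency.
    await asyncio.sleep(0.01)
    return {"id": record_id}

async def pull_all(record_ids):
    results = []
    # Process the record list in batches: start 30 calls, wait for
    # every one of them to finish, then loop to the next set of 30.
    for start in range(0, len(record_ids), BATCH_SIZE):
        batch = record_ids[start:start + BATCH_SIZE]
        results.extend(await asyncio.gather(*(fetch_detail(r) for r in batch)))
    return results
```

Note that this pattern still waits for the slowest call in each batch before starting the next batch, which is one reason it delivered less speedup than hoped.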
Stringing asynchronous API calls together in a loop may seem like a fast way to parallelize the transmission of JSON, but it did not deliver an improvement in performance. We had to try again.
For our second POC, we decided to try a multi-threaded approach.
Thirty threads were created. Each thread picked one record from the list of records for processing – until all the records in the list were downloaded. In a multi-threaded application, locking is used to ensure that one thread does not enter a critical section of code while another thread is inside it.
Here’s a snippet of code that shows locking in action:
lock (syncObject)   // syncObject: a shared lock guarding the counter
{
    if (ctrToProcess + 1 >= List.Count)
        bEnd = true;
    ctrToProcess = ctrToProcess + 1;
}
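The same claim-a-record pattern can be sketched in Python. The thread count matches the POC; the lock protects only the shared counter, while the actual download runs outside the critical section. The `download_fn` parameter is a stand-in for the real API call.

```python
import threading

NUM_THREADS = 30  # thread count used in the POC

def download_all(records, download_fn):
    """Run download_fn over every record using a pool of worker threads."""
    lock = threading.Lock()
    state = {"next": 0}            # shared counter, guarded by the lock
    results = [None] * len(records)

    def worker():
        while True:
            with lock:                      # enter the critical section
                idx = state["next"]
                if idx >= len(records):     # nothing left to claim
                    return
                state["next"] = idx + 1     # claim this record
            # The slow download happens outside the lock, so threads
            # only serialize on the cheap counter update.
            results[idx] = download_fn(records[idx])

    threads = [threading.Thread(target=worker) for _ in range(NUM_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```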
Unfortunately, neither of our first two POCs delivered the desired improvement in performance. The single-threaded approach had very low throughput, and the multi-threaded implementation did not provide much improvement either. We had to try a third time.
In the third POC, we tweaked the second POC by adding code to create a separate access token for each thread. This simple change yielded a huge improvement in performance.
Ultimately, the lesson learned here was that the application processes multiple requests made with the same token in serial order. To gain the full benefit of a multi-threaded approach, a separate access token should be created for each thread. This approach decreased the execution time to less than 10 percent of what it was previously.
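A minimal Python sketch of the per-thread-token change follows. Here `acquire_token` is a stand-in for whatever call the application uses to issue a new token; each worker calls it once, so the server no longer serializes requests that share a single token.

```python
import threading

def run_with_per_thread_tokens(records, acquire_token, download_fn,
                               num_threads=30):
    """Like the previous pool, but each worker gets its own token."""
    lock = threading.Lock()
    state = {"next": 0}            # shared counter, guarded by the lock
    results = [None] * len(records)

    def worker():
        token = acquire_token()    # one access token per thread
        while True:
            with lock:
                idx = state["next"]
                if idx >= len(records):
                    return
                state["next"] = idx + 1
            # Each request carries this thread's own token.
            results[idx] = download_fn(records[idx], token)

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The only structural difference from the second POC is the `acquire_token()` call at the top of each worker, which is what unlocked the parallelism on the server side.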
As with all multi-threaded applications, it is important to monitor and manage memory usage, as increasing the number of threads may also cause other performance issues. Asynchronous programming could also be used to create multiple tokens, but the code flow is easier to follow with the multi-threaded approach.