Async Bulk Operations
The new Core API standard offers the ability to perform bulk operations via the Async Bulk endpoints. These endpoints are designed to reduce complexity when ingesting and modifying large volumes of data. When you submit a request to an async bulk endpoint, the Tulip backend system processes the request asynchronously and provides the results when it completes. Currently, async bulk endpoints support ingestion of at most 1000 records (business data objects) per API call. This record limit may change in future releases; any exceptions will be clearly documented. The async bulk endpoints are tagged with `Bulk` in the API specification.
Data models currently supported with Async Bulk endpoints:
- Customers (POST and PUT)
- More data models will be supported in future releases.
Async Bulk Ingestion Flow
Submit a request to the async bulk endpoint for the business entities you are trying to create/modify. Refer to the Core API specification for routes and the valid request payload.
Example:

```
PUT/POST api/{version}/crm/customers/bulk
```
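As an illustration, here is a minimal TypeScript sketch of submitting a bulk request. The base URL, API version, auth scheme, and payload/response shapes are assumptions for the example; refer to the Core API specification for the exact contract.

```typescript
// Minimal sketch of submitting an async bulk request.
// BASE_URL, the auth scheme, and the payload/response shapes are
// assumptions -- check the Core API spec and your tenant configuration.
const BASE_URL = "https://example-tenant.tulipretail.com"; // hypothetical tenant
const API_VERSION = "2022-08";

interface BulkCustomer {
  externalId: string;
  firstName: string;
  lastName: string;
}

async function submitBulkCustomers(customers: BulkCustomer[], token: string): Promise<string> {
  if (customers.length > 1000) {
    throw new Error("Async bulk requests are limited to 1000 records per call.");
  }
  const res = await fetch(`${BASE_URL}/api/${API_VERSION}/crm/customers/bulk`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`, // assumed auth scheme
    },
    body: JSON.stringify(customers), // assumed payload shape; see the API spec
  });
  if (!res.ok) {
    // See the validation section below for unpacking 422 responses.
    throw new Error(`Bulk request failed: HTTP ${res.status}`);
  }
  const body = await res.json();
  return body.uuid; // assumed field name for the returned job identifier
}
```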
After the request is submitted, the endpoint will run basic validation on the data provided in the request body. This basic validation covers:
- The maximum number of records allowed in a single async bulk request (1000) has not been exceeded
- Required fields have been provided
- Data is correctly formatted
If basic validation fails, the endpoint will return a `422` response with the first encountered error described in the response body:

```json
{
  "resource": null,
  "errors": [
    {
      "message": "Customer is missing firstName field",
      "errorCode": "MissingRequiredField(s)",
      "moreInfo": "Please review the API specification for required fields for this endpoint."
    }
  ]
}
```
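To surface that first error to callers, a small helper along the lines of the hypothetical one below can unpack the response body (a sketch assuming the error shape shown above):

```typescript
// Sketch: extract the first validation error from a 422 response body.
interface BulkValidationError {
  message: string;
  errorCode: string;
  moreInfo: string;
}

async function throwFirstValidationError(res: Response): Promise<never> {
  const body = (await res.json()) as { errors?: BulkValidationError[] };
  const first = body.errors?.[0];
  throw new Error(
    `Bulk request rejected: ${first?.message ?? "unknown error"} (${first?.errorCode ?? "n/a"})`
  );
}
```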
If basic validation succeeds, the endpoint will return a `uuid`, which is a unique identifier of your async bulk job. This `uuid` can be used to poll the status of the job. Tulip backend systems will then process the data and insert it into the database. During this part of the flow, errors can occur when ingesting data or modifying existing data in Tulip as a result of semantic issues with the data provided. For example:
- The `externalId` of a business data record already exists in Tulip
- An invalid address `type` was entered for a Customer Address
- An invalid `type` was supplied for the Customer Important Date
If your request payload contains erroneous data records, the Tulip backend systems will still try to ingest all valid business data records. When the job completes, the API will summarize any errors that occurred during the import. This summary is available to download via a link provided in the job response (see flow stage 4).
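Putting the stages together, the overall client flow looks roughly like the sketch below. It composes the hypothetical helpers from this page: `submitBulkCustomers` above, plus `pollJob` and `printErrorReport` sketched in the sections that follow.

```typescript
// Sketch of the end-to-end flow: submit, poll until a terminal
// status, then fetch the error report if any records failed.
async function runBulkImport(customers: BulkCustomer[], token: string): Promise<void> {
  const uuid = await submitBulkCustomers(customers, token);
  const job = await pollJob(uuid, token);
  if (job.status !== "COMPLETED" && job.outputFileUrl) {
    await printErrorReport(job.outputFileUrl, token);
  }
}
```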
You can poll the status of your async bulk job using the following endpoint:

```
GET api/{version}/async/jobs/{uuid}
```

where `uuid` is the unique identifier for the job:

```json
{
  "resource": {
    "uuid": "5d8c6b68-5683-46be-b8d9-9b6ccc2b9034",
    "entity": "customers", // this is the business entity you are working with
    "status": "COMPLETED_WITH_ERRORS",
    "outputFileUrl": "http://tulip-prod-tenant.tulipretail.com/api/2022-08/async/jobs/{uuid}/report", // null before completion
    "dateStarted": "2023-01-01T00:00:00Z",
    "dateCompleted": "2023-01-01T00:00:00Z",
    "dateCreated": "2023-01-01T00:00:00Z",
    "dateModified": "2023-01-01T00:00:00Z",
    "numRecordsSent": 1000, // total number of records sent in the request payload (max 1000)
    "numRecordsSuccessfullyProcessed": 1000 // null before completion
  }
}
```
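A simple way to consume this endpoint is to poll until the job reaches a terminal status, as in the sketch below. The response shape follows the example above; the polling interval is an arbitrary choice, and the sketch reuses the `BASE_URL` and `API_VERSION` assumptions from the submission example.

```typescript
// Sketch: poll the job status endpoint until a terminal state.
interface AsyncJob {
  uuid: string;
  entity: string;
  status: string;
  outputFileUrl: string | null;
  numRecordsSent: number;
  numRecordsSuccessfullyProcessed: number | null;
}

// Statuses after which the job will no longer change (see Job Statuses below).
const TERMINAL_STATUSES = ["INTERNAL_FAILURE", "COMPLETED", "COMPLETED_WITH_ERRORS", "FAILED"];

async function pollJob(uuid: string, token: string): Promise<AsyncJob> {
  for (;;) {
    const res = await fetch(`${BASE_URL}/api/${API_VERSION}/async/jobs/${uuid}`, {
      headers: { Authorization: `Bearer ${token}` }, // assumed auth scheme
    });
    if (!res.ok) throw new Error(`Status poll failed: HTTP ${res.status}`);
    const { resource } = (await res.json()) as { resource: AsyncJob };
    if (TERMINAL_STATUSES.includes(resource.status)) return resource;
    await new Promise((r) => setTimeout(r, 5000)); // wait 5s between polls
  }
}
```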
Output File
The CSV output file contains the errors (if any) associated with the job. The file lists each record (resource) that failed and the reason that record failed (`failureReason`).
"failureReason","resource"
"externalId already exists for this resource in Tulip","{\"firstName\":\"John\" ... }"
"Invalid type field for Customer Address: lastname","{\"firstName\":\"Jane\" ... }"
Job Statuses
Possible `status` values for the async bulk job:
- `PENDING` - the job is being prepared for processing
- `INTERNAL_FAILURE` - an internal unrecoverable error occurred while preparing the job for processing. Please contact Tulip for support
- `READY` - the job is ready to start
- `STARTED` - the job has started
- `COMPLETED` - the job completed successfully; all records were imported
- `COMPLETED_WITH_ERRORS` - the job completed; some records were imported but others had errors. Please review the output file for each failed `resource` and its `failureReason`
- `FAILED` - the job completed but all records in the payload failed
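If you consume these statuses in typed code, mirroring them as a union type lets the compiler enforce exhaustive handling. A sketch; the values mirror the list above:

```typescript
// Job status values as a TypeScript union type.
type AsyncJobStatus =
  | "PENDING"
  | "INTERNAL_FAILURE"
  | "READY"
  | "STARTED"
  | "COMPLETED"
  | "COMPLETED_WITH_ERRORS"
  | "FAILED";

// Exhaustive switch: the compiler flags any value added later.
function describeStatus(status: AsyncJobStatus): string {
  switch (status) {
    case "PENDING": return "being prepared for processing";
    case "INTERNAL_FAILURE": return "internal error; contact Tulip support";
    case "READY": return "ready to start";
    case "STARTED": return "processing";
    case "COMPLETED": return "all records imported";
    case "COMPLETED_WITH_ERRORS": return "some records failed; review the output file";
    case "FAILED": return "all records failed";
  }
}
```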