File Importer Overview
Tulip’s File Importer tool simplifies the process of uploading data by letting clients place files on Tulip’s SFTP server. Tulip currently supports CSV files in simple decomposed (normalized), entity-specific formats, which are ingested by an integration layer.
Files land in Tulip’s SFTP framework and are uploaded to the client tenant using Tulip’s Bulk API.
File Importer provides a rapid way to get data into Tulip as an alternative to calling the Bulk API directly. The key differences from the Bulk API are that File Importer transfers are file-based rather than synchronous/real-time, its error handling is less flexible (no synchronous error responses), and it has limited support for custom lists of attributes and for certain specific endpoints. See the later sections of this page for details. You can manage File Importer in Admin Console under the Integrations -> Data Imports tab, where you can turn it on. Note that File Importer will be automatically disabled if no files have been processed within the last 180 days; if this happens, return to this tab and re-enable File Importer.
Enable File Imports in your Environment
- Go to Admin Console and select the Integrations tab, then navigate to Data Import.
- Select Activate File Importer, and then confirm the activation.
Uploading your SSH key
Create a public/private key pair in ssh-rsa format. You can use the command `ssh-keygen -t rsa` in the Windows command prompt or the Mac terminal to generate the key. Follow the prompts to generate the public and private key. The private key should never be shared except with an authenticating tool such as FileZilla, or with a scheduling/managed automated file transfer tool you may use to send Tulip data.

By default, your first RSA public key is saved as `C:\Users\<yourusername>\.ssh\id_rsa.pub` on Windows, and as `~/.ssh/id_rsa.pub` on a Mac. Note that the `.pub` file is the public key, and the file without the `.pub` extension is your paired private key. Copy the contents of the `.pub` file to your clipboard for the next steps, then navigate to the SFTP Keys module in Admin Console under the Integrations header in the sidebar.


Add the PUBLIC key using the Add SFTP Key UI. Note: add a helpful comment so you remember what this key is used for, e.g. "OMS Integration" or "Staging Environment".



Connect to the SFTP server using your preferred SFTP client (for example, FileZilla). Connection details can be found by clicking the SFTP Connection Instructions button on the SFTP Keys module.
If your tenant was called `mystore` and you were in the staging environment, your host would be `sftp-tulip-staging.tulipretail.com` and your username would be `mystore`. If you were in the production environment on the `mystore` tenant, your username would remain the same, but your host would be `sftp-tulip-prod-<region>.tulipretail.com`, where `<region>` is the supported Tulip host region in which your tenant is hosted, for instance `na`, `eu`, `de`, or `cn`. The below is how your connection would appear in FileZilla if you were connecting to the `mystore` tenant in the staging environment. Note that the key file is the private key (i.e. the file without the `.pub` extension) that you generated in steps 1-2.
To upload a file, go to `<yourtenant>/upload/` and drop your files into the `generic` folder (or into the entity folder for any entities not in Aerial, e.g. `customer_associations`), using the appropriate prefix on the file as described in File Naming Conventions. For instance, if your tenant was named `mystore` and you wanted to upload a customers file to the staging environment, you would connect to the host `sftp-tulip-staging.tulipretail.com`, navigate to `mystore/upload/generic`, and drop the CSV file starting with `TGEN_customers` into that folder.
You can monitor the progress of your files in Admin Console under Integrations -> Data Import. This displays the total records processed and how many uploaded successfully or received errors. Error logs can be downloaded from this interface and are also surfaced under the `mystore/logs` SFTP folder.
Any files that do not succeed will be moved into the appropriate subfolder of the `mystore/failed/` folder, where they will be retained for 30 days.
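The host and upload-path conventions above can be sketched in Python. This is an illustration only: the tenant name `mystore`, the example region, and the example filename are assumptions, not fixed values.

```python
# Sketch of the SFTP host and upload-path conventions described above.
# The tenant "mystore" and the example filename are illustrative only.

def sftp_host(environment: str, region: str = "na") -> str:
    """Return the SFTP host for the staging or production environment."""
    if environment == "staging":
        return "sftp-tulip-staging.tulipretail.com"
    return f"sftp-tulip-prod-{region}.tulipretail.com"

def upload_path(tenant: str, filename: str, entity_folder: str = "generic") -> str:
    """Return the remote folder path where the file should be dropped."""
    return f"{tenant}/upload/{entity_folder}/{filename}"

print(sftp_host("production", "eu"))   # sftp-tulip-prod-eu.tulipretail.com
print(upload_path("mystore", "TGEN_customers_fullUpdate_2024-01_batch1.csv"))
```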
File Processing at a glance
At a high level, file processing involves the following steps:
- The Client team places files in their appropriate SFTP upload directory. The name of the file determines the type of record to import. See Naming conventions in this guide.
- The records contained in the file are processed soon after upload via Tulip’s API. The same record can be uploaded multiple times; each time, it is fully updated with the contents of the latest record.
- Logs are generated in error files for failed records.
- Logs and current import statuses can be viewed in Admin Console under the Data Imports tab.
Supported File Formats
File formats currently supported include:
- CSV (today)
- JSON (future)
Supported Data Entities
Because data processing on the Tulip side is decoupled, and to keep the preparation, delivery, and testing of data uniform across clients, entities in File Importer cannot currently be combined into unified (denormalized) importable formats.
The following data entities are supported:
Full list available in Aerial (TGEN prefix)
Other files are specified below.
File Naming Conventions
The type of records to import is defined by the file prefix. Note: please use only alphanumeric characters, hyphens, and underscores when naming your file.
TGEN files
Many endpoints that exist in Aerial have a corresponding file that can be sent via File Importer. To see a full list of the files available, see the TGEN page. Each file begins with a TGEN prefix.
The filename of the file you upload to the upload/generic folder (whether through SFTP or directly via Admin Console) must follow the format:
TGEN_<recordType>_<importType>_<apiVersion>_<optionalSuffix>.csv
Only the part up to <apiVersion> should be included in the file prefix.
| Parameter | Required | Description |
|---|---|---|
recordType | REQUIRED | Specifies the recordType of the data to import (i.e. customers, stores). These record types map directly to the data models in core API docs (i.e. customers targets crm/customers, stores targets storeOps/stores). The list of supported record types can be found here. |
importType | REQUIRED | Supports two values: fullUpdate and partialUpdate. • fullUpdate will target the upsert endpoint for the Aerial data model specified by recordType. • partialUpdate will target the patch endpoint for the Aerial data model specified by recordType. • If the Aerial data model does not support upsert or patch, the data import will be marked as Failed and an appropriate error message will be returned to the user in the generated log file. |
apiVersion | REQUIRED | This value maps directly to the endpoint’s api version. |
optionalSuffix | OPTIONAL but highly recommended | This part of the filename is not used by the FFI system to determine the import strategy. However, it is recommended that the integrator includes a suffix to help differentiate multiple files with the same recordType, importType and apiVersion. |
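As a sketch, the filename convention above can be assembled programmatically. The record type `customers` and API version `2024-01` below are illustrative values, not a statement of what your tenant supports.

```python
def tgen_filename(record_type: str, import_type: str, api_version: str,
                  suffix: str = "") -> str:
    """Assemble a filename following the TGEN naming convention above."""
    if import_type not in ("fullUpdate", "partialUpdate"):
        raise ValueError("importType must be fullUpdate or partialUpdate")
    name = f"TGEN_{record_type}_{import_type}_{api_version}"
    if suffix:                      # optional but recommended differentiator
        name += f"_{suffix}"
    return name + ".csv"

print(tgen_filename("customers", "fullUpdate", "2024-01", "batch001"))
# TGEN_customers_fullUpdate_2024-01_batch001.csv
```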
The below files are on Tulip’s older API and do not yet support the TGEN prefix. The required prefix is specified in the table.
| Entity/Record Type | File Prefix | Examples |
|---|---|---|
| Customer Assigned Entities | TCAE | |
| Customer Associations | TCAS | |
| Category | TCA | TCA-CategoriesSample001.csv |
| Option Types | TOT | TOT-OptionTypesSample001.csv |
| Option Values | TOV | TOV-OptionValuesSample001.csv |
| Products | TPRD | TPR-ProductsSample001.csv |
| Product Value Categories | TPVC | |
| Variants | TVA | TVA-VariantsSample001.csv |
| Prices | TPRC | TPRC-PricesSample001.csv |
| Inventory | TIN | TIN-InventorySample001.csv |
| Inventory - PATCH Support | TPIN | TINinventory_goodData1.csv TINinventory_goodData2.csv |
| Orders | TOR | TOR-OrderHistorySample001.csv |
| Customer Preferences Groups | TCPG | |
| Goals | TKPI | |
| Notes | TNOT | |
Working with TGEN Files
This section outlines how to map API schema to headers in your FFI CSV file.
Top level fields
- Simply include the field name as documented in the upsert or patch endpoint’s request body.
  - Example: to target the `firstName` top-level field for a Customer Upsert, name the header `firstName`.
- To omit the field from the API payload, leave the column completely blank.
- To send a blank string as the field value, set this column to `''`.
Nested fields
- Use dot notation to target any nested fields.
  - Example: to target the `fulfillFromMultipleStoresEnabled` setting field for a Stores Upsert endpoint, name the header `settings.fulfillFromMultipleStoresEnabled`.
- Multiple levels of nesting are supported.
  - Example: to target the `open` field for Monday hours for a Stores Upsert, name the header `monday.hours.open`.
- To omit the field from the API payload, leave the column completely blank.
- To send a blank string as the field value, set this column to `''`.
Multiple (array based) fields
- Use a zero-based array indexing convention (i.e. `fieldName[0]`) to target any array-based field values.
  - Example: to target the first address's `externalId` field in a Customer Upsert, name your header `addresses[0].externalId`.
- To target subsequent addresses, increment your array index as required.
  - Example: to target the second address's `externalId` field for a Customer Upsert, name your header `addresses[1].externalId`.
- Combine this with the dot-notation convention to target any field in the request body.
- To omit the field from the API payload, leave the column completely blank.
- To send a blank string as the field value, set this column to `''`.
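A minimal Python sketch of these header conventions (dot notation, zero-based array indices, and the blank-cell vs. `''` handling). This illustrates the mapping only; it is not the FFI implementation, and the `firstName`/`addresses` headers are examples from the conventions above.

```python
import re

def set_path(payload, header, value):
    """Place value into payload at the position named by a CSV header
    such as "addresses[0].externalId" (dot notation + zero-based indices)."""
    parts = header.split(".")
    node = payload
    for i, part in enumerate(parts):
        m = re.fullmatch(r"(\w+)\[(\d+)\]", part)   # array segment?
        last = (i == len(parts) - 1)
        if m:
            key, idx = m.group(1), int(m.group(2))
            arr = node.setdefault(key, [])
            while len(arr) <= idx:                  # grow list to index
                arr.append({})
            if last:
                arr[idx] = value
            else:
                node = arr[idx]
        elif last:
            node[part] = value
        else:
            node = node.setdefault(part, {})
    return payload

def row_to_payload(headers, row):
    """Convert one CSV row to a JSON-like payload.
    Blank cells are omitted; a literal '' becomes an empty string."""
    payload = {}
    for header, cell in zip(headers, row):
        if cell == "":
            continue        # blank column -> field omitted from payload
        if cell == "''":
            cell = ""       # literal '' -> empty-string value
        set_path(payload, header, cell)
    return payload

headers = ["firstName", "addresses[0].externalId", "addresses[1].externalId"]
print(row_to_payload(headers, ["Ada", "addr-1", "addr-2"]))
# {'firstName': 'Ada', 'addresses': [{'externalId': 'addr-1'}, {'externalId': 'addr-2'}]}
```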
Error responses and debugging
FFI Error checking
The FFI system will parse the filename and respond with an error if:
- The recordType is not supported.
- The importType is not one of `fullUpdate` or `partialUpdate`. It will NOT check whether an upsert or patch endpoint is available for the record type (that error is caught by the API call itself).
- The apiVersion is not of the format `YYYY-MM`. It will NOT check whether the API version is supported by the targeted endpoint (that error is caught by the API call itself).
API Error checking
If the above checks pass, the CSV file will be parsed row by row. Each row is converted to a JSON payload and sent to the targeted API endpoint, as determined by the filename and CSV header naming conventions detailed above.
Any error that results from the API call itself will be captured and written to the errors file available once the data import is complete (in the same locations as any other failed import).
Partial Update Files
A file that uses partialUpdate as the import type corresponds to a Tulip PATCH endpoint. This allows the user to provide partial data without the unsent fields being cleared out, so the integrator does not need to populate every field in the .csv to update a single field.
If an integrator wants to remove a field, or set it to NULL or an empty string in Tulip, they can simply provide the string `SET_AS_NULL` in the appropriate field and that field will be emptied.
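The sentinel behavior can be illustrated with a small sketch: before building a PATCH payload, the sentinel string is translated to an explicit null. The field names below are illustrative.

```python
def apply_null_sentinel(row: dict) -> dict:
    """For a partialUpdate payload, translate the SET_AS_NULL sentinel
    into an explicit null so the targeted field is cleared by the PATCH."""
    return {k: (None if v == "SET_AS_NULL" else v) for k, v in row.items()}

print(apply_null_sentinel({"firstName": "Ada", "middleName": "SET_AS_NULL"}))
# {'firstName': 'Ada', 'middleName': None}
```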
Important Considerations for Performance and Formatting
| FORMATTING/NOTE | DETAILS |
|---|---|
| VOLUMES | Tulip supports large files, up to 1 GB per file per type; however, you may choose to keep batches smaller to aid troubleshooting in dev/QA phases. Product volumes can be larger and should be batched/split according to the optimal performance of your exporters. Because File Importer moderates the data throughput sent to the API, increasing the file size should not impact system load. |
| CATEGORY FILES PROCESSING | Category records are split in our intake process and processed asynchronously. To ensure that category levels are processed in the right order with respect to parent-child relationships, you must split the categories into separate files BY LEVEL: send DEPARTMENT categories first, then CLASS, then SUBCLASS, then SUB-SUB-CLASS, and so on. For example, given the categories Tops > T-Shirt, Tops > Jacket, and Tops > Other, split the data into two files: category_level1.csv containing Tops, and category_level2.csv containing T-Shirt, Jacket, and Other. |
| SFTP PROCESSING | Files get picked up as soon as the transfer to our SFTP has been completed. Files get moved in this folder path: upload -> received -> in progress -> completed (if successful) or failed (if not). Logs can be found in the root logs folder, organized by date. The log files are organized by data file name and line number of error. |
| PARTIAL SUBRESOURCE CREATION | The latest release of File Importer loosens the restrictions on the order in which certain files must be processed. A partial subresource will not be surfaced inside Tulip; the full entity must be imported before it will appear inside Tulip. |
| Category | When a Category references a Parent Category that does not exist, File Importer will create a partial Parent Category whose data is to be provided at a later time (or later in the file). This means that Categories can be provided in any order. |
| Product | When a Product references a Parent Category that does not exist, File Importer will create a partial Parent Category whose data is to be provided at a later time, and assign the provided Product to that partial Category. This allows Products to be imported before their associated Categories. |
| Variant | When a Variant references a Product that does not exist, File Importer will create a partial Product whose data is to be provided at a later time, and assign the provided Variant to that partial Product. This allows Variants to be imported before their associated Products. |
| Inventory | When Inventory references only a Variant that does not exist, File Importer will produce an error: the Variant must exist in the system for the Product to be inferred and mapped. The best practice is to provide both the Variant and the Product; if either does not exist in the system, partials will be created. This allows Inventory to be imported before its associated Products/Variants. Stores, however, must be imported first. |
| Price | When a Price references a Product or Variant that does not exist, File Importer will create a partial Product or Variant whose data is to be provided at a later time, and assign the provided Price to it. This allows Prices to be imported before their associated Products/Variants. Stores, however, must be imported first. |
| OBJECT / FILE PROCESSING SEQUENCE | Hierarchy is IMPORTANT to consider when sending files to Tulip. Ensure that data in files is scheduled in the following sequence to prevent errors due to enforced parent-child relationships. E.g. load STORES before EMPLOYEES: an employee must be assigned to a store, so stores must exist in Tulip before employees are sent. |
| 1 | Stores (Can also be set up in Admin console) |
| 2 | Employees (Can also be set up in Admin console) |
| 3 | Categories *Partial subresource creation enabled (See the split above) |
| 4 | Products *Partial subresource creation enabled |
| 5 | Option Types |
| 6 | Option Values |
| 7 | Variants *Partial subresource creation enabled |
| 8 | Price *Partial subresource creation enabled |
| 9 | Inventory *Partial subresource creation enabled |
| 10 | Customers |
| 11 | Orders/Receipts |
| Which Files to Develop First | Group 1: Stores > Employees > Customers |
| Group 2: Categories > Products > OptionTypes > OptionValues > Variants > Prices > Inventory (which is dependent on Stores) | |
| Group 3: Order History | |
| Groups 1 & 2 can be worked on in parallel. Note, however, that if you are using multicatalog, the catalog entity will need to be sent before stores are sent. Within a group, each integration should come in sequentially. Group 3 (Order History) should come last after Groups 1 and 2 are fully processed. | |
| Concurrent file handling | If a file is uploaded twice in quick succession: - If the first file is in processing status, the second file will be added to the queue. - If the first file is in a received state, only the second file will be imported. |
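The by-level category splitting described in CATEGORY FILES PROCESSING above can be sketched as follows. The row layout `(categoryId, parentId-or-empty, name)` is an illustrative assumption, not the exact File Importer column set.

```python
from collections import defaultdict

def split_by_level(categories):
    """Group category rows by depth so each level can be sent in its own file.
    Each row is assumed to be (categoryId, parentId_or_empty, name)."""
    parent = {cid: pid for cid, pid, _ in categories}

    def depth(cid):
        d = 1
        while parent.get(cid):        # walk up until a root (no parent) is hit
            cid = parent[cid]
            d += 1
        return d

    levels = defaultdict(list)
    for row in categories:
        levels[depth(row[0])].append(row)
    return dict(levels)

# The Tops > T-Shirt / Jacket / Other example from the table above:
cats = [("tshirt", "tops", "T-Shirt"), ("tops", "", "Tops"),
        ("jacket", "tops", "Jacket"), ("other", "tops", "Other")]
levels = split_by_level(cats)
print(sorted(levels))   # [1, 2] -> write one file per level, lowest level first
```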
| CONTENTS FORMATTING | How to format Contents of the File |
|---|---|
| DATA TYPES, DATES | All dates must be in ISO 8601 format, except important dates (CUSTOMER entity), which support YYYY-MM-DD only. All other formats must be strings. Be sure to check the metadata and the FILE IMPORTER SAMPLES. |
| DATA TYPES, PHONE NO | Phone numbers must always be in E.164 format, which includes a “+” symbol, country code, area code, and phone number. |
| UPDATES TO RECORDS | If the ID of an entity already exists, the record will be updated to match whatever data is passed. If the ID of an entity does not exist, it will be created. Tulip will REPLACE the contents of its database with the incoming data from the client-side files, so please include the full record each time you send an update to an existing record. |
| EMPTY FIELDS | There is no need to put placeholder text in optional fields; leaving them blank is equivalent to setting them to null. Trim any extra whitespace in the header line and in the data. Remove any special encoding, and if any data contains a comma, wrap the string in double quotes. For example: `23,"Whitby, York",64CA`. |
| FILE TYPE | UTF8 ENCODING |
| APP DISPLAY NOTE | Categories are not visible if no active product is in the category. Each variant of a product must have the same set of option types: e.g. if product A has variants X, Y, and Z, then X, Y, and Z must all have the same option types. For each entity (variants, orders, etc.), rows sharing the same key ID (such as Product ID in the products file, or Variant ID in the variants file) must appear as contiguous records in the file, one after another. See below for multi-row formatting. |
| IDs in Tulip | All columns (fields) following the naming convention “field ID” are the client-side system identifiers (e.g. Employee, Customer, Store, Product). The IDs can be alphanumeric and up to 255 characters in length. Below is a reference to the key refnums, their names, and maximum supported lengths: |
| IMAGES | Image URL domains will need to be whitelisted by Tulip. It is very important to send us the image hosts before you send any image URLs to process (such as when products and variants are sent). |
| HEADER ROW | The FIRST row must be the header row. The header row must contain, at a minimum, the required fields for that CSV file. The columns in the header row are recommended to be in the specific order described in the File Importer Templates. All rows after the header row must be data rows. |
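The quoting and header-row guidance above is what the standard CSV machinery in most languages produces by default; the sketch below uses Python's `csv` module with example column names. When writing a real file, open it with `encoding="utf-8"` and `newline=""` to satisfy the UTF-8 requirement.

```python
import csv
import io

# csv.writer automatically double-quotes any value containing a comma,
# matching the EMPTY FIELDS formatting guidance above.
buf = io.StringIO()
writer = csv.writer(buf, lineterminator="\n")
writer.writerow(["Customer ID", "First Name", "City"])   # header row first
writer.writerow(["23", "Ada", "Whitby, York"])           # comma forces quoting
print(buf.getvalue())
# Customer ID,First Name,City
# 23,Ada,"Whitby, York"
```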
Sending Rows of Content that Contain Arrays (APPLIES TO NON-TGEN FILES ONLY)
This section clarifies how Tulip Importer supports sending arrays of nested fields under a record type. This provides considerable flexibility: you can send multiple values for an entity, such as multiple ADDRESSES and PHONE numbers for a single Customer object. The same functionality is available for Order entities as well.
For every possible type of CSV file you can import (such as customers or products), there is a set of one or more required fields that must be filled out in order to have a successful import.
On a given row:
A row that contains different ID fields than the previous row counts as a new instance (such as a new customer or a new product).
A row that contains the SAME KEY or ID fields as the PREVIOUS row counts as an additional row and is PART of the same instance as the previous row, except where specified.
- Additional rows that share the same Unique Row ID may only contain information under groups of columns labelled as Multi-Row Headers. Information in fields not under these columns is ignored, since the first row’s information is used.
Multi-Row Headers and Multi-Column Headers are both types of columns where multiple pieces of information can exist.
- Multi-Row Headers allow for information to exist in additional rows whereas
- Multi-Column Headers allow you to spread the same information across multiple columns
Every group of Multi-Row Headers has a Multi-Column Headers group counterpart, and vice versa.
- These group counterparts may not be used together in the same CSV file.
- However, Multi-Row Headers and Multi-Column Headers may exist in the same CSV file as long as their group is different.
Examples
To see the difference between a multi-row header and a multi-column header, consider the following example.
A customer CSV file includes two phone numbers. The first row is the header row.
The Customer ID repeats across rows while the Phone Number ID values differ, which allows customer1 to have two phone numbers.
Multi-Row
| Customer ID | Phone Number ID | Phone Number | Phone Number Country ID |
|---|---|---|---|
| customer1 | phone1 | 123-456-7890 | CAN |
| customer1 | phone2 | 123-654-0987 | CAN |
How it looks inside of a CSV file; note that the Customer ID is the same. IMPORTANT: the rows with the same ID MUST BE CONTIGUOUS. Distributing the same unique row ID across different locations in the file will cause file processing to treat each record as a separate UPDATE instead of grouping the array items together.
Customer ID,Phone Number ID,Phone Number,Phone Number Country ID
customer1,phone1,123-456-7890,CAN
customer1,phone2,123-654-0987,CAN
Multi-Column
| Customer ID | Phone Number 1 ID | Phone Number 1 | Phone Number 1 Country ID | Phone Number 2 ID | Phone Number 2 | Phone Number 2 Country ID |
|---|---|---|---|---|---|---|
| customer1 | phone1 | 123-456-7890 | CAN | phone2 | 123-654-0987 | CAN |
How it looks inside of a CSV file:
Customer ID,Phone Number 1 ID,Phone Number 1,Phone Number 1 Country ID,Phone Number 2 ID,Phone Number 2,Phone Number 2 Country ID
customer1,phone1,123-456-7890,CAN,phone2,123-654-0987,CAN
Unlike the Multi-Row example, headers of the Phone Number group using Multi-Column headers include identifying numbers. These numbers have no meaning other than for the importer to differentiate different Phone Numbers.
These identifying numbers must be integers, but they can be in any order and do not have to begin with 1 or be sequential. For readability, however, it is recommended to start with 1 and increase toward the right.
Both ways of implementing multiple values tied to a single parent record are supported; please see the samples provided throughout this documentation to help create your data mapping accordingly.
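The multi-row grouping rule (contiguous rows sharing an ID become one record) can be sketched with `itertools.groupby`, which, like the importer, only groups adjacent rows. The `phones` key and column names are illustrative, taken from the example above.

```python
import itertools

def group_multirow(rows, id_field="Customer ID"):
    """Group CONTIGUOUS rows sharing the same ID into one record,
    mirroring the Multi-Row convention described above."""
    records = []
    for _, group in itertools.groupby(rows, key=lambda r: r[id_field]):
        group = list(group)
        record = dict(group[0])        # first row supplies the base fields
        record["phones"] = [           # each row contributes one array item
            {"id": r["Phone Number ID"], "number": r["Phone Number"]}
            for r in group
        ]
        records.append(record)
    return records

rows = [
    {"Customer ID": "customer1", "Phone Number ID": "phone1", "Phone Number": "123-456-7890"},
    {"Customer ID": "customer1", "Phone Number ID": "phone2", "Phone Number": "123-654-0987"},
]
recs = group_multirow(rows)
print(len(recs), len(recs[0]["phones"]))   # 1 2
```

Because `groupby` only merges adjacent keys, the same customer appearing in two non-contiguous places would produce two separate records, exactly the failure mode the IMPORTANT note above warns about.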
Supported CSV File Types
Click on the link below to see the formatting, the column-level header specifications, and the data types for each.