File Importer Overview
Tulip’s File Importer tool simplifies uploading data by letting clients place flat files on Tulip’s SFTP server. Tulip currently supports CSV in simple, decomposed (normalized), entity-specific formats that are ingested by an integration layer.
Files are landed through Tulip’s SFTP framework and uploaded to the Client tenant using Tulip’s Bulk API.
File Importer provides a rapid way to get data into Tulip as an alternative to using the Bulk API directly. Aside from transfers being file-based rather than synchronous/real time, the key differences are that the Bulk API offers greater flexibility, synchronous error handling, support for custom lists of attributes, and additional specific endpoints that File Importer does not currently support. See the later sections of this page for details.
You can manage File Importer in Admin Console under the Integrations -> Data Imports tab, where you can turn it on. Note that File Importer will be automatically disabled if no files have been processed within the last 180 days. If this happens, return to this tab and re-enable File Importer.
Enable File Imports in your Environment
- Go to Admin Console and select the Integrations tab, then navigate to Data Import.
- Select Activate File Importer, and then confirm the activation.
Uploading your SSH key
1. Create a public/private key pair in ssh-rsa format. You can generate the key in the Windows command prompt or the Mac terminal with:
ssh-keygen -t rsa
Follow the prompts to generate the public and private keys. The private key should never be shared, except with an authenticating tool such as FileZilla or a scheduled/managed automated file transfer tool you may use to send data to Tulip.
2. By default, your first RSA public key is saved as C:\Users\<yourusername>\.ssh\id_rsa.pub on Windows and ~/.ssh/id_rsa.pub on a Mac. The .pub file is the public key, and the file without the .pub extension is your paired private key. Copy the contents of the .pub file to your clipboard for the next steps.
3. Navigate to the SFTP Keys module in Admin Console under the Integrations header in the sidebar.
4. Add the PUBLIC key using the Add SFTP Key UI. Note: remember to add a helpful comment describing what the key is used for, for example "OMS Integration" or "Staging Environment".
5. Connect to the SFTP server using your preferred SFTP client (for example, FileZilla). Connection details can be found by clicking the SFTP Connection Instructions button on the SFTP Keys module. If your tenant were called mystore and you were in the staging environment, your host would be sftp-tulip-staging.tulipretail.com and your username would be mystore. In the production environment on the mystore tenant, your username would remain the same, but your host would be sftp-tulip-prod-<region>.tulipretail.com, where <region> is the Tulip host region in which your tenant is hosted, for instance na, eu, de, or cn. Below is how the connection would appear in FileZilla when connecting to the mystore tenant in the staging environment. Note that the key file is the private key (i.e. the file without the .pub extension) that you generated in steps 1-2.
6. To upload a file, navigate to the appropriate entity folder under /upload/ and drop your files there, using the appropriate prefix on the file name as described in File Naming Conventions. For instance, if your tenant were named mystore and you wanted to upload a customers file to the staging environment, you would connect to the host sftp-tulip-staging.tulipretail.com, navigate to mystore/upload/customers, and drop a CSV file whose name starts with TCU into that folder. (A scripted alternative is sketched after this list.)
7. You can monitor the progress of your files in Admin Console under Integrations -> Data Import, which displays the total records processed and how many uploaded successfully or received errors. Error logs can be downloaded from this interface and are also surfaced under the mystore/logs SFTP folder. Any files that do not succeed will be moved into the mystore/failed/<entity> folder, where they will be retained for 30 days.
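If you prefer to script the transfer instead of using a desktop client, the following is a minimal sketch of an upload using Python and the third-party paramiko library (an assumption on our part; any SFTP-capable tool or library will work). The host, username, key path, remote folder, and file name follow the mystore staging example above; substitute your own values.

import os
import paramiko  # assumption: third-party SSH/SFTP library, installed separately

# Values follow the mystore/staging example above; substitute your own.
HOST = "sftp-tulip-staging.tulipretail.com"
USERNAME = "mystore"                               # your tenant name
PRIVATE_KEY = os.path.expanduser("~/.ssh/id_rsa")  # the key WITHOUT the .pub extension
LOCAL_FILE = "TCU-CustomerSample001.csv"           # file prefix determines the record type
REMOTE_DIR = "mystore/upload/customers"            # <tenant>/upload/<entity>

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # or preload known host keys
client.connect(HOST, username=USERNAME, key_filename=PRIVATE_KEY)

sftp = client.open_sftp()
sftp.put(LOCAL_FILE, f"{REMOTE_DIR}/{LOCAL_FILE}")  # drop the file into the upload folder
sftp.close()
client.close()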
File Processing at a glance
At a high level, file processing involves the following steps:
- The Client team places files in their appropriate SFTP upload directory. The name of the file determines the type of record to import. See Naming conventions in this guide.
- The records contained in the file are processed soon after being uploaded through Tulip’s API. Uploads can be sent multiple times for the same record; each time, the record is fully updated with the contents of the latest upload.
- Logs are generated in error files for failed records.
- Logs and current import statuses can be viewed in Admin Console under the Data Imports tab.
Supported File Formats
File formats currently supported include:
- CSV (today)
- JSON (future)
Supported Data Entities
Because data processing on the Tulip side is decoupled, and to keep data preparation, delivery, and testing uniform across clients, the entities in the File Importer cannot currently be combined into unified (denormalized) importable formats. The following data entities are supported:
Core:
- Stores
- Employees
- Goals
Catalog:
- Categories
- Products and Product Value Categories
- OptionTypes and OptionValues
- Variants
- Prices
- Inventory
Clienteling:
- Customers / Customer Patch
- Order History
- Tasks
- Preferences
- Loyalty
- Notes
Limitations
- EMPLOYEES: You can provide a default password on an employee record, but you must ensure that the password is 3 characters long; no encryption or encoding of that default password is currently supported. Note that files are transferred using SSH-secured SFTP and production files are handled in a compliant manner. Associate passwords must be reset by the Client using the HQ app after initial setup (requires manager access).
File Naming Conventions
The type of records to import is defined by the file prefix. Note: please use only alphanumeric characters, hyphens, and underscores when naming your files. Supported prefixes are listed below; a small validation sketch follows the table.
Entity/Record Type | File Prefix | Examples |
---|---|---|
Customer | TCU | TCU-CustomerSample001 |
Customer Patch | TPCU | |
Customer Assigned Entities | TCAE | |
Customer Associations | TCAS | |
Store | TST | TST-StoresSample001.csv |
Store Patch | TPST | |
Category | TCA | TCA-CategoriesSample001.csv |
Employees | TEM | TEM-EmployeesSample001.csv |
Option Types | TOT | TOT-OptionTypesSample001.csv |
Option Values | TOV | TOV-OptionValuesSample001.csv |
Products | TPRD | TPR-ProductsSample001.csv |
Product Value Categories | TPVC | |
Variants | TVA | TVA-VariantsSample001.csv |
Prices | TPRC | TPRC-PricesSample001.csv |
Inventory | TIN | TIN-InventorySample001.csv |
Inventory - PATCH Support | TPIN | TINinventory_goodData1.csv TINinventory_goodData2.csv |
Orders | TOR | TOR-OrderHistorySample001.csv |
Tasks (i.e. followups) | TTA | TTAtasks_goodData1.csv |
Customer Preferences | TCP | |
Customer Preferences Groups | TCPG | |
Customer Preferences Values | TCPV | |
Goals | TKPI | |
Customer Loyalty Tier Values | TCLT | TCLTvalues_goodData1.csv |
Notes | TNOT | |
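As an illustration of the convention, here is a small hypothetical helper (not part of Tulip’s tooling) that maps a few of the entities above to their prefixes and checks whether a file name uses the expected prefix and only the permitted characters.

import re

# Hypothetical helper, not part of Tulip's tooling; prefixes taken from the table above.
PREFIXES = {
    "customer": "TCU",
    "store": "TST",
    "category": "TCA",
    "employees": "TEM",
    "variants": "TVA",
    "inventory": "TIN",
    "orders": "TOR",
}

def is_valid_filename(entity: str, filename: str) -> bool:
    # Expected prefix first, then only alphanumerics, hyphens, and underscores, ending in .csv
    prefix = PREFIXES[entity]
    return re.fullmatch(rf"{re.escape(prefix)}[A-Za-z0-9_-]*\.csv", filename) is not None

print(is_valid_filename("customer", "TCU-CustomerSample001.csv"))  # True
print(is_valid_filename("category", "customers.csv"))              # False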
Important Considerations for Performance and Formatting
CONTENTS FORMATTING | How to format Contents of the File |
---|---|
DATA TYPES, DATES | All dates must be in ISO 8601 format, except important dates (CUSTOMER ENTITY), which support YYYY-MM-DD only. All other formats must be strings. Be sure to check the metadata and the FILE IMPORTER SAMPLES. |
DATA TYPES, PHONE NO | Phone numbers must always be in E.164 format, which includes a "+" symbol, country code, area code, and phone number. |
UPDATES TO RECORDS | If the ID of an entity already exists, the record will be updated to match whatever data is passed. If the ID of an entity does not exist, it will be created. Tulip will REPLACE the contents of its database with the incoming data from the Client side via files, so please include the full record each time you send an update to an existing record. |
EMPTY FIELDS | There is no need to put placeholder text in optional fields; leaving them blank is the equivalent of setting them to null. Trim any extra white space in the header line or in the data. Remove any special encoding, and if any value contains a comma, wrap the string in double quotes. For example: 23,"Whitby, York",64CA. (A short sketch follows this table.) |
FILE TYPE | UTF-8 encoding |
APP DISPLAY NOTE | Categories are not visible if no active product is in the category. Each variant needs to have the same set of option types for the same product; e.g., if product A has variants X, Y, and Z, then X, Y, and Z each need to have the same option types. For each entity (variants, orders, etc.), rows sharing the same key ID (such as Product ID in the products file or Variant ID in the variants file) need to be added as contiguous records, one after another. See below for Multi-row formatting. |
IDs in Tulip | All columns (fields) that follow the naming convention "<field> ID" are client-side system identifiers (e.g., Employee, Customer, Store, Product). The IDs can be alphanumeric and up to 255 characters in length. Below is a reference to the key refnums, their names, and maximum supported lengths:
|
IMAGES | Image URL domains will need to be whitelisted by Tulip. It is very important to send us the image hosts before you send us any image URLs to process (such as when products and variants are sent). |
HEADER ROW | The FIRST row must be the header row. The header row must contain, at a minimum, the required fields for that CSV file. The columns in the header row are recommended to be in the specific order described in the File Importer Templates. All rows after the header row must be data rows. |
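To make these formatting rules concrete, here is a minimal sketch that writes a UTF-8 CSV with a header row, leaves an optional field blank, and relies on Python’s csv module to double-quote any value containing a comma. The column names are illustrative only; consult the File Importer Templates for the real headers required by each entity.

import csv

# Illustrative column names only -- see the File Importer Templates for real headers.
rows = [
    {"Customer ID": "customer1", "First Name": "Jane", "Last Name": "Doe",
     "Address": "23 Main St, Whitby", "Email": ""},   # blank optional field acts as null
]

with open("TCU-CustomerSample001.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()),
                            quoting=csv.QUOTE_MINIMAL)  # quotes only values that need it
    writer.writeheader()
    writer.writerows(rows)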
Sending Rows of Content that Contain Arrays
This section clarifies how the Tulip Importer supports sending arrays of nested fields under a record type. This provides flexibility to send arrays of fields in the files, allowing multiple values for an entity, such as multiple ADDRESSES and PHONE numbers for a single Customer object. The same functionality is available for Order entities as well.
For every possible type of CSV file you can import (such as customers or products), there is a set of one or more required fields that must be filled out in order to have a successful import.
On a given row:
- A row that contains different ID fields than the previous row counts as a new instance (such as a new customer or a new product).
- A row that contains the SAME KEY or ID fields as the PREVIOUS row counts as an additional row and is PART of the same instance as the previous row, except where specified.
- Additional rows that share the same Unique Row ID may only contain information under groups of columns labelled as Multi-Row Headers. Information in fields not under these types of columns will not be used since the first row’s info is used.
Multi-Row Headers and Multi-Column Headers are both types of columns where multiple pieces of information can exist.
- Multi-Row Headers allow for information to exist in additional rows whereas
- Multi-Column Headers allow you to spread the same information across multiple columns
Every group of Multi-Row Headers has a Multi-Column Headers group counterpart, and vice versa.
- These group counterparts may not be used together in the same CSV file.
- However, Multi-Row Headers and Multi-Column Headers may exist in the same CSV file as long as their group is different.
Examples
To see the difference between a multi-row header and a multi-column header, consider the following example.
A customer CSV file includes two phone numbers. The first row is the header row.
Both the Customer ID and the Phone Number ID appear on each row, but while the Customer ID value repeats, the Phone Number ID values differ. This allows customer1 to have two phone numbers.
Multi-Row
Customer ID | Phone Number ID | Phone Number | Phone Number Country ID |
---|---|---|---|
customer1 | phone1 | 123-456-7890 | CAN |
customer1 | phone2 | 123-654-0987 | CAN |
How it looks inside a CSV file - note how the Customer ID is the same. IMPORTANT: the rows with the same ID MUST BE CONTIGUOUS. Spreading rows with the same unique row ID across different locations in the file will cause file processing to treat each record as a separate UPDATE instead of grouping the array items together.
Customer ID,Phone Number ID,Phone Number,Phone Number Country ID
customer1,phone1,123-456-7890,CAN
customer1,phone2,123-654-0987,CAN
Multi-Column
Customer ID | Phone Number 1 ID | Phone Number 1 | Phone Number 1 Country ID | Phone Number 2 ID | Phone Number 2 | Phone Number 2 Country ID |
---|---|---|---|---|---|---|
customer1 | phone1 | 123-456-7890 | CAN | phone2 | 123-654-0987 | CAN |
How it looks inside of a CSV file:
Customer ID,Phone Number 1 ID,Phone Number 1,Phone Number 1 Country ID,Phone Number 2 ID,Phone Number 2,Phone Number 2 Country ID
customer1,phone1,123-456-7890,CAN,phone2,123-654-0987,CAN
Unlike the Multi-Row example, the headers of the Phone Number group using Multi-Column headers include identifying numbers. These numbers have no meaning other than allowing the importer to differentiate the different Phone Numbers.
These identifying numbers must be integers, but they can be in any order and do not have to begin with 1 or be sequential, although for readability it is recommended to start at 1 and increase toward the right.
Both ways of implementing this kind of multiplicity in values tied to a single parent record are supported; please see the samples provided throughout this documentation to help create your data mapping accordingly.
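For reference, the following is a minimal sketch that produces the Multi-Row example above from structured data, writing all rows that share the same Customer ID contiguously so the importer groups them into a single customer with two phone numbers. Column names mirror the example; as before, the real headers come from the File Importer Templates.

import csv

# Mirrors the Multi-Row example: one customer, two phone numbers.
customer = {
    "Customer ID": "customer1",
    "phones": [
        {"Phone Number ID": "phone1", "Phone Number": "123-456-7890", "Phone Number Country ID": "CAN"},
        {"Phone Number ID": "phone2", "Phone Number": "123-654-0987", "Phone Number Country ID": "CAN"},
    ],
}

with open("TCU-CustomerPhones001.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Customer ID", "Phone Number ID", "Phone Number", "Phone Number Country ID"])
    for phone in customer["phones"]:  # rows for the same Customer ID are written contiguously
        writer.writerow([customer["Customer ID"], phone["Phone Number ID"],
                         phone["Phone Number"], phone["Phone Number Country ID"]])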
Supported CSV File Types
Click the link below to see the formatting, column-level header specifications, and data types for each.