Import Definition Details

Overview

The Import Definition Details screen provides all of the information about the selected Import Definition.

File Size Limitations

When importing files into EDP, please note the following file size limitations:

Control Files

When importing data into EDP, the platform supports the optional use of a Control File as a way to validate that the import file has been received correctly. If enabled, the Control File must be provided along with the import file. The Control File should contain one row and one column (without a header row). The value in the Control File must be an integer that represents the record count of the import file (including a column header row, if used). 

EDP will count all of the records in the import file, and compare that count to the integer in the Control File. If the platform identifies a discrepancy in the record count, then the import file will not be imported and an error message is displayed.
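The record-count check can be sketched roughly as follows (an illustrative sketch only; the function and file names are hypothetical, not EDP's actual implementation):

```python
import csv

def control_file_matches(import_path, control_path):
    """Return True when the integer in the Control File equals the
    record count of the import file (header row included, if present)."""
    with open(import_path, newline="") as f:
        actual = sum(1 for _ in csv.reader(f))
    with open(control_path) as f:
        expected = int(f.read().strip())  # single row, single column, no header
    return actual == expected
```

If the two counts differ, the platform rejects the file and raises an error, as described above.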

EDP allows you to optionally treat multiple import files as a single import job. If you have this feature enabled, ALL files must pass the Control File evaluation criteria, or else the import will fail for all files. If you do not have this feature enabled, the platform evaluates each import file separately against its Control File. 

The Control File must conform to the proper naming convention so that the platform can identify the import file to which it corresponds. The Control File must have the exact same name as the import file, but with the file extension ".control" or ".ctrl".

If you zip the import file, you must also zip its corresponding Control File.

If importing multiple files, you can optionally zip all import files together in one file, and zip all their corresponding Control Files in another zip file.

If you encrypt the import file, you must also encrypt its corresponding Control File.

Access

To access the Import Definition Details screen, click on the desired Import Definition from the Imports screen.

Navigation links are provided near the top of the Import Definition Details screen.

Features

 Edit Import Definition Properties


To edit the Import Definition properties:

  1. Click the Edit icon next to the Import Definition name. The "Edit Import Definition Name" pop-up window is displayed.

  2. Optionally change the Display Name of the Import Definition.

  3. If your account is configured to use Business Units, optionally enter one or more Business Units to which this Import Definition applies. A confirmation dialog box is displayed; click confirm.

Note: If you assign a Business Unit to an Import Definition, you can't later change or remove that assignment.

  4. From the "Integration Queue Name" field, select a priority ranking for this Import Definition: High, Normal (the default setting), or Low. The platform automatically processes files in the highest-priority queue until that queue is empty; it then moves to the next-highest priority queue, and so on.

  5. Click save.
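The queue-priority behavior described above can be sketched as follows (a minimal illustration with hypothetical names, not the platform's actual scheduler):

```python
def next_import_file(queues):
    """Pick the next file to process: drain the highest-priority
    queue first, falling back to lower priorities when it is empty."""
    for priority in ("High", "Normal", "Low"):
        if queues.get(priority):
            return queues[priority].pop(0)
    return None  # nothing left to process
```

A Normal-priority file is therefore only picked up once the High queue is empty.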

 

 Delete an Import Definition


To delete an Import Definition:

  1. Click the Edit icon next to the Import Definition name. The "Edit Import Definition Name" pop-up window is displayed.

  2. Click delete. A confirmation dialog box is displayed.

  3. Click confirm.

 

 

 Execute an Import Definition


Import Definitions can be scheduled to run automatically on a one-time or recurring basis. You can also manually execute an Import Definition as follows:

  1. Click run. A confirmation dialog box is displayed.

  2. Click confirm.

 

 

The Import Definition Details screen is split into multiple steps, identified by a series of numbered items across the top of the screen. You can complete these steps (described below) in any order you like.

 Origin Settings


  1. Click origin settings.

  2. The import file can reside on your local PC, on an SFTP server, in an Amazon S3 bucket, or in Google Cloud Storage.

Import a File from your Local PC

To import a file located on your local PC:

  1. From the "Connector" drop-down menu, select Desktop.

  2. Click choose file, then browse to and select the desired import file.

  3. From the "File Delimiter" drop-down menu, select a delimiter character, or select Custom Delimiter and enter the custom character.

  4. If your import file uses text qualifiers, select the appropriate character from the "Text Qualifier" drop-down menu. The platform supports the following standard qualifiers: single quote ('), double quote ("), and pipe (|). Or, select None if your import file doesn't use text qualifiers. To use a custom qualifier, select Custom Qualifier and enter the custom character.

  5. From the "File Encoding" drop-down menu, select an encoding method (the default method is UTF-8).

  6. If the import file is encrypted, select PGP from the "Encryption Type" drop-down menu.

Note: EDP does not accept PGP-encrypted files that are signed; the files must be unsigned. 

  7. From the "Contains Unescaped Double Quotes" drop-down menu, select Yes if you want to ignore enclosing double quotes. Or, select No if you want the platform to treat enclosing double quotes as the CSV cell container.

  8. If the import file has a column header row, select Yes from the "File has Column Header" drop-down menu.

  9. If you want to treat a missing file as an error, select Yes from the "Log if File Missing" drop-down menu.

  10. If the import file is sorted, select Yes from the "Is File Sorted" drop-down menu.

  11. If you're importing Event records, the following additional option is displayed:

  • Write to Person Profile: This option allows you to update the Person Profile with Event data for use in segmentation and targeting. Event data is always accessible for reporting and analytics, but you may have cases where historical Event data is too old to be useful for segmentation and targeting, and serves no useful purpose on the Person Profile. If you select Yes from this drop-down menu, the platform will update the Person Profile with Event data. Please note that you may see slower import speeds if this option is enabled. As a best practice, if you don't need the Event data for segmentation and targeting, select No from this drop-down menu.

  12. If you're importing Activity records, the following additional options are displayed:

  • Record Keys: Enter the name of the field on the import file to use to uniquely identify groups of lines as one record (or one transaction / order). 

  • Process Earned Rule Calculations: If you select Yes from this drop-down menu, the platform will use the Activity records in the import file to execute Calculation Rules (often referred to as "Earn Rules"). 
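The delimiter and text-qualifier settings above correspond to standard CSV parsing options. As an illustration of the concepts only (this uses Python's csv module, not how EDP itself parses files):

```python
import csv
import io

# One pipe-delimited row whose second field uses a double-quote text qualifier.
data = 'id|"Smith, Jane"|NY\n'

# delimiter plays the role of the File Delimiter setting;
# quotechar plays the role of the Text Qualifier setting.
rows = list(csv.reader(io.StringIO(data), delimiter="|", quotechar='"'))
```

With the pipe as delimiter and the double quote as qualifier, the embedded comma in "Smith, Jane" stays inside a single field instead of splitting the record.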

Import a File via SFTP, Amazon S3, or Google Cloud Storage

To import a file located on an SFTP server, an Amazon S3 bucket, or Google Cloud Storage:

  1. From the "Connector" drop-down menu, select Secure FTP, Amazon S3, or Google Cloud Storage.

  2. From the "Connection Profile" drop-down menu, select a Connection Profile (see Connectors for more information on creating Connection Profiles).

  3. In the Folder Path field, enter the path to the folder on the server or S3 bucket where the import file is located.

  4. In the File Name field, enter the name of the import file. This field supports the use of a date pattern mask, and the use of wild card characters.

  5. From the "File Timestamp Time Zone" drop-down menu, select the time zone to use for the file's timestamp.

  6. From the "After Import" drop-down menu, identify what to do with the original file after it's been imported:

  • Move File: In the "Folder Path" field, enter the location where you want to move the file.

  • Add Suffix: In the "Suffix" field, enter the text string you want to append to the file name.

  7. From the "File Delimiter" drop-down menu, select a delimiter character, or select Custom Delimiter and enter the custom character.

  8. If your import file uses text qualifiers, select the appropriate character from the "Text Qualifier" drop-down menu. The platform supports the following standard qualifiers: single quote ('), double quote ("), and pipe (|). Or, select None if your import file doesn't use text qualifiers. To use a custom qualifier, select Custom Qualifier and enter the custom character.

  9. From the "File Encoding" drop-down menu, select an encoding method (the default method is UTF-8).

  10. If the import file is encrypted, select PGP from the "Encryption Type" drop-down menu.

Note: EDP does not accept PGP-encrypted files that are signed; the files must be unsigned.

  11. From the "Contains Unescaped Double Quotes" drop-down menu, select Yes if you want to ignore enclosing double quotes. Or, select No if you want the platform to treat enclosing double quotes as the CSV cell container.

  12. If the import file has a column header row, select Yes from the "File has Column Header" drop-down menu.

  13. If you want to treat a missing file as an error, select Yes from the "Log if File Missing" drop-down menu.

  14. If the import file is sorted, select Yes from the "Is File Sorted" drop-down menu.

  15. If you're importing Event records, the following additional option is displayed:

  • Write to Person Profile: This option allows you to update the Person Profile with Event data for use in segmentation and targeting. Event data is always accessible for reporting and analytics, but you may have cases where historical Event data is too old to be useful for segmentation and targeting, and serves no useful purpose on the Person Profile. If you select Yes from this drop-down menu, the platform will update the Person Profile with Event data. Please note that you may see slower import speeds if this option is enabled. As a best practice, if you don't need the Event data for segmentation and targeting, select No from this drop-down menu.

  16. If you're importing Activity records, the following additional options are displayed:

  • Record Keys: Enter the name of the field on the import file to use to uniquely identify groups of lines as one record (or one transaction / order).

  • Process Earned Rule Calculations: If you select Yes from this drop-down menu, the platform will use the Activity records in the import file to execute Calculation Rules (often referred to as "Earn Rules").

  17. Some clients split very large import files into multiple files. In this situation, the platform can be directed to process these multiple files as a single import job, in a specified sequence. To enable this feature, select Yes from the "Process multiple files as one" drop-down menu. Then enter the following:

  • Sort Files: Select either Ascending - Alphanumeric or Descending - Alphanumeric.

  • Number of Files per Job: When processing multiple files as one, Cheetah concatenates all of these files into one file and imports it in one import job. If the concatenated file is too large, it can exceed the total memory available for the import process, which results in a failed import. To prevent this situation, you can specify how many files should be included in each job. Please note this is an optional feature that must be enabled in your account.

  18. For SFTP and Amazon S3 imports: optionally, to use a Control File, select Yes from the "Control File Name" drop-down menu, then enter the name of the Control File. This file must have the same name as the import file, with the file extension ".control" or ".ctrl".

  19. For SFTP and Amazon S3 imports: you can specify whether the platform should generate transform error files, insert error files, and / or skipped record error files. Click "Advanced Options" to expand that section. From the "Export Error / Skip Logs" drop-down menu, select Yes. In the "Folder Path" field, enter the desired file path. The default filename for these error files is: "Import_Job_<ERROR_TYPE>_<JOB_ID>".

  20. Click save. Or click next to navigate to Action Settings.
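The File Name field's date pattern mask and wild cards work like familiar date-format tokens and shell-style globs. A rough sketch of the idea (using Python's strftime and fnmatch syntax, which may differ from EDP's exact mask syntax; the file names are hypothetical):

```python
import fnmatch
from datetime import date

# A file-name pattern with a date mask, resolved for a given run date...
pattern = "customers_%Y%m%d_*.csv"
resolved = date(2024, 3, 15).strftime(pattern)  # customers_20240315_*.csv

# ...then matched against the files present on the server or bucket.
files = ["customers_20240315_001.csv", "customers_20240314_001.csv"]
matches = fnmatch.filter(files, resolved)
```

Only the file whose embedded date matches the run date is selected; the wildcard absorbs the varying suffix.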

 

 Action Settings


The Action Settings screen is used to define how the platform handles the import data. 

Note: Depending on the table into which you're importing data, the options on this screen will vary; you may not see all of the options listed below. 

  1. Click action settings.

  2. From the "Include Blank Values" drop-down menu, select either:

  • Yes: Blank values in the import file should overwrite populated fields in the destination table.

  • No: Ignore blank values in the import file.

  3. From the "Skip Publish Activity" drop-down menu, select either:

  • Yes: Do not update Activity records for individuals in the import file.

  • No: Update Activity records for individuals in the import file.

Note: The default option for Skip Publish Activity is Yes. This option is typically enabled only for performance reasons.

  4. From the "Validate Records" drop-down menu, select either:

  • Yes: Run email and phone number syntax validation on the appropriate import fields. If an email or phone field fails this validation, the entire record is rejected, and is not written to the database.

Note: For email fields, this option works in conjunction with a Configuration setting called "Skip Validating Email Format." If that Configuration setting is set to true, the platform will write an improperly formatted email address to the database, even if the above import-level flag is enabled.

  • No: Do not run email and phone syntax validation.

  5. From the "Action" drop-down menu, identify how you want the platform to handle the import data:

  • Insert Data or Update if Duplicate: Create new records and update existing records.

  • Insert Data or Skip if Duplicate: Create new records only; existing records are ignored.

  • Update Data or Skip Insert: Update existing records only; new records are ignored.

  • Delete Records: Delete existing records. Please note that if you're importing People records, the platform also deletes all associated records, such as Activities, Events, and Offers.

  • Overwrite All Existing Data: For Lookup tables only, replace the entire Lookup table with the data in the import file.

  6. In the "Match Records By" field, enter the primary key (or keys) by which you want to match records in the import file to existing records in the destination table.

Note: If importing data into a Custom Object table, the "Match Custom Object By" field is automatically populated with the Primary Key for the Custom Object table.

  7. If you're importing data into a Sub Object of a Custom Object table, in the "Match Sub Objects Fields By" field, enter the primary key (or keys) by which you want to match records in the import file to existing records in the Sub Object.

  8. If you selected more than one Match By field, select a logical operator:

  • All: A record must match on all selected fields to be considered a duplicate.

  • Any: A record must match on at least one of the selected fields to be considered a duplicate.

Note: The All / Any selector is optional, and must be enabled in your account. If this feature is not enabled, the platform defaults to All.

  9. Click save. Or click next to navigate to Map & Transform.
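The Match By fields and the All / Any operator combine as follows (a minimal sketch with hypothetical field names, not the platform's matching engine):

```python
def is_duplicate(incoming, existing, keys, operator="All"):
    """Compare an incoming record to an existing one on the selected
    Match By fields, combined with All (every key must match) or
    Any (at least one key must match)."""
    hits = (incoming.get(k) == existing.get(k) for k in keys)
    return all(hits) if operator == "All" else any(hits)
```

With All, a record that matches on email but not on member ID is treated as new; with Any, the same record is treated as a duplicate.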

 

 Map and Transform


The Map & Transform section is used to define the data map for the import file. The data map identifies which column in the import file should be loaded into which field in the destination table.

  1. Click map & transform.

  2. You can either define the data map manually, or upload a file of column header names to define the data map.

Manually Define the Data Map

To manually define a data map:

  1. Click add row (or select Add Row from the "Add" drop-down menu).

  2. Check "Required" if this column is required.

  3. In the Field Name field, enter the name of the column from the import file.

  4. To perform a data transformation on this field, click add. From the pop-up menu of available options, click the desired Data Transformation. Repeat this step as needed to define additional data transformations.

  5. From the "Destination" drop-down, select the field in the table where this column is to be loaded.

  6. Repeat the above steps to define additional columns to import.

  7. To remove a column, click the Delete button (trash can icon).

Upload Headers

To upload the headers for the import file data map:

  1. Click upload headers (or select Upload Headers from the "Add" drop-down menu).

  2. In the "Upload Headers" pop-up window, click choose file.

  3. Browse to and select the desired file.

Note: The file of column header names must be in .csv or .txt format.

  4. From the "Delimiter" drop-down menu, select a delimiter character, or select Custom and enter the custom character.

  5. Click upload. The Map & Transform screen is populated with the column header names from the import file.

The column header names are displayed within the Field Name field, in the same sequence as they appear in the import file. The platform attempts to map each column name to a field in the destination table. If the platform finds a match, the Destination drop-down menu is pre-populated with that field name.

  6. If the platform couldn't identify a field name (or if the field name is incorrect), select the desired field from the "Destination" drop-down menu.

  7. To perform a data transformation on a column, click add. From the pop-up menu of available options, click the desired Data Transformation. Repeat this step as needed to define additional data transformations.

  8. To remove a column, click the Delete button (trash can icon).

When finished, click save. Or click next to navigate to Schedule.
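Conceptually, each row of the data map pairs a source column with optional transformations and a destination field. A simplified sketch of applying such a map to one record (illustrative only; the column and field names are hypothetical):

```python
data_map = [
    # (source column, transformations applied in order, destination field)
    ("EMAIL_ADDR", [str.strip, str.lower], "email"),
    ("FIRST",      [str.strip],            "first_name"),
]

def apply_map(row, mapping):
    """Run each source value through its transformations,
    then write the result to the destination field."""
    out = {}
    for source, transforms, dest in mapping:
        value = row[source]
        for transform in transforms:
            value = transform(value)
        out[dest] = value
    return out
```

Each transformation runs in the order it was added to the row, and the final value lands in the destination field.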

 

 Test Map and Transform Settings


To test your data map and data transformations:

  1. Click test map & transform. The "Map & Transform Preview" pop-up window is displayed.

  2. Click choose file, and browse to and select the desired test file (this file must include a header row, and contain fewer than 100 rows).

Note: For the test file, the platform will apply the import settings defined in the "Origin Settings" section (such as delimiter character, text qualifier, etc.), so your test file must conform to those same settings. In addition, the platform will validate that the incoming test file field values are compatible with the data types of the destination table fields.  

  3. Click upload. The test results are displayed.

  • Processed: Click the Success tab to view the records that were successfully mapped and transformed. Optionally click success rows to download these records.

  • Transform Errors: Click the Transform Errors tab to view the records that failed the map and transform process. Optionally click transform error rows to download these rows.

  • Insert Errors: Click the Insert Errors tab to view the records that failed the database insert process. Optionally click insert error rows to download these rows.

  4. Optionally, to test another file, click upload another file.

  5. When finished, click close.

 

 Schedule


In addition to running an import on demand, EDP supports the following automated execution options: run once at a specified date / time, or run repeatedly at a specified frequency.

  1. Click schedule.

  2. Select one of the following methods:

Schedule: On Demand

You can schedule an import to run only when manually executed.

  1. Toggle "Enabled" to "Yes."

  2. From the "Recurrence" drop-down menu, select "On demand."

Schedule: Run once

You can schedule an import to process once at a specified date / time.

  1. Toggle "Enabled" to "Yes."

  2. From the "Recurrence" drop-down menu, select "Once."

  3. Click into the "Execute Once On" field. A calendar pop-up window is displayed.

  4. Select the desired date when you want the import to execute.

  5. Within the calendar pop-up window, click the clock icon.

  6. Define the time when you want the import to execute.

  7. From the "Time Zone" menu, select the time zone to use when determining the time to execute.

Schedule: Recurring

You can schedule an import to run on a recurring frequency:

  1. Toggle "Enabled" to "Yes."

  2. From the "Recurrence" drop-down menu, select "Repeat."

  3. Next to Schedule Composer, select an interval:

  • Hour: Select the minute past the hour.

  • Day: Select the time of day.

  • Week: Select the day of the week, and the time of day.

  • Month: Select the day of the month, and the time of day.

  • Year: Select the day, the month, and the time of day.

  4. Optionally, to manually enter a CRON expression, check "Advanced Mode," then enter the expression.

  5. Define the import schedule begin and end dates. If you leave these dates blank, the schedule will run indefinitely.

  6. From the "Time Zone" menu, select the time zone to use when determining the time to execute.
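In Advanced Mode, the Schedule Composer intervals correspond to standard five-field CRON expressions (minute, hour, day of month, month, day of week). A few illustrative equivalents (verify the exact syntax your account accepts):

```python
# Schedule Composer interval -> equivalent five-field CRON expression
examples = {
    "Hour, at minute 30":       "30 * * * *",
    "Day, at 02:00":            "0 2 * * *",
    "Week, Mondays at 09:15":   "15 9 * * 1",
    "Month, 1st at midnight":   "0 0 1 * *",
}

def looks_like_cron(expr):
    """Sanity check: a standard CRON expression has five fields."""
    return len(expr.split()) == 5
```

For instance, "15 9 * * 1" fires at 09:15 every Monday, matching the Week interval above.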

  3. Optionally, if you want to send error notifications, toggle Error to Yes, then enter the email addresses for the notification recipients.

  4. Optionally, if you want to send success notifications, toggle Success to Yes, then enter the email addresses for the notification recipients.

  5. Click save.

 

 Job History


To view the execution history of an Import Definition:

  1. Click job history. Each execution of an Import is displayed as a row on the Job History screen. Each row indicates the start and end date / time, the current status, and the number of records imported.

  2. Optionally, to filter the Job History log by processing status, select a status value from the drop-down menu, then click the Search button. 

  3. Optionally, to download a log file associated with an execution, select one of the following from the Download menu within the Action column:

  • EDP Log

  • Transformation Errors

  • Skipped Records Log

  • Records with Errors

  4. Optionally, to view additional details of the import job, click the Show Import Jobs icon within the Action column.

  5. Optionally, to delete a job, click the Delete icon within the Action column.

 

Back to Imports