
Importing from Cloud Storage

Find all information on how to import from a cloud storage provider here.


Azure Data Lake Gen2#

To import data from Azure Data Lake Storage Gen2:

  1. Click the "+" button and choose "Import Job" or right-click in the File Browser and select "Create New" > "Import Job". The 'New Import Job' tab appears in the menu bar.
  2. Click "Select Connection". The dialog 'Select Connection' opens.
  3. Click on the connection for Azure Data Lake Gen2 and confirm with "Select". The connection is displayed.
  4. Select the required file type from the drop-down 'File Type' and confirm with "Next".
  5. Enter the file or folder name as it is named in your storage.
  6. Define the delimiter character, the schema, and the column names (a minimal sketch follows this list).

    INFO: The default value for the delimiter is the comma character ','.

  7. Select the schema of the imported data.

  8. If needed, uncheck the check-box if the first row does not contain the column names.

    INFO: The check-box is marked by default, meaning the column names are contained in the first row.

  9. If you want to filter by date and time, select the filter method from the drop-down.

    INFO: Select the start date and end date from the calendar for the filter mode 'Fixed dates'.

    INFO: Enter a 'das' expression as the start and end expression, e.g. 'TODAY()-4d', for the filter mode 'Dynamic dates'.

  10. If needed, exclude data by file modification date by entering the number of days.

  11. If needed, modify the advanced settings, e.g. the character encoding, and confirm with "Next". The tab 'Data Fields' opens.
  12. Confirm with "Next". The tab 'Define Fields' opens.
  13. Mark all required columns.
  14. If needed, enter placeholder values and confirm with "Apply".
  15. Decide how to handle invalid data.
  16. Decide whether you want to partition data and confirm with "Next". The tab 'Schedule' opens.

    INFO: If you have checked 'Partition Data', enter a date expression and select the date format from the drop-down.

  17. Decide whether the import shall be triggered manually or on a schedule.

  18. Select the option for data retention.
  19. If needed, enter the number of sample records and the maximum number of errors to log, and confirm with "Next". The tab 'Save' opens.

    INFO: Higher values lead to more precise preview results but can significantly decrease performance.

  20. If needed, enter an import job description.

  21. Unmark the check-box if the import shall not start immediately after saving.

    INFO: The check-box is marked by default so that the import starts right after saving the import job.

  22. If needed, enter email addresses for notifications and confirm with "Next". The 'Save Import Job' dialog opens.

  23. Select the path the data shall be imported to, enter a name, and confirm with "Save". The data import from Azure Data Lake Storage Gen2 is finished.
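
A minimal sketch of the settings above, in Python: it downloads a delimited file from Azure Data Lake Storage Gen2 with the azure-storage-file-datalake package and reads the column names from the first row, mirroring steps 5, 6, 8, and 11. The account, container, and file names are placeholder assumptions; the code only illustrates the idea, not Spectrum's actual implementation.

```python
# Hypothetical sketch of steps 5, 6, 8, and 11: download a delimited file
# from ADLS Gen2 and read the column names from the first row.
# Account, container, file path, and key are placeholders.
import csv
import io

from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential="<account-key>",  # or a credential from azure-identity
)
file_system = service.get_file_system_client(file_system="<container>")
file_client = file_system.get_file_client("path/to/data.csv")  # step 5: file name

raw = file_client.download_file().readall().decode("utf-8")  # step 11: encoding

reader = csv.reader(io.StringIO(raw), delimiter=",")  # step 6: default delimiter ','
columns = next(reader)  # step 8: column names in the first row (default)
print("Columns:", columns)
for record in reader:
    print(record)
```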

Importing from Google Cloud Storage#

INFO: Find all information about importing data from Google Cloud Storage here.

Requirements: Configuring Google Cloud Storage as a Connection#

A Google Cloud Storage connection with Spectrum must be created before importing data.

INFO: Note that a Google Cloud Storage bucket must not contain a "_" in its name.
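
Since names containing an underscore cannot be used, it can help to validate the bucket name up front. A minimal sketch in Python, with a placeholder bucket name:

```python
# Hypothetical sketch: validate a bucket name before configuring it as a
# Spectrum connection, since names containing "_" are not supported.
# The bucket name is a placeholder.
bucket_name = "my-import-bucket"

if "_" in bucket_name:
    raise ValueError(f"Bucket name {bucket_name!r} contains '_' and cannot be used")
print(f"Bucket name {bucket_name!r} is valid for a connection")
```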

Configuring Import Jobs with Google Cloud Storage#

To import data from Google Cloud Storage:

  1. Click the "+" button and choose "Import Job" or right-click in the File Browser and select "Create New" > "Import Job". The 'New Import Job' tab appears in the menu bar.
  2. Click "Select Connection". The dialog 'Select Connection' opens.
  3. Click on the connection for Google Cloud Storage and confirm with "Select". The connection is displayed.
  4. Select the required file type from the drop-down 'File Type' and confirm with "Next".
  5. Click "Browse" to select the folder/file from the Google Cloud Storage. The 'Remote Data Browser' opens.
  6. Select the required folder/file and confirm with "Select". The data name is displayed in the 'File or Folder' field.

    TIP: You can scroll the results or filter them.

  7. Define the delimiter character, the schema, and the column names.

    INFO: The default value for delimiter is ','.

  8. If needed, choose a filter from the drop-down to apply a time range to time patterns in the file name path.

  9. If needed, set a file filter to exclude specific files depending on their age (a minimal sketch follows this list).
  10. If needed, modify the advanced settings, e.g. the character encoding, and confirm with "Next". The tab 'Data Fields' opens.
  11. Mark all required columns.
  12. If needed, view the raw records.
  13. If needed, enter placeholder values and confirm with "Apply".
  14. Decide how to handle invalid data.
  15. Decide whether you want to partition data.
  16. Confirm with "Next". The tab 'Schedule' opens.
  17. Decide whether the import is triggered manually or on a schedule.
  18. Select the way data is replaced or appended and confirm with "Next". The tab 'Save' opens.
  19. If needed, enter an import job description.
  20. Mark the check-box if the import shall start immediately after saving.
  21. If needed, enter email addresses for notifications and confirm with "Next". The 'Save Import Job' dialog opens.
  22. Select the path the data shall be imported to, enter a name, and confirm with "Save". The data import from Google Cloud Storage is finished.
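
As a rough illustration of the age-based file filter from step 9, the Python sketch below lists objects under a prefix with the google-cloud-storage package and keeps only recently modified files. The bucket name, prefix, and day limit are placeholder assumptions, not values from Spectrum.

```python
# Hypothetical sketch of the age-based file filter from step 9: list objects
# under a prefix and keep only files modified within the last MAX_AGE_DAYS.
# The bucket name, prefix, and day limit are placeholders.
from datetime import datetime, timedelta, timezone

from google.cloud import storage

MAX_AGE_DAYS = 30

client = storage.Client()  # uses application default credentials
cutoff = datetime.now(timezone.utc) - timedelta(days=MAX_AGE_DAYS)

for blob in client.list_blobs("my-import-bucket", prefix="exports/"):
    if blob.updated and blob.updated >= cutoff:  # blob.updated = last modified
        print("would import:", blob.name)
```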

Importing from Snowflake#

To import from Snowflake:

  1. Click the "+" button and choose "Import Job" or right-click in the File Browser and select "Create New" > "Import Job". The 'New Import Job' tab appears in the menu bar.
  2. Click "Select Connection". The dialog 'Select Connection' opens.
  3. Click on the connection for Snowflake and confirm with "Select". The connection is displayed.
  4. The Snowflake database name is the one set in the connector.

    INFO: If not previously set in the connector, enter the Snowflake warehouse name.

  5. Select whether to import from a table or a view and select the schema to be used. A preview of the imported data is displayed (a connection sketch follows this list).

  6. Review the schema and click "Next".
  7. Review the schedule, data retention, and advanced properties for the job.
  8. Add a description and check the box if you would like the import to start immediately after saving. Click "Save", and name the file.
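
For orientation, the Python sketch below shows roughly what the job connects to, using the snowflake-connector-python package: the warehouse, database, and schema from steps 4 and 5, followed by a small preview query. All identifiers are placeholders; this illustrates the idea, not Spectrum's implementation.

```python
# Hypothetical sketch of steps 4 and 5: connect with the warehouse, database,
# and schema from the connector, then run a small preview query.
# All connection values and the table name are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account-identifier>",
    user="<user>",
    password="<password>",
    warehouse="<warehouse>",  # step 4: warehouse name if not set in the connector
    database="<database>",    # database name set in the connector
    schema="<schema>",        # step 5: the selected schema
)
try:
    cur = conn.cursor()
    cur.execute("SELECT * FROM <table_or_view> LIMIT 10")  # step 5: data preview
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```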