Source and destination

Databricks provides instructions on how to set up a SQL endpoint with the correct permissions and retrieve login credentials. We recommend following the Databricks documentation for setting up an ODBC connection.

After following those instructions, you should have the following credentials from Databricks, which you will use to connect Polytomic to Databricks:

  • Server hostname
  • Port
  • Access token
  • HTTP Path
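As a sanity check, these four values are what a Databricks ODBC connection string is built from. The sketch below is illustrative only (the driver name and key names follow the common Simba Spark ODBC convention Databricks documents, and may differ for your driver version); the hostname, path, and token are placeholders:

```python
def databricks_odbc_conn_string(hostname: str, port: int,
                                http_path: str, access_token: str) -> str:
    """Compose an ODBC connection string for a Databricks SQL endpoint.

    AuthMech=3 selects username/password auth, where the username is the
    literal string "token" and the password is a personal access token.
    """
    return (
        "Driver=Simba Spark ODBC Driver;"
        f"Host={hostname};Port={port};HTTPPath={http_path};"
        "SSL=1;ThriftTransport=2;AuthMech=3;"
        f"UID=token;PWD={access_token}"
    )

# Example with placeholder credentials:
conn = databricks_odbc_conn_string(
    "dbc-a1b2c3d4-e5f6.cloud.databricks.com",
    443,
    "/sql/1.0/endpoints/1234567890abcdef",
    "dapi0123456789abcdef",
)
```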

Once you have set up your endpoint and credentials, you can set up Polytomic to read data from Databricks.

  1. In Polytomic, go to Connections → Add Connection → Databricks.

  2. Enter the server hostname, port, access token, and HTTP path.

  3. (Optional) If you'd like to sync data to your Databricks instance using Polytomic, you need to configure an AWS S3 bucket. See the instructions below.

  4. Click Save.

Writing to Databricks

There are two requirements for writing to Databricks:

  1. A user with write permission to the tables/schemas that you want to load data into.
  2. An S3 bucket in the same region as your Databricks server and an Access Key/Secret Key with read/write access to the bucket. Polytomic will use these credentials to:
  • Write intermediary load files to the bucket
  • Instruct Databricks to load from the same bucket

Files are removed as each load job completes.

Once you have set up a user and an S3 bucket with an Access Key/Secret Key pair, enter those details in the connection configuration above.
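The read/write access described above maps to a small set of S3 actions. A minimal IAM policy for the key pair might look like the following, sketched here as a Python dict (the bucket name is a placeholder, and your security requirements may call for a tighter policy):

```python
import json

BUCKET = "polytomic-load-bucket"  # placeholder bucket name

# Object-level actions apply to keys inside the bucket, while ListBucket
# applies to the bucket itself, hence the two separate Resource entries.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
    ],
}

print(json.dumps(policy, indent=2))
```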