Connect to Live Hive Data in the PostgreSQL Interface through CData Connect Cloud



Create a live connection to Hive in CData Connect Cloud and connect to your Hive data from PostgreSQL.

PostgreSQL is a popular interface for data access, and a vast number of PostgreSQL clients are available. When you pair PostgreSQL with CData Connect Cloud, you gain database-like access to live Hive data from PostgreSQL. In this article, we walk through connecting to Hive data in Connect Cloud and establishing a connection between Connect Cloud and PostgreSQL using a TDS foreign data wrapper (FDW).

CData Connect Cloud provides a pure SQL Server interface for Hive, allowing you to query data from Hive without replicating the data to a natively supported database. Using optimized data processing out of the box, CData Connect Cloud pushes all supported SQL operations (filters, JOINs, etc.) directly to Hive, leveraging server-side processing to return the requested Hive data quickly.
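For example, a filtered join like the following is translated and pushed through Connect Cloud to Hive rather than being evaluated client-side. This is a sketch only: the Customers table matches the one used later in this article, but the Orders table, its columns, and the join key are hypothetical stand-ins for your own Hive tables.

    SELECT c.CompanyName, o.OrderDate
    FROM ApacheHive.Customers c
    JOIN ApacheHive.Orders o ON o.CustomerId = c.id
    WHERE c.CompanyName LIKE 'A%';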

Connect to Hive in Connect Cloud

CData Connect Cloud uses a straightforward, point-and-click interface to connect to data sources.

  1. Log into Connect Cloud, click Connections, and click Add Connection.
  2. Select "Hive" from the Add Connection panel.
  3. Enter the necessary authentication properties to connect to Hive: set the Server, Port, TransportMode, and AuthScheme connection properties (illustrative values are shown after this list).
  4. Click Create & Test.
  5. Navigate to the Permissions tab in the Add Hive Connection page and update the user-based permissions.
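For reference, a HiveServer2 connection might use values along these lines. These are illustrative only; the correct settings depend on your Hive deployment (10000 is simply HiveServer2's default binary-transport port, and the server name is a placeholder):

    Server: your-hive-server.example.com
    Port: 10000
    TransportMode: BINARY
    AuthScheme: PLAIN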

Add a Personal Access Token

If you are connecting from a service, application, platform, or framework that does not support OAuth authentication, you can create a Personal Access Token (PAT) to use for authentication. Best practices would dictate that you create a separate PAT for each service, to maintain granularity of access.

  1. Click on your username at the top right of the Connect Cloud app and click User Profile.
  2. On the User Profile page, scroll down to the Personal Access Tokens section and click Create PAT.
  3. Give your PAT a name and click Create.
  4. The personal access token is only visible at creation, so be sure to copy it and store it securely for future use.
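If you want to confirm the token works before configuring the FDW, you can test the Connect Cloud TDS endpoint with any SQL Server-compatible client. Here is a sketch using sqlcmd, assuming sqlcmd is installed; the endpoint and database name match those used in the FDW setup below, and the username and token are placeholders:

    sqlcmd -S tds.cdata.com,14333 -U username@cdata.com -P your_personal_access_token -d ApacheHive1 -Q "SELECT 1"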

Build the TDS Foreign Data Wrapper

The Foreign Data Wrapper can be installed as an extension to PostgreSQL, without recompiling PostgreSQL. The tds_fdw extension is used as an example (https://github.com/tds-fdw/tds_fdw).

  1. Clone and build the git repository:

       sudo apt-get install git
       git clone https://github.com/tds-fdw/tds_fdw.git
       cd tds_fdw
       make USE_PGXS=1
       sudo make USE_PGXS=1 install

     Note: If you have several PostgreSQL versions and you do not want to build for the default one, first locate the pg_config binary for the version you want, take note of its full path, and append PG_CONFIG=<path to pg_config> after USE_PGXS=1 in the make commands.
  2. After the installation finishes, start the server:

       sudo service postgresql start

  3. Connect to the PostgreSQL database:

       psql -h localhost -U postgres -d postgres

     Note: Instead of localhost, you can use the IP address of the host where PostgreSQL is running.
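Before loading the extension in the next section, you can confirm that the build is visible to the server. A quick sanity check from the psql session opened in step 3, using PostgreSQL's standard pg_available_extensions catalog view (installed_version stays empty until you run CREATE EXTENSION below):

    SELECT name, default_version, installed_version
    FROM pg_available_extensions
    WHERE name = 'tds_fdw';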

Connect to Hive data as a PostgreSQL Database and query the data!

After you have installed the extension, follow the steps below to start executing queries to Hive data:

  1. Log into your database.
  2. Load the extension for the database:

       CREATE EXTENSION tds_fdw;

  3. Create a server object for Hive data:

       CREATE SERVER "ApacheHive1" FOREIGN DATA WRAPPER tds_fdw
         OPTIONS (servername 'tds.cdata.com', port '14333', database 'ApacheHive1');

  4. Configure user mapping with your email and Personal Access Token from your Connect Cloud account:

       CREATE USER MAPPING FOR postgres SERVER "ApacheHive1"
         OPTIONS (username 'username@cdata.com', password 'your_personal_access_token');

  5. Create the local schema:

       CREATE SCHEMA "ApacheHive1";

  6. Create a foreign table in your local database, using one of the following definitions (an alternative that imports the remote schema automatically is sketched after this list):

       -- Using a table_name definition:
       CREATE FOREIGN TABLE "ApacheHive1".Customers (
           id varchar,
           CompanyName varchar)
         SERVER "ApacheHive1"
         OPTIONS (table_name 'ApacheHive.Customers', row_estimate_method 'showplan_all');

       -- Or using a schema_name and table_name definition:
       CREATE FOREIGN TABLE "ApacheHive1".Customers (
           id varchar,
           CompanyName varchar)
         SERVER "ApacheHive1"
         OPTIONS (schema_name 'ApacheHive', table_name 'Customers', row_estimate_method 'showplan_all');

       -- Or using a query definition:
       CREATE FOREIGN TABLE "ApacheHive1".Customers (
           id varchar,
           CompanyName varchar)
         SERVER "ApacheHive1"
         OPTIONS (query 'SELECT * FROM ApacheHive.Customers', row_estimate_method 'showplan_all');

       -- Or setting a remote column name:
       CREATE FOREIGN TABLE "ApacheHive1".Customers (
           id varchar,
           col2 varchar OPTIONS (column_name 'CompanyName'))
         SERVER "ApacheHive1"
         OPTIONS (schema_name 'ApacheHive', table_name 'Customers', row_estimate_method 'showplan_all');
  7. You can now execute queries against your Hive data:

       SELECT id, CompanyName FROM "ApacheHive1".Customers;
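If you would rather not define each foreign table by hand, newer versions of tds_fdw also support IMPORT FOREIGN SCHEMA, which generates the foreign table definitions from the remote schema. A sketch, assuming a tds_fdw build with import support and the server and schema names used above:

    IMPORT FOREIGN SCHEMA "ApacheHive"
      FROM SERVER "ApacheHive1"
      INTO "ApacheHive1";

You can then list the generated foreign tables with \dE "ApacheHive1".* in psql.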

More Information & Free Trial

You have now executed a simple query against live Hive data. For more information on connecting to Hive (and more than 100 other data sources), visit the Connect Cloud page. Sign up for a free trial and start working with live Hive data in PostgreSQL.