
Write a Simple Go Application to work with Spark Data on Linux



Use the CData ODBC Driver for Apache Spark and unixODBC to create a simple Go app with live connectivity to Spark data.

Go is an open source programming language that enables you to easily build software on Linux/UNIX machines. When Go is paired with the ODBC Driver for Spark and unixODBC, you can write applications with connectivity to live Spark data. This article walks you through the process of installing the ODBC Driver for Spark, configuring a connection using the unixODBC driver manager, and creating a simple Go application to work with Spark data.

Using the CData ODBC Drivers on a Unix/Linux Machine

The CData ODBC Drivers are supported on various Red Hat-based and Debian-based systems, including Ubuntu, Debian, RHEL, CentOS, and Fedora. Several libraries and packages are also required, many of which may be installed by default, depending on your system. For more information on the supported Linux distributions and the required libraries, please refer to the "Getting Started" section of the help documentation (installed with the driver and also available online).

Installing the Driver Manager

Before installing the driver, you need to be sure that your system has a driver manager. For this article, you will use unixODBC, a free and open source ODBC driver manager that is widely supported.

For Debian-based systems like Ubuntu, you can install unixODBC with the APT package manager:

$ apt-get install unixodbc unixodbc-dev

For systems based on Red Hat Linux, you can install unixODBC with yum or dnf:

$ yum install unixODBC unixODBC-devel

The unixODBC driver manager reads information about drivers from an odbcinst.ini file and about data sources from an odbc.ini file. You can determine the location of the configuration files on your system by entering the following command into a terminal:

$ odbcinst -j

The output of the command displays the locations of the configuration files for ODBC data sources and registered ODBC drivers. User data sources can only be accessed by the user account whose home folder contains the odbc.ini file, while system data sources can be accessed by all users. Below is an example of the output of this command:

DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /home/myuser/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8

Installing the Driver

You can download the driver in standard package formats: .deb for Debian-based systems and .rpm for Red Hat-based systems. Once you have downloaded the appropriate file, you can install the driver from the terminal.

The driver installer registers the driver with unixODBC and creates a system DSN, which can be used later in any tools or applications that support ODBC connectivity.

For Debian-based systems like Ubuntu, run the following command with sudo or as root:

$ dpkg -i /path/to/package.deb

For systems that support .rpms, run the following command with sudo or as root:

$ rpm -i /path/to/package.rpm
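As noted below, the installer registers the driver in the odbcinst.ini file reported by odbcinst -j. The resulting entry will look something like the following sketch; the exact section name and driver path depend on your installation, and the library filename shown here is purely illustrative:

/etc/odbcinst.ini

[CData ODBC Driver for Apache Spark]
Description = CData ODBC Driver for Apache Spark
Driver = /opt/cdata/cdata-odbc-driver-for-sparksql/lib/libsparksqlodbc.so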

Once the driver is installed, you can list the registered drivers and defined data sources using the unixODBC driver manager:

List the Registered Driver(s)

$ odbcinst -q -d
CData ODBC Driver for Apache Spark
...

List the Defined Data Source(s)

$ odbcinst -q -s
CData SparkSQL Source
...

To use the CData ODBC Driver for Apache Spark with unixODBC, you need to ensure that the driver is configured to use UTF-16. To do so, edit the INI file for the driver (cdata.odbc.sparksql.ini), which can be found in the lib folder in the installation location (typically /opt/cdata/cdata-odbc-driver-for-sparksql), as follows:

cdata.odbc.sparksql.ini

...
[Driver]
DriverManagerEncoding = UTF-16

Modifying the DSN

When the driver is installed, a system DSN should be predefined. You can modify the DSN by editing the system data sources file (/etc/odbc.ini) and defining the required connection properties. Additionally, you can create user-specific DSNs that will not require root access to modify in $HOME/.odbc.ini.

Set the Server, Database, User, and Password connection properties to connect to SparkSQL.

/etc/odbc.ini or $HOME/.odbc.ini

[CData SparkSQL Source]
Driver = CData ODBC Driver for Apache Spark
Description = My Description
Server = 127.0.0.1
Database = MySparkDatabase
User = myuser
Password = mypassword

For specific information on using these configuration files, please refer to the help documentation (installed with the driver and also available online).
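Before writing any code, you can optionally confirm that the DSN resolves and connects using isql, the interactive SQL tool that ships with unixODBC:

$ isql "CData SparkSQL Source"

If the connection succeeds, isql displays a "Connected!" banner and a SQL prompt where you can run a test query against your Spark data.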

Creating a Simple Go App for Spark Data

With the Driver Manager installed and the DSN configured, you are ready to create a simple Go application to work with your Spark data. To start, install a Go driver for ODBC databases. While there are several options available, this article will use the odbc driver found at https://github.com/alexbrainman/odbc.

Installing odbc on Linux

To install the odbc driver for Go, you will first need to ensure that the GOPATH environment variable is defined:

$ export GOPATH=$HOME/golang/go

Once GOPATH is defined, you are ready to install the Go driver for ODBC databases:

$ go get github.com/alexbrainman/odbc
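Note that on newer Go toolchains (Go 1.16 and later), module-aware mode is enabled by default and the GOPATH-based workflow above is no longer the norm. On a modules-based setup, you can instead initialize a module for the application and add the driver as a dependency:

$ mkdir cdata-odbc-spark && cd cdata-odbc-spark
$ go mod init cdata-odbc-spark
$ go get github.com/alexbrainman/odbc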

Now you are ready to create and execute a simple Go application.

Sample Go Application

The sample application issues a simple SQL SELECT query for Spark data and displays the results. Create the directory $GOPATH/src/cdata-odbc-spark and create a new Go source file, copying the source code from below.

cdata-odbc-spark.go

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/alexbrainman/odbc" // registers the "odbc" driver with database/sql
)

func main() {
    // Open the database handle using the DSN defined in odbc.ini
    db, err := sql.Open("odbc", "DSN=CData SparkSQL Source")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    var (
        city    string
        balance string
    )

    // Issue a parameterized SELECT query against the Spark data
    rows, err := db.Query("SELECT City, Balance FROM Customers WHERE Country = ?", "US")
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()

    // Scan each row into local variables and print the results
    for rows.Next() {
        if err := rows.Scan(&city, &balance); err != nil {
            log.Fatal(err)
        }
        fmt.Println(city, balance)
    }
    if err := rows.Err(); err != nil {
        log.Fatal(err)
    }
}
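One thing to keep in mind: sql.Open does not actually connect to the data source; database/sql establishes connections lazily when the first query runs. If you would rather fail fast on a misconfigured DSN, you can optionally add a Ping call right after opening the database:

    // Optional: verify connectivity up front rather than on the first query.
    if err := db.Ping(); err != nil {
        log.Fatal(err)
    }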

In the terminal, navigate to the Go application directory and build the application:

$ go build

After the application builds, execute it to display your Spark data:

$ ./cdata-odbc-spark

At this point, you have a simple Go application for working with Spark data. From here, you can easily expand the application, adding deeper read/write functionality through familiar SQL queries.
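For instance, a parameterized INSERT goes through the same database/sql interface via Exec. The sketch below reuses the Customers table from the sample query above; whether the write succeeds depends on your Spark data source and driver configuration:

    // Write-back sketch, extending the sample application above.
    result, err := db.Exec("INSERT INTO Customers (City, Balance) VALUES (?, ?)", "Raleigh", "15000")
    if err != nil {
        log.Fatal(err)
    }
    affected, err := result.RowsAffected()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("Rows affected:", affected)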