Build ETL Jobs with MongoDB Data in AWS Glue Studio



Connect to MongoDB from AWS Glue Studio and create ETL jobs with access to live MongoDB data using the CData Glue Connector.

AWS Glue is an ETL service from Amazon that allows you to easily prepare and load your data for storage and analytics. With Glue Studio, you can build no-code and low-code ETL jobs that work with data through CData Glue Connectors. In this article, we walk through configuring the CData Glue Connector for MongoDB and creating and running an AWS Glue job that works with live MongoDB data.

About MongoDB Data Integration

Accessing and integrating live data from MongoDB has never been easier with CData. Customers rely on CData connectivity in several ways:

MongoDB's flexibility means that it can be used as a transactional, operational, or analytical database. That means CData customers use our solutions to integrate their business data with MongoDB or integrate their MongoDB data with their data warehouse (or both). Customers also leverage our live connectivity options to analyze and report on MongoDB directly from their preferred tools, like Power BI and Tableau.

For more details on MongoDB use cases and how CData enhances your MongoDB experience, check out our blog post: The Top 10 Real-World MongoDB Use Cases You Should Know in 2024.


Getting Started


Update Permissions for your IAM Role

When you create the AWS Glue job, you specify an AWS Identity and Access Management (IAM) role for the job to use. The role must grant access to all resources used by the job, including Amazon S3 for any sources, targets, scripts, temporary directories, and AWS Glue Data Catalog objects. The role must also grant access to the CData Glue Connector for MongoDB from the AWS Glue Marketplace.

The following policies should be added to the IAM role for the AWS Glue job, at a minimum:

  • AWSGlueServiceRole (For accessing Glue Studio and Glue Jobs)
  • AmazonEC2ContainerRegistryReadOnly (For accessing the CData AWS Glue Connector for MongoDB)

If you will be accessing data in Amazon S3, add:

  • AmazonS3FullAccess (For reading from and writing to Amazon S3)

Lastly, if you will be using AWS Secrets Manager to store confidential connection properties (see more below), you will also need to add the SecretsManagerReadWrite policy.
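If you prefer to script this step, the following is a minimal sketch using boto3 that attaches the managed policies listed above to an existing role. The role name is a placeholder, and you should only attach the optional policies you actually need:

import boto3

iam = boto3.client("iam")

# Placeholder role name; replace with the IAM role used by your Glue job.
role_name = "MyGlueJobRole"

# AWS managed policies discussed above (the S3 and Secrets Manager policies are optional).
policy_arns = [
    "arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    "arn:aws:iam::aws:policy/SecretsManagerReadWrite",
]

for arn in policy_arns:
    iam.attach_role_policy(RoleName=role_name, PolicyArn=arn)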

For more information about granting access to AWS Glue Studio and Glue Jobs, see Setting up IAM Permissions for AWS Glue in the AWS Glue documentation.

For more information about granting access to the Amazon S3 buckets, see Identity and access management in the Amazon Simple Storage Service Developer Guide.

For more information on setting up access control for your secrets, see Authentication and Access Control for AWS Secrets Manager in the AWS Secrets Manager documentation and Limiting Access to Specific Secrets in the AWS Secrets Manager User Guide. The credentials retrieved from AWS Secrets Manager (a string of key-value pairs) are used in the JDBC URL that the CData Glue Connector uses to connect to the data source, as shown in the Activate section below.

Collect MongoDB Connection Properties

Set the Server, Database, User, and Password connection properties to connect to MongoDB. To access MongoDB collections as tables, you can use automatic schema discovery or write your own schema definitions. Schemas are defined in .rsd files, which have a simple format. You can also execute free-form queries that are not tied to the schema.

Make a note of the necessary properties for use with the CData Glue Connector for MongoDB.
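For example, the properties you collect might look like the following (every value here is a hypothetical placeholder for illustration):

Server=mongodb.example.com
Port=27017
Database=admin
User=glue_user
Password=YOUR_PASSWORD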

(Optional) Store MongoDB Connection Properties Credentials in AWS Secrets Manager

To safely store and use your connection properties, you can save them in AWS Secrets Manager.

Note: You must host your AWS Glue ETL job and secret in the same region. Cross-region secret retrieval is not currently supported.

  1. Sign in to the AWS Secrets Manager console.
  2. On either the service introduction page or the Secrets list page, choose Store a new secret.
  3. On the Store a new secret page, choose Other type of secret. This option means you must supply the structure and details of your secret.
  4. You can read more about the required properties to connect to MongoDB in the "Activate" section below. Once you know which properties you wish to store, create a key-value pair for each property (a scripted example follows this list). For example:
    • Username: account user (for example, user@example.com)
    • Password: account password
    • Add any additional private credential key-value pairs required by the CData Glue Connector for MongoDB

    For more information about creating secrets, see Creating and Managing Secrets with AWS Secrets Manager in the AWS Secrets Manager User Guide.

  5. Record the secret name, which is used when configuring the connection in AWS Glue Studio.
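If you prefer to create the secret programmatically, the following is a minimal sketch using boto3. The secret name, region, and every value shown are placeholders; store whichever properties your connection requires:

import json
import boto3

# Create the secret in the same region as the Glue job (placeholder region shown).
secrets = boto3.client("secretsmanager", region_name="us-east-1")

# Placeholder secret name and values; use the connection properties you collected earlier.
secrets.create_secret(
    Name="cdata-mongodb-connection",
    SecretString=json.dumps({
        "Username": "glue_user",
        "Password": "YOUR_PASSWORD",
        "Server": "mongodb.example.com",
        "Port": "27017",
        "Database": "admin",
    }),
)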

Subscribe to the CData Glue Connector for MongoDB

To work with the CData Glue Connector for MongoDB in AWS Glue Studio, you need to subscribe to the Connector from the AWS Marketplace. If you have already subscribed to the CData Glue Connector for MongoDB, you can jump to the next section.

  1. Navigate to AWS Glue Studio
  2. Click Data connections
  3. Click Go to AWS Marketplace
  4. Search for the Connector "CData AWS Glue Connector for MongoDB"
  5. Click "Continue to Subscribe"
  6. Accept the terms for the Connector and wait for the request to be processed
  7. Click "Continue to Configuration"

Activate the CData Glue Connector for MongoDB in Glue Studio

To use the CData Glue Connector for MongoDB in AWS Glue, you need to activate the subscribed connector in AWS Glue Studio. The activation process creates a connector object and connection in your AWS account.

  1. Once you subscribe to the connector, a new Config tab appears on the AWS Marketplace connector page.
  2. Choose a Fulfillment option and click the "Continue to Launch" button.
  3. On the launch tab, click "Usage Instructions" and follow the link that appears to create and configure the connection.
  4. Under Connection access, select the JDBC URL format and configure the connection. Below you will find sample connection string(s) for the JDBC URL format(s) available for MongoDB. You can read more about authenticating with MongoDB in the Help documentation for the Connector.

    If you opted to store properties in AWS Secrets Manager, leave the placeholder values (e.g., ${Property1}); otherwise, the values you enter in the AWS Glue Connection interface will appear in the (read-only) JDBC URL below the properties.

    Username & Password

    jdbc:cdata:MongoDB:User=${Username};Password=${Password};Server=${Server};Port=${Port};Database=${Database}

    No Auth

    jdbc:cdata:MongoDB:Server=${Server};Port=${Port}

    DocumentDB

    jdbc:cdata:MongoDB:User=${Username};Password=${Password};Server=${Server};Port=${Port};UseSSL=True;UseFindAPI=True
  5. (Optional): Enable logging for the Connector.

    If you want to enable logging for the CData Glue Connector for MongoDB, append two properties to the JDBC URL (see the example after this list):

    • Logfile: Set this to "STDOUT://"
    • Verbosity: Set this to an integer (1-5) for varying depths of logging. 1 is the default; 3 is recommended for most debugging scenarios.
  6. Configure the Network options and click "Create Connection."
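As referenced in step 5, the Username & Password URL from step 4 with logging enabled would look like the following (Verbosity 3 shown, as recommended for debugging):

jdbc:cdata:MongoDB:User=${Username};Password=${Password};Server=${Server};Port=${Port};Database=${Database};Logfile=STDOUT://;Verbosity=3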

Configure the AWS Glue Job

Once you have configured a Connection, you can build a Glue Job.

Create a Job that Uses the Connection

  1. In Glue Studio, under "Connections," select the connection you created
  2. Click "Create job"

    The visual job editor appears. A new Source node, derived from the connection, is displayed on the Job graph. In the node details panel on the right, the Source Properties tab is selected for user input.

Configure the Source Node properties:

You can configure the access options for your connection to the data source in the Source properties tab. Refer to the AWS Glue Studio documentation for more information. Here we provide a simple walk-through.

  1. In the visual job editor, make sure the Source node for your connector is selected. Choose the Source properties tab in the node details panel on the right, if it is not already selected.
  2. The Connection field is populated automatically with the name of the connection associated with the marketplace connector.
  3. Enter information about the data location in the data source. Provide either a source table name or a query to use to retrieve data from the data source. An example of a query is: SELECT borough, cuisine FROM restaurants WHERE Name = 'Morris Park Bake Shop'. (A code sketch of this configuration follows this list.)
  4. To pass information from the data source to the transformation nodes, AWS Glue Studio must know the schema of the data. Select "Add Schema" to specify the schema interactively.
  5. Configure the remaining optional fields as needed. You can configure the following:
    • Partition column - for parallelizing the read operations from the data source
    • Data type casting - to convert data types used in the source data to the data types supported by AWS Glue
    • Job bookmark options - to enter keys for job bookmarks in your JDBC data source

    See "Use the Connection in a Glue job using Glue Studio" for more information about these options.

  6. You can view the schema generated by this node by choosing the Output schema tab in the node properties panel.
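For reference, the source node configured above corresponds roughly to the following sketch when a query is supplied instead of a table name. This is only an illustration: the "query" option name is an assumption based on AWS Glue Studio's custom connector options, and the "cdata-mongodb" connection name matches the sample script shown later in this article.

from pyspark.context import SparkContext
from awsglue.context import GlueContext

glueContext = GlueContext(SparkContext.getOrCreate())

# Sketch only: read from MongoDB through the marketplace connection using a query.
# The "query" option and the "cdata-mongodb" connection name are assumptions.
DataSource0 = glueContext.create_dynamic_frame.from_options(
    connection_type="marketplace.jdbc",
    connection_options={
        "query": "SELECT borough, cuisine FROM restaurants WHERE Name = 'Morris Park Bake Shop'",
        "connectionName": "cdata-mongodb",
    },
    transformation_ctx="DataSource0",
)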

Edit, Save, & Run the Job

Edit the job by adding and editing the nodes in the job graph. See Editing ETL jobs in AWS Glue Studio for more information.

After you complete editing the job, enter the job properties.

  1. Select the Job details tab above the visual graph editor.
  2. Configure the following job properties when using custom connectors:
    • Name: Provide a job name.
    • IAM Role: Choose (or create) an IAM role with the necessary permissions, as described previously.
    • Type: Choose "Spark."
    • Glue version: Choose "Glue 2.0 - Supports Spark 2.4, Scala 2, Python 3."
    • Language: Choose "Python 3."
    • Use the default values for the other parameters. For more information about job parameters, see "Defining Job Properties" in the AWS Glue Developer Guide.
  3. At the top of the page, choose "Save."
  4. A green banner appears at the top of the page with the message "Successfully created Job."
  5. After you successfully save the job, you can choose "Run" to run the job (or start it programmatically, as sketched after this list).
  6. To view the generated script for the job, choose the "Script" tab at the top of the visual editor. The "Job runs" tab shows the job run history for the job. For more information about job run details, see "View information for recent job runs."
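To start the job outside the console, the following is a minimal sketch using boto3; the job name is a placeholder for whatever you entered on the Job details tab:

import boto3

glue = boto3.client("glue")

# Placeholder job name; use the name you gave the job in Glue Studio.
response = glue.start_job_run(JobName="mongodb-to-s3")
print(response["JobRunId"])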

Review the Generated Script

At any point in the job creation, you can click on the Script tab to review the script being created by Glue Studio. If you create a simple job to write MongoDB data to an Amazon S3 bucket, your script will look similar to the following:

Sample Script

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

## @type: DataSource
## @args: [connection_type = "marketplace.jdbc", connection_options = {"dbTable":"restaurants","connectionName":"cdata-mongodb"}, transformation_ctx = "DataSource0"]
## @return: DataSource0
## @inputs: []
DataSource0 = glueContext.create_dynamic_frame.from_options(connection_type = "marketplace.jdbc", connection_options = {"dbTable":"restaurants","connectionName":"cdata-mongodb"}, transformation_ctx = "DataSource0")

## @type: DataSink
## @args: [connection_type = "s3", format = "json", connection_options = {"path": "s3://PATH/TO/BUCKET/", "partitionKeys": []}, transformation_ctx = "DataSink0"]
## @return: DataSink0
## @inputs: [frame = DataSource0]
DataSink0 = glueContext.write_dynamic_frame.from_options(frame = DataSource0, connection_type = "s3", format = "json", connection_options = {"path": "s3://PATH/TO/BUCKET/", "partitionKeys": []}, transformation_ctx = "DataSink0")

job.commit()

Using the CData Glue Connector for MongoDB in AWS Glue Studio, you can easily create ETL jobs to load MongoDB data into an S3 bucket or any other destination. You can also use the Glue Connector to add, update, or delete MongoDB data in your Glue Jobs.
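For example, writing a DynamicFrame back through the same connection might look like the sketch below. The option names mirror the generated script above but are assumptions here; consult the connector's Help documentation to confirm write support and the exact options.

# Sketch only: write a DynamicFrame back to MongoDB through the marketplace connection.
# "restaurants" and "cdata-mongodb" mirror the sample script above; confirm write
# support and option names in the connector's Help documentation.
DataSink1 = glueContext.write_dynamic_frame.from_options(
    frame=DataSource0,
    connection_type="marketplace.jdbc",
    connection_options={
        "dbTable": "restaurants",
        "connectionName": "cdata-mongodb",
    },
    transformation_ctx="DataSink1",
)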

Ready to get started?

Subscribe to the MongoDB Glue Connector in the AWS Marketplace
