DataGrip 2020.3 Help

Big Data tools

The Big Data Tools plugin is available for DataGrip 2020.1 and later. It provides specific capabilities to monitor and process data with AWS S3, Spark, Google Cloud Storage, MinIO, Linode, DigitalOcean Spaces, Microsoft Azure, and Hadoop Distributed File System (HDFS).

You can create new Zeppelin notebooks or edit existing local or remote ones, execute code paragraphs, preview the resulting tables and graphs, and export the results to various formats.

Getting started with Big Data Tools in DataGrip

The basic workflow for big data processing in DataGrip includes the following steps:

Configure your environment

  1. Install the Big Data Tools plugin.

  2. Create a new project in DataGrip.

  3. Configure a connection to the target server.

  4. Work with your notebooks and data files.

Work with notebooks

  1. Create and edit a notebook.

  2. Execute the notebook.

  3. Analyze your data.

Get familiar with the user interface

When you install the Big Data Tools plugin for DataGrip, the following user interface elements appear:

Big Data Tools window

The Big Data Tools window appears in the rightmost group of the tool windows. It displays the list of configured servers and their files, organized in folders.

Basic operations on notebooks are available from the context menu.


You can navigate through the directories and preview the columnar structure of .csv and .parquet files.

Basic operations on data files are available from the context menu. You can also move files by dragging them to the target directory on the target server.


For the basic operations with the servers, use the window toolbar:

Item                   Description
Add connection         Adds a new connection to a server.
Delete connection      Deletes the selected connection.
Search in notebooks    Opens a window to search across all the available Zeppelin connections.
Refresh Connection     Refreshes the connections to all configured servers.
Connection settings    Opens the connection settings for the selected server.

Notebook editor


In the notebook editor, you can add and execute Scala and SQL code paragraphs. When editing a code paragraph, you can use all the coding assistance features available for that language. Warnings and errors are highlighted in the corresponding code constructs and marked on the scrollbar. The results of paragraph execution are shown in the preview area below each paragraph.
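A Zeppelin paragraph starts with an interpreter directive such as %spark or %sql. As a minimal sketch of a Scala paragraph followed by an SQL paragraph (the dataset and the sales view name are hypothetical, and the directives assume the default Spark interpreter group is bound to the notebook):

```
%spark
// Hypothetical Scala paragraph: build a small DataFrame and register it
// as a temporary view so that a later SQL paragraph can query it.
val sales = Seq(("books", 120), ("music", 80), ("games", 200))
  .toDF("category", "revenue")
sales.createOrReplaceTempView("sales")

%sql
-- Hypothetical SQL paragraph: query the view registered above;
-- the result is rendered as a table or chart in the preview area.
SELECT category, revenue FROM sales ORDER BY revenue DESC
```

Running the %sql paragraph would list the categories sorted by revenue in the preview area below it.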

Use the notebook editor toolbar for the basic operations with notebooks:

Item                   Description
Run all                Executes all paragraphs in the notebook.
Stop execution         Stops execution of the notebook paragraphs.
Clear all outputs      Clears output previews for all paragraphs.
Interpreter bindings   Opens the Interpreter Bindings dialog to configure interpreters for the selected notebook.
Open in a browser      Opens the notebook in the browser.
Navigate               Allows you to jump to a particular paragraph of a notebook.
Minimap                Shows the minimap for quick navigation through the notebook.

The notebook editor toolbar also shows the status of the last paragraph execution.


Monitoring tool windows

These windows appear when you have connected to a Spark or Hadoop server.

Spark monitoring: jobs
Last modified: 07 December 2020