DataSpell 2022.2 Help

Flink monitoring

With the Big Data Tools plugin, you can monitor and submit Apache Flink jobs.

Typical workflow:

  1. Establish a connection to a Flink server

  2. Monitor Flink jobs using the dedicated tool window that reflects the Apache Flink Dashboard

  3. Submit new jobs to the Flink cluster

Create a connection to a Flink server

  1. In the Big Data Tools tool window, click Add a connection and select Flink under the Monitoring section.

  2. In the Big Data Tools Connections window that opens, configure the connection parameters:

    Mandatory parameters:

    • Name: the name of the connection to distinguish it from other connections.

    • URL: the URL of your Apache Flink Dashboard, for example, http://localhost:8081.

    Optionally, you can set up:

    • Enable connection: deselect this checkbox if you want to disable the connection. By default, newly created connections are enabled.

    • Enable tunneling: creates an SSH tunnel to the remote host. This can be useful if the target server is in a private network, but an SSH connection to a host in that network is available.

      Select the checkbox and specify an SSH connection configuration (click ... to create a new SSH configuration).

    • Enable HTTP basic authentication: authenticate the connection using the specified username and password.

    • Proxy: select this option to use the IDE proxy settings or to specify custom proxy settings.

  3. Once you fill in the settings, click Test connection to ensure that all configuration parameters are correct. Then click OK.
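
What Test connection verifies can also be checked outside the IDE: the Apache Flink Dashboard is backed by the Flink REST API. A minimal sketch, assuming the requests library and a Dashboard at Flink's default http://localhost:8081:

import requests

FLINK_URL = "http://localhost:8081"  # the URL from the connection settings

# GET /overview returns a cluster summary: Flink version, slot and job counts.
# If you enabled HTTP basic authentication, also pass auth=("username", "password").
response = requests.get(f"{FLINK_URL}/overview", timeout=10)
response.raise_for_status()

overview = response.json()
print("Flink version:", overview["flink-version"])
print("Running jobs:", overview["jobs-running"])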

Once you have established a connection to the Flink server, the Flink tool window appears, which reflects the Apache Flink Dashboard.

Preview jobs, their configuration, exceptions, and checkpoints. Use the Filter field to filter jobs by name, or use the status filter to filter them by status. If a job is running, you can terminate it with the suspend button.
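
The tool window shows the same data that the Flink REST API exposes, so job monitoring can also be scripted. A minimal sketch that lists jobs and their status, assuming the requests library and the Dashboard URL from above:

import requests

FLINK_URL = "http://localhost:8081"

# GET /jobs/overview lists every job with its id, name, state, and timestamps.
jobs = requests.get(f"{FLINK_URL}/jobs/overview", timeout=10).json()["jobs"]

for job in jobs:
    print(job["jid"], job["name"], job["state"])

# Terminating a running job corresponds to PATCH /jobs/<jobid>, which cancels it:
# requests.patch(f"{FLINK_URL}/jobs/{jobs[0]['jid']}", timeout=10)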

View the nodes that run your tasks, and view and download their logs, stdout, and thread dumps.

View the Job Manager, and view and download its logs and stdout.

View uploaded JAR files, filter them by name or entry class, and submit new Flink jobs.

Submit New Job

  1. In the Flink tool window, open the Submit New Job tab.

  2. If the JAR file of your application is not yet uploaded to the Flink cluster, click the upload button and select the file.

  3. Select the uploaded file and click the submit button.

  4. In the Submit JAR file window that opens, configure the following parameters (a scripted equivalent is sketched after these steps):

    • Allow non-restored state: allow skipping savepoint state that cannot be mapped to the new program (equivalent of the allowNonRestoredState option).

    • Entry class: enter the program entry class.

    • Program arguments: enter the program arguments.

    • Parallelism: enter the number of parallel instances to run the task with. Leave it blank if you need only one task instance.

    • Savepoint path: enter the path to the savepoint (an image of the job's execution state) to restore the job from.

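These fields map onto the Flink REST API's jar run endpoint (POST /jars/:jarid/run), which the Dashboard itself uses. A minimal sketch, assuming the requests library, the Dashboard URL from above, and hypothetical values throughout; the JAR id is the one returned when the file was uploaded (POST /jars/upload):

import requests

FLINK_URL = "http://localhost:8081"
JAR_ID = "d9f3a2b0_example-job.jar"  # hypothetical id returned by POST /jars/upload

payload = {
    "entryClass": "com.example.WordCount",             # Entry class (hypothetical)
    "programArgs": "--input in.txt --output out.txt",  # Program arguments (hypothetical)
    "parallelism": 2,                                  # omit to run a single task instance
    "savepointPath": "hdfs:///savepoints/savepoint-1", # Savepoint path (hypothetical)
    "allowNonRestoredState": True,                     # Allow non-restored state
}

response = requests.post(f"{FLINK_URL}/jars/{JAR_ID}/run", json=payload, timeout=30)
response.raise_for_status()
print("Submitted job:", response.json()["jobid"])
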
Once you have submitted the job, you can preview its status, start time, and other parameters in the Jobs tab.

Last modified: 01 August 2022