DataGrip 2021.3 is here! This is the third major update of 2021, and it’s packed with various enhancements. Let’s take a look at what it has to offer!
We’ve added the ability to display an Aggregate view for a range of cells. This is a long-awaited feature that will help you manage your data and spare you from having to write additional queries! This makes the data editor more powerful and easier to use, bringing it a step closer to Excel and Google Sheets.
Select the cell range you want to see the view for, then right-click and select Show Aggregate View.
Quick facts:
One aggregate value is displayed in the status bar, and you can choose which value (sum, mean, median, min, max, and so on) you’d like it to be.
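For a numeric selection, the values the Aggregate view offers correspond to ordinary descriptive statistics. A minimal Python sketch, with hypothetical cell values, of what gets computed for a selected range:

```python
import statistics

# Hypothetical cell range copied from a result set (numeric column values).
cells = [12, 7, 3, 9, 15]

# The kinds of aggregates the Aggregate view can show for a selection:
aggregates = {
    "sum": sum(cells),
    "mean": statistics.mean(cells),
    "median": statistics.median(cells),
    "min": min(cells),
    "max": max(cells),
    "count": len(cells),
}
print(aggregates)
```

The status bar shows whichever one of these you pick, while the Aggregate view panel shows them all at once.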
Pressing F4 on any schema node displays a table view of the node’s contents. For example, you can get a table view of all tables in your schema:
Or you can get a table view of a table’s columns:
You can use this view to hide/show columns, export the data to many formats, and use text search. More importantly, the following navigation actions work here, too:
If you split the editor and open the same table again, the two data editor windows will now be completely independent. You can then set different filtering and ordering options for them to compare and work with the data. Previously, filtering and ordering were synchronized, which was less than ideal.
You can choose a dedicated font for displaying data under Database | Data views | Use custom font.
In the data editor, you can now select several values and navigate to the related data.
You can define the default method for sorting tables via ORDER BY or client-side: the latter doesn’t run any new queries and sorts only the current page. The setting can be found under Database | Data views | Sorting | Sort via ORDER BY.
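The practical difference between the two modes can be sketched as follows (illustrative data only; with ORDER BY, DataGrip re-runs the query so the whole result set is sorted before paging):

```python
# Rows as fetched from the server, in arbitrary order.
rows = [("b", 2), ("d", 4), ("a", 1), ("c", 3), ("e", 5)]
page_size = 3

# Client-side sorting: only the already-fetched page is reordered;
# no new query is sent to the database.
current_page = rows[:page_size]
client_sorted = sorted(current_page, key=lambda r: r[1])

# ORDER BY sorting: the full result set is sorted server-side before
# paging, so the first page can contain different rows entirely.
server_sorted_page = sorted(rows, key=lambda r: r[1])[:page_size]

print(client_sorted)       # [('a', 1), ('b', 2), ('d', 4)]
print(server_sorted_page)  # [('a', 1), ('b', 2), ('c', 3)]
```

Client-side sorting is cheaper but only ever reorders the current page, which is why the two modes can show different rows first.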
16-byte data is now displayed as UUID by default. You can also customize how binary data is displayed in the data editor column.
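The rendering is the standard conversion between 16 raw bytes and a UUID’s canonical text form. A small Python illustration with a made-up binary value:

```python
import uuid

# A hypothetical 16-byte value as it might sit in a BINARY(16) column.
raw = bytes(range(16))

# Displaying it as a UUID, which is what the data editor now does by default:
as_uuid = uuid.UUID(bytes=raw)
print(as_uuid)  # 00010203-0405-0607-0809-0a0b0c0d0e0f
```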
MongoDB
Code completion is now available in the filter {} and sort {} fields when you’re filtering and sorting data in MongoDB collections.
This release is a logical continuation of the previous one, which introduced the ability to generate a DDL data source based on a real one. Now, this workflow is fully supported. You can:
Just as a reminder, a DDL data source is a virtual data source whose schema is based on a set of SQL scripts. Storing these files in a version control system is a way to keep your database under version control.
There is a new tab in the data source properties, DDL Mappings, where you can define which real data source is mapped to each DDL data source.
If you want to know more about how exactly these new features will help you in your daily VCS flow, please read this article.
To compare and synchronize your DDL data source with the real one, use the context menu and select Apply from... or Dump to... from the DDL Mappings submenu.
This brand-new window has a better UI and clearly shows in the right-hand pane what result you’ll get after you perform the synchronization.
The legend in the right-hand pane shows what the colors mean for your potential result:
The Script preview tab shows the result script, which can be either opened in a new console or run from this dialog. Running this script applies the changes needed to make the database on the right (target) a copy of the database on the left (source).
Besides the Script preview tab, there are two more tabs on the bottom pane: Object Properties Diff and DDL Diff. They display the differences between the versions of an object in the source and target databases.
Just a reminder: if you want to merely compare two schemas or objects, select them and press Ctrl + D.
Important! The diff viewer is still under heavy development. Because each database has its own specific features, some objects may display as different while they are, in fact, identical. This may happen because of type aliases or because default properties are omitted during generation. If you come across this bug, please report it to our tracker.
All actions for files are available on DDL data source elements as well. For example, you can delete, copy, or commit files related to the schema elements from the database explorer.
If this option is turned on, your DDL data source will be automatically refreshed with changes to the corresponding files. This was already the default behavior, but now you have the option to disable it.
If you do disable it, changes in the source files will not automatically be reflected in the DDL data source, so you'll need to click Refresh to apply them.
In the Default schemas/databases pane you can define names for your database and schemas, which will be displayed in the DDL data source. DDL scripts don’t usually contain names, and in these cases there will be dummy names for databases and schemas by default.
If any value except User or Password has leading or trailing spaces, DataGrip will warn you about them when you click Test Connection.
SQL Server LocalDB has its own dedicated driver in the driver list. This means that it has a separate type of data source which should be used for LocalDB. Here’s how it helps:
It’s now possible to use Kerberos authentication in Oracle and SQL Server. You need to obtain an initial ticket-granting ticket for the principal by using the kinit command, which DataGrip will use when you choose the Kerberos option.
This new option in the Options tab lets you enable DBMS_OUTPUT by default for new sessions.
We’ve added a More Options button for when you need to configure something unusual for a connection. The currently available options include the ability to add Schema and Role fields for Snowflake connections, and two menu items for configuring SSH and SSL to increase their discoverability.
The Advanced tab now includes a list of Expert options. In addition to the option to turn on the JDBC introspector (please contact our support before using this!), the following database-specific options are available:
Oracle users have been experiencing a problem with DataGrip’s introspection, which took a long time if they had lots of databases and schemas. Introspection is the process of getting the metadata of the database, such as object names and source code. DataGrip needs it to provide rapid coding assistance, navigation, and search.
Oracle system catalogs are rather slow, and the introspection was even slower if the user had no admin rights. We did our best to optimize the queries to get the metadata, but everything has its limitations.
We realized that for most daily work, and even for effective coding assistance, there is no need to load object sources. In many cases, just having database object names should be sufficient to provide proper code completion and navigation. So, we introduced three levels of introspection for Oracle databases:
Introspection is fastest on level 1 and slowest on level 3.
Use the context menu to switch the introspection level as appropriate:
The introspection level can be set either for a schema or for the whole database. Schemas inherit their introspection level from the database, but it also can be set independently.
The introspection level is represented by the pill-shaped icons located next to the data source icon. The more the pill is filled, the higher the level. A blue icon means that the introspection level is set directly, while a grey one means that it is inherited.
You can map your linked server in SQL Server or database link in Oracle to any existing data source.
When external objects are mapped to the data source, the code completion and resolve will work for queries using those external objects.
Internal system schemas (like pg_toast or pg_temp) and template databases used to be hidden from the schemas list. Now it’s possible to show them by using the corresponding options in the Schemas tab.
Now, streams are displayed in the database view in addition to tables and views.
Distributed tables are now placed under a dedicated node in the database explorer.
One of our users tweeted about an unfortunate situation: he ran an UPDATE query on a production database with the condition WHERE id - 3727 (instead of =) and had millions of records updated! We were also surprised that MySQL allows that, but such is life. Still, we wouldn’t be the DataGrip team if we didn’t add an inspection for it! Please welcome the check for boolean expressions in WHERE and HAVING clauses.
If the expression doesn’t seem to be explicitly boolean, DataGrip will highlight it in yellow and will warn you before you run such a query. It works for ClickHouse, Couchbase, Db2, H2, Hive/Spark, MySQL/MariaDB, Redshift, SQLite, and Vertica. In all other databases, this will be highlighted as an error.
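The pitfall is easy to reproduce outside MySQL as well. Here is a sketch using SQLite, which also accepts a numeric expression where a boolean is expected (non-zero counts as true), with a small made-up table and the offset 3 standing in for 3727:

```python
import sqlite3

# In-memory database with five rows, id 1..5.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO t VALUES (?, 'old')", [(i,) for i in range(1, 6)])

# Intended condition: WHERE id = 3.  The typo WHERE id - 3 evaluates to a
# non-zero (true) number for every row except id = 3.
conn.execute("UPDATE t SET status = 'new' WHERE id - 3")
updated = conn.execute(
    "SELECT COUNT(*) FROM t WHERE status = 'new'"
).fetchone()[0]
print(updated)  # 4 rows changed instead of the intended 1
```

The new inspection flags exactly this kind of non-boolean condition before the query runs.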
Now queries can be extracted as a table function. To do this, select the query, invoke the Refactor menu, and use Extract Routine.
JOIN cardinality inlay hint
The new inlay hint will tell you the cardinality of a JOIN clause. There are three possible options: one-to-one, one-to-many, and many-to-many. If you want to turn it off, you can adjust the setting in Preferences | Editor | Inlay Hints | Join cardinality.
Database names are completed when using getSiblingDB, and collection names are completed when using getCollection. Additionally, field names are completed and resolved if they are used from a collection that was defined with getCollection.
In line with this request, timestamps are no longer shown for query output by default. If you want to return to the previous behavior, you can adjust the setting in Database | General | Show timestamp for query output.
If you use the Services tool window in window mode, by default it is hidden behind the IDE. With the new setting you can pass the focus to it every time you run a query, so it will appear after the query has finished.
Also, if it annoys you that finishing a long query in another console activates the corresponding tab in the Services tool window, select the Activate Services output pane for selected query console only checkbox.
When importing .csv files or copying tables/result sets, you will observe the following improvements:
When you open or import a CSV file, DataGrip will now automatically detect that the first row is the header and contains the names of the columns.
DataGrip can now detect column types in CSV files. The main benefit of this is that you can sort data by numeric values. Before, they were treated as text and the sorting wasn’t intuitive.
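Python’s standard library has a similar header-detection heuristic, and a small sketch also shows why type-aware sorting matters: as text, "10" sorts before "2". The data and the naive type check below are made up for illustration; DataGrip’s actual detection logic may differ:

```python
import csv
import io

# A tiny CSV sample: first row is a header, second column is numeric.
sample = "name,score\nalice,10\nbob,2\n"

# Header detection, similar in spirit to what DataGrip now does on import.
has_header = csv.Sniffer().has_header(sample)

# Naive column-type detection: numeric if every value parses as an integer.
rows = list(csv.reader(io.StringIO(sample)))
header, data = rows[0], rows[1:]
numeric = all(v.lstrip("-").isdigit() for v in (r[1] for r in data))

# Numeric vs. text sorting give different, and differently useful, orders.
by_number = sorted(data, key=lambda r: int(r[1]))
by_text = sorted(data, key=lambda r: r[1])
print(by_number)  # [['bob', '2'], ['alice', '10']]
print(by_text)    # [['alice', '10'], ['bob', '2']]
```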
Previously, we had two very similar features: Favorites and Bookmarks. As the difference between the two could sometimes be confusing, we’ve decided to stick with just one – Bookmarks. We’ve reworked the workflow for this functionality and made a new tool window for it.
From now on, all the objects or files you mark as important (with the F3 shortcut on macOS or F11 on Windows/Linux) will be located in the new Bookmarks tool window.