Abstract:

Can you make me productive with a C++ IDE?
Anastasia Kazakova, Product Marketing Manager for CLion

CLion from JetBrains is a powerful integrated development environment built to boost productivity across all aspects of working with C++ code. In this talk, Anastasia examines how users can maximize their productivity with the features built into CLion, in a live demo scenario. Examples of code generation – constructors, getters and setters, equality operators, relational operators, stream output operators, and override functions – are explained with real-world scenarios that produce half of your daily quota of lines of code in minutes.

Anastasia explains the process of productive documentation writing using CLion’s built-in Doxygen support, which makes it simple to maintain code and to actually see and generate documentation. The quick documentation popup is much more than this, with features such as macro replacement which can be run over deeply nested macros: a feature the Boost.Hana author, Louis Dionne, learned about after having previously debugged such macros by hand, by copying the output of the preprocessor and looking at it.

Code analysis and debugging are also very important, time-consuming aspects of working with C++ code. CLion’s capabilities are covered with examples of the static analysis options, data flow analysis, and the remote debugging tools. Integrations with Clang, CMake, Valgrind, and the new compilation database support are available and their uses are detailed. Not to mention profiling your code with DTrace on Mac and perf on Linux, which you can do from the IDE and then follow the problems straight to your code and fix them.

Unit tests are also covered: there is integration with Google Test, Boost.Test, and Catch to help you effectively test your code. Features such as “rerun all the tests that failed” let you rerun the failed tests without having to select them manually, which can save a lot of time.

This live demo goes through the features that will have the quickest impact on the way you code, with live examples of how they work in practice and their effects on the code.

Transcript:

Speaker
“Make me productive with a C++ IDE.” Now, I have my guess as to what the answer's going to be, but let's wait and see. Anastasia.

Anastasia K
Yeah, thank you. There will be a live demo, so I do encourage everyone to come closer. You will be able to see better if you come closer. We have some spare tables here at the front. It is a live demo, what could possibly go wrong? Nearly everything. So it should be good to close our event with.

I hope you had a nice dinner, got to grab a glass of wine, or a beer, or something. So you should be in a good mood now to look at your C++ IDE. So let's start. A C++ IDE is not a very easy thing to create, as you probably got from my talk at the very beginning of this event. There has been a lot of effort put into CLion. There is a fun story about CLion: believe it or not, for a very long time we didn't actually believe we could make a C++ IDE at JetBrains. Then a funny thing happened: we started AppCode. A show of hands, who knows what AppCode is? It is an IDE for iOS and macOS development. We started supporting Objective-C and Objective-C++ there, and suddenly we found ourselves in a situation where we had a C++ parser. We were a little bit surprised. We actually read a nice blog post which was saying, "It seems you're doing a C++ parser, so we guess you are going to start doing a C++ IDE soon."

That happened in about 2012. Then, four years ago, we actually released CLion, our cross-platform C and C++ IDE. Also, in the same year, we released ReSharper C++, an extension for Visual Studio for C++, but that's another story. I will be demoing CLion here.

The main idea behind CLion is to help you to be more productive with C++ code. And that's all we try to implement. Let's start with a very simple example. What I love about CLion and about the enhanced productivity it provides me, is when I see an IDE in front of me, I'm usually thinking, "Come on, can you generate code for me? Can I sit here and you will write code for me?” It is not that simple, but it is still possible.

So let's generate some things. I have a few things available here in the menu. Let's start with the constructor. OK. I will select all the fields. It will generate in place so that you don't have to switch between different files to see the result. So here is my constructor. Let me close this one so you can see it better. I can generate getters and setters. I will also do this in place. And I can generate an equality operator. Why not?

There are a few options here. I will turn them on. It will search to see if I already have some existing operators for this class, and if I do, it will just add the missing ones for me. OK, I can be even lazier. A relational operator – we also search for existing ones to check whether it needs to add one or generate one from scratch. Can we do more? A stream output operator – I hate writing it manually, I think everyone hates it, especially if you have a long class. We can probably stop here.
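
As a rough illustration – not the exact demo code, and the class and field names here are just placeholders – the kind of code such generation produces might look like this:

```cpp
#include <ostream>
#include <string>
#include <utility>

// Hypothetical class; the members below are the kind of code an IDE can generate.
class Person {
public:
    // Generated constructor initializing all fields
    Person(std::string name, int age, double weight)
        : name_(std::move(name)), age_(age), weight_(weight) {}

    // Generated getters and setters
    const std::string& getName() const { return name_; }
    void setName(const std::string& name) { name_ = name; }

    // Generated equality and relational operators
    friend bool operator==(const Person& lhs, const Person& rhs) {
        return lhs.name_ == rhs.name_ && lhs.age_ == rhs.age_ && lhs.weight_ == rhs.weight_;
    }
    friend bool operator<(const Person& lhs, const Person& rhs) {
        return lhs.age_ < rhs.age_;   // comparing by age only, just as a placeholder
    }

    // Generated stream output operator
    friend std::ostream& operator<<(std::ostream& os, const Person& p) {
        return os << "Person{name: " << p.name_ << ", age: " << p.age_
                  << ", weight: " << p.weight_ << "}";
    }

private:
    std::string name_;
    int age_ = 0;
    double weight_ = 0.0;
};
```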

Let's look at what we are left with. I have been lazy enough. All these things were generated automatically by my IDE for me. That is what I call proper productive laziness. When you just take a tool and it generates everything for you. OK, let's make life a little more complicated. Let's inherit from some class I have here. So I have this mammal class and it has a few virtual methods in it. What can we do here? We can override functions. So select them all and generate this stuff. So here is my makeSound override and useSelfEnergy override. OK, makeSound. I'm not sure that I'm writing any reasonable code here, but just to show you an example, let's do some sound variable. And see it actually fills in my class.
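
A minimal sketch of that step – the class and function names mirror the ones mentioned in the demo, but the bodies are invented for illustration:

```cpp
#include <string>

// Hypothetical base class with a few virtual methods
class Mammal {
public:
    virtual ~Mammal() = default;
    virtual void makeSound() = 0;
    virtual void useSelfEnergy() = 0;
};

class Human : public Mammal {
public:
    // Override stubs generated by the IDE, then filled in by hand
    void makeSound() override {
        std::string sound = "hello";
        // ...
    }
    void useSelfEnergy() override {}
};
```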

Let's do some creepy expression – I'm not sure what I'm writing here makes any sense, but just to show you. What I can do now is generate more code. I can surround this with whatever I want here. For example, I will surround it with an if-else clause. Sorry, put a line there and surround it quickly. And now, I can put some conditions here. I don't know if the weight is more than 100, for example. What I can do in the other clause is call some function.

Do you know what the problem is? I don't have this function. And that is one of the great things about JetBrains IDEs: when you don't have something, no worries. Just press Alt + Enter and it will create it for you. It is that easy. So just start using something and then ask the IDE, “OK, I forgot to actually write the declaration or the definition. Help me define this.” And again, to do all this I just typed a few lines of code, pressed a few shortcuts, and I got this huge amount of code.

If you ever measure the results of your day by the lines of code you write, do you know what the optimal number of lines of code that you can develop or generate is per day? Some statistics say it’s around 70. So by doing this, half of them are already done. I am nearly ready to go home.

OK, let's do some creepy stuff here. I have a few other functions inherited from my mammal class. I'm actually not that good at biology and have no idea if the hierarchy's correct from a biological point of view, but let's try something interesting.

I have this makeSound function. Let's change the signature of this function. It will say, "Come on, you have a huge hierarchy using this function. Would you like me to update the whole hierarchy?" I go, "Yeah, why not? Let's do that." I add some parameter here, some ID probably. Let's refactor that. Now, what we can see here is my makeSound in the human class. Here's my makeSound in livestock, here is my makeSound in bird, and, of course, my makeSound in the parent mammal class.

So when I agreed to update the whole hierarchy, what did CLion do for me? It actually propagated my change to the top of the tree and then back down through all the branches. So if you want to refactor something which is deep in your hierarchy, some leaf, it would be really nice and really productive if you don't have to update the whole tree by hand. It is really nice if an IDE can go through this tree on its own. And that's what we do.
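
As a rough before/after sketch (the names are taken from the demo, the parameter is assumed), Change Signature on one override gets propagated to the base class and to every sibling override:

```cpp
// Before the refactoring every class declared:  virtual void makeSound();
// After running Change Signature on Human::makeSound, the whole hierarchy is updated:
class Mammal {
public:
    virtual void makeSound(int id);   // base class updated
};

class Human : public Mammal {
public:
    void makeSound(int id) override;  // the override where the refactoring started
};

class Bird : public Mammal {
public:
    void makeSound(int id) override;  // sibling overrides updated as well
};
```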

OK, we are done with this nice human sample. Let's see what else we have here. What is the most important part of our code? OK, there are different opinions. It is documentation, because without documentation no one will be able to maintain your code, especially when you leave for vacation or leave the company.

There are different ways to document the code. But the documentation support in CLion is much more powerful than just providing documentation for, say, function signatures. It actually tries to be smart. You may remember some samples from my first talk? So, because we build the whole Abstract Syntax Tree for the code, we can actually tell a lot of things about C++ code. Let's start with a simple one: I have a documentation comment here in Doxygen style – one of the most popular tools and formats for documenting your code. So I have my Doxygen comment here with a brief description and some parameter descriptions. If I call the quick documentation popup, I have all this information rendered in a single window. We do not call Doxygen in the background, don't worry. We render this on our own, putting everything in place correctly. You can navigate the classes and links and go through the whole hierarchy right in the quick documentation popup.

Do you like writing documentation by hand? I don't either, I prefer to generate it like this. So: three slashes, press Enter, and the stub is generated. And the best thing is, I have my function signature here with some documentation, so I have some parameters. I will add a description for my parameter. Then I have the definition somewhere else, for example in a different file, and I decide to rename the parameter; I say, "OK, it will be called myValue." The best thing is that the documentation is updated automatically. So even after a year of changes, the documentation will be up to date. I won't end up with documentation naming parameters that haven't been in my code for maybe more than a year.
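
Roughly, the generated stub plus a hand-written description might look like this (the function name and exact tag style are placeholders; the generated comment mirrors the actual signature):

```cpp
/**
 * \brief Short description of what the function does.
 * \param myValue description of the parameter; if the parameter is renamed
 *                via the Rename refactoring, this tag is kept in sync
 * \return description of the return value
 */
int compute(int myValue);
```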

OK, so you can actually see and generate documentation, but as I said, the quick documentation function is much more than this. And here is the interesting thing. I actually showed it in my first talk: macro replacement. Here it is. So the final replacement for my macro is shown here. I told you about the boost macro here, you remember that? Let's take some boost macro here. And here it is. It's actually much longer. Have you ever been interested in what's there behind the boost macro? Here it is. So you can actually see the final replacement.
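
To make “final replacement” concrete, here is a made-up nested macro (not a Boost one) and what its fully expanded form looks like:

```cpp
// A made-up nested macro, just to illustrate what the final replacement means.
#define SQUARE(x) ((x) * (x))
#define SUM_OF_SQUARES(a, b) (SQUARE(a) + SQUARE(b))

int n = SUM_OF_SQUARES(2, 3);
// Quick Documentation on SUM_OF_SQUARES(2, 3) would show the fully expanded form:
//   (((2) * (2)) + ((3) * (3)))
```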

I know one person in the C++ community who is really excited about this. Do you know what he is doing? Boost.Hana. Have you ever heard about this? Boost.Hana is a huge, heavy meta-programming library. The author, Louis Dionne, was actually debugging this macro by hand, just copying the output of the preprocessor and looking at it until he learned about our feature. He was so excited he forgot to ask me about all the other features. He just said, "Yeah, finally." Because Boost.Hana macros are actually even longer than the one boost macro I just showed to you. And there are maybe 20 or 30 levels deeply nested inside.

So okay, you can see the macro replacement. And as I promised, you can see the type inferred. So if I call here the quick documentation—let’s make it smaller—you'll see the int value. You can guess if I’d call it for op1, what would I get? Long, naturally. And here, the double.
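
A tiny sketch of what is being shown (the variable names are guesses based on the demo):

```cpp
auto value = 42;          // Quick Documentation shows the deduced type: int
auto op1   = 42L;         // long
auto ratio = value / 3.0; // double
```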

So you see, we do infer these types on the fly, because we actually build the whole Abstract Syntax Tree. We have two parsers in CLion and we build the whole Abstract Syntax Tree, so we know exactly which type we have for this variable. We know exactly what a call resolves to and what is behind a macro. This includes all the information we take from the compilation flags and the environment variables. So we take all this into consideration when resolving the code.

Just to demonstrate the compilation flags to you, I have this nice sample. I have this code and there is a preprocessor branch, and it depends on this special flag. What is this special flag? I have two CMake configurations. One defines this special flag as one; the other defines it as zero. Right now the code is resolved in the context where the flag is set, so the corresponding branch is highlighted for me. Let me switch to the configuration where there is no flag.

The proper branch is immediately highlighted. So that means that the configuration we're using to resolve the code actually reflects this configuration with all the proper flags. So if, for example, you're doing some cross-platform development, cross-compilation, and you are targeting several platforms, naturally, you would expect the code to resolve for the proper platform to be able to fix it. So what do you do? You put these into separate configurations, then you just switch between them, and you get the proper branch highlighted for you. And it's very easy to read this code and to understand what actually happens for this particular target, for this particular configuration.
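
A minimal sketch of this setup, assuming a hypothetical flag name passed in from the build configuration:

```cpp
// Hypothetical flag, e.g. passed from CMake with
//   add_compile_definitions(MY_SPECIAL_FLAG=1)   (or =0 in the other profile)
#include <iostream>

void report() {
#if MY_SPECIAL_FLAG
    std::cout << "flag is set\n";     // highlighted as active when the flag is 1
#else
    std::cout << "flag is not set\n"; // highlighted when the flag is 0 or undefined
#endif
}
```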

I come from the embedded world. I was really lacking features like this when I was a developer.

That is about it for the documentation. Let's do some interesting stuff with code analysis. At JetBrains, we love static analysis. We have people who have studied static analysis at university and who have even done their Ph.D. on static analysis. They're really cool with that. They can actually implement lots of interesting stuff.

Doing code analysis in C++ was really fun. We did a lot of things. So, first of all, we made our own analyzer, which can show you quite different things: for example, a redefinition of a name, or a function that hides a virtual function. Or, for example, if you're still a fan of the old-style printf and the format specifiers in the string don't match the argument types, we'll tell you, “Come on, it doesn't match. You will probably get some unpredictable results. Check that.”

Of course, there is also using an assignment in a condition. I think every C++ and C programmer has made this mistake at least once in their life. Myself, I have done it more than once. So we highlight these things, saying it is probably not what you actually meant here. And there is a very nice check for a value escaping the local scope – a very popular error in the C++ world. You have a local variable and you return its address, but once the function returns the address is no longer valid, so that's a runtime error. You can check that as well.
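
A small sketch of the three kinds of findings just described (deliberately buggy code, not the demo sample):

```cpp
#include <cstdio>

// Format specifier does not match the argument type
void print_size(long n) {
    std::printf("size = %d\n", n);   // %d vs long: flagged by the analyzer
}

// Assignment used where a comparison was probably intended
void check(int status) {
    if (status = 0) {                // flagged: did you mean '=='?
        // ...
    }
}

// Address of a local escapes the scope
int* dangling() {
    int local = 42;
    return &local;                   // flagged: returning the address of a local variable
}
```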

We work, as I said, with two language engines. So we have our own language engine, and we introduced a second one a year ago which works in parallel and is based on Clang. It's our own branch of the Clang/LLVM repository that includes a bunch of different fixes on top of Clang.

What we do here is, first of all, our engine actually reports an error, such as “no matching function”. So the overload resolution failed here. But then we ask Clang for the particular reason for that, and it provides “substitution failed because there is no type named inner type,” and you know what I can do? I can navigate to it. So I can actually look at what is causing the failure, at where this inner-type requirement comes from. Same here, it says “no member named method,” and I can navigate to this place and see what is actually missing. Here this is all in just one file, but you can guess it could be in different files and you'll still navigate to the proper place.
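
A minimal sketch of that kind of failure (all names made up for illustration):

```cpp
// Requires T to expose a nested type called inner_type
template <typename T>
typename T::inner_type process(T value);

struct Widget {};   // has no 'inner_type'

// process(Widget{});  // "no matching function": substitution failed because
                       // there is no type named 'inner_type' in 'Widget';
                       // the IDE can navigate to the requirement above
```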

So, as I said in my first talk, even if you can't debug the overload resolution that failed, you can still get some additional information from your tooling. Here is a pretty exciting sample I really like. We have C-style casts, which you can actually configure in CLion and say, "OK, come on. I am old school. I am legacy. I will use C-style casts.'' Or I can say, “I will be more modern. I will use C++-style casts.'' What's the problem with them? There is more than one, so you actually have to select which cast to use if it's a C++ cast. We made this task a little bit easier. So when the type doesn't match, you press Alt + Enter and ask for a cast: here is static_cast; dynamic_cast; reinterpret_cast; and const_cast. What we try to do is understand from the context which particular cast you actually need and add it for you. Naturally, we can't always do it, but we do our best. In about 80% of the cases, we match the cast properly. You might be quite happy with that.
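
A tiny sketch of the choice being made (not the demo code):

```cpp
double ratio = 0.75;

// C-style cast, old school:
int percent_c = (int)(ratio * 100);

// C++-style cast; the IDE tries to pick the right one from the context,
// here a static_cast:
int percent_cpp = static_cast<int>(ratio * 100);

// The other candidates it can offer: dynamic_cast, reinterpret_cast, const_cast.
```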

When we started doing code analysis, we found out that we have a person on the team who actually did their Ph.D. on data flow analysis. We couldn't escape that. We asked them to implement data flow analysis for C++. That's something that the compiler can't do for you, and it's the thing that helps you with runtime issues. Why? Because it analyzes how the data flows through your code and points out particular logic issues. So for example, here it tells me the condition is always true. You can guess that there is a previous if/else clause where I'm either assigning red or it is already yellow. There is no need to check that in this case – it's true already.

OK, another example of this kind: here I see unreachable code. Why? Because it is the case for the color yellow, and in the previous switch I am actually assigning red, blue, or green. The compiler can't catch it because it doesn't track the way the data flows through your code. It doesn't need to, actually; it's fine, it's what you expect from the compiler. Who would expect it to actually analyze your data? Probably not your security department. So, we do that. On the one hand, I really love these checks. On the other hand, there are some checks I usually recommend turning off when there is some slowness, because, as you can expect, data flow analysis is not that fast. It analyzes the whole data flow.
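
A small sketch of the two findings just described (the logic is invented, but shows the same pattern):

```cpp
enum class Color { Red, Yellow, Green, Blue };

const char* describe(Color c) {
    if (c != Color::Yellow) {
        c = Color::Red;                            // c is now Red or was already Yellow
    }
    if (c == Color::Red || c == Color::Yellow) {   // data flow analysis: "condition is always true"
        return "warm";
    }
    return "cold";                                 // data flow analysis: "unreachable code"
}
```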

So yeah, it's cool that you can actually run it on a CI server. With CLion installed there, we can run all this code analysis right on the CI server – for example, the data flow analysis. Once we had done a few code analysis checks of our own, we found that there is the nice Clang Tidy analyzer, which is done by the LLVM community and is part of the LLVM infrastructure. Let's see if we have some nice example here. Yeah, so what we did was we bundled Clang Tidy into CLion. So when you get some checks from Clang Tidy, they are marked as Clang Tidy checks, and you get them the same way as you get CLion's own code analysis checks. And there are quick-fixes from Clang Tidy.

Here is the modernize group, so I can, for example, convert std::bind to a lambda, or here it suggests using a range-based for. I can do that as well. Pass by value and use std::move, why not? To be honest, we have only 60 or 70 of our own checks. With Clang Tidy from LLVM 8.0, the recent release which we bundle, there are about 330 checks. Not all of them are reasonable for everyone, so we limit the default configuration a little bit and don't turn them all on by default. Some are created by Google, some are created by other companies; they may have some particular checks they need that you probably don't. You can tune the configuration in the code inspections settings. You can select which Clang Tidy checks you would like to get: you can check them here in the list, or you can just provide a Clang Tidy configuration file and your project will read the configuration from it.
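
Before/after sketches of typical modernize-style fixes (not the exact demo code):

```cpp
#include <string>
#include <utility>
#include <vector>

void print_all(const std::vector<std::string>& names) {
    // Before: index-based loop; the quick-fix rewrites it as a range-based for
    // for (size_t i = 0; i < names.size(); ++i) { /* use names[i] */ }
    for (const auto& name : names) {
        (void)name;   // placeholder for real work
    }
}

class Person {
public:
    // "Pass by value and use std::move" instead of const reference plus copy
    explicit Person(std::string name) : name_(std::move(name)) {}
private:
    std::string name_;
};
```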

Do you know what the best thing about Clang Tidy is? It's an open-source linter which you can extend easily. Say you need a check that is specific to your team. No IDE will implement it for you because it's very specific to your project. But you want your team to get it in the IDE and to force them to fix it.

What you can do is implement your check on top of Clang Tidy. That's very easy. There are dozens of samples across the Internet, with a lot of documentation. So you just take the Abstract Syntax Tree built by Clang and write a check against it. And then you can specify a custom Clang Tidy executable in CLion. That means you will get all your checks in the IDE the same way you do currently. So that's a good thing. It is always good to know that you can actually extend the thing. And of course, when we learned that we could extend Clang Tidy, we decided, why not implement a couple of our own checks just to see how it works. And we did that.

We started by taking one very interesting check that we call argument selection defects. The interesting part is that there is a huge scientific paper behind this check. So what does it do? I have a function called get_User here, which gets the company ID and the user ID. And I made a mistake – the order of the parameters is wrong here. So I passed user ID and then company ID. But they are the same type, so the compiler will just miss it. It's fine to compile this code. And I might miss that as well.
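
A sketch of that defect (the signature follows the demo; the bodies are placeholders):

```cpp
#include <string>

// Both parameters have the same type, so the compiler cannot catch a swapped
// call, but the parameter and argument names give it away.
std::string get_User(int company_id, int user_id) {
    return std::to_string(company_id) + "/" + std::to_string(user_id);
}

void example(int user_id, int company_id) {
    get_User(user_id, company_id);   // flagged: arguments look swapped
    get_User(company_id, user_id);   // what was probably meant
}
```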

There is a paper written by Google people who actually implemented a heuristic algorithm for detecting that the order is wrong, based on the names of the arguments and the parameters. So if you are good at naming your parameters, if the names are reasonable and you don't name your parameters crazily, then it should guess nicely when you mix up the order. We've definitely turned it off for all the short names, like X, Y, Z, and we turned it off for functions like swap, because naturally, doing a swap means you are swapping the arguments – that's fine. So this paper actually takes all these things into account, and we implemented it. And, of course, you can call a quick-fix and it will swap the arguments in all the usages where the order is wrong. So we did a few checks like that.

There is another check that I can probably show you here. Do you know why it is suggesting a rename? I will tell you. It is because I have a naming convention configured for my project. So if we go to the code style settings, you'll find this nice UI—we are making it even bigger and more flexible in the next version; we'll get more options here. The idea is that it's much easier to read the code if the naming convention is consistent. So if the whole team is following the same naming convention, then a new person entering the team can easily guess from the code if something is a function, or a class, or a private method, or maybe a public function, if it's a macro, if it's an enum, whatever. So reading the code is easier when the names follow a convention, and that's why we support it.

You can see the list of entity kinds here on the left. You can configure a custom prefix, a custom suffix, and you can select the proper style. And you can apply it from a known convention that we have predefined for you here – Google and LLVM, for example, have predefined naming conventions. This check tells me that the name is not correct; it doesn't follow the convention. So naturally, we can do a quick-fix with renaming, and we will update all the usages. It's a good thing to force your team to use. And naturally, you can configure it and share the settings in version control so that the whole team is on the same configuration.

OK, that is enough code analysis. Let's do something more exciting. What could be more exciting than refactoring in C++? I have a function here, callPerson. What I will do is change the signature, and while changing the signature, I will add another parameter at the same time. You can see the completion here. I will give it a bad name – don't use a name like this in your convention – I am calling the parameter p1 just for the demo. You see I have a usage of the callPerson function, and the new parameter will be substituted there with a default value – just to keep my code compilable. You can now find the usages of this function and change the value to whatever you actually need in all these places, but at least by default you have some sensible value. And if I do the Change Signature again and swap the order – if I still have the wrong order, it might not be good because I now have 'p1' there. Probably something will go wrong. Let's hope for the best.

We have swapped them. Cool. I'm safe. That's why I like the automatic refactorings with the IDE – because they keep me safe.

Let's now extract something. Let's close this to show you more. I have this function, extractSample, and it's called here. I have a few things to extract. First of all, I will extract this value. You can see we have lots of extract refactorings here – whatever you want to extract from your C++ code. I'll extract a parameter. It is nice that it asks me first, “I see 10 occurrences of this value. Do you want me to extract all of them?” Of course I do – not just this one occurrence. Let's do that. I will call it ‘D’ for something. I can declare it as a constant if I want. Here, I have my parameter. Let's come back to the usage. Here is my value. Cool.

Let's extract something else. Let's extract this expression – the whole expression – to a variable. Extract Variable. Again, there’s more than one occurrence. OK, let's check them all. You see them highlighted. You see that the expression was actually more complicated, and it managed to extract part of it. So I will extract these occurrences; they will be substituted and everything is fine. I can also extract a typedef, a define, a function. All these things are possible.
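
A small before/after sketch of these two extractions (invented code, not the demo sample):

```cpp
// Before: a repeated magic value and a repeated sub-expression
double measure(double r) {
    return 3.14159 * r * r + 3.14159 * r;
}

// After "Extract Parameter" on 3.14159 and "Extract Variable" on the repeated
// sub-expression; all occurrences are replaced at once:
double measure_refactored(double r, double pi = 3.14159) {
    double scaled = pi * r;
    return scaled * r + scaled;
}
```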

And one last thing about the refactoring – have you ever dealt with pulling members up and down through the hierarchy? I will show you because not everybody may have seen this before. There is Pull Members Up, or Down. And when you call it, that's fantastic. Now, it not only asks me where to pull my method for this hierarchy, but it also highlights some things from my class in red, telling me, “it is probably used by the method you are going to pull up. You will need that.” So you will know to select them and to pull them up and down your hierarchy.

OK, refactoring was nice. Resolve context, we also got it. Let's do a nice debugging demo. I guess regular debugging would be no fun. We'll do some cool debugging demo. For this, I will need my Linux machine. It's a virtual Linux machine running here in parallel on my Mac.

What I want to do, there is a nice and magic word, remote debug, which actually helps us to debug and develop when we're not on the target machine. Because in most cases we're not. We're using the machine that our company gave us, or the machine we prefer, or just the machine which is convenient because we, for example, have everything installed on it. But we are developing for another platform, embedded platform, or just some other architecture, or another target platform, or maybe it's just not possible to debug on your local machine for some reason.

So we have two types of remote configurations. First, there is the case when you just want to debug on the remote machine. You're working locally, your code is local, all the things are here, but then you have some different architecture you want to debug on. For that, we suggest you use the GDB server remote debug configuration. It's very straightforward, very simple. If you go to edit configurations, it's called “GDB remote debug.” So what did I actually provide here? The name, that's simple enough. GDB – you can switch it, but by default we bundle a GDB built for multiple architectures already. So it's not just a GDB for the particular architecture of my Mac, where CLion is installed; the bundled GDB is built for multiple target architectures. You can still switch it if you have some very specific platform for which you got your own GDB build, but in most cases, you will be fine with the bundled version.

So I provided the connection I will be using, the address, and the port. Symbol file, what's that? For debugging, you need some debug symbols. Where do you get them? The easiest way is just to take the binary, which you will be running and debugging there. If it's built with debug flags, it has the debug symbols. OK, so I will just copy this binary to my local machine and provide it here for the symbol file.

And the last thing is path mappings. For CLion to understand the breakpoints I will be setting in my editor and to show me all the things in my code, it needs to understand how my local paths map to the paths on the machine where the binary was built. If I built it locally, that's fine: the paths are the same and you don't need the path mappings. But if, for example, it was built on some CI server for a different architecture, the paths to the sources are probably different, and the debug symbols will reference those paths to the sources. So we have to know the mapping.

So okay, we have successfully provided all these things. Now, all we have to do is go to my virtual machine and start the GDB server. It starts the GDB server on my Linux machine, on port 8080, with my binary, and it's now listening on that port. I will now go back to the remote configuration and start the debugger. The debugger in CLion is connected to my remote machine, to my Linux, and it has now stopped on a breakpoint. What can I do now? I can do some stepping. So here is my editor on my Mac, and I do some stepping.

Do you know what this is? We call it the Inline Variables view, and it is my favorite feature in the debugger. When you debug, you can see the actual values of different variables right in the editor, so you don't need to switch views or hover over something to see them. And the good thing is that if I select a variable here – I’ll change the value to 300 – you'll see that it is colored orange, which shows me that it has been updated, and I can see the updated value. I can step forward, evaluate expressions here, run to cursor, do all kinds of stepping, and actually debug the application. And this whole application – if I stop it here, you'll probably see there is a game running here on my Linux. I was debugging that game. OK, so I'll stop it here.

Sometimes a remote debugger is not enough to deal with things. Sometimes I need something more complicated: the whole cycle of remote development. This means I have my local machine, but I have to compile, run, and debug my code on another remote machine, like Docker, or maybe some other architecture. You can do that. How it works in CLion is that we build the configuration so that CLion takes the local sources and synchronizes them to the remote machine, and then we synchronize some things back, such as the headers. I will show you in a couple of minutes why we do this last synchronization.

So first: what do you have to configure? Nice question. All you need is a toolchain, specifically a remote toolchain. It will point to the same Linux host; I have provided some credentials for this machine here. It now tries to connect to my remote machine and checks whether I actually have all the tools I need, like make and the C and C++ compilers. OK, it detected all the tools – I have them.

We implemented the remote GDB debug in a general way: it works for any supported project model. You probably know we support a little bit more than CMake right now – we have Gradle for C++ and compilation database projects. Remote GDB debug works for all three. Full remote mode currently works only for CMake, so that's why I need to provide a CMake profile. I'll do that: it's just a profile that uses my remote toolchain. That's it. Cool. What we can do now is run it. Let's switch the configuration to show you in detail. First of all, my debug configuration is a local configuration, and the application simply prints the OS name. So let's run it.

Since I'm running it locally, it says my computer’s OS is Darwin. OK, let's now switch to the remote toolchain and run. Oh, cool. I got Linux – that's the OS of my remote machine. So what actually happened in the background: it connected to my remote machine, compiled the code there, ran it, and provided the output here. You can debug the same way, remotely. The thing that I like is, again, the preprocessor branches. You see that here the Linux branch is actually highlighted because I'm using the remote configuration. If we switch to the local configuration, naturally, the Mac branch will be highlighted. That's what I expect from my tool: to highlight the proper branch for me. If I'm working with a remote machine, the Linux branch will be highlighted for me. So that's the remote debug and the remote development mode.
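
Roughly what such code looks like – this assumes the standard platform macros rather than the exact demo sample:

```cpp
#include <iostream>

// Print the OS; the branch matching the active (local or remote) configuration
// is the one highlighted in the editor.
void print_os() {
#if defined(__APPLE__)
    std::cout << "Darwin\n";   // highlighted with the local Mac configuration
#elif defined(__linux__)
    std::cout << "Linux\n";    // highlighted with the remote Linux toolchain
#else
    std::cout << "Other\n";
#endif
}
```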

Now, just a few words about integrations. IDE – do you know what it stands for? Integrated Development Environment. That means it is the whole environment; it is not just about the code. So what do we have? It is a regular IntelliJ-based IDE, so you can guess we have all the usual version control support and all that stuff. That is not that exciting. Let's do some exciting stuff – like, who actually uses Valgrind? Anyone? Cool. Yeah, you're my favorites. I like this tool very much. So we integrated the Valgrind memory check.

That's nice for catching memory leaks in your program. So what you can do is run the application with the Valgrind memory check. I'm not going to run it remotely as I don't have Valgrind there; I'll run it locally. It gets you to this nice output here, this Valgrind tab, and what you can see here is the whole stack trace of all the leaks. I actually recommend that you run the integrated tools from the IDE, and not separately. If I ran my application under Valgrind separately from the terminal, what I would miss is the navigation to my code in the editor, because here I actually have the code: I can jump to the source code in the editor, so I can go and fix the problem immediately just from this trace.
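
A classic leak of the kind such a report points at (illustrative code, not the demo sample):

```cpp
// Valgrind's memory check reports this allocation as "definitely lost" with a
// full stack trace, which the IDE turns into clickable links back to this line.
void leak() {
    int* numbers = new int[100];   // never deleted
    numbers[0] = 1;
}

int main() {
    leak();
    return 0;
}
```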

That is it for Valgrind. But there is another set of tools in the community which is very popular for checking all possible kinds of address issues and other things – sanitizers. Sanitizers are a bit different from Valgrind. Valgrind is just a tool: you run your application under Valgrind. With sanitizers, you have to recompile your code, so the functionality is implemented in a different way.

If we detect the -fsanitize flag in your compile options, then we know that you're probably using some sanitizers. Cool. If you simply run your application, there will be a tab with the sanitizer output, and just the same, you can go through the stack trace and navigate to the source code. And the good thing about the sanitizers is that there are so many of them: address sanitizer, memory sanitizer, leak sanitizer, thread sanitizer, and undefined behavior sanitizer. There are some drawbacks: for example, they're only supported for some Clang versions and for newer GCC versions starting from version 5. Both Valgrind and the sanitizers do not work on Windows – these tools do not like the Windows platform. Sorry, Windows users. But still, it's good to check them out, and you can provide some settings here.
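
A tiny illustrative program that AddressSanitizer would flag (not the demo sample):

```cpp
// Built with something like: -fsanitize=address -g
// AddressSanitizer reports a heap-buffer-overflow with a stack trace that the
// IDE shows in the sanitizer tab, with navigation back to this line.
int main() {
    int* data = new int[8];
    data[8] = 42;          // one past the end of the allocation
    delete[] data;
    return 0;
}
```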

What else is left from the dynamic analysis? Valgrind, sanitizers, what else? Anyone? Profiler.

So if you need to profile your code, and if you're lucky enough not to be on Windows – sorry, we don't have a profiler integration for Windows currently; we have it on Linux and Mac. On Linux we're using perf, on Mac we're using DTrace. So I can profile my application here. It will start the profiler, and it tells me that the profiler is attached and waiting for my application. Now I can open the profiler output: I can look at the flame chart for all the threads or some particular thread, I can go to a call tree, I can go to the method list, and again, I can jump to the source. So the profiler output is actually linked to the source code. Just profile your code, follow the problem, then go to your code and fix it. It is that easy.

Static analysis, dynamic analysis... What else are we missing to write perfect C++ code? I asked in the very beginning, but there were just not that many hands from people who were doing unit tests. Unit tests, naturally. CLion has integration with Google Test, Boost.Test, and Catch. I love Catch not because Phil Nash, who is the author, is our developer advocate, but because it's a nice framework. It is header-only, so, unlike Google Test, which you actually have to link with as a whole library, you can just include the Catch header file and you are done.
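
A minimal header-only Catch test, in the Catch 2.x single-header style (the tested function is a placeholder):

```cpp
#define CATCH_CONFIG_MAIN   // let Catch provide main()
#include "catch.hpp"

int add(int a, int b) { return a + b; }

TEST_CASE("addition works") {
    REQUIRE(add(2, 2) == 4);
}
```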

So I have some tests here. These nice gutter icons are actually showing me the state of each test from my previous run: some tests succeeded, some tests failed. They also act as run icons, so I can just say, "OK, run this test for me," and I get this nice unit test window with the test runner output. I can investigate the tests, I can export the results, I can compare with what I previously got here – I can do all these kinds of things. Boost.Test works the same way: just unit tests which you can run.

If you ask me which is my favorite feature of the unit test integration, I will tell you just one. It's called “rerun all the tests that failed.” And here is a very typical case where this is useful: imagine you have 200 unit tests and half of them failed. You have to fix them, because your team lead said, "No, don't commit this code, the tests are failing." So what do you do? You keep rerunning the failed tests, fixing the current one, until the number goes to zero, and then you rerun the whole bunch from scratch just to be sure that you haven't broken anything again. That's a very typical workflow. So being able to just say, "OK, rerun all the tests that failed, I don't need to select them manually," is quite helpful. That’s unit tests.

That is probably mostly it for the exciting parts. Just one last quick word about CMake. As I said, we are not only about CMake these days – we have support for compilation database projects and Gradle for C++ – but CMake is still a first-class citizen for CLion. We treat it as a language: if you start typing in CMake files, you will see code completion, and all these file names are actual links to the files, so I can go to them using the usual “Go to declaration” shortcut.

And there are Live Templates. If you've heard about Live Templates in the IntelliJ Platform, these are just templates you can configure: put in some code that you use most often, and then call it with a short combination of letters. You can provide Live Templates for CMake as well; we have some for Google Test and Boost.Test.

So you can do all these things. CMake install targets are supported as well and appear here in the build tool window. That's why I call CMake a first-class citizen. But again, it's not just about CMake nowadays. There is the compilation database, which I treat as a nice workaround for makefiles: if you use makefiles – we still don't support them – you can use a compilation database instead. We have a whole tutorial in our help about how to do that. So go and try it.

So that's mostly it. I hope you enjoyed the demo and that it woke you up a little bit.

Yeah, and I think they will have a nice raffle and some closing words. I think you were waiting for that, yeah?