Static analysis will be executed when building your source base on the central build machine. You have two options to set this up with VSTS:
We'll work under the assumption that you're already using Build 2015/vNext and that you have been working with build definitions for at least one project.
Edit your build definition and add the two SonarQube tasks: place the begin analysis task before the Visual Studio Build task, and the end analysis task after the execution of unit tests (that way, if you have code coverage results, they will surface in SonarQube).
In the "begin analysis" task, set the endpoint, the project name, and the project key, as well as any optional parameters if needed.
Note: these settings can also be specified in the global SonarQube settings or in SQ project settings.
Note: Java must be installed on the build machine if you host your own build agents.
Note: I recommend you add a 'SonarQubeAnalysisTimeoutInSeconds' variable to the build definition with the value "600". This extends the time-out for the static analysis; sometimes the server has several results to import at once and stalls a little, which can cause builds to take longer and/or time out.
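For reference, the same begin/end sequence the build tasks perform can be reproduced from the command line with the Scanner for MSBuild. This is only a sketch: the executable is named MSBuild.SonarQube.Runner.exe in older versions and SonarQube.Scanner.MSBuild.exe in newer ones, and the project key, name, and server URL below are placeholders.

```shell
REM Placeholders: replace the key, name, version and server URL with your own.
MSBuild.SonarQube.Runner.exe begin /k:"my-project-key" /n:"My Project" /v:"1.0" /d:sonar.host.url="https://sonar.example.com"
MSBuild.exe MySolution.sln /t:Rebuild
REM ... run your unit tests here so coverage results are picked up ...
MSBuild.SonarQube.Runner.exe end
```

The "end" step is what actually uploads the analysis, which is why it must run after the tests.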
If you're working on a non-MSBuild project, just use the CLI task somewhere after you have built and unit tested your code.
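The CLI task wraps the standalone sonar-scanner. A minimal sketch of an equivalent invocation, assuming placeholder values for the key, name, server URL, and a token stored in a SONAR_TOKEN variable:

```shell
# All values below are placeholders; adjust to your project and server.
sonar-scanner \
  -Dsonar.projectKey=my-project-key \
  -Dsonar.projectName="My Project" \
  -Dsonar.projectVersion=1.0 \
  -Dsonar.sources=. \
  -Dsonar.host.url=https://sonar.example.com \
  -Dsonar.login=$SONAR_TOKEN
```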
SonarQube allows you to create projects, which hold your code analysis results. You can configure a SQ project for each code repository, or even for each branch, to get different deltas (e.g. my master branch builds every month, so I want to see month-over-month changes; my dev branch builds daily, so I want to see evolution on a day-by-day basis).
Go to "Configuration" -> "Projects" -> "Management", then "Create project".
Keep the project key in mind; we will need this parameter later when setting up the builds.
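If you prefer scripting over the UI, projects can also be created through the SonarQube Web API. A hedged sketch, assuming admin credentials and a placeholder server URL and key (parameter names can vary slightly between SonarQube versions):

```shell
# Placeholder credentials, URL, key and name; adjust to your installation.
curl -u admin:admin -X POST \
  "https://sonar.example.com/api/projects/create?key=my-project-key&name=My%20Project"
```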
To prevent just anyone from sending analysis results to our SonarQube installation, we need to secure access to its services. To do so, we'll configure a service account.
From SonarQube, go to Administration, Security, Users, and add an account.
Next, click on the "Tokens" cell for the account we just created and generate a new personal access token.
You can also refer to the documentation if you're not sure how to generate a PAT: https://docs.SonarQube.org/display/SONAR/User+Token
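Token generation can also be scripted through the Web API (available since SonarQube 5.3). A sketch, assuming placeholder admin credentials, a service account named "vsts-service", and a token label "vsts-build":

```shell
# Placeholders throughout; the response contains the token value,
# which is shown only once, so store it somewhere safe.
curl -u admin:admin -X POST \
  "https://sonar.example.com/api/user_tokens/generate?login=vsts-service&name=vsts-build"
```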
To leverage this service account in VSTS, go to your team project, click Settings, then Services, click "New service endpoint", and choose "SonarQube". Then enter the URL of your SonarQube installation, a display name for the connection, and the Personal Access Token.
We will install and configure an add-on to delegate authentication to Azure Active Directory. This will allow our developers to use the same account between Visual Studio Team Services and SonarQube.
Configuration of the authentication module
Since version 5.4, SonarQube provides an additional plugin relying on the OAuth protocol to communicate with AAD. This allows users to leverage their corporate accounts to access SonarQube, providing SSO and simplifying the administrators' job by keeping a central identity repository.
The setup procedure is already well documented; rather than duplicating it, here is a link to the resources.
Visual Studio Team Services provides a highly extensible model to third parties so they can integrate their solution with VSTS.
SonarQube has implemented build tasks and service definitions for VSTS. Before being able to leverage SonarQube from VSTS, you first need to install the corresponding extension.
To do so, just click on the link provided below and click Install. You need to be a team project collection administrator to install extensions.
Note: for on-premises TFS installations, a few more steps are required; see this link:
Static analysis works by leveraging rules. These rules are grouped by language or language category into modules that you can install. In addition to providing support for the corresponding languages, these modules can extend the native capabilities of SonarQube.
Most of them are free, some are subject to commercial licenses.
Open SonarQube and go to Configuration, System, then search for and install the modules you're interested in.
Once all the modules are installed, restart the server using the button available in the SonarQube UI.
Open ports 22, 9000, 80 and 443 inbound on the VM.
Rather than repeating what is already documented, I will provide you with the link.
It is necessary to open ports 22, 80, 443 and 9000, allowing respectively remote shell access to the machine (SSH), HTTP and HTTPS traffic, and access to the SonarQube management console.
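If you prefer the Azure CLI over the portal, the same ports can be opened on the VM's network security group. A sketch with hypothetical resource group and VM names:

```shell
# "sonar-rg" and "sonar-vm" are placeholder names; adjust to your environment.
az vm open-port --resource-group sonar-rg --name sonar-vm --port 22  --priority 100
az vm open-port --resource-group sonar-rg --name sonar-vm --port 80  --priority 110
az vm open-port --resource-group sonar-rg --name sonar-vm --port 443 --priority 120
az vm open-port --resource-group sonar-rg --name sonar-vm --port 9000 --priority 130
```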
We will update the machine, install docker, and provision the containers we need.
Connect to the machine using SSH (PuTTY is a very good client for Windows) and run the following commands:
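A minimal sketch of the update and Docker installation steps on Ubuntu (using Docker's convenience script; you can also install from the apt repository if you prefer):

```shell
# Update the package index and upgrade installed packages.
sudo apt-get update && sudo apt-get -y upgrade

# Install Docker via the official convenience script.
curl -fsSL https://get.docker.com | sudo sh

# Optional: run docker without sudo (requires logging out and back in).
sudo usermod -aG docker $USER
```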
The containers are the components of our system managing the web traffic and providing the SonarQube service.
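A sketch of how the two containers could be started. The JDBC values are placeholders pointing at your Azure SQL database, and the SONARQUBE_JDBC_* environment variable names are those used by the official SonarQube image at the time of writing; check the image documentation for your version.

```shell
# SonarQube container, backed by Azure SQL (all connection values are placeholders).
docker run -d --name sonarqube \
  -p 9000:9000 \
  -e SONARQUBE_JDBC_URL="jdbc:sqlserver://myserver.database.windows.net:1433;database=sonar" \
  -e SONARQUBE_JDBC_USERNAME="sonar-admin" \
  -e SONARQUBE_JDBC_PASSWORD="<password>" \
  sonarqube

# nginx container as a reverse proxy terminating SSL in front of SonarQube,
# with its configuration and certificates mounted from the host.
docker run -d --name nginx \
  -p 80:80 -p 443:443 \
  --link sonarqube:sonarqube \
  -v /etc/nginx/conf.d:/etc/nginx/conf.d:ro \
  -v /etc/nginx/certs:/etc/nginx/certs:ro \
  nginx
```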
To secure connections, we will also generate self-signed SSL certificates, which is not the easiest thing to do when you're not used to working with Linux environments. That's likely to be the case for developers using Visual Studio Team Services (or TFS), since they mostly come from the Windows world.
I shared configuration scripts on GitHub to help you. Obviously, if you have your own certificates, or if your environment already has some pre-existing configuration, you can edit the script.
(see the SSL part of the script)
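For reference, the core of the self-signed certificate generation boils down to a single openssl command. The CN "sonar.example.com" and the output file names are placeholders; use the DNS name of your own server.

```shell
# Generate a self-signed certificate and private key valid for one year.
# "/CN=sonar.example.com" is a placeholder subject.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=sonar.example.com" \
  -keyout sonar.key -out sonar.crt
```

The resulting sonar.key and sonar.crt files are what nginx expects in its SSL configuration.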
Create an Azure SQL database with the collation set to SQL_Latin1_General_CP1_CS_AS.
The Azure SQL database creation steps are already well described elsewhere; the crucial detail is to use the collation SQL_Latin1_General_CP1_CS_AS (and start from a blank template).
Keep the database access settings (FQDN of the server, username, password, database name) somewhere; we will need them later.
Don't forget to open the SQL Server firewall for connections coming from Azure.
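Both steps can be scripted with the Azure CLI. A sketch with hypothetical resource group and server names; the 0.0.0.0-to-0.0.0.0 firewall rule is the conventional way to allow connections from Azure services:

```shell
# Placeholder names throughout; adjust to your environment.
az sql server firewall-rule create \
  --resource-group sonar-rg --server sonar-sql \
  --name AllowAzureServices \
  --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0

az sql db create \
  --resource-group sonar-rg --server sonar-sql --name sonar \
  --collation SQL_Latin1_General_CP1_CS_AS
```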
We'll provision an Ubuntu server in Azure and install PuTTY and WinSCP on your local machine.
Here is a link to documentation explaining how to do it:
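For those who prefer the Azure CLI, a hedged sketch of the VM provisioning; the resource group, VM name, size, and admin username are all hypothetical:

```shell
# Placeholder names and location; pick a region and size that suit you.
az group create --name sonar-rg --location canadacentral

az vm create \
  --resource-group sonar-rg --name sonar-vm \
  --image UbuntuLTS \
  --admin-username sonaradmin \
  --generate-ssh-keys
```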
That's it! The machine is being provisioned!
Meanwhile, take the opportunity to download an SSH terminal if you don't have one. I recommend http://www.PuTTY.org/ (you can also install WinSCP, which also provides a GUI to transfer files).
Planning, Sources, Build, deployment, testing: VSTS. Analysis: Azure VM (SonarQube), Azure SQL.
Because we use cloud services as much as possible at 2toLead, I went with the following setup:
Note that to facilitate the management of the SonarQube "box", we are going to install Docker on the Ubuntu machine. Once Docker is installed, we'll hydrate two containers: nginx and SonarQube.
Why Docker? The philosophy of this article is that processing components (such as the SonarQube server) are disposable: we can replace them quickly if they stop working or when a new version of SonarQube is available. Our data will reside in SQL Azure.
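In practice, that disposability looks like this sketch: pull the new image, throw away the old container, and start a fresh one. The analysis data survives because it lives in the Azure SQL database, not in the container; the JDBC values and variable names below are placeholders matching the official image at the time of writing.

```shell
# Fetch the latest SonarQube image.
docker pull sonarqube

# Dispose of the old container; no data is lost, it lives in SQL Azure.
docker stop sonarqube && docker rm sonarqube

# Start a fresh container against the same database (placeholder values).
docker run -d --name sonarqube -p 9000:9000 \
  -e SONARQUBE_JDBC_URL="jdbc:sqlserver://myserver.database.windows.net:1433;database=sonar" \
  -e SONARQUBE_JDBC_USERNAME="sonar-admin" \
  -e SONARQUBE_JDBC_PASSWORD="<password>" \
  sonarqube
```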
For CAD 65 per month (at public prices), you can have a complete software delivery suite with work management, source control, continuous integration, automated tests, automated deployments, and automated static analysis.
It took me about an hour to install and configure everything from start to finish, and I didn't have this series of articles to guide me; it's quite fast to set up.
Static analysis helps you understand the weaknesses of your code based on a set of rules. You can have it run automatically on a server or from the IDE.
The principle of static analysis is to take advantage of rule sets of varying complexity; these detect problematic patterns in the code, categorize their severity, and suggest a resolution.
A few examples:
There are two main families of static analysis tools. The first is called "centralized" or "automated" and is generally executed once the code has been pushed to source control. These analyses usually run during your CI (Continuous Integration) builds on your build servers, so developers are free to do something else during that time.
The other family is called "integrated", meaning the analysis happens in (almost) real time while the developer is writing code, for example ReSharper, the analyzers available with Roslyn, etc. This avoids pushing bad code to source control and having to fix it afterwards.
Note: in some scenarios we could even set up 'gated check-ins', meaning the code won't be accepted by source control until the static analysis runs on the new source base and gives positive feedback.
This year again, I have the opportunity to speak at SPS Ottawa.
I'll give a session titled "Migrate your custom components to the SharePoint Framework". We'll see how you can migrate your existing investments in SharePoint development (either full-trust solutions or add-ins) to the new SharePoint Framework. Migrating these components will not only help you stay ahead of the technology, but will also improve the user experience and ease migration to Office 365.
If you're in the area on Saturday, October 28th 2017, don't hesitate to register for the event. As a reminder, SPS events are free, organized by the community, with lots of great sessions.
It's a good occasion to expand your network, learn a lot of things, and spend a good day. The event takes place at Algonquin College. See you there!
This series will explain *how to set up an automated code quality analysis* which is almost free of charge with Visual Studio Team Services, Docker, Azure, and SonarQube.
There is bad-quality code in every development project. It ranges from the quick-and-dirty hack we are not proud of, to the long-forgotten code written by a developer who quit the company.
The problem with this code is that it eventually accumulates and slows down the pace of new feature delivery, for various reasons: a lot of time spent on bug fixes, refactoring, support...
To get out of this vicious circle, we must know what the current baseline is and then update the picture periodically. That will allow us to see how much work is required to correct the situation, and whether we are improving it or making it worse, etc.
I had the chance to attend a presentation of SonarQube, which is the tool/platform we are going to study during this series of articles. Note that SonarQube is not the only option out there. Here are the key advantages of this solution:
During the Microsoft Build 2014 keynote, Microsoft quickly presented it, and they are working with the community to integrate SQ with Microsoft tools and languages.
Writing this article was delayed by several conferences over the last two years, but I finally found the time to publish it after discovering and digging into the subject.
In the meantime, Microsoft has published two articles.
One about installing everything locally; I'll explain how to do it online.
Another one about installing SQ on a VM hosted in Azure (on Windows).
By the time you read this article, all the posts in the series will already be scheduled for publication; if you ever want to revisit one post or another, you can use my blog's search feature.
While progressing on my writing, I realized there is a tremendous number of things to explain. Hence the suggested objectives for this series:
To perform your installation, you have two options: