Using VSTS on a daily basis, I find myself adding a regular list of VSTS Marketplace extensions to my VSTS environment. I find them convenient, and they help me get the most out of VSTS. The list below is primarily focused on the Work and Code areas and not so much on the Build and Release area.
Last week Xpirit was at Dutch TechDays 2016 in Amsterdam RAI. Xpirit was a Platinum Sponsor for this event. For those who missed the event: it was a great opportunity to meet all of the Xpirit colleagues in person, since we were ALL there! Xpirit hosted several sessions and also organized a very successful CTO track!
Today I was working with Release Management in an on-premise TFS 2015 environment where I had to release to a server located in the DMZ.
After getting all kinds of things in place, like installing an agent, setting up shadow accounts, and validating that I could reach and use the agent to install the required software, I came across another issue.
The issue was that WinRM is used to run a PowerShell script on the machine. When that PowerShell script ran from the release pipeline, it blew up the pipeline with the following error:
“The WinRM client cannot process the request. If the authentication scheme is different from Kerberos, or if the client computer is not joined to a domain, then HTTPS transport must be used or the destination machine must be added to the TrustedHosts configuration setting. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. You can get more information about that by running the following command: winrm help config.”
As the error suggests, you need to add the server to the local TrustedHosts list. I first checked the current list with the following command:

get-item wsman:\localhost\Client\TrustedHosts

That returned an empty list, so I decided to add the current server to the list, which can be done with the following command:
set-item wsman:\localhost\Client\TrustedHosts -value 192.168.XX.XX
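Note that Set-Item overwrites any existing value. If the TrustedHosts list already contains entries you want to keep, the WSMan provider can append instead (the IP below is a placeholder, as in the command above):

```powershell
# Append the new host to the existing TrustedHosts list instead of replacing it
Set-Item wsman:\localhost\Client\TrustedHosts -Value "192.168.XX.XX" -Concatenate
```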
The following screenshot shows the commands in action; the actual IPs are blurred.
When re-running the deployment, all was good in the “safe zone”.
Recently I posted a VSTS extension to the Marketplace, named Token Comparer. It allows you to compare your defined tokens with the defined release variables.
I have updated the extension to run on on-premise TFS 2015 as well, and I have done some cleanup in the code. This resulted in a new version in the Marketplace which is no longer in “preview”. I have also updated the underlying name of the extension, and therefore it is now only available through the following URL:
Read more details in the previous post: https://jaspergilhuis.nl/2016/06/27/token-comparer/
Today I published a new Visual Studio Marketplace extension named “Token Comparer”. In this post I will quickly highlight its features and usage. In a future post I will do an end-to-end scenario in which you will learn about the creation process as well as the delivery process. But first, let’s see the extension.
What does the Token Comparer do?
The Token Comparer parses the specified source files for the use of tokens and compares these against the variables defined in your Release Definition. Based on the results, you can choose to fail, warn, or continue your release.
The task provides a summary that lists the findings.
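Conceptually, the comparison works like this rough PowerShell sketch (this is not the extension's actual implementation; the __Token__ pattern, file name, and variable names are illustrative):

```powershell
# Illustrative sketch: find __Token__ placeholders in a file and report
# any that have no matching release variable.
$variables = @{ "DatabaseServer" = "sql01"; "ApiKey" = "example" }

$content = Get-Content -Raw "Web.config"
$tokens  = [regex]::Matches($content, '__(\w+)__') |
           ForEach-Object { $_.Groups[1].Value } |
           Sort-Object -Unique

$missing = $tokens | Where-Object { -not $variables.ContainsKey($_) }
if ($missing) {
    # Depending on the task settings this would fail, warn or continue.
    Write-Warning "Tokens without a matching variable: $($missing -join ', ')"
}
```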
Configuring the Token Comparer
In this version I chose to let you define a generic service endpoint to allow safely storing your credentials. Now that VSTS has the ability to access an OAuth token, this will be changed in a future version.
How to find the Token Comparer Extension
Navigate to your VSTS Team Project. Click the Marketplace icon. Search for “Token Comparer”. Choose to install it to your VSTS account.
Recently I encountered a suddenly broken TFS build. Our process says we build hotfixes through a shelveset first, to be able to test them properly before releasing them and to avoid possible conflicts with other fixes. The developer in question tackled the bug at hand and created the shelveset, then went ahead and built it. We have a Hotfix build definition available, but the TFVC source mapping had to be updated since it was not pointing to the correct branch.
The screen below shows the current source mappings; for this build to work, the developer had to bump the branch version (v2.15 to v2.16).
When queueing the build with the shelveset included, it all came to a halt. The build failed on the very first step, [Get Sources], which was unexpected.
At line 3 it starts undoing any locally available edits; in this case it finds changed AssemblyInfo.cs files from a previous build. Nothing wrong with undoing these (agent-local) edits.
At line 6 it indicates it will delete any files that do not exist in the local version table. This also includes previously generated build output. Also fine.
Then at lines 13/14 it says it will get the workspace and unshelve the desired shelveset. And then it goes “boom”.
What is the problem?
The problem is that the cleaning process is thorough, but I changed the source branch and that is not reflected in this process. So after the cleaning process I have the cleaned sources of branch v2.15, while I actually need branch v2.16.
How to work around this situation?
There are a few solutions. The ultimate one is to have a fresh build agent every time; then you will never run into these errors. This can be done using a hosted build controller, but in my current on-premise scenario that is not ideal.
Another option would be to automatically re-provision an on-premise agent, but that seems a bit far-fetched too.
An easier solution that will always work is to use a slightly different source mapping from the beginning, as can be seen in the image below.
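The idea is to include the branch folder in the local path, so every branch lands in its own directory on the agent. The paths below are illustrative; the actual mapping depends on your branch structure:

```
Server path: $/Project/Branches/v2.16
Local path:  $(build.sourcesDirectory)\v2.16
```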
The effect of this change is that the agent will clean v2.15, then delete files, then get v2.16 (because this directory does not exist yet), then unshelve the shelveset and build it successfully. A minor downside is that [Get Sources] (the cleaning process) takes a bit longer when changing branches, as can be verified in the log below.
Today I was working on an integration project when I discovered some unwanted behavior while using a Service Hook to capture a changed work item. I used the following strategy (see this excellent blog post by René van Osnabrugge) to create JSON classes from a generated request. While debugging my solution I figured out that not all properties had values. What could be the issue?
Setting up the scenario
Using the previously mentioned strategy, I created some C# classes for the JSON from the Service Hook. I use these classes in my method that captures the Service Hook. Specifying “TFSHook.RootObject” automatically transforms the JSON into the classes.
Finding the issue…
I expected all the classes to be automatically filled with the values from the received JSON stream. Unfortunately, some of the classes were empty.
The first thing I validated was whether the JSON actually contained the values. Once the request has been transformed, it is not very easy to get at the original JSON. Luckily the Service Hooks page has an excellent history view; in there you can find all you need.
With an extra set of eyes from Mark Dekker, we quickly discovered that some of the properties in the JSON DO have a “.” in their names, while the corresponding properties in the generated classes do NOT.
Call it a bug or a feature, the solution is all that matters for now. In the generated classes we can easily decorate the properties with a JSON attribute that allows us to specify the property name to look for in the JSON stream.
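As a sketch, assuming the Json.NET serializer (the class shape is illustrative; System.State and System.Title are examples of work item field names):

```csharp
using Newtonsoft.Json;

public class Fields
{
    // The JSON property name contains a ".", which is not valid in a C#
    // identifier, so we map it explicitly with the JsonProperty attribute.
    [JsonProperty("System.State")]
    public string SystemState { get; set; }

    [JsonProperty("System.Title")]
    public string SystemTitle { get; set; }
}
```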
Voila! Running this will provide you with objects containing the actual values.
Bonus Material: Improving the debugging experience
When debugging your Service Hook, changing some code and then debugging again may give unexpected behavior: your previous breakpoint may not be hit. My experience is that when something goes wrong in your code, or a timeout happens, the Service Hook is set to “Enabled (restricted)”.
This can be easily fixed by choosing “Enable” from the context menu.
Be sure to keep the Service Hooks page around when developing a Service Hook. This post shows how useful it can be!