Thursday, 23 July 2015

Should I put 'project.lock.json' in source control?

TL;DR: No

Why not?

If you depend on specific versions of dependencies, then the lock file contains the same information as project.json, just in a form that is easier for dnx to consume. As long as the specific versions you depend on directly are the same, dnu will always restore the same versions.

If you have floating dependencies, then you expect them to change frequently, and any package restore can create noise in source control whenever it picks up new versions.

Longer reasoning

A small categorization of dependencies

Consider each of your dependencies as one of the following:
- stable
- fast moving

A stable dependency is one you consider to be fully featured, bug-free, ready to use and depend on. Newtonsoft.Json might be one, the latest stable release of RavenDB would be another, or an internal library that’s been in use for years and sees new versions once or twice a year.
You want to ensure you’re always working with the version you tested against, and it’s OK to explicitly update that dependency when you need functionality from a new version or a bug fix.
You do not expect to have to update this dependency frequently.
You target a specific version of such dependencies, like 1.2.3, or 4.0.0.

A fast moving dependency is one whose latest developments you want to track: it may still be under rapid development and you always want the new functionality and the latest bugfixes, you may want to know as soon as it breaks your code, it may be developed at the same time as your project, or it may be another project you develop in tandem that is only split out for logical reasons, deployment lifecycles or reuse in other projects.
If you’re trying out the latest versions of ASP.NET 5, you might consider beta6 to be a fast moving dependency, and want to track all the latest developments instead of having to manually check and update versions.
In this case you target ranged versions, like 1.0.0-beta6-*.

Project.json and project.lock.json

The project.json file contains the dependencies for your project, along with other information such as your project version and the commands available to dnx. It then gets transformed into project.lock.json, which contains the whole dependency tree resolved to specific versions.

Project.json

Your dependencies are defined in the project.json file.

You can have ranged versions:

    "Microsoft.AspNet.Mvc": "6.0.0-beta6-*",

And specific versions:

    "Newtonsoft.Json": "7.0.1"

These dependencies are used by dnu restore to generate or update the project.lock.json file, which contains all the transitive dependencies of your project, resolved to the specific versions to be used.

Project.lock.json

This file is mainly used by dnx to load the required references for your project, instead of having to recalculate all the dependencies each time the application runs.

Locking a project.lock.json file

When restoring packages with dnu, you can opt to lock the project.lock.json file with dnu restore --lock. This means that whatever versions dnu picks up are written into the lock file, and future restores will use the same versions, as long as they continue to satisfy the dependencies in project.json.

What happens is that project.lock.json will contain the locked property set to true, instead of the default false.

If the versions from the lock file do not satisfy the dependencies from project.json, because they changed in the meantime, then the lock file gets rebuilt.

Locking a project.lock.json file is only useful if you depend on ranged versions of a package/project, and it does nothing if you only depend on specific versions.

How are versions picked for a specific project.json

Dependency resolution is described here, but for the purpose of this post there are two main points:

  • By default, versions are considered to be “at least X”. So "Newtonsoft.Json": "7.0.1" means you need at least 7.0.1.
  • It will pick the lowest version that satisfies all dependencies. So if you depend on A 1.0 and B 1.0, and A 1.0 depends on B 2.0, it will pick B 2.0.

The result is that, for a project.json file that only contains specific versions, subsequent restores will always use the same versions. Hence the first point in the Why not? section.

For ranged versions, dnu restore searches for the latest version of the dependency, and the rest of the resolution works as usual. This means that yes, a dnu restore done today can pick up later versions than a dnu restore done yesterday, which is the second point of Why not?.

Options

Now that we know what the main moving parts are, here are the three ways a version gets resolved for a dependency:

  • It is specified in project.json (no wildcards)
  • It is wildcarded in project.json, but locked in project.lock.json
  • It is wildcarded in project.json, and unlocked in project.lock.json

The first case is used for stable dependencies: project.lock.json never changes unless you change project.json.

The last case is used for fast moving dependencies: whenever there is a new version of such a dependency, project.lock.json can change even if nothing else does.

The middle case is a bit of an odd duck in the stable/fast moving split, but it is used when you want to track a fast moving dependency while, for now, working with a version you know doesn’t break your code. This is useful in large teams, where the team considers a dependency to be fast moving, but some developers want to stabilize it for a while.

Scenarios

Be notified of breaking changes on upstream projects

If you’re working on a project that depends on other projects, you can use a ranged dependency to always pick up the latest version. That way, a build server can check each new version for problems as soon as it appears.

Locally, you can lock project.lock.json during a sprint or a feature to avoid getting sidetracked by upstream problems. Since the lock file isn’t committed, you still get warnings from the build server, and can handle them before merging back your code.

Branch to test a bugfix for a library

Consider that you found a bug in version 1.42.0 of a library called XBuggy. You create a branch in source control to test workarounds and fixes, and want to pick up new versions of XBuggy to test as they come out.

Instead of having to explicitly update the dependency for each new version, you can depend on XBuggy 1.42.*. This means each restore can pick up new versions of the library.
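In project.json terms, that is just:

    "XBuggy": "1.42.*"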

When the bug is fixed, you set the dependency to the version that fixed it, 1.42.3, and merge it back to master.

New functionality that requires new versions of another project

If you have two separate projects that need to be developed in tandem for a given functionality, it is useful to use ranged versions while they’re in development.

The ASP.NET 5 scenario

Looking at the github repos for ASP.NET 5, we see that most of the projects use ranged versions to get early warnings of breaking changes. This lets them move to new versions quickly, with the option of locking versions for short periods.

Side notes

One scenario where adding project.lock.json to source control is useful is if you want to run dnx without a dnu restore first, since dnx needs the lock file to run (at the time of this post). However, I’m not sure if this would ever crop up outside of my imagination.

Friday, 15 April 2011

Error 2 when trying to start windows event log service

Today my Windows 7 machine couldn't start the Windows Event Log service, and so I couldn't open the event log viewer.

After a bit of troubleshooting, I found that the issue seemed to be with a particular registry key.

Removing the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\parameters registry key allowed the event log service to start without problems.

Monday, 4 October 2010

Zero-friction deployment: from local to github in one powershell command

Short version:

The code is here and depends on a dll here and here. It allows a powershell script to upload a file to github.

Long version:

There is one core belief powering this post:

- There should be zero friction in getting a software release out in the open. In the age of build scripts and APIs and everything over http there is no reason a human should be involved in cutting a version except in the decision to make it so.

The main side effect of having very little to no friction in cutting a release is that it happens way more often, since there is no reason not to do it if the code is good to go.

In this particular case, all I lacked was a way to upload a file to github. The packaging was already done, courtesy of psake+write-zip. Having found nothing in the powershell world to do it, I ended up “porting” part of a ruby script.

Github stores files in Amazon’s S3, so there were two steps to uploading a file:

  • Telling github that the file exists and getting a token to pass to S3.
  • Uploading the file itself to S3 using the token obtained in step 1.

The biggest issue ended up being that WebClient doesn’t handle POSTing a file along with MIME form values, which is what S3 expects in this scenario. Using the underlying *thingie* directly from powershell would be a bit harder and more error prone, so I just used an existing helper to upload the file correctly.

The powershell code that uses it is available here, with an extra dependency on the Html Agility Pack to scrape the downloads page for info on existing downloads.

The function itself (not yet a full cmdlet due to lack of time) is upload_file_to_github:

function upload_file_to_github($login, $repo, $api_key, $file, $filename, $description){
    [void][System.Reflection.Assembly]::LoadFrom((get-item "lib\Krystalware.UploadHelper.dll"))
   
    # Step 1: tell github about the file; the xml response carries the S3 policy, key prefix and signature
    $full_repo = $login+"/"+$repo
    $downloads_path = "http://github.com/"+$full_repo+"/downloads"
   
    $post = new-object System.Collections.Specialized.NameValueCollection
    $post.Add('login',$login)
    $post.Add('token',$api_key)
    $post.Add('file_size',$file.Length)
    $post.Add('content_type',"application/octet-stream")
    $post.Add('file_name',$filename)
    $post.Add('description',$description)
    $wc = new-object net.webclient
    $upload_info = [xml][System.Text.Encoding]::ASCII.GetString($wc.UploadValues($downloads_path, $post))
   
    # Step 2: POST the file itself to S3, passing along the values github returned
    $post = new-object System.Collections.Specialized.NameValueCollection
    $post.Add('FileName',$filename)
    $post.Add('policy',$upload_info.hash.policy)
    $post.Add('success_action_status',"201")
    $post.Add('key',$upload_info.hash.prefix+$file.Name)
    $post.Add('AWSAccessKeyId',$upload_info.hash.accesskeyid)
    $post.Add('signature',$upload_info.hash.signature)
    $post.Add('acl',$upload_info.hash.acl)
   
    $upload_file = new-object Krystalware.UploadHelper.UploadFile $file.FullName, "file", "application/octet-stream"
    [void][Krystalware.UploadHelper.HttpUploadHelper]::Upload("http://github.s3.amazonaws.com/", $upload_file, $post)
}

As you can see, it’s just a normal POST to notify github of a new file, interpreting the response as xml to extract the information needed for the S3 POST. That second POST uses the helper to pass the parameters as MIME-encoded values.
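For reference, calling it from a release script looks something like this (the user, repository, token and file names are placeholders):

    $file = get-item "release\MyProject-1.0.zip"
    upload_file_to_github "myuser" "myproject" $github_api_token $file "MyProject-1.0.zip" "MyProject 1.0 release"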

I have to say, using xml in powershell was a lot easier than I thought it would be. I wonder if JSON is equally well supported…

Monday, 27 September 2010

Blaze-IPP – Design choices and implementation

When I set out to create the support for coding commands in IronPython, there were two main guidelines in my mind: simple commands and fast feedback.

Simple commands

Commands should only need to be a name and the definition of execute for the simplest of cases. No imports, just the code:
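(A sketch with illustrative names; the exact signature of Execute may differ from what the plugin really expects.)

    class OpenDownloads(BaseIronPythonCommand):
        Name = "Open the downloads folder"

        def Execute(self, argument):
            # Returning a string tells the host to run it as a program.
            return "explorer.exe D:\\Downloads"

    # A simple shortcut can also be just a method (any function whose name doesn't start with "_"):
    def open_downloads(argument):
        return "explorer.exe D:\\Downloads"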

This is the simplest way to define a command as of version 1.2. As you can see, there’s nothing that isn’t required, and simple shortcuts or expanders could be defined with just a method.

The side effect is that even small tasks can easily be automated, since there’s no ceremony.

There are two main features that accomplish this guideline:

  • Augmenting the scope with the extra classes and namespaces.
  • Some “smarts” to allow both methods and classes to be commands.

Adding definitions to a scope is simple, since all we need to do is associate a value with a symbol:

ScriptScope scope = _engine.CreateScope();
 
scope.SetVariable("BaseIronPythonCommand", ClrModule.GetPythonType(typeof(BaseIronPythonCommand)));
scope.SetVariable("UserContext", UserContext.Instance);

Allowing for both classes and methods is also trivial, since we can filter the values in the scope by type after executing the code.

Anything that is a PythonType whose CLR type can be assigned to IIronPythonCommand is a command class.

Anything that is callable and is a PythonFunction whose name doesn’t start with “_” is a command method.

Fast feedback

I shouldn’t need to reload the application just to pick up a new script or a change to an existing one. Blaze should pick up changes from the file system and reload the files as needed.

Here the big issue was files still being locked when the “file changed” event was triggered.

The solution was to place changed files in a queue, with a timer running in the background pulling files from the queue and reloading them. If a file is still locked, it is put back in the queue.

Monday, 20 September 2010

Forked dbdeploy.net

Dbdeploy.net is a database change management library, used by us at weListen to track the changes made to a database in the scope of a project and apply them as needed. It takes a folder of ordered files containing the sql change scripts, applies the ones still missing to a given database, and tracks which changes have already been applied in a special table.

We’ve used it for quite some time, and were mostly happy with it, except when using it from the command line on the servers. The msbuild task worked well to obtain and apply the changes, but the command line application added extra information to the output that needed to be trimmed before applying it to the server.

Well, it annoyed me enough to fork it from its home at sourceforge to a new place at github.

I refactored the projects a bit to remove the direct dependency on NAnt, and added a Dbdeploy.Powershell assembly with 3 commands:

  • Export-DbUpdate, which outputs the scripts that need to be applied and can write them directly to a file;
  • Push-DbUpdate, which applies the changes to the database (it still has a bug when used with SQL Server, as I need to split the resulting script into batches);
  • Select-DbUpdate, which outputs information about which changesets need to be applied.

All of these commands take a path to a config file describing the database type, connection string and table name to store the changelog, and the directory where the actual changesets exist.

Also, instead of mucking about with NAnt as the build script host, I’m using psake. I really can’t stand using xml to describe build steps anymore.

To use it, just download the release and run “import-module Dbdeploy.Powershell.dll” in powershell.
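A minimal way to try it out (the paths are whatever you unpacked the release to):

    import-module .\Dbdeploy.Powershell.dll
    # List the commands the module exposes and read their help.
    get-command -module Dbdeploy.Powershell
    get-help Export-DbUpdate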

Monday, 6 September 2010

Anatomy of a command

Creating an IronPython command to launch the command prompt isn’t much of a feat, since you can have a shortcut to do just that.
You might have noticed that the command subclasses “BaseIronPythonCommand”. This is an abstract class that implements the basics and leaves just the Name and the Execute method for you to implement.
There’s also an interface you can implement directly (IIronPythonCommand) if you want to assume control of everything.
BaseIronPythonCommand’s implementation is quite simple. Most of it is boilerplate for a Blaze plugin, leaving only the name and execution to be done in the script:

  • GetName is the name used to invoke the command. It should be unique and descriptive. A long name is fine, because you can teach Blaze a shortcut to it by typing the shortcut and selecting the command in the dropdown box.
  • GetDescription returns a description for the command, shown on the line below it in the interface.
  • AutoComplete returns a completion for the given string. This means you can transform “off” into “office” inside your command.
  • Execute tells the command the user has selected it. The command returns either a program to execute, in the form of a path concatenated with the arguments, or null if the command has executed itself.

Any one of these can be implemented in ironpython, although for basic commands you only need to implement GetName (or a property called Name) and Execute.
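As a rough sketch (the class, folder and signatures are illustrative; the host may expect slightly different ones), a command that fills in all four could look like this:

    class OpenOfficeDocuments(BaseIronPythonCommand):
        def GetName(self):
            return "Open the office documents folder"

        def GetDescription(self):
            return "Opens the folder with the office documents"

        def AutoComplete(self, text):
            # Completes what the user typed, e.g. "off" becomes "office".
            return "office"

        def Execute(self, argument):
            # Return the program to run (path plus arguments), or None if the
            # command already did all its work itself.
            return "explorer.exe C:\\office"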

Wednesday, 1 September 2010

Tutorial: How to create your own plugins for Blaze-IronPythonPlugins (IPP)

In the first post of this series I presented Blaze IPP, which allows scripting a launcher using IronPython. Here I show you how to write your own plugins from scratch. It will be a short post.

First of all, I assume you’re already running Blaze with the plugin installed.

Let’s create a new directory under “Plugins” to store our new plugins. This way, if a future upgrade of IPP brings an ironpython file with the same name, your plugin will not be overwritten. We’ll call it LocalIronPythonPlugins.


Now, configure IPP to also monitor that directory. Call Blaze, right-click it, and in the settings open the “Plugins” tab. Select “IronPythonHostPlugin”, click “Configure” and add the newly created directory (you can either browse to it with “b” or paste the path into the text box).


Now that we have a clean directory to work with, let’s create a file called “OpenCommandLine.ipy”. Open it and add the following code:
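(A sketch of what it can look like; the exact signature IPP expects may differ, but the important parts are the name and the returned string.)

    class OpenCommandLine(BaseIronPythonCommand):
        def GetName(self):
            return "OpenCommandLine"

        def Execute(self, argument):
            # Returning a string tells IPP this is a program to execute.
            return "cmd.exe"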

As you can probably guess, this will create a new command named OpenCommandLine, and when you select it, it will open a new command prompt. IPP is clever enough to know that if a command returns a string, it is a program to be executed.

This is just a simple example to get you started. Since this is IronPython, you can use the entire .Net framework and fetch things from the web, open the registry, connect to a remote host or start a service, to give a few examples.