The combination of Vagrant, VirtualBox, and Ubuntu opens up some interesting possibilities. First of all, it's a simple way to build and deploy cloud images similar to Amazon Web Services' AMIs, but entirely on your local machine. Vagrant can also customize virtual machine settings through a configuration file, giving us the opportunity to create a full Linux desktop development experience. Finally, it lets us run Linux containers (such as Docker) on Windows and OS X in an extremely simple way.
The use case I will show in this post is for people who prefer bare-metal installs of either Windows or OS X but would like a full-screen Linux environment, such as Ubuntu running GNOME. With Vagrant and VirtualBox, this is possible on both platforms.
First, head to http://git-scm.com and download the Git installer. On Windows, make sure to install the Unix tools; it's worth it. The Windows installer also sets up OpenSSH, which Vagrant requires.
Next, install VirtualBox. Once VirtualBox is installed, install the extension pack as well.
Finally, install Vagrant.
Once all the dependencies are installed, open up a shell and let's get started. We will use an Ubuntu 13.04 base cloud image listed on http://vagrantbox.es. Vagrant works in project directories, so first create a project folder. Inside it we will initialize a default configuration file and start an instance. Once the instance is ready, we will install ubuntu-gnome-desktop.
$ cd ~
$ mkdir ubuntudesktop; cd ubuntudesktop
$ vagrant box add ubuntu http://cloud-images.ubuntu.com/vagrant/raring/current/raring-server-cloudimg-amd64-vagrant-disk1.box
$ vagrant init ubuntu
$ vagrant up
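Out of the box, `vagrant up` boots a headless server. To get the GNOME desktop described above, the Vagrantfile can enable the VirtualBox GUI and give the VM more memory. This is a sketch: the 1024 MB memory value is an arbitrary choice, and the box name matches the "ubuntu" box added above.

```ruby
# Vagrantfile (created by `vagrant init`): show the VirtualBox window
# and give the VM enough memory for GNOME to be usable.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu"
  config.vm.provider :virtualbox do |vb|
    vb.gui = true
    vb.customize ["modifyvm", :id, "--memory", "1024"]
  end
end
```

With the Vagrantfile saved, reload the VM and install the desktop packages over SSH:

```shell
$ vagrant reload
$ vagrant ssh
$ sudo apt-get update
$ sudo apt-get install -y ubuntu-gnome-desktop
```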
If you use Docker for integration or continuous testing, you are probably rebuilding images often. Every command Docker runs keeps a commit of the filesystem changes, so disk space can fill up fast; extremely fast on an EC2 micro instance with 8 GB of EBS.
Scrounging around the issues in the Docker project on GitHub, I ran across a thread discussing solutions for the storage growth. I took the approach from that thread and expanded on it a bit.
Below is the output listing past and present Docker containers. Anything with a status of "Up x minutes" is a presently running container. Anything with Exit 0 (or another exit value) has finished and can be discarded if it is no longer needed, i.e., you do not need to commit its changes to a new image. In the output below, you can see a rebuild of the mongodb image. Two commands issued by the Dockerfile stored commits, and both can be discarded.
$ sudo docker ps -a
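Expanding on the thread's suggestion, stopped containers can be filtered by their Status column and removed in one pipeline. This is a sketch: it matches the "Exit" text shown in the listing above, and `--no-run-if-empty` is the GNU xargs flag that skips the command when nothing matched.

```shell
# Remove every stopped container (rows whose STATUS column reads "Exit ..."):
sudo docker ps -a | grep 'Exit' | awk '{ print $1 }' | xargs --no-run-if-empty sudo docker rm

# Optionally, also remove untagged intermediate images left behind by rebuilds:
sudo docker images | grep '<none>' | awk '{ print $3 }' | xargs --no-run-if-empty sudo docker rmi
```

The `grep`/`awk` pair simply picks the container ID out of each stopped row; anything still "Up" is left alone.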
SharePoint 2013 recently had two cumulative updates released: the mandatory March 2013 PU and the June 2013 CU. I won't go into the details of obtaining or running these patches. Basically, you'll need a lot of downtime, as they take a while. With that said, let's assume the patches ran, updated, and completed. Let's also assume you have already run the product configuration wizard after each patch to update the database.
For me, I had to run the patch a few times. Due to randomness (or maybe just sloppy closed-source coding), SharePoint CUs tend to fail. The good thing is that they either succeed 100% or fail completely (leaving SharePoint in a more or less OK state). Luckily for me, I did not have any issues running the product configuration wizard. I've seen and heard of instances where the wizard fails, but it's usually a cleanup task that fails, which isn't a big deal.
After I installed the March 2013 PU, I ran into a 503 Service Unavailable error for Central Administration and my site collections. Instead of chasing an immediate resolution, I plowed forward and installed the June 2013 CU with success. Unfortunately, after the database upgrade and a reboot of the SharePoint server, I was still getting the 503 error when trying to access SharePoint.
After a bit of googling, I found a working solution to this particular problem. Load up IIS Manager and head to the server's Application Pools. As described in that post, and in my case as well, all of the application pools were stopped. Without hesitation, I started every stopped pool, restarted IIS, and once again was able to access Central Admin and my site collections.
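If you prefer the command line over clicking through IIS Manager, the same fix can be sketched with appcmd. This assumes a default IIS install path and an elevated prompt:

```shell
REM Start every stopped IIS application pool, then restart IIS:
%windir%\system32\inetsrv\appcmd list apppool /state:stopped /xml | %windir%\system32\inetsrv\appcmd start apppool /in
iisreset
```

The first appcmd lists only stopped pools as XML, and the `/in` flag makes the second appcmd start each pool named in that piped input.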
Let's imagine we are taking our Express web application and isolating the app routes and the socket events into their own classes. The reason for this is simple: when a Web class extends the controller and event classes, they can share important pieces of web server state, such as a user session or login information. This will help us later on when implementing socket events for authorized users.
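As a minimal sketch of that idea (the names `Controller`, `Events`, and `Web` here are hypothetical, not the app's real classes), a Web class can mix in route controllers and socket event handlers so that both sides read the same session object:

```javascript
// Hypothetical sketch: route controllers and socket event handlers
// mixed into one Web class so both share the same session state.
function Controller() {}
Controller.prototype.currentUser = function () {
  // Route handlers read login information from the shared session.
  return this.session.user;
};

function Events() {}
Events.prototype.isAuthorized = function () {
  // Socket event handlers check the same session for authorization.
  return Boolean(this.session.user);
};

function Web(session) {
  this.session = session; // the one piece of state both sides share
}

// Copy prototype methods from both "parent" classes onto Web:
[Controller, Events].forEach(function (Parent) {
  Object.keys(Parent.prototype).forEach(function (name) {
    Web.prototype[name] = Parent.prototype[name];
  });
});
```

With `var web = new Web({ user: 'alice' })`, both `web.currentUser()` and `web.isAuthorized()` consult the same session, which is exactly what authorized socket events will rely on later.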
The image on the left depicts what I am explaining above. Blue lines are executed at the same time. Red lines depend on their blue parents' completion before they execute, and so on.
I then take the idea a step further and mix batches of parallel tasks with tasks that depend on the whole batch completing.
This is a sample gist from personal work I am doing on a project management web app. The goal of the gist is to show how independent tasks can be run in parallel, and how dependent tasks can run once those parallel dependencies have finished. The eachTask iterator takes a task object and a callback; it uses the Function prototype method call to pass scope to each parallel task.
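The gist itself isn't reproduced here, but the core pattern can be sketched as follows (`runBatch` and the task shapes are illustrative names, not the gist's): every task in a batch starts immediately, a shared scope is passed via `Function.prototype.call`, and the dependent "red" callback fires only after the whole batch has reported completion.

```javascript
// Illustrative sketch of the parallel-batch pattern: run every task in
// the batch, passing a shared scope with Function.prototype.call, and
// invoke `done` (the dependent task) once all of them have called back.
function runBatch(scope, tasks, done) {
  var remaining = tasks.length;
  if (remaining === 0) return done.call(scope);
  tasks.forEach(function (task) {
    task.call(scope, function () {
      remaining -= 1;
      if (remaining === 0) done.call(scope); // runs after the whole batch
    });
  });
}

// Example: two independent lookups share `this.user`, then a dependent
// render step runs once both are finished.
var log = [];
runBatch({ user: 'alice' }, [
  function (cb) { log.push('fetch projects for ' + this.user); cb(); },
  function (cb) { log.push('fetch tasks for ' + this.user); cb(); }
], function () { log.push('render for ' + this.user); });
```

A simple decrementing counter is enough here because each task calls back exactly once; the real gist's eachTask iterator plays the same bookkeeping role.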