After riding some amazing zip lines in Costa Rica recently, I decided to build my own in my backyard. Here is what it looks like completed:

I Googled around a bit for instructions, found a guy who did a good write-up, and used that as a starting point.

Next I went to Amazon.com and bought all this stuff:

Once you get the parts, follow these steps:

  1. Put up some wood around each tree where the wire rope will go so the wire doesn’t dig into the tree.
  2. Start at the higher attachment point, since you will be high up on a ladder and don’t want to do that work under tension.
  3. Wrap the wire rope around the tree (over the wood you put on it) and then start rolling out the wire until you reach the other tree.
  4. On the other tree, put up an attachment line. This is a 5-to-10-foot line that you tie around the tree (again over wood), with a loop that you attach the turnbuckle to. On the other side of the turnbuckle, attach the line that comes from the first tree. Use a come-along to get the line really tight before attaching it to the turnbuckle.
  5. Expect to get the slope, tension, or something else wrong the first time, and the second time, etc. Just try, adjust, and repeat until you get those right.
  6. Build a brake out of two pieces of 2x4, 1 foot long. Use a 1/2” round router bit to cut a slot down the middle of each piece of wood, then clamp them together with some strong bolts (over the zip line, of course). Have a way to attach your brake line to the brake.
  7. Run a line (nylon rope) from the starting tree down to the brake, attaching it to the tree above the zip line to keep it out of the way, with a pulley on the end of it. Then attach the brake line to the brake, run it through that pulley and then through another pulley attached to the bottom tree, and drop it down to your weights.

Brake System (Very Important)

The braking system uses a counterweight, very close to the style depicted in this diagram (details here):

image

Views from Above

image

image

Disclaimer: I’m not liable or responsible for any injury or damage you cause to yourself or others by trying to build one of these things. This stuff is fairly dangerous if you don’t do it right. The lines have high tension, you’ll be up off the ground, and if the brake system doesn’t work you’ll splat into the tree. It’s up to you to learn appropriate ways to do stuff like tie off wire rope (never saddle a dead horse, etc), get the right slope (not too steep), implement the brake system, etc. You assume all risk in building your own zip line.


Multi-tenant software has the unique challenge of integrating with multiple back end systems, often systems that are unique to a given tenant. Providing an abstraction layer on top of all the back end variations is the key to a good multi-tenant platform. Abstraction layers typically provide one or more query capabilities to back end systems, and any time you are supporting queries you should support paging. The challenge is that not all back end systems support paging, so this post provides some ideas on how to deal with various levels of back end paging support.

Consider the following context diagram:

Here we can see there is a Client (this could be a mobile application) that connects to a Platform (this could be a web service) which connects to various back end systems.

The client supports multiple tenants, meaning one user can log in within the tenant context of Company A, while another user can log in within the tenant context of Company B, etc. Each tenant may connect to one of 3 back end systems, shown above as A, B, or C.

Now consider retrieving a list of items (let’s say transaction history). The client need only make a request with two parameters (page and page size), plus optional filters. It doesn’t matter which back end system the tenant connects to, because the platform makes the paging experience consistent and transparent to the client. The client has a consistent paging model for all tenants; the integration complexities of the back end systems are hidden in the integration layer of the platform.

Now let’s look at how to handle the integration complexity of paging in 3 different use cases.

A) Back end system supports paging - in this case we can simply pass the page and page size parameters through to the back end system. This is the simplest and most efficient form of querying. However, many legacy systems don’t support this, so we often need to do something else, like caching and paging in the platform.

B) Back end system doesn’t support paging - in this case we have to get some set of data from the back end system and page it ourselves in the platform. The amount of data to get depends on the specific context of your integration; for simplicity, let’s say we get 90 days’ worth of data, cache it, and apply paging on top of that.
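
To make that concrete, here is a minimal sketch of platform-side paging. The backEnd and cache abstractions (and the Transaction type) are hypothetical names for illustration, not from any real API:

public IList<Transaction> GetTransactions(string tenant, int page, int pageSize)
{
    var cacheKey = "T." + tenant + ".Transactions.90d";
    var all = cache.Get<IList<Transaction>>(cacheKey);
    if (all == null)
    {
        // The back end can only return a bulk set, so fetch 90 days and cache it.
        all = backEnd.GetTransactions(tenant, DateTime.UtcNow.AddDays(-90), DateTime.UtcNow);
        cache.Put(cacheKey, all, TimeSpan.FromMinutes(5));
    }

    // Apply paging in the platform on top of the cached set.
    return all.Skip((page - 1) * pageSize).Take(pageSize).ToList();
}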

C) Back end system doesn’t support page/page size constructs - sometimes a system cannot fetch a specific slice of a data set (like with a start and stop index), but it does return data one page at a time. In this case the integration implementation can still accept page/page size parameters, but it needs to map them onto the read-forward pattern of getting data from the back end. This is a more complicated integration pattern, but it can perform better than option B, since getting all the data at once could take a while.
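
Here is a rough sketch of that mapping, assuming a hypothetical back end client whose FetchNextPage() returns the next chunk of results (and an empty list when exhausted):

public IList<Transaction> GetPage(int page, int pageSize)
{
    int needed = page * pageSize; // items required to cover the requested page
    var buffer = new List<Transaction>();
    IList<Transaction> chunk;

    // Read forward from the back end until we have enough items (or run out).
    while (buffer.Count < needed && (chunk = backEnd.FetchNextPage()).Count > 0)
        buffer.AddRange(chunk);

    // Slice out the requested page.
    return buffer.Skip((page - 1) * pageSize).Take(pageSize).ToList();
}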

When deciding how to implement paging with a back end system, always start by trying to pass the paging/query context to the back end system. If the back end system can’t handle it, then you need to do some sort of adaptation in the integration layer. If you can’t do any adaptation in the integration layer, then you need to send all the data back to the platform, where it is cached and the cache is queried.


I’ve recently been looking at a number of obfuscation tools for .Net and found Crypto Obfuscator to be one of the better ones. The main reasons are a simple UI and support for advanced obfuscation techniques, like encryption, anti-debug, anti-tampering, etc. - I’ll go into details below.

First, a couple of common questions for folks not familiar with obfuscation.

What is Obfuscation?

In a software context, obfuscation is the process of rearranging code so that it:

  1. Doesn’t reflect the original structure.
  2. Is very hard to reverse engineer, because the code has been changed in ways that are not intuitive or meaningful to humans.

Why Obfuscate?

The main reason to obfuscate is to protect (to a degree) the intellectual property of your software. If someone can see your source code they can copy it and potentially benefit or profit from your hard work, effectively taking advantage of you.

Note: obfuscation is not a 100% guaranteed lock on your source code. Any individual or organization with enough resources (time, money, expertise, etc.) can make some sense of obfuscated code. However, the majority of folks will just open your DLL in Reflector, see a bunch of scrambled code, and give up at that point. Think of it this way: you lock your front door when you leave the house. Sure, someone determined could break a window to get into your house, but the likelihood is less. Think defense in depth; this is just one control layer, and your software should have many (more than just obfuscation).

So if you are deploying compiled code to customers and don’t necessarily want them poking around the source, you should consider obfuscation. This review focuses on Crypto Obfuscator from LogicNP Software.

Example Application

Consider the following simple console program that takes in one input argument, does some secret stuff with it and returns a secret value. 
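
The actual code is shown in the screenshot below, but it is roughly equivalent to this sketch (the real “secret” logic is different, of course):

using System;

class Program
{
    static void Main(string[] args)
    {
        // Take one input argument, do some secret stuff, print a secret value.
        Console.WriteLine(DoSecretStuff(args[0]));
    }

    static string DoSecretStuff(string input)
    {
        // Stand-in for the real secret logic.
        return "secret-" + input;
    }
}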

image

Running the above program looks like this:

image

Now let’s see what happens when someone views the application (DemoApp.exe) in Reflector.

image

Obviously the source is totally exposed to anyone decompiling the assembly. This is where a tool like Crypto Obfuscator comes in. It makes the decompiled code incomprehensible.

Using Crypto Obfuscator

The first thing I noticed about this product was the easy to understand user interface. You basically just grab some DLLs you want to obfuscate, check some settings, and it does the rest. Here I added my DemoApp.exe to a new project and configured the obfuscation settings:

image

Now I can run the obfuscated code and see that it works exactly like the original.

image

However when I open this one in Reflector I get a totally different view of the decompiled source code:

image

As you can see, classes got renamed, methods got renamed, internal logic got spread across many methods, and some stuff is encrypted. Frankly I don’t know what is going on, and that’s the point.

image

So at this point my program is basically useless to someone trying to reverse engineer it, and that’s the value of the tool.

Advanced Features

Exclusion/Inclusion Rules

This allows you to control obfuscation behavior at a very fine-grained level. For example, I might want to do something (or not do something) with just my “DoSecretStuff” method:

image

Exception Reporting

The program has a built-in way to send errors to your service. Basically you click a button and it generates a C# project that you use to implement the error reporting service interface. The default implementation sends errors to you via Email/SMTP.

image

Code Signing / Authenticode Support

The tool also supports signing the new assembly that has been created. This is needed to enable the anti-tampering feature.

Licensing

There is also integration with LogicNP’s licensing technology if you use that.

Conclusion

Overall this is a super easy to use tool that does a really good job at scrambling source code. Although this blog post is about a DemoApp.exe, I’ve used this on real projects and found no issues with the generated/obfuscated code so far. The code even works on Mono. So if you are looking for an easy to use, fairly inexpensive obfuscator, you might consider Crypto Obfuscator.



Windows Service Bus offers a powerful durable messaging backbone for distributed and cross-platform systems. This is a quick example of how to use it to send messages across machines using Topics and Subscriptions.

Example Scenario: employees of service provider X need to be able to manage styles for their customer-branded applications and build new applications in real time.

The old way: employees would save branding/styles and then manually build a new version of the applications that use the styles. This was a costly and time-consuming process.

The solution: automate the building of the applications every time styles are saved/updated. Use service bus to provide a messaging mechanism that will link the web front end application with the build server.

Sequence Diagram

image

Initialize Topics & Subscriptions

The first thing we need to do is ensure the topics we want to send messages on exist and the subscriptions we want to use also exist. Here is a simple initialization method we call to ensure these are set up.

public static void InitializeServiceBus(NamespaceManager namespaceManager)
{
    if (!namespaceManager.TopicExists(Constants.Topics.BuildRequestTopic))
        namespaceManager.CreateTopic(Constants.Topics.BuildRequestTopic);

    if (!namespaceManager.TopicExists(Constants.Topics.BuildResponseTopic))
        namespaceManager.CreateTopic(Constants.Topics.BuildResponseTopic);

    if (!namespaceManager.SubscriptionExists(Constants.Topics.BuildRequestTopic, Constants.Subscriptions.BuildRequestSubscription))
        namespaceManager.CreateSubscription(Constants.Topics.BuildRequestTopic, Constants.Subscriptions.BuildRequestSubscription);

    if (!namespaceManager.SubscriptionExists(Constants.Topics.BuildResponseTopic, Constants.Subscriptions.BuildResponseSubscription))
        namespaceManager.CreateSubscription(Constants.Topics.BuildResponseTopic, Constants.Subscriptions.BuildResponseSubscription);
}
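
The Constants class isn’t shown in this post; an assumed shape, consistent with the string values used elsewhere in the code, would be:

public static class Constants
{
    public static class Topics
    {
        public const string BuildRequestTopic = "BuildRequestTopic";
        public const string BuildResponseTopic = "BuildResponseTopic";
    }

    public static class Subscriptions
    {
        public const string BuildRequestSubscription = "BuildRequestSubscription";
        public const string BuildResponseSubscription = "BuildResponseSubscription";
    }

    public static class Properties
    {
        public const string BundleId = "bundleId";
        public const string Tenant = "tenant";
        public const string BuildId = "buildId";
    }
}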

Topics in Azure

image

Style Controller

Using ASP.Net MVC we’ll create a controller that will handle the “save changes” event. This will capture the new style and send a “build request” to the service bus.

public class HomeController : Controller
{
    private readonly NamespaceManager namespaceManager;
    private readonly TopicClient buildRequestTopic;
    private readonly IStorageProvider storageProvider;
    public HomeController()
    {
        namespaceManager = NamespaceManager.Create();
        InitializeServiceBus(namespaceManager);
        buildRequestTopic = TopicClient.Create("BuildRequestTopic");
        storageProvider = new FileSystemStorageProvider();
    }

    private void Save()
    {
        var tenant = Request["tenant"];
        var css = Request["css"];
        var cssData = Encoding.Default.GetBytes(css);
        var bundleId = storageProvider.Store(cssData, tenant);

        var message = new BrokeredMessage();
        message.Properties.Add("bundleId", bundleId);
        message.Properties.Add("tenant", tenant);
        buildRequestTopic.Send(message);
    }
}

You can see the code above uses the NamespaceManager and TopicClient to create a topic to send requests on. The HTML page will provide a simple UI to change the CSS for a given tenant, and a way to kick off the build of the application based on the changed CSS.

UI to Manage Style

image

When the build button is clicked the “Save” method in the code snippet above is called, and a message is sent to the service bus. Another process (the “Worker” process) will pick up that event and process the change as a new build.

The Worker Program

The worker code is very simple. It basically just starts up a subscription and listens for build events, then responds to a build event by building an application, storing it, and sending back a build done event.

public static void Main()
{
    IStorageProvider storageProvider = new FileSystemStorageProvider();
    var namespaceManager = NamespaceManager.Create();
    InitializeServiceBus(namespaceManager);
    var buildResponseTopic = TopicClient.Create(Constants.Topics.BuildResponseTopic);
    var client = SubscriptionClient.Create(Constants.Topics.BuildRequestTopic, Constants.Subscriptions.BuildRequestSubscription);
    while (true)
    {
        var message = client.Receive();
        if (message != null)
        {
            try
            {
                var bundleId = (string)message.Properties[Constants.Properties.BundleId];
                var tenant = (string)message.Properties[Constants.Properties.Tenant];
                Console.WriteLine("Got bundleId: " + bundleId + ", for tenant: " + tenant);
                var cssData = storageProvider.Get(bundleId);
                var css = Encoding.Default.GetString(cssData);
                var appBuild = BuildApplication(css, tenant);
                var appBuildId = storageProvider.Store(appBuild, tenant);
                Console.WriteLine("Built application, buildId: " + appBuildId);
                var response = new BrokeredMessage();
                response.Properties.Add(Constants.Properties.BundleId, bundleId);
                response.Properties.Add(Constants.Properties.BuildId, appBuildId);
                buildResponseTopic.Send(response);
                // Remove message from subscription
                message.Complete();
            }
            catch (Exception)
            {
                // Indicate a problem, unlock message in subscription
                message.Abandon();
            }
        }
    }
}

Push Notification (Build Done Event)

Finally if we switch back to the UI application, we’ll notice a separate thread is started in the Global.asax “Application_Start” method.

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();

    WebApiConfig.Register(GlobalConfiguration.Configuration);
    FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
    RouteConfig.RegisterRoutes(RouteTable.Routes);

    (new Thread(ListenForBuildResponses)).Start();
}

This will start a background thread to listen for build-finished events and notify the UI when a build is done (via SignalR).

private void ListenForBuildResponses()
{
    var namespaceManager = NamespaceManager.Create();
    Core.Utilites.InitializeServiceBus(namespaceManager);
    var client = SubscriptionClient
        .Create(Core.Constants.Topics.BuildResponseTopic,
                Core.Constants.Subscriptions
                    .BuildResponseSubscription);
    while (true)
    {
        var message = client.Receive();
        if (message != null)
        {
            try
            {
                var buildId = (string) message.Properties["buildId"];
                var hubContext = GlobalHost.ConnectionManager.GetHubContext();
                hubContext.Clients.All.buildDone(buildId);
                // Remove message from subscription
                message.Complete();
            }
            catch (Exception ex)
            {
                // Indicate a problem, unlock message in subscription
                message.Abandon();
            }
        }
    }
}

Finally we add a little JavaScript to the HTML page so SignalR can push a build done notification to the UI.

$(function () {
    var notificationHub = $.connection.notificationHub;
    notificationHub.client.buildDone = function (buildId) {
        $("#target")
            .find('ul')
            .append($("").html("Build Done: " + buildId + ""));
    };
    $.connection.hub.start();
});

At this point the UI displays the download link and the user downloads the newly built application.


Bitcoin ATM - yep, this is an ATM that takes in real fiat money and loads bitcoins to your wallet. You hold your wallet’s QR code up to the machine, and that’s how it gets the address to send the bitcoins to. The price you pay is based on a feed from exchanges; I think they are using Bitstamp. The price was about $127/BTC last weekend at the Crypto Currency Conference in Atlanta.



Part of building stateless systems that scale horizontally is using a distributed cache (where state is actually stored). This guide outlines the different types of items one will probably need to cache in a system like this, where to cache them (local or distributed), how to use the cache, what timeouts to use, etc.

Horizontal Scale

First, let’s review what a horizontally scaled system looks like.

image

Machines 1 and 2 accept requests from the load balancer in a nondeterministic way, meaning there is no affinity and no sticky sessions. So the machines need to be stateless, meaning they don’t manage state themselves. The state is stored in a central place: the distributed cache. The machines can be taken offline without killing a bunch of user sessions, and more machines can be added and load distributed as needed.

Types of Cache

Notice there are two types of caches here:

1) Local caches - these are in-memory caches on each machine. This is where we want to store stuff that has long timeouts and is not session specific.

2) Distributed cache - this is a high-performance cluster of machines with a lot of memory, built specifically to provide an out-of-process memory store for other machines/services to use. This is where we want to store stuff that is session specific.

Using Cache

When using information that is cached, always try to get the information from the cache first; if it’s not there, get it from the source, store it in the cache, and return it to the caller. This is called the read-through cache pattern. It ensures you always get data by the most efficient means possible, going back to the source only when needed.
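
As a minimal sketch of the pattern (ICache here is a hypothetical interface with Get/Put, not a specific product):

public static T GetOrLoad<T>(ICache cache, string key, TimeSpan timeToLive, Func<T> loadFromSource)
    where T : class
{
    var cached = cache.Get<T>(key);        // 1. try the cache first
    if (cached != null)
        return cached;

    var value = loadFromSource();          // 2. not cached, get it from the source
    cache.Put(key, value, timeToLive);     // 3. store it for the next caller
    return value;                          // 4. return it to the caller
}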

Cached Item Lifespan

There are basically two things to think about regarding a cached item’s lifespan.

1) How long should something remain in the cache before it has to be updated? This will vary depending on the type of data cached. Some things, like the logo of a tenant in a multi-tenant system, should have a long timeout (like hours or days), while other stuff, like an array of balances in a banking system, should have a short timeout (like seconds or minutes) so it is almost always up-to-date.

2) When should stuff be removed from cache? You should always remove stuff from cache if you know you are about to do something that will invalidate the information previously cached. This means if you are about to execute a transfer, you should invalidate the balances because you’ll want to get the latest balances from the source after the transfer has happened (since an update has occurred). Basically any time you can identify an action that will invalidate (make inconsistent) something in the cache, you should remove that item, so it can be refreshed. 
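
For example (the key format and cache API are assumptions for illustration):

public void ExecuteTransfer(string sessionId, Transfer transfer)
{
    // The transfer will change the balances, so remove them from the cache
    // first; the next read will refresh them from the source.
    distributedCache.Remove("S." + sessionId + ".Balances");
    transferService.Execute(transfer);
}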

Designing Cache Keys

You should take the time to design a good cache key strategy. The strategy should make it clear to your development team how keys are constructed. I’ll present one way to do this (but not the only way). First, think about the types of data you’ll be caching. Let’s say a typical multi-tenant system consists of the following categories of cached items:

1) Application - this is stuff that applies to the whole system/application.

2) Tenant - this is stuff that is specific to a tenant. A tenant is a specific organization/company that is running software in your system.

3) Session - this is stuff that is specific to a session. A session is what a specific user of an organization creates and uses as they interact with your software.

The whole point of key design is to figure out how to develop unique keys. So let’s start with the categories. We can do something simple like Application = “A”, Tenant = “T”, Session = “S”. The category becomes the first part of the cache key.

image

We can use nested static classes to define parts of the key, starting with the categories. In the code sample above we start with an Application class that uses “A” as the KeyPattern. Next we build a nested class “Currencies” which extends the KeyPattern with its own unique signature. Notice that the signature in this case takes in parameters to create the unique key; here we are using page and page size, so we can cache a specific set of results for a query that uses paging. There is also a property to get the TimeToLive and another to construct the key based on the pattern.

image

The above example is caching stuff in a “local cache”, not a distributed cache. This is because the information in this example is not specific to a user or session, so it can be loaded on each machine, which keeps its own copy. Generally you want to do this for anything that doesn’t need to be distributed, because it performs much better (think local memory vs. serialization/deserialization/network, etc.).

When thinking about unique keys for things like sessions, consider using the session identifier as an input to the key, since that guarantees uniqueness (per session). Remember, you basically just have a really big name/value dictionary to fill up, but you have to manage the uniqueness of the keys.

Takeaways

1) Use both a local and a distributed cache. Only put session-specific or short-lived stuff in the distributed cache; cache other stuff locally.

2) Set appropriate timeouts for items. This will vary depending on the type of information and how close to the source it needs to be. 

3) Remove stuff from cache when you know it will be inconsistent (like updates, deletes, etc).

4) Take care to design cache keys that are unique. Build a model of the type of information you plan to cache and use that as a template for building keys.


Cloud9 IDE - Develop code from your browser

Great to see more advanced applications like IDEs being moved to the cloud. This one currently only supports the Node.js/JavaScript/Python stack.

I’ll be the first to sign up if they offer support for .Net. Possible business venture for someone else? Replace Visual Studio with an awesome browser-based equivalent; that’s a powerful idea.



I’ve needed a clock that shows multiple time zones so I can schedule meetings with remote offices during times that overlap regular business hours. I couldn’t find anything on the market that did that, so I decided to build this product myself. This blog post shows how it was built.

Programming with .Net Gadgeteer

The software was written in C# for the .Net Micro Framework. It uses hardware that is compatible with the .Net Gadgeteer platform.

Schematic Diagram

This is the view from the designer in Visual Studio:

image

Location Configuration

Each RFID card has an associated location stored on the Micro SD card. Here is an example of the configuration file stored on the card:

<configuration>
  <appSettings>
    <add key="LogLevel" value="Debug" />
    <add key="Wifi.Network" value="ssid-here" />
    <add key="Wifi.Password" value="network-password-here" />
    <add key="RFID.4D00559A66.Location" value="Portland, OR" />
    <add key="RFID.4D006CE088.Location" value="Georgia, GA" />
    <add key="RFID.4D005589A1.Location" value="Auckaland, New Zealand" />
    <add key="RFID.4D0055D211.Location" value="Bangalore, India" />
    <add key="RFID.4D0055D01C.Location" value="Tel Aviv, Israel" />
    <add key="RFID.4D00558F43.Location" value="London, UK" />
  </appSettings>
</configuration>

You’ll notice the pattern “RFID.card id.Location”; the “card id” is what is read when you place an RFID card over a reader. This is used to get the corresponding location, like “Portland, OR”, from the configuration file. The location is then used to get the current time and sun profile.

private TimeZoneInfo GetTimeZone(string cardId)
{
    string location;
    TimeZoneInfo timeZoneInfo;
    if (!locations.Contains(cardId))
    {
        var cacheKey = "RFID." + cardId + ".Location";
        location = configurationManager.GetSetting(cacheKey);
        locations.Put(cardId, location);
    }
    else
        location = (string) locations.Get(cardId);

    if (!timeZones.Contains(cardId))
    {
        var geoPoint = geoLocationService.GetLocationGeoPoint(location);
        timeZoneInfo = geoTimeZoneService.GetTimeZoneInfo(geoPoint);
        timeZones.Put(cardId, timeZoneInfo);
    }
    else
        timeZoneInfo = (TimeZoneInfo) timeZones.Get(cardId);

    return timeZoneInfo;
}

The current time is displayed on the LED matrix modules. The sun profile is used to display “sunny hours” with blue dots.

Green dots are used to show “standard work hours” (8am to 5pm / Mon-Fri). This is helpful when arranging ad hoc meetings with various locations because it gives a quick indicator of when there will be overlap during standard business hours.
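
The check behind the green dots is simple; a hypothetical sketch of it (not the actual display code):

private static bool IsStandardWorkHour(DateTime localTime)
{
    // Standard work hours: 8am to 5pm, Monday through Friday.
    return localTime.DayOfWeek != DayOfWeek.Saturday
        && localTime.DayOfWeek != DayOfWeek.Sunday
        && localTime.Hour >= 8
        && localTime.Hour < 17;
}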

The main part of the program

private void ProgramStarted()
{
    configurationManager = new XmlConfigurationManager(sdCard);
    logger = new DebugLogger(configurationManager);
    networkManager = new WifiNetworkManager(wifi, configurationManager, logger);
    timeManager = new NativeTimeManager(configurationManager, logger);
    geoLocationService = new GoogleGeoLocationService(logger);
    geoTimeZoneService = new EarthToolsGeoTimeZoneService(logger);
    bitmapProvider = new DoubleNumberBitmapProvider();

    RFID1.DebugPrintEnabled = true;
    RFID2.DebugPrintEnabled = true;
    RFID3.DebugPrintEnabled = true;

    RFID1.CardIDReceived += (sender, id) =>
                                {
                                    if (timeZoneId1 == id) return;
                                    timeZoneId1 = id;
                                    multipleTimeZoneDisplay.UpdateTimeZoneForRow(0, GetTimeZone(id));
                                };
    RFID2.CardIDReceived += (sender, id) =>
                                {
                                    if (timeZoneId2 == id) return;
                                    timeZoneId2 = id;
                                    multipleTimeZoneDisplay.UpdateTimeZoneForRow(1, GetTimeZone(id));
                                };
    RFID3.CardIDReceived += (sender, id) =>
                                {
                                    if (timeZoneId3 == id) return;
                                    timeZoneId3 = id;
                                    multipleTimeZoneDisplay.UpdateTimeZoneForRow(2, GetTimeZone(id));
                                };

    networkManager.Connected += (sender, args) =>
                                    {
                                        timeManager.ApplySettings();
                                        timeManager.StartTimeService();
                                    };
    timeManager.TimeServiceStarted += OnTimeServiceStarted;

    timeManager.MinuteChanged += (sender, args) => multipleTimeZoneDisplay.WriteCurrentTime();

    networkManager.Connect();
}

As you can see in the code above, each RFID reader raises a “CardIDReceived” event when a card is placed on it, which is programmed to update the display for the specific row it’s on.

There are several “managers” and services that abstract the details: a geoTimeZoneService that integrates with Earth Tools to get the current offset (daylight time) and sunrise/sunset hours; a geoLocationService that integrates with Google to get the latitude and longitude for a given location; a timeManager that synchronizes time with a time server; and finally a wifiNetworkManager that establishes an internet connection over the local WiFi network.
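
The assumed shape of two of those service abstractions, based on how they are called in GetTimeZone above (the real interfaces may differ slightly):

public interface IGeoLocationService
{
    // Resolve a location name like "Portland, OR" to latitude/longitude (via Google).
    GeoPoint GetLocationGeoPoint(string location);
}

public interface IGeoTimeZoneService
{
    // Resolve a latitude/longitude to the time zone offset and sun profile (via Earth Tools).
    TimeZoneInfo GetTimeZoneInfo(GeoPoint geoPoint);
}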

Parts / Costs

The whole thing has about $500 worth of electronics, and about $75 worth of wood. Here is the invoice of some key parts I bought from GHI Electronics:

  • FEZ Spider Mainboard (1 @ $119.95) $119.95
  • Gadgeteer Standoff Pack (3 @ $1.95) $5.85
  • Extender Module (1 @ $4.95) $4.95
  • 5x Breakout Module Set (1 @ $4.99) $4.99
  • USB Client DP Module (1 @ $24.95) $24.95
  • RFID Reader Module (3 @ $24.95) $74.85
  • SD Card Module (1 @ $6.95) $6.95
  • LED Matrix Module (DaisyLink) (6 @ $19.95) $119.70
  • WiFi RS21 Module (1 @ $79.95) $79.95

From Amazon.com, you can find the RGB LED strips.


Twitter Box Unveiled at Portland Maker Faire