.NET

MongoDb C# Driver 2 - A Nice(ish) Abstraction by James Heywood

I've been doing a bit of work in the evenings lately to put together something that better demonstrates my back-end development, as I realised there was no example data access code in my Bitbucket repos.

So I've resurrected part of an old project: an online car owners' portal for Porsche. In fact some previous posts on this blog relate to it; I had to remove the name Porsche from those articles (and screen grabs) under threat from a previous employer, but I feel enough time has passed now that it's OK to mention the name. If not, well, we'll see what happens!

You can see my efforts in the following repository: https://github.com/jdheywood/4um. A rough estimate of the time I've spent on this is somewhere between 10 and 15 hours over the course of a week.

Overview

Anyway, the part I've chosen to rewrite to show what I can do is a mini forum, where users can ask questions and other users can reply. There is a search function, and users can favourite questions and answers so they can revisit them as and when needed.

When we built this for the client it was a single .NET MVC application using forms authentication, knockout.js to enhance the user interface, and MongoDb for data storage. I've decided to rewrite it as an API and an SPA so that I can better separate the data from the interface. My plan is to recreate the knockout interface once the API is ready, and then have a go at writing interfaces in a variety of JS frameworks to compare and contrast the approaches.

The user interface (when ready) will consist of two forms, one for users and another for admins (or maybe one form that adapts based on the user's role?), with a JS framework using ajax to call the API for data operations. A stretch goal would be to add OAuth support to ease registration and access, but that's way down the line.

 

Progress

For now I've been concentrating on getting the back end up and running. I thought this would be relatively quick to do, but Mongo have completely rewritten their C# driver since I last looked at it, so I've ended up rewriting this part from scratch. Also, when I built this a couple of years ago we had very little test coverage, so I've tried to address that shortcoming.

So what I have so far is a nice(ish) abstraction over the Mongo C# driver (if I do say so myself!) and the beginnings of an API that uses the repositories I've created to access the MongoDb collections.

Yes I know repositories are so 2000s and in the 20 teens it's all about CQRS, but hey I can always refactor these at a later date, I wanted to concentrate on getting the mongo abstraction in place first before I changed too much.

I have written unit tests for the various factories that abstract the MongoDb driver, and integration tests to demonstrate and prove the repositories' functionality.

I've also created a simple console application to load sample data into Mongo. The integration tests set up and tear down the data they use, so when they finish you have no data, hence the need to load sample data to demonstrate the API.

If you have access to a Windows machine and Visual Studio 2013 you can pull down the repo, build the solution and see this running. You'll need to install MongoDb first though; instructions/links can be found in the readme of my github repo.

If you run the integration tests you can see the repositories in action. Similarly, if you run the Forum.Tools.Samples console application, then fire up the API and hit one of the endpoints (/api/questions, /api/answers, /api/searchterms), you can see the API returning data from MongoDb.
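To give a flavour of the shape of those endpoints, a minimal Web API controller backed by one of the repositories might look something like this; the controller, repository interface and method names here are my own illustrative guesses rather than the exact ones in the repo:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Web.Http;

// Hypothetical repository contract; the real interface lives in the repo
public interface IQuestionRepository
{
    Task<IEnumerable<Question>> GetAllAsync();
}

public class Question
{
    public string Id { get; set; }
    public string Title { get; set; }
}

public class QuestionsController : ApiController
{
    private readonly IQuestionRepository questionRepository;

    // The repository is injected by the IOC container (SimpleInjector)
    public QuestionsController(IQuestionRepository questionRepository)
    {
        this.questionRepository = questionRepository;
    }

    // GET /api/questions
    public async Task<IEnumerable<Question>> Get()
    {
        return await this.questionRepository.GetAllAsync();
    }
}
```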

 


Good, Bad, Better

Some things I like about this:
- The abstraction over MongoDb via MongoContext, MongoClientFactory, MongoDatabaseFactory and Generic MongoCollectionFactory
- The use of SimpleInjector for IOC/Dependency Injection, especially the ability to verify the container
- The use of MongoContext in the API Startup as a single entry point to set up the data access
- The integration tests covering the repositories
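As a rough sketch of that layered abstraction (class names as per the post, but the method shapes are my assumptions), the factories chain together like this, each wrapping one step of the driver 2.x API:

```csharp
using MongoDB.Driver;

public interface IMongoClientFactory
{
    IMongoClient GetClient();
}

// Wraps creation of the MongoClient from a connection string
public class MongoClientFactory : IMongoClientFactory
{
    private readonly string connectionString;

    public MongoClientFactory(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public IMongoClient GetClient()
    {
        return new MongoClient(this.connectionString);
    }
}

public interface IMongoDatabaseFactory
{
    IMongoDatabase GetDatabase(string name);
}

// Wraps client.GetDatabase, taking the client factory as a dependency
public class MongoDatabaseFactory : IMongoDatabaseFactory
{
    private readonly IMongoClientFactory clientFactory;

    public MongoDatabaseFactory(IMongoClientFactory clientFactory)
    {
        this.clientFactory = clientFactory;
    }

    public IMongoDatabase GetDatabase(string name)
    {
        return this.clientFactory.GetClient().GetDatabase(name);
    }
}

// Generic over the document type, hence "generic MongoCollectionFactory"
public class MongoCollectionFactory
{
    private readonly IMongoDatabase database;

    public MongoCollectionFactory(IMongoDatabase database)
    {
        this.database = database;
    }

    public IMongoCollection<TDocument> GetCollection<TDocument>(string name)
    {
        return this.database.GetCollection<TDocument>(name);
    }
}
```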

Some things I would change given time:
- Refactor the repositories into Commands and Queries to allow easier testing
- Introduction of API data transfer objects and mapping between domain objects and these DTOs
- Swap out the use of an installed MongoDb for an embedded or hosted version
- Use AutoFixture customisations to build test and sample data instead of semi-hand rolling these objects
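On the AutoFixture point, a customisation encapsulates how test objects get built so the tests don't hand-roll them; something like the following sketch, where the Question type and its properties are hypothetical stand-ins for the real domain objects:

```csharp
using Ploeh.AutoFixture;

// Hypothetical domain object for illustration
public class Question
{
    public string Title { get; set; }
    public string Body { get; set; }
}

// Encapsulates how sample Questions are constructed for tests/sample data
public class QuestionCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        fixture.Customize<Question>(composer =>
            composer.With(q => q.Title, "What oil should I use?"));
    }
}

// Usage:
// var fixture = new Fixture().Customize(new QuestionCustomization());
// var question = fixture.Create<Question>();
```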

The next things for me to do to get on with this project:
- Define the API routes and build out the API layer
- Change the API to return JSON instead of XML
- Unit and Integration tests of the API
- Install swagger to self-document the API: https://github.com/swagger-api/swagger-ui
- Set up a simple web application
- Markup and styling for the web app
- Introduction of knockout to provide UI functionality
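For the JSON-instead-of-XML item, one common way to do this in Web API 2 is to drop the XML formatter at startup; a sketch, assuming the standard WebApiConfig pattern:

```csharp
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();

        // With the XML formatter removed, content negotiation
        // falls back to the JSON formatter for responses
        config.Formatters.Remove(config.Formatters.XmlFormatter);
    }
}
```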

So there are quite a few things to do before this is a functioning application. However, the data access layer is well defined, covered by tests, and helps to demonstrate the kinds of applications I've built. I hope you like it.

 

Summary

One final point to note is that I had trouble unit testing my repositories, due to issues substituting the MongoCollection objects that the repository methods use, so I have used integration tests to add coverage instead. We need integration tests anyway to actually test data storage and access, but we should also be able to write unit tests for our repositories which run quickly and are less expensive in terms of maintenance, deployment and execution. So how do we do this?

The issue stems from the collection objects and the fluent API that the MongoDb C# driver provides: when trying to set the .Returns(object) for the chained fluent API methods off a collection, NSubstitute throws errors as it doesn't know which method you are trying to set the .Returns() on.

After some googling on how best to resolve this, I've determined that it is a problem with my code and abstraction rather than with NSubstitute (way to state the bleeding obvious, eh?). The way to resolve it is to introduce another layer to the abstraction, something along the lines of an IForumMongoCollection that implements the methods of IMongoCollection; this way my MongoCollectionFactory can return my ForumMongoCollections and I can mock/substitute these in my unit tests.
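A rough sketch of what I have in mind; the member names are illustrative, and the wrapper would only expose the operations the repositories actually need:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using MongoDB.Driver;

// A thin, substitutable interface over IMongoCollection<T>;
// unit tests can mock this instead of the driver's fluent API
public interface IForumMongoCollection<TDocument>
{
    Task<List<TDocument>> FindAsync(FilterDefinition<TDocument> filter);
    Task InsertOneAsync(TDocument document);
}

public class ForumMongoCollection<TDocument> : IForumMongoCollection<TDocument>
{
    private readonly IMongoCollection<TDocument> collection;

    public ForumMongoCollection(IMongoCollection<TDocument> collection)
    {
        this.collection = collection;
    }

    // The fluent chain lives in one place here,
    // so tests never have to stub Find(...).ToListAsync()
    public Task<List<TDocument>> FindAsync(FilterDefinition<TDocument> filter)
    {
        return this.collection.Find(filter).ToListAsync();
    }

    public Task InsertOneAsync(TDocument document)
    {
        return this.collection.InsertOneAsync(document);
    }
}
```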

So a simple idea when you step back and think it through; it eluded me at the time I was writing this code, but that's what you get for coding late into the night. That's now top of my to-do list, and once it's in place I can start building out the application properly.

Anyway enough blogging for now folks, hope you like this article and the repository, as ever throw comments and questions at me below if you are so inclined.

Cheers

 

Debugging Service Bus Issues in Azure by James Heywood

So I had an interesting morning investigating an issue with the project I am working on at the moment.

Background

Briefly, we have built an identity solution integrating several client websites. The client uses Gigya for social sign-on, and we have put in an API and a service bus, fronted by a custom-built JavaScript SDK (written by yours truly in backbone), to manage the login and registration of users with Gigya and then hit endpoints in our API which raise messages onto the service bus.

These messages are then picked up by a variety of worker roles in order to synchronise user data between several systems, including the client's email campaign tool and, in relation to this post and the issue I was having, a Solr search index containing segmented user data which the client wishes to use for advertising and marketing purposes.

So we have a cloud based solution which is robust, scales well, is easy to manage and is also open to extension (by creating further service bus topics or queues and the worker roles to handle and process these messages), nice!

FYI we are using Microsoft Azure as our platform so this post is skewed towards this, however the same concepts apply regardless of your platform and all cloud development presents the same challenges when trying to debug issues and identify the cause of unexpected behaviour.

The Issue

My problem came about when trying to figure out why a particular message was not resulting in the relevant data in our search index. Where is the problem here: the API, the bus, the topic, the message handler, the search index itself? With so many parts it's a challenge just finding the right place to start, and it took a couple of hours to figure this out.

As it happens the problem turned out to be something trivial and once I understood this the fix was also trivial. As with most things it was the investigation that was costly, so just in case you are having an interesting time trying to figure out an issue with Azure development here is how I found the problem.

Resources

Firstly I needed to set up and/or install a few things:

Windows Azure Management Portal

Obviously you are going to need access to the cloud resources in question, a quick request to the administrator sorted that out for me. When first using the azure portal it can take a while to familiarise yourself with the iconography, the elephant for HD Insight is my personal favourite!

Once you play around with it for a while though it starts to make a bit more sense, although it is very information dense so take your time and digest it at your leisure, assuming you have the time. If you're trying to debug an issue you probably don't, in which case good luck!

Service Bus Explorer

As we are using a bus, and the issue manifested itself as the lack of an expected outcome from a message posted on the bus, I thought it prudent to take a peek at the messages on the bus. Just call me Columbo!

The reason for this is to check whether I had any messages in dead letter queues, or indeed whether the bus was set up correctly. This handy tool lets you peek at messages on the bus, and at those that have failed to be handled and ended up in the dead letter queue. Just download the file, unzip and run the .exe in the bin folder. You'll need to add your connection settings to start looking around, which you can find in your worker role configuration in the Azure Portal.

Azure Management Studio

This piece of software gives a reasonable interface for managing your azure configuration; in particular it allows you to review azure storage content, which is what I was concerned with. When you run the installer it prompts you to visit an endpoint in azure to download your publish settings; follow the link and upload the downloaded file to Azure Management Studio, and it sets up all of your azure subscription details for you, nice!

The Investigation

Now we have some weapons in our arsenal we are ready to do battle with the pesky issue, below are the steps I undertook during my investigation.

  • In the portal, go to the Service Bus section and select the correct namespace, from here use either queues or in my case topics to ensure that the relevant topic is set up as expected. All seems well here, let's move on.
  • Next go to the Cloud Services section and select the relevant service name, from here click on Configure and ensure that the configuration of your worker role(s) is as expected. In my case this allowed me to check that I had the correct endpoints set up for my search index, which ruled out an issue with the handler connecting to the search index. You can also check things like connections to table storage, if you have any. This would have given me a clue early on in my investigation had I thought about it but alas I was too busy scratching my head wondering where to look for helpful information at the time!
  • From here I made a note of the service bus settings (hostname, key name, key value) and then proceeded to start up the service bus explorer
  • In service bus explorer I chose File > Connect and added the details of my service bus and clicked OK to connect to the bus in question.
  • When this had loaded I drilled down into the topic in question, found the relevant subscription, and spotted that there was a message sat in its dead letter queue. Upon retrieving this message I could see there had been an issue processing it, which pointed in the direction of my message handler.
  • The next step was to open up Azure Management Studio, drill down to the relevant storage account and then filter and query the WADLogsTable for a message matching the role and timestamp of the dead lettered message I just found on the bus.
  • Once I found this I had an exception message and stack trace, so at that point it was a simple case of figuring out the error and making the fix. 
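Incidentally, you can do the dead-letter peek step programmatically too, along these lines, mirroring what Service Bus Explorer does under the hood; this uses the classic Microsoft.ServiceBus SDK, and the connection string, topic and subscription names are placeholders:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

public static class DeadLetterPeek
{
    public static void PeekDeadLetters(
        string connectionString, string topicPath, string subscriptionName)
    {
        // The dead-letter queue is addressed as a sub-path of the subscription
        var deadLetterPath =
            SubscriptionClient.FormatDeadLetterPath(topicPath, subscriptionName);

        var factory = MessagingFactory.CreateFromConnectionString(connectionString);
        var receiver = factory.CreateMessageReceiver(deadLetterPath);

        // Peek is non-destructive; it walks the queue without removing messages
        BrokeredMessage message;
        while ((message = receiver.Peek()) != null)
        {
            var reason = message.Properties.ContainsKey("DeadLetterReason")
                ? message.Properties["DeadLetterReason"]
                : "(no reason recorded)";
            Console.WriteLine("{0}: {1}", message.MessageId, reason);
        }
    }
}
```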

The Fix

As it turns out I simply forgot to add some data to our development environment's table storage. A query issued by the message handler relied on this data and as it was missing it was throwing an exception and dead lettering the message on the bus.

 

TL;DR

After all that it was developer error, doh!

 

I hope this post has helped shed some light on the fun of working in azure and also enlightened you to some of the tools at your disposal to assist when trying to identify an issue.

Cheers,
James