Rendering Markup Anywhere in MVC

I had a hard time coming up with the title because, as you know, markup is pretty flexible in MVC. However, I came across an interesting limitation when it came to rendering markup. I'm not talking about rendering partial view content using Html.Partial, or using a helper method; I'm talking about rendering markup segments, which I'll demonstrate with a VB.NET example (sorry, I've been stuck in the VB world for some time, and it's become more natural than C#):

@* SomeView.vbhtml *@
@Code
   Html.ScriptDefine(
      @<script type="text/javascript">
         alert("ME");
      </script>)
End Code

Html.ScriptDefine is not something Microsoft created, but a custom extension method I wrote to register script segments. It isn't defined globally or in a view; it's a helper written in code, which can easily be reused across projects, and that reusability is why I tried this technique. Somewhere in the master page, a process read all of these registered scripts and rendered them. This was excellent; I could define these script blocks anywhere in a view, and they would all get rendered in one place.

My helper looked like the following:

'Defined in a Module; requires Imports System.Runtime.CompilerServices
<Extension()>
Public Sub ScriptDefine(Of TModel)(html As HtmlHelper, key As String, fn As Action(Of TModel))
    'Invoke the markup delegate against the current model
    fn(html.ViewData.Model)
    'Store a reference to the key and script output somewhere;
    'the master page later retrieves all of the registered scripts and renders them
End Sub

It worked, except in one scenario: partial views, which were a key reason I wanted it in the first place. See, I often found myself using scripts in a partial view. I had tried the optimization technique of running scripts at the end of the page; the only problem was that a partial view using a script had its <script> block defined wherever the partial rendered, which was usually well above the end of the page. The issue with partial views has to do with the rendering process, and although I never quite pinned down the cause, I found a better solution anyway: HelperResult.

Defining the script in a helper (a small caveat) and then storing the helper result solved the problem much more cleanly. I was able to define an extension like the following:

'Defined in a Module; requires Imports System.Runtime.CompilerServices
<Extension()>
Public Sub ScriptDefineHelper(Of TModel)(html As HtmlHelper, key As String, fn As Func(Of TModel, HelperResult))
   Dim helperResult = fn(html.ViewData.Model) 'Returns the content as an IHtmlString
   Dim list = CType(html.ViewContext.HttpContext.Items("_Scripts_"), List(Of String))

   If list Is Nothing Then
      list = New List(Of String)
   End If

   list.Add(helperResult.ToHtmlString()) 'Store the scripts as strings, which are easy to render later

   html.ViewContext.HttpContext.Items("_Scripts_") = list
End Sub

Now, in a view or partial view, we can use the helper like this:

@Code
  'Use in a view or partial view; the key identifies this script block
  Html.ScriptDefineHelper("alertScript", Function(model) Scripts())
End Code

@Helper Scripts()
   <script type="text/javascript">
      alert("Hello");
   </script>
End Helper

And we can render all of the scripts in the layout with the following code (we could also wrap this in a helper method):

Dim items = CType(Html.ViewContext.HttpContext.Items("_Scripts_"), List(Of String))
If items IsNot Nothing Then
  For Each item In items
    @Html.Raw(item)
  Next
End If

The real question is: why do all of this, when all of the scripts could just live in the page? Well, there are good reasons. First and foremost, the scripts used by a partial view are best defined in that partial view; out of sight is out of mind, especially for JavaScript. With this technique, scripts can be defined next to the markup they relate to, yet rendered in the desired location. That is the primary benefit; outside of that, there aren't a lot of others.

Kendo UI Lists and Twitter Bootstrap Simplified

Bootstrap is a great CSS layout framework for setting up the user interface of your application. Bootstrap provides a grid system, whereby content can be structured into a grouping of columns up to a maximum of 12 per row. This works great for laying out content, and it can also be useful for laying out data in list or grid form. Kendo UI uses templates to define the item view for a list; the following template defines a row within the eventual list, rendered vertically:

<script id="template" type="text/x-kendo-template">
  <div class="row">
    <div class="col-md-3">
      #: FirstName #
    </div>
    <div class="col-md-4">
      #: LastName #
    </div>
    <div class="col-md-2">
      #: State #
    </div>
    <div class="col-md-3">
      <a href='@Url.Action("Where", "Some")/#=ID#'>
        View
      </a>
    </div>
  </div>
</script>

Next, we need to use the template, which we supply to the Kendo ListView initialization. Below is the initialization of the list, as well as the passing of the template to it:

<div id="listview"></div>

<script type="text/javascript">
  $(function() {
    $("#listview").kendoListView({
       autoBind: false,
       dataSource: new kendo.data.DataSource(..),
       template: kendo.template($("#template").html())
    });
  });
</script>

Notice our listview div doesn't need to define anything special; it gets built up by the kendoListView widget. The initialization disables auto-binding (manual binding occurs later, which is good for views that need the user to interact with a form first), defines a data source, and supplies our template.

The listview then binds the data, grabbing each record and generating a <div class="row"> element for each record of data. That's all it takes to use the ListView to bind a collection of rows using the Bootstrap styles. Now when the screen collapses, each cell also collapses into its own row.

Introduction to Xamarin Forms

For the longest time, developers have dreamed of writing one set of code to support multiple application platforms. PhoneGap was one product that achieved that dream; it has one caveat: it isn't native. PhoneGap runs within the operating system's browser, essentially making it a localized web application. It's certainly a valid option for developing mobile applications.

When it comes to Xamarin, the iOS and Android interfaces have always been built separately, while the backend code could be shared by using PCLs, shared projects, or file linking. Either way, most of the code could be shared, and only the UI code had to be written per platform.

In comes Xamarin.Forms, introduced with Xamarin 3.0, a new way to share 100% of the code. Xamarin.Forms offers an API for building applications using pages and views. For instance, below is a sample page that works in both iOS and Android:

var page = new ContentPage {
    Title = "My Profile",
    Icon = "MyProfile.png",
    Content = new StackLayout {
        Spacing = 15, Padding = 25,
        VerticalOptions = LayoutOptions.Center,
        Children = {
            new Entry { Placeholder = "Name" },
            new Entry { Placeholder = "Address" },
            new Entry { Placeholder = "City/State" },
            new Button {
                Text = "Save",
                TextColor = Color.Black,
                BackgroundColor = Color.White }}}
};
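For context, here is a minimal sketch of how such a page might be hosted. I'm assuming Xamarin.Forms 1.3 or later, where an Application subclass exposes a MainPage property; the App class name is just illustrative:

using Xamarin.Forms;

public class App : Application
{
    public App()
    {
        //A trimmed-down version of the profile page above
        var page = new ContentPage {
            Title = "My Profile",
            Content = new StackLayout {
                Padding = 25,
                Children = { new Entry { Placeholder = "Name" } }
            }
        };

        //Wrapping the page in a NavigationPage gives it a platform-appropriate
        //navigation bar that displays the Title on both iOS and Android.
        MainPage = new NavigationPage(page);
    }
}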

Additionally, Xamarin.Forms supports content pages defined in XAML with an associated code-behind, giving a WPF feel to application development. With this approach, you can create 100% shared code, a remarkable achievement. To get an idea of what Xamarin.Forms can do, check out the online samples.

I plan to continue to write more about Xamarin Forms in the year to come. Stay tuned.

Xamarin Shared and PCL Projects – A Comparison

Developers have always had problems supporting multiple platforms in application development; even though a lot of the code was repetitive, there weren't many options available for code sharing. What would often happen is that a developer (say, wanting to share classes between an ASP.NET web site and a Windows Phone library) would write the code in one project and then link the files into the other project. This solved the problem, but added the extra step of linking the files (plus potential synchronization issues if you forgot to add a file). Microsoft offered up Project Linker, a free tool that linked code across projects. This did work and solved some of these problems, but it left you dependent on an external tool for that ability.

A while back, portable class library (PCL) support was added, which addressed this very need: the ability to create a single project whose output can be shared across all of these different platforms. This works great for writing common code like interfaces, DTOs, and utility classes, but it doesn't necessarily cover platform-specific features.
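As a quick illustration, this is the kind of platform-neutral code that lives comfortably in a PCL (the types here are just illustrative):

//A simple DTO and service interface: no platform-specific APIs are used,
//so the same compiled assembly can be referenced from an ASP.NET site,
//a Windows Phone app, or (with Xamarin) iOS and Android projects.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerService
{
    CustomerDto GetCustomer(int id);
}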

When it comes to sharing code, PCLs don't quite stretch into the mobile arena, which is where Xamarin comes into play. Since it uses C#, Xamarin supports portable class library (PCL) projects in Visual Studio, and has recently added support within Xamarin Studio. But Xamarin has also provided another feature to address code sharing: shared projects. Xamarin has documented how shared projects work in great detail. Essentially, a shared project works by compiling its code into any project that references it. If you have an iOS, Android, or Windows Phone app, the shared code is pushed into each project that references it (similar to how Project Linker works). This is yet another way that code sharing can work within Xamarin.
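To give a rough idea of what that looks like in practice, here is a minimal sketch of a file inside a shared project; the class is illustrative, but the __ANDROID__ and __IOS__ compilation symbols are the ones the Xamarin head projects define:

//Because this file is compiled into each referencing app, platform-specific
//branches can be handled with conditional compilation rather than separate assemblies.
public static class DeviceInfo
{
    public static string PlatformName()
    {
#if __ANDROID__
        return "Android";
#elif __IOS__
        return "iOS";
#else
        return "Windows Phone";
#endif
    }
}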

Shared projects probably provide a little more capability than PCLs; a PCL can only reference a subset of the framework assemblies, so the shared-project approach gives you a bit more flexibility, at the cost of needing a separate project to compile the code into for each platform (although you probably have a project set up for each platform anyway). Shared projects are just another tool available to developers for creating great applications.

Adding Testability Through Mocking Workarounds

Sometimes when you are developing code, you are bound to run into something that is a roadblock to unit testing. That is why frameworks like TypeMock are so great; TypeMock is a powerful mocking tool that can reflect on and override any code you feel like mocking, even if that code is internal. If you are using an open-source tool like Moq or RhinoMocks, you don't have the ability to mock non-virtual (non-Overridable for the VB crowd) or private methods; TypeMock allows you to do it all. As far as open-source or free mocking tools go, I like Moq a lot. The interface is simple and straightforward, and it's as functional as any of the other frameworks out there. (Note: this post isn't meant to be a selling point for Moq, but it's useful to know, as my examples will be using it.)

As an architect, you're bound to run into segments of code like the following:

[ExportType(typeof(ICacheLoadingService))]
public class CacheLoadingService : ICacheLoadingService
{
    //A quick wrapper around ASP.NET cache
    private ICache _cache = null;

    //DI constructor injection of relationships
    public CacheLoadingService(ICache cache)
    {
        _cache = cache;
    }
   
    public void Init()
    {
        //instantiate database connector that has a customized constructor that can't be easily mocked
        IDbConnector ctx = new SQLServerDBConnector();
        //Write the commonly used reference data to cache
        _cache.Add(typeof(State), ctx.Get().ToList());
        .
        .
    }

}

Here we have a few things. ICache is a simple wrapper around the ASP.NET cache, and it's injected through the constructor of the service (not directly relevant to this article; just noted for reference). Later on, Init() opens up a database connection, which we have no control over creating, even though it implements an interface. This is where the challenge for unit testing lies, and we'll see a workaround very soon. Lastly, [ExportType] is from an open-source project I created to make registration of dependencies very simple.

Back to IDbConnector: with the current setup, a framework like Moq can't easily test this class without an actual live database connection. But with a little help, we can add a wrapper around the creation process to make it mockable. In walks our little helper:

public interface ICreator
{
   T Create<T>(Func<T> fn);
}

[ExportType(typeof(ICreator))]
public class ObjectCreator : ICreator
{
   public T Create<T>(Func<T> fn)
   {
      return fn();
   }
}

While this solution seems redundant and pointless, it actually solves the problem above. Because we now create the object through another object that implements an interface, mocking that creation becomes much easier. Changing our Init method from above, we add the following to CacheLoadingService:

[ExportType(typeof(ICacheLoadingService))]
public class CacheLoadingService : ICacheLoadingService
{
    private ICache _cache = null;
    private ICreator _creator = null;

    public CacheLoadingService(ICache cache, ICreator creator)
    {
        _cache = cache;
        //passed in through dependency injection
        _creator = creator;
    }
   
    public void Init()
    {
        //ICreator takes the pain out of testing, which you will see next
        IDbConnector ctx = _creator.Create(() => new SQLServerDBConnector());
        _cache.Add(typeof(State), ctx.Get().ToList());
        .
        .
    }

}

Our change is subtle, and sure, it required more code (the new class) and a little overhead. But now let's look at how we can test around the DB connection, which wasn't possible earlier. The following is the test logic using Moq:

//Mock the DB connector to return our test data
var db = new Mock<IDbConnector>();
db.Setup(i => i.Get()).Returns(stateTestData);

//The creator hands back the mocked connector instead of opening a real connection
var creator = new Mock<ICreator>();
creator.Setup(i => i.Create(It.IsAny<Func<IDbConnector>>())).Returns(db.Object);
.
.

var cache = new Mock<ICache>();
cache.Setup(i => i.Add(typeof(State), It.IsAny<List<State>>())).Verifiable();

var service = new CacheLoadingService(cache.Object, creator.Object);
service.Init(); //now our DB connection is mocked

cache.Verify();

In this test, we verify that the DB connection returns the test data and that the data actually gets passed to the cache. This is a very simple way to test objects with external dependencies, like databases, files, etc. Databases hinder unit testing because they break isolation (hitting a real database is fine for integration tests, though). And our cache.Verify() statement verifies that everything gets added to the cache as expected.

We managed to take code that was hard to test and provide a simple way of testing it. This is not a one-size-fits-all solution, but it adds testability in a lot of places you might not have expected, and it only adds a small amount of overhead to the application.

TEF Open Source Project Released

I've always liked the idea of MEF. The ability to mark a class as exportable and then very easily consume it in your application is a very nice feature. However, MEF has always been instance-based, meaning you had to have an instance of a class to register. This works OK in some cases, but not in others. For example, holding on to an instance of a class in ASP.NET can be problematic, especially if it's tied to a specific HTTP context.

Because I needed type-based support, without having to instantiate a specific instance, TEF was born: a simple implementation for registering your types. Registering a type is as simple as adding an [ExportType] attribute to your class, or adding an [ExportDependency] assembly attribute.

[ExportType(typeof(ISomeService))]
public class MySomeService : ISomeService
{
}

OR:

[assembly:ExportDependency(typeof(ISomeService), typeof(MySomeService))]

To extract the types out of an assembly, you can target a specific app domain, assembly, or collection of types. For instance, the code below targets the current app domain.

var importer = new TEFMetadataImporter();
//Pull any assembly dependencies marked with [ExportDependency]
var types = importer.GetAssemblyTypes(AppDomain.CurrentDomain).ToList();
var assemblyTypeCount = types.Count;

//Pull any type dependencies marked with [ExportType]
types.AddRange(importer.GetClassTypes(AppDomain.CurrentDomain));

The project was built for .NET projects, MonoTouch, and MonoDroid. You can grab the source code and a sample project for TEF on BitBucket. Note: for MonoTouch and MonoDroid, the project was built with Xamarin Studio, so the System DLLs may not match the Visual Studio System DLLs. It's easy enough to grab the source code and replace them.

Telerik Q1 2014 Released

The latest Telerik release, Q1 2014, includes some features that really caught my eye, so I'm posting to share them with you. The first is the new words-processing feature for Telerik's WPF framework, a library that can generate Word documents without requiring Word itself. In my opinion, that is huge; it's a very useful library that can even challenge other word-processing products on the market, and it's now available with DevCraft Complete.

The second interesting piece is a responsive UI framework included with Kendo UI. A lot of applications make use of a responsive framework like Twitter Bootstrap or Foundation by Zurb; now that Kendo has an offering of its own, it's one step closer to being a complete product.

Dependency Injection in .NET Book Review

One of the first tasks in any application I build is to set up a dependency injection container, provide a simple wrapper around the container itself, and make it available throughout the entire application. The reasoning is simple: all of the references are easy to contain, the application remains loosely coupled, and the dependencies are managed for you. The DI container composes the objects and populates all of their dependencies automatically. To me, DI has always been an undervalued pattern in the .NET world, so a book like this is a great way to make an excellent design pattern more widely known.
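To make the idea concrete, here is a minimal constructor-injection sketch; the types are illustrative, not taken from the book:

//The class declares what it needs; whoever composes the object supplies it.
public interface IMessageSender
{
    void Send(string message);
}

public class EmailSender : IMessageSender
{
    public void Send(string message) { /* send an email */ }
}

public class WelcomeService
{
    private readonly IMessageSender _sender;

    public WelcomeService(IMessageSender sender)
    {
        _sender = sender; //the dependency is provided, not created here
    }

    public void Welcome(string user)
    {
        _sender.Send("Welcome, " + user);
    }
}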

This is a really good book on dependency injection, and a good read for going from knowing nothing about the pattern to becoming an expert in it. The book walks through the basics of DI: how to inject a reference, how to design your components to allow for dependency injection, object composition, and lifetime management. It then continues on to look at what works well for DI, as well as what the anti-patterns are.

The book follows up with how to set up DI using some of the most widely used dependency injection containers on the open-source market. Each chapter illustrates how to set up the container, provide the references, and make use of the container within an application. It looks at the various configuration options each container has and explains the nuances (each container behaves a little differently, even though conceptually they are very similar).
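For flavor, here is roughly what registration and resolution look like with one such container (Autofac, which the book covers); it reuses the illustrative types from the sketch above:

//Requires the Autofac package and "using Autofac;"
var builder = new ContainerBuilder();
builder.RegisterType<EmailSender>().As<IMessageSender>();
builder.RegisterType<WelcomeService>();

var container = builder.Build();

//The container composes WelcomeService and supplies its IMessageSender dependency
var service = container.Resolve<WelcomeService>();
service.Welcome("Joe");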

If you have never looked into dependency injection, I highly recommend that you do, and this book is a great way to start.

Adding ASP.NET MVC Anti-Forgery Tokens To All Post Requests Easily

One of the more prominent attacks against web applications is the cross-site request forgery (CSRF) attack. It's an attack against modern applications that store a cookie to represent the currently logged-in user. The problem has been explained well elsewhere; I'd highly recommend checking out Phil Haack's blog post on the subject.

One of the techniques to prevent this attack is to add an anti-forgery token using the @Html.AntiForgeryToken extension method. On the controller side, the action method is decorated with the [ValidateAntiForgeryToken] attribute. Behind the scenes, the hidden input field containing the anti-forgery token is validated by the MVC framework to ensure it's correct. This has also been explained well; see Steve Sanderson's post on the subject. Whether this approach is needed only for login and anonymous posts, or for all posts in general, has been up for debate; but the point of CSRF is to attack authenticated users.
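For reference, the standard per-action pattern looks something like this (the controller, action, and model names are mine, purely for illustration):

//In the view, @Html.AntiForgeryToken() inside the form renders the hidden
//__RequestVerificationToken field that the attribute below validates.
public class AccountController : Controller
{
    [HttpPost]
    [ValidateAntiForgeryToken] //rejects the POST if the token is missing or invalid
    public ActionResult UpdateProfile(ProfileModel model)
    {
        //...save the changes...
        return RedirectToAction("Index");
    }
}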

I'm not real fond of repetitive coding, especially when the framework is flexible enough to avoid it. Below is my approach to validating all POST operations in one place. The first task is to create an attribute for validating the token. After examining the existing ValidateAntiForgeryTokenAttribute class with Red Gate's .NET Reflector, it turns out the attribute is simply an authorization filter that validates the request using a helper utility. See the example below.

public class GlobalAntiForgeryTokenAttribute
  : FilterAttribute, IAuthorizationFilter
{
  public void OnAuthorization(AuthorizationContext filterContext)
  {
    if (filterContext.HttpContext.Request.HttpMethod.ToUpper() == "POST")
    {
      AntiForgery.Validate();
    }
  }
}

On authorization of the request, if the operation is a POST, we call the Validate() method on the AntiForgery helper to actually perform the validation. All of our POST operations are now checked for forgery; however, validation will fail at this point because we haven't added the token to our forms globally. To do that, we create a custom form extension method like the following:

public static class FormExtensions
{
   public static MvcForm BeginDataForm(this HtmlHelper html, string action, string controller, ...)
   {
      var form = html.BeginForm(action, controller, ...);

      //At this point, BeginForm has rendered the opening form markup,
      //so we can render the token inside the form.

      //With every form, we render a token, since this
      //assumes all forms are posts
      html.ViewContext.Writer.Write(html.AntiForgeryToken().ToHtmlString());

      return form;
   }
}

If we use this custom helper for all of our forms, every form will render an anti-forgery token automatically. We don't have to worry about adding it ourselves, saving time and reducing code.
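One piece not shown above: for the attribute to run against every action, it still has to be registered as a global filter. A minimal sketch, assuming the standard FilterConfig class from the MVC project template:

//App_Start/FilterConfig.cs
public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        //Every POST request now passes through the anti-forgery check
        filters.Add(new GlobalAntiForgeryTokenAttribute());
    }
}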

Launching the Browser in Android with Xamarin

If you like the Android interface and have used an app like Facebook or Twitter, you're no doubt used to clicking a link and getting a list of browser applications to choose from. This ability to choose the browser that opens a link is backed by an Android feature called an "intent". An intent is an asynchronous message that requests some action be performed by another application. Intents serve many purposes, such as sending data to Facebook or a browser; intents are also used to navigate between activities within an application.

Sharing to a browser requires a specific intent action, which is Intent.ACTION_VIEW in Android, or Intent.ActionView in Xamarin.Android. This intent, along with the URL, signals to Android to open the requested resource in a browser. For instance, let's take a look at the following method:

private void ShareToBrowser(string url)
{
	if (!url.StartsWith ("http")) {
		url = "http://" + url;
	}

	Android.Net.Uri uri = Android.Net.Uri.Parse(url);
	Intent intent = new Intent (Intent.ActionView);
	intent.SetData (uri);

	Intent chooser = Intent.CreateChooser (intent, "Open with");

	this.StartActivity(chooser);
}

In our utility method, we first ensure the URL string has an "http" prefix; otherwise, the intent doesn't quite work right. From the URL, we create an Android Uri and pass it as the data of the intent. The constructor of the intent takes the name of an action, which in our case is the View action. In most other common situations, we'd use the Send action: View is used for browsing, whereas Send is used for sharing to apps (Facebook, etc.).

If we were to start the activity without the chooser, Android would pick the default browser and launch it. By adding a "chooser" intent, we get that nice "Open with" dialog listing all of our installed browsers. And it's that simple to launch the browser from our application.
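For contrast, here is a minimal sketch of the Send action mentioned above, sharing a piece of text with whatever apps can handle it; the method name is just illustrative:

private void ShareText(string text)
{
	Intent intent = new Intent (Intent.ActionSend);
	intent.SetType ("text/plain");              //advertise the kind of data being shared
	intent.PutExtra (Intent.ExtraText, text);   //the payload

	//The chooser lists every app that can handle ACTION_SEND
	//(email clients, Facebook, Twitter, and so on).
	this.StartActivity (Intent.CreateChooser (intent, "Share with"));
}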

If you are looking for a nice overview of intents, Lars Vogel has a nice overview available at Vogella.