Alternative Tricks for Importing Data From Excel

I often find myself tasked with importing a dataset into a database, and the source of that data is usually Excel. The common approach with a database like SQL Server is to use the built-in import wizard to load the data into a table. That usually works, but there are times when it doesn't: the data is non-standard, the wizard chokes on the format, or some other reason gets in the way.

The spreadsheet I'm about to show you would import easily, but let's assume for some reason it doesn't. Say we had a datasheet like this:

AccountNumber   LastName   FirstName   Active
12345           Smith      Tom         Yes
12643           Peter      Tom         Yes
2354365         Smith      Amy         No

Nothing complicated here. But assuming we can't get it in, one trick I have found myself using is an Excel formula that generates a SQL statement. For instance, if we entered the following formula in Excel:

=CONCATENATE("insert into Customers (AccountNumber, LastName, FirstName) values ('", A2, "', '", B2, "', '", C2, "')")

CONCATENATE is an Excel function, very similar to the .NET String.Concat method, that combines each string passed to it into one. Instead of using the plus sign to concatenate values, each piece is passed as a comma-separated argument, and any piece can be a cell reference. The formula above takes the values from the row 2 cells in columns A through C and merges them into a templated insert statement.

If we copy this formula down across all of the rows, it generates a batch of insert statements like:

insert into Customers (AccountNumber, LastName, FirstName) values ('12345', 'Smith', 'Tom')
insert into Customers (AccountNumber, LastName, FirstName) values ('12643', 'Peter', 'Tom')
insert into Customers (AccountNumber, LastName, FirstName) values ('2354365', 'Smith', 'Amy')
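One caveat worth noting: a value containing a single quote (a last name like O'Brien) will break the generated statement. A quick guard, assuming the same column layout, is to double any embedded quotes with Excel's SUBSTITUTE function:

=CONCATENATE("insert into Customers (AccountNumber, LastName, FirstName) values ('", SUBSTITUTE(A2,"'","''"), "', '", SUBSTITUTE(B2,"'","''"), "', '", SUBSTITUTE(C2,"'","''"), "')")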

Again, nothing complicated; however, this can be useful. It can even skip inserts for inactive users; suppose we update our formula:

=IF(D2="Yes",CONCATENATE("insert into Customers (AccountNumber, LastName, FirstName) values ('",A2,"', '",B2,"', '",C2,"')"),"")

This would only output the two active records:

insert into Customers (AccountNumber, LastName, FirstName) values ('12345', 'Smith', 'Tom')
insert into Customers (AccountNumber, LastName, FirstName) values ('12643', 'Peter', 'Tom')

The inactive record didn't produce a statement. After generating the scripts, we can copy the whole batch of statements and paste them into SQL Server Management Studio, which will execute each statement separately. With this approach we could also do updates; maybe we want to sync the active status (a bit column in this scenario). We could use the following formula:

=CONCATENATE("update dbo.Customers set Active = ", IF(D2 = "YES", "1", "0"), " where AccountNumber = '", A2, "'")

Which generates:

update dbo.Customers set Active = 1 where AccountNumber = '12345'
update dbo.Customers set Active = 1 where AccountNumber = '12643'
update dbo.Customers set Active = 0 where AccountNumber = '2354365'

This trick may not seem broadly useful, but one scenario where it shines is inserting data into a table variable. Data in a table variable can be used for research, for massaging before inserting into a permanent table, or for whatever else you want. For instance, suppose we had the following script shell in SQL Server:

declare @output table
(
  ROWID int identity(1,1) primary key clustered not null,
  Account varchar(100),
  Last varchar(100),
  First varchar(100),
  Active bit
)

-- place templated insert scripts here

select count(*) from @output where Account like 'BN%'

select count(*) from @output where Active = 1

select count(*) from @output where Active = 0

select count(*) from @output where Account like '%99'

We use the following formula to generate the inserts into the table:

=CONCATENATE("insert into @output values ('",A2,"', '",B2,"', '",C2,"', ",IF(D2="YES","1","0"),")")

Which produces the scripts:

insert into @output values ('12345', 'Smith', 'Tom', 1)
insert into @output values ('12643', 'Peter', 'Tom', 1)
insert into @output values ('2354365', 'Smith', 'Amy', 0)

And our final script is:

declare @output table
(
  ROWID int identity(1,1) primary key clustered not null,
  Account varchar(100),
  Last varchar(100),
  First varchar(100),
  Active bit
)

insert into @output values ('12345', 'Smith', 'Tom', 1)
insert into @output values ('12643', 'Peter', 'Tom', 1)
insert into @output values ('2354365', 'Smith', 'Amy', 0)


select count(*) from @output where Account like 'BN%'

select count(*) from @output where Active = 1

select count(*) from @output where Active = 0

select count(*) from @output where Account like '%99'

And now we've done analysis on our results without needing to create a SQL table; the total time to create the script may even be faster, depending on how proficient you are with Excel, which has a lot of nifty functions helpful for data conversion. Is this dirty? Yes. Is it a one-time throwaway solution? Yes. But it can be handy in a bind, and it may simplify a one-time migration later on.

Dynamic Script Registering Component in ASP.NET MVC

One of the performance enhancements recommended by tools like YSlow is to put your JavaScript files at the bottom of the page. The difficulty in a web forms or MVC application is that the scripts at the bottom of the page are typically added in the master page or layout. Pages and views can make use of a special section reserved for scripts, but partial views and user controls can't take advantage of it.

I'm about to illustrate a solution that fills that gap. It's set up for MVC, but it could easily be adapted to web forms by implementing the helpers as controls. Keep in mind that MVC rendering is top-down: as views and partials render, they emit their content in place. The common approach is to have a section for scripts:

site.master:

<script type="text/javascript">
  // master page common scripts
</script>

@* for any custom scripts in the view *@
@RenderSection("scripts", false)

The view can add a @section scripts { } block to take advantage of this, but a partial view cannot. Additionally, everything renders top-down; if a partial view at the top of your page has a script like:


<script type="text/javascript">
  $(function() { alert("In the partial view"); });
</script>

The problem with this script is that the final rendered view (when scripts are defined at the bottom) will look like:


<script type="text/javascript">
  $(function() { alert("In the partial view"); });
</script>
.
.
view content
.
.
<script type="text/javascript">
  // master page common scripts
  // view scripts
</script>

Since rendering is top-down, script execution is also dependent on order; there is no dependency management (nothing like the dependency features of AngularJS or a script loader such as ScriptJS).

The following solution is meant to give more control over the placement of defined scripts. Scripts are defined in a wrapper and queued up for later rendering. The solution I'm providing is simple, but it's easily enhanced to add script dependencies, custom ordering, and other features.

To give an idea of what we need to do, the system will achieve something like the following:

view.cshtml:

@{ Html.AddScript("ViewInit", @<script>
    $(function() { .. });
</script>); }

partial view:

@helper Script()
{
    <script>
        ..
    </script>
}

@{ Html.AddScript("PartialInit", (m) => Script()); }

master:

@{ Html.PlaceScripts(); }

The idea is to add scripts via a helper method, which queues them up to be rendered and writes them out at the point where the PlaceScripts method is called. Note that for partial views, I'm using a little trick I noted earlier: wrapping the markup in a @helper is the only way I found to circumvent the exceptions that occur when using the action-based approach from a partial.

AddScript essentially adds the definitions to the HttpContext.Items collection, a common trick for storing data that any component can reach for the duration of the current request. Since HttpContext.Items is available both when adding and when rendering scripts, it's a great place to share content. The AddScript method shoves in the data:

public static void AddScript(this HtmlHelper html, string key, Action<object> fn)
{
    var http = html.ViewContext.HttpContext;
    // The value type is object because entries may be strings or delegates
    var list = http.Items["_Scripts"] as Dictionary<string, object>;

    if (list == null)
        list = new Dictionary<string, object>();

    list[key] = fn;

    http.Items["_Scripts"] = list;
}

This method checks whether we already have a dictionary in the items collection; if not, it creates one, pushes in our item, and updates the collection. Similarly, our other method, which takes a helper result, uses another overload to store the string output for easy rendering:

public static void AddScript(this HtmlHelper html, string key, Func<object, HelperResult> fn)
{
    html.AddScript(key, fn(html.ViewData.Model).ToHtmlString());
}

public static void AddScript(this HtmlHelper html, string key, string script)
{
    var http = html.ViewContext.HttpContext;
    var list = http.Items["_Scripts"] as Dictionary<string, object>;

    if (list == null)
        list = new Dictionary<string, object>();

    list[key] = script;

    http.Items["_Scripts"] = list;
}

We could easily add dependencies and ordering requirements to this solution if we needed; for now, let’s finish this out by showing how to grab the contents and render them:

public static void PlaceScripts(this HtmlHelper html)
{
    var http = html.ViewContext.HttpContext;
    var list = http.Items["_Scripts"] as Dictionary<string, object>;

    if (list == null)
        return;

    using (var writer = new HtmlTextWriter(html.ViewContext.Writer))
    {
        foreach (var entry in list)
        {
            var script = entry.Value as string;

            if (script != null)
            {
                writer.WriteLine(script);
            }
            else
            {
                // This is an action; invoking it renders into the current
                // output stream, so nothing needs to be written directly
                ((Action<object>)entry.Value)(html.ViewData.Model);
            }
        }
    }
}

This would need to be greatly enhanced to support dependencies and sorting; right now, we take the list in order and render it out. HTML results and plain string definitions are written to the current response stream. Actions get processed the same way, except that MVC handles the writing internally rather than our code.
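As a taste of one such enhancement, here is a minimal sketch of explicit ordering; the order parameter, the separate items key, and the OrderBy pass are my own additions, not part of the solution above:

public static void AddScript(this HtmlHelper html, string key, string script, int order)
{
    var http = html.ViewContext.HttpContext;
    // Stored under a separate key so the unordered overloads are untouched
    var list = http.Items["_OrderedScripts"] as Dictionary<string, Tuple<int, string>>;

    if (list == null)
        list = new Dictionary<string, Tuple<int, string>>();

    list[key] = Tuple.Create(order, script);

    http.Items["_OrderedScripts"] = list;
}

// PlaceScripts would then render in ascending order:
// foreach (var entry in list.OrderBy(e => e.Value.Item1))
//     writer.WriteLine(entry.Value.Item2);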

And voila, we are done; now we can define scripts in views and partial views and expect them to be rendered where we want them placed, and we have a component that can easily be enhanced to do much more.

Calling Properties on Master Pages

I still see quite a few questions on stackoverflow.com that deal with calling a property or method on the master page. There are two solutions to this problem:

  1. Use a Strongly Typed Master Page
  2. Use an interface.

Strongly typed master pages are the cleaner way to deal with the problem, but I personally ran into issues with strong typing and Visual Studio IntelliSense; therefore, I would often dump it in favor of an interface. To go that route, create an interface:

public interface ISomeInterface
{
      string DataValue { get; set; }
}

Implement this in the code-behind file of the master page:

public partial class MyMaster : MasterPage, ISomeInterface
{
    public string DataValue
    {
       get { return LabelControl.Text; }
       set { LabelControl.Text = value; }
    }
}

Then you can cast all future references to the page's Master property to this interface:

public class TestPage : Page
{
    public void Page_Load(object sender, EventArgs e)
    {
        ((ISomeInterface)Master).DataValue = "Some Value";
    }
}

To make it easier, you can put this in a base page class:

public abstract class BasePage : Page
{
    // Page.Master is read-only, so the typed shadow exposes only a getter;
    // casting base.Master avoids the recursion of casting the new property
    new public ISomeInterface Master
    {
       get { return (ISomeInterface)base.Master; }
    }
}

And then all of your pages can derive from this base class.
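To round this out, here's a quick sketch of a page deriving from the base class (assuming its markup's MasterPageFile points to a master implementing ISomeInterface):

public class TestPage : BasePage
{
    public void Page_Load(object sender, EventArgs e)
    {
        // No cast needed; the shadowed Master property is already typed
        Master.DataValue = "Some Value";
    }
}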

Single Page Binding with Kendo UI ListView

If you've looked at the demos on the Kendo UI site, you'll notice the ListView and DataSource make a pretty capable combination. The ListView gives you full control over the user interface, while supporting common functionality like paging, sorting, and more. The DataSource provides read and update functionality against a local or remote data source, and even gives you full control over these processes (for instance, you can use jQuery directly for the AJAX communication). The Pager widget automatically pages through the contents of the ListView for you. Together, the three widgets offer great functionality for the developer.

While the combination works well, I personally ran into a snag using the Kendo UI Core (open source) version. With the pager component paging the content natively, the data source required the complete dataset to be downloaded from the server: if your ListView presents 30 records, all 30 must be retrieved; if 3,000, all 3,000. The data source does natively support server paging, but most examples illustrate it using the Telerik MVC wrappers to handle the server-side processing, so I wasn't sure whether it was supported in Core. If it works for you, I recommend the Kendo pager, as it offers the smoothest navigation; if not, the workaround below works just as well, with some added effort. To get around the limitation, I ended up using the Bootstrap Paginator plugin.

The Bootstrap Paginator gives you control over the number of pages to show and lets you hook user interaction, as in the example below.

$('#pager').bootstrapPaginator({
    currentPage: 1,
    totalPages: 10,
    onPageClicked: function(e, evt, type, page) {
        // Reload the listview - the listview uses a custom AJAX transport
        $("#listview").data("kendoListView").dataSource.read();
    }
});

The currentPage option sets the current page in the list, within the range of total pages. It's possible to preload these values from an MVC model, especially when you factor in postbacks (the pager needs to be reinitialized after each postback), as in the following:

{
   currentPage: @Model.CurrentPage,
   totalPages: @Model.TotalPages
}

When the user clicks a page, the plugin fires the onPageClicked callback, which subsequently triggers an AJAX call back to the server. We'll need the new page index in that handler; we can get it directly from the plugin's callback, or store the current index in a hidden field, as sketched below. Either way, this piece of information needs to be passed back to the server.
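The hidden-field route is simple enough; a quick sketch (the #currentPage element and its id are my own illustrative choices):

<input type="hidden" id="currentPage" value="1" />

// in the paginator's onPageClicked handler:
$("#currentPage").val(page);

// in the data source's read transport:
var index = parseInt($("#currentPage").val(), 10);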
If you look at the example online, it uses a custom AJAX transport for the read operation, as shown below.

$("#listview").kendoListView({
    dataSource: {
       transport: {
           read: function(o) {
              var index = // Get index from hidden variable or wherever

              $.ajax({
                  type: "post",
                  url: "@Url.Action("Action", "Controller"),
                  data: { index: index, otherParams: "OTHER PARAMS" },
                  success: function(d) { o.success(d); },
                  error: function(d) { o.error(d); }
              }); 
              
           }
       }
    }
});

Here we trigger the request to the server, passing any form data values along with the currently selected page index. Since the items per page is hard-coded in this scenario (at, say, 10), the server can quickly calculate the starting and ending record of the requested page.
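On the server, the matching controller action only needs the page index to compute the slice. A hedged sketch (the action name matches the Url.Action call above; the repository and Customer type are illustrative assumptions, with the page size hard-coded at 10):

public class CustomerController : Controller
{
    private const int PageSize = 10;

    // Hypothetical repository exposing an IQueryable of customers
    private readonly ICustomerRepository _repository;

    public CustomerController(ICustomerRepository repository)
    {
        _repository = repository;
    }

    [HttpPost]
    public ActionResult Action(int index)
    {
        // Page indexes from the paginator are 1-based
        var page = _repository.Customers
            .OrderBy(c => c.AccountNumber)
            .Skip((index - 1) * PageSize)
            .Take(PageSize)
            .ToList();

        return Json(page);
    }
}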

There are many ways to implement this solution; this is one quick, simple way to introduce paging with large lists of data, where each page is dynamically loaded at paging time.

Kendo MaskedTextBox Custom Rules

The Kendo UI MaskedTextBox plugin is great for limiting the input entered into a textbox. For instance, to validate a phone number, you could do the following:

$("#tb").kendoMaskedTextBox({
    mask: "(000) 000-0000"
});

The value zero ensures only numbers can be entered. The masked text box also supports these placeholders:

9 – 0 to 9 plus space
# – 0 to 9, space, plus, minus
L – a-z, A-Z
? – a-z, A-Z, space
& – any character
C – any character, space
A – letters, digits
a – letters, digits, space
. – Decimal
, – Thousands placeholder
$ – Currency

Outside of these rules, it’s possible to add your own, such as the following:

$("#tb").kendoMaskedTextBox({
    mask: "~A-00-000",
    rules: {
       "~": /[3-9]/,
       "*": /[ASDF]/
    }
});

The “~” and the “*” are the mask placeholders, while /[3-9]/ and /[ASDF]/ define the ranges of acceptable values. This way, you can make a rule match whatever range you desire.

Rendering Markup Anywhere in MVC

I had a hard time coming up with the title because, as you know, markup is pretty dynamic in MVC. However, I came across an interesting limitation when it came to rendering markup. I'm not talking about rendering partial view content using Html.Partial, or using a helper method. I'm talking about rendering arbitrary markup segments, which I'll demonstrate with a VB.NET example (sorry, I've been stuck in the VB world for some time, and it's become more natural than C#):

@* SomeView.vbhtml *@
@Code
    Html.ScriptDefine("ViewInit", Sub(m)
        @<script>
            alert("ME");
        </script>
    End Sub)
End Code

Html.ScriptDefine is not something Microsoft created, but a custom extension of mine. It's a helper method that registers script segments; it isn't defined globally or in a view, but lives in code, so it can easily be reused across projects, which is why I tried this technique. Somewhere in the master page, a process reads all of these registered scripts and renders them. This was excellent; I could define script blocks anywhere in a view, and they would all get rendered in one place.

My helper looked like the following:

Public Sub ScriptDefine(Of TModel)(html As HtmlHelper, key As String, fn As Action(Of TModel))
    'Invoke the delegate for the current model
    fn(html.ViewData.Model)
    'Store a reference to the model and script somewhere; the master page
    'later retrieves all of the registered scripts and renders them
End Sub

It worked, except in one scenario: partial views, which were a key reason I wanted it in the first place. See, I often found myself using scripts in a partial view. I was applying the optimization technique of running scripts at the end of the page; the only problem was that a partial view using a script had its <script> block rendered wherever the partial was, which was usually well above the end of the page. The issue with partial views has to do with the rendering process, and although I never quite figured out why, I found a better solution anyway: HelperResult.

By defining the script in a helper (a small caveat) and then storing the helper result, the problem was solved much more cleanly. I was able to define an extension like the following:

<Extension()>
Public Sub ScriptDefineHelper(Of TModel)(html As HtmlHelper, key As String, fn As Func(Of TModel, HelperResult))
    Dim helperResult = fn(html.ViewData.Model) 'Returns the content as an IHtmlString
    Dim list = CType(html.ViewContext.HttpContext.Items("_Scripts_"), List(Of String))

    If list Is Nothing Then
        list = New List(Of String)
    End If

    list.Add(helperResult.ToHtmlString()) 'Store the script as a string, which is easy to render later

    html.ViewContext.HttpContext.Items("_Scripts_") = list
End Sub

Now we can use the helper in any view or partial view, like so:

@Code
    'Use in a view or partial view
    Html.ScriptDefineHelper("Init", Function(i) Scripts())
End Code

@Helper Scripts()
    <script>
        alert("Hello");
    </script>
End Helper

And we can render out all the scripts with the following code (we can also use a helper method for this):

Dim items = CType(Html.ViewContext.HttpContext.Items("_Scripts_"), List(Of String))

If items IsNot Nothing Then
    For Each item In items
        @Html.Raw(item)
    Next
End If

The real question is: why do all of this when the scripts could simply live in the page? There are good reasons. First and foremost, the scripts used in a partial view are best defined in the partial view; out of sight is out of mind, especially for JavaScript. With this technique, scripts can be defined where they belong and still be rendered at the designated place. That is the primary benefit; beyond that, there aren't a lot of others.

Kendo UI Lists and Twitter Bootstrap Simplified

Bootstrap is a great CSS layout framework for setting up the user interface of your application. Bootstrap provides a grid system, whereby content is structured into groups of columns adding up to a maximum of 12. This works great for laying out content, and it can be just as useful for laying out grid-like lists. The following is an example of defining a template; Kendo uses templates to define the item view for a list. This one renders a row within the eventual (vertically rendered) list:

<script type="text/x-kendo-template" id="template">
  <div class="row">
    <div class="col-md-3">
      #: FirstName #
    </div>
    <div class="col-md-4">
      #: LastName #
    </div>
    <div class="col-md-2">
      #: State #
    </div>
    <div class="col-md-3">
      <a href='@Url.Action("Where", "Some")/#=ID#'>
        View
      </a>
    </div>
  </div>
</script>

Next, we need to use the template, which we supply to the Kendo ListView initialization. Below is the initialization of the list, including the passing of the template:

<div id="listview"></div>

<script type="text/javascript">
  $(function() {
    $("#listview").kendoListView({
       autoBind: false,
       dataSource: new kendo.data.DataSource(..),
       template: kendo.template($("#template").html())
    });
  });
</script>

Notice our listview div doesn't need to define anything special; it gets built up by the kendoListView widget. The initialization disables auto-binding (manual binding occurs later, which is good for views that require the user to interact with a form first), defines a data source, and supplies our template.

The listview then binds the data, grabbing each record and generating a collection of <div class="row"> elements, one per record. That's all it takes to bind a collection of rows using the Bootstrap styles. Now when the screen collapses, each cell also collapses into its own row.

Introduction to Xamarin Forms

For the longest time, developers have dreamed of writing one set of code that supports multiple application platforms. PhoneGap was one product that achieved that dream, with one caveat: it isn't native. PhoneGap runs within the operating system's browser, essentially making it a localized web application. It's certainly a valid option for developing mobile applications.

With Xamarin, the iOS and Android interfaces were separate, but the backend code could be shared by using PCLs, shared projects, or file linking. Either way, most of the code could be shared; only the UI code had to differ per platform.

In comes Xamarin.Forms (introduced with Xamarin 3), a new way to share up to 100% of the code. Xamarin.Forms offers an API for building applications using pages and views. For instance, below is a sample page that works on both iOS and Android:

var page = new ContentPage {
    Title = "My Profile",
    Icon = "MyProfile.png",
    Content = new StackLayout {
        Spacing = 15, Padding = 25,
        VerticalOptions = LayoutOptions.Center,
        Children = {
            new Entry { Placeholder = "Name" },
            new Entry { Placeholder = "Address" },
            new Entry { Placeholder = "City/State" },
            new Button {
                Text = "Save",
                TextColor = Color.Black,
                BackgroundColor = Color.White }}}
};
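To actually display a page like this, it becomes the application's root. A minimal sketch, assuming the Application-based startup of newer Xamarin.Forms releases (the App class name is conventional):

public class App : Application
{
    public App()
    {
        // The ContentPage from the snippet above (trimmed here);
        // NavigationPage gives it a navigation bar on both platforms
        var page = new ContentPage { Title = "My Profile" };
        MainPage = new NavigationPage(page);
    }
}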

Additionally, Xamarin.Forms supports content views defined in XAML with an associated code-behind, giving a WPF feel to application development. With either approach, you can share essentially 100% of the code, which is a remarkable achievement. To get an idea of what Xamarin.Forms can do, check out the online samples.

I plan to continue to write more about Xamarin Forms in the year to come. Stay tuned.

Xamarin Shared and PCL Projects – A Comparison

Developers have always had problems supporting multiple platforms in application development: even though a lot of the code was repetitive, there weren't many options available for sharing it. What would often happen is that developers (say, wanting to share classes between an ASP.NET web site and a Windows Phone library) would write the code in one project and then link the files into the other project. This solved the problem, but added the extra step of linking the files (plus potential synchronization issues if you forgot to link a new file). Microsoft offered Project Linker, a free tool that linked code across projects. It did work and solved some of these problems, but you were dependent on an external tool for the ability.

A while back, portable class library (PCL) support was added, which addressed this very need: the ability to create one project that can be shared across all of these different platforms. This works great for common code like interfaces, DTOs, and utility classes, but it doesn't necessarily cover platform-specific features.

When it comes to sharing code, PCLs didn't initially stretch into the mobile arena, which is where Xamarin comes into play. Since it uses C#, Xamarin supports PCL projects in Visual Studio, and support was recently added to Xamarin Studio. But Xamarin has also provided another code-sharing feature: shared projects. Xamarin has documented how shared projects work in great detail. Essentially, a shared project works by compiling its code into any project that references it: if you have iOS, Android, and Windows Phone apps, the shared code is compiled into each referencing project (similar to how Project Linker worked). This is yet another way that code sharing can work within Xamarin.
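Because shared-project code compiles separately into each referencing project, platform differences are typically handled with conditional compilation. A minimal sketch, using the __IOS__ and __ANDROID__ symbols Xamarin defines by default (the DeviceInfo class is my own illustration):

// In a shared project: compiled into each app project that references it
public static class DeviceInfo
{
    public static string PlatformName
    {
        get
        {
#if __IOS__
            return "iOS";
#elif __ANDROID__
            return "Android";
#else
            return "Windows Phone";
#endif
        }
    }
}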

Shared projects arguably provide a little more capability than PCLs: a PCL can only reference the subset of assemblies common to its targets, while shared code compiles against each platform's full surface, at the cost of needing a separate project to compile the code into for each platform (although you probably have a project per platform anyway). The shared project feature is just another tool available to developers for building great applications.

Adding Testability Through Mocking Workarounds

Sometimes when you are developing code, you are bound to run into something that blocks unit testing. That is why frameworks like TypeMock are so great; TypeMock is a powerful mocking tool that can reflect on and override any code you feel like mocking, even if that code is internal. With an open-source tool like Moq or Rhino Mocks, you can't mock non-virtual (not Overridable, for the VB crowd) or private methods; TypeMock lets you do it all. As far as open-source and free mocking tools go, I like Moq a lot. Its interface is simple and straightforward, and it's as functional as any of the other frameworks out there. (This post isn't meant to be a selling point for Moq, but it's useful to know, as my examples will use it.)

As an architect, you're bound to run into segments of code like the following:

[ExportType(typeof(ICacheLoadingService))]
public class CacheLoadingService : ICacheLoadingService
{
    // A quick wrapper around the ASP.NET cache
    private ICache _cache = null;

    // DI constructor injection of relationships
    public CacheLoadingService(ICache cache)
    {
        _cache = cache;
    }

    public void Init()
    {
        // Instantiate a database connector whose customized constructor can't be easily mocked
        IDbConnector ctx = new SQLServerDBConnector();

        // Write the commonly used reference data to cache
        _cache.Add(typeof(State), ctx.Get<State>().ToList());
        .
        .
    }
}

Here we have a few things. ICache is a simple wrapper around the ASP.NET cache, injected through the constructor of the service (not directly relevant to this article; just noted for reference). Later on, Init() opens a database connection whose creation we have no control over, though it does implement an interface. This is where the challenge for unit testing lies, and we'll see a workaround very soon. Lastly, [ExportType] is from an open-source project I created to make registration of dependencies very simple.

Back to IDbConnector: with the current setup, a framework like Moq can't test this class without an actual live database connection. But with a little help, we can wrap the creation process to make it mockable. In walks our little helper:

public interface ICreator
{
   T Create<T>(Func<T> fn);
}

[ExportType(typeof(ICreator))]
public class ObjectCreator : ICreator
{
   public T Create<T>(Func<T> fn)
   {
      // Simply invoke the factory delegate; the indirection is what matters
      return fn();
   }
}

While this solution seems redundant, even pointless, it actually solves the problem above. Because object creation now goes through an interface, mocking the created object becomes that much easier. Updating our Init method from above, CacheLoadingService becomes:

[ExportType(typeof(ICacheLoadingService))]
public class CacheLoadingService : ICacheLoadingService
{
    private ICache _cache = null;
    private ICreator _creator = null;

    public CacheLoadingService(ICache cache, ICreator creator)
    {
        _cache = cache;
        // Passed in through dependency injection
        _creator = creator;
    }

    public void Init()
    {
        // ICreator takes the pain out of testing, as you will see next
        IDbConnector ctx = _creator.Create(() => new SQLServerDBConnector());
        _cache.Add(typeof(State), ctx.Get<State>().ToList());
        .
        .
    }
}

Our change is subtle; sure, it required more code (the new class) and a little overhead. But now let's look at how we can test the DB interaction, which wasn't possible earlier. The following is the test logic using Moq:

var connector = new Mock<IDbConnector>();
connector.Setup(i => i.Get<State>()).Returns(stateTestData);
.
.

var creator = new Mock<ICreator>();
creator.Setup(i => i.Create(It.IsAny<Func<IDbConnector>>())).Returns(connector.Object);

var cache = new Mock<ICache>();
cache.Setup(i => i.Add(typeof(State), It.IsAny<List<State>>())).Verifiable();

var service = new CacheLoadingService(cache.Object, creator.Object);
service.Init(); // now our DB connection is mocked

cache.Verify();

In this test, we verify that the DB connector returns the test data and that the data actually gets passed to the cache. This is a very simple way to test objects with external dependencies like databases and files. Databases hinder unit testing because they break the isolation aspect (though they're fine for integration tests). The cache.Verify() statement confirms that everything was added to the cache as expected.

In this article, we managed to take code that was hard to test and provide a simple way of testing it. This is not a one-size-fits-all solution, but it adds testability in a lot of places you might not have expected, and it only adds a small overhead to the application.