Discussing ASP.NET MVC Model Binding With Hidden Data

If you have taken over an MVC application, you may have come across a dilemma like the following: inside a view, within a form, you'll see a group of hidden fields mixed in with the regular inputs, like this:

@Html.HiddenFor(i => i.ContactID)
@Html.HiddenFor(i => i.ContactTypeID)
@Html.HiddenFor(i => i.ContactDescription)
@Html.HiddenFor(i => i.ContactTypeDescription)
@Html.HiddenFor(i => i.OrganizationID)

@Html.LabelFor(i => i.OrganizationName, new { @class = "control-label" })
@Html.TextBoxFor(i => i.OrganizationName, new { @class = "form-control" })
@* and so on *@

My example has five hidden fields, but I've seen a lot more. These fields are not useful to the form per se; the user doesn't enter them. However, the information matters to the records about to be updated in the database, which is why it's included in the form as hidden fields. It also acts as a quasi-ViewState mechanism that makes it very easy to replenish the following model:

public class ContactInfoModel
{
    public int? ContactID { get; set; }

    public int? ContactTypeID { get; set; }

    public string ContactDescription { get; set; }

    public string ContactTypeDescription { get; set; }

    public int? OrganizationID { get; set; }

    public string OrganizationName { get; set; }

    /* and so on */
}

[HttpPost]
public ActionResult Contact(ContactInfoModel model)
{
 ...
}

While OrganizationName and the other unlisted properties come from visible form inputs, the first five properties (and potentially more) are stored in the hidden fields. The hidden fields make it nice and easy to replenish this model and then let the developer use those values in the post action. MVC does a very nice job of posting the values back to the server and populating these properties.

However, notice I said before that this was a ViewState-like solution? It doesn't quite match ViewState, because all of the values are embedded in the form in clear text. In most systems, the user is required to log in before they can ever get to a page that contains this type of information. Either way, users have the tools (IE developer tools, Firebug) to inspect and change the values in the hidden fields if they so desire. That is where the danger lies.
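
If hidden fields are going to carry values the server will trust (OrganizationID, for example), the post action should at least treat them as untrusted input. Here's a minimal sketch of that idea; the ownership check and the _organizationService dependency are hypothetical, not part of the original code:

[HttpPost]
public ActionResult Contact(ContactInfoModel model)
{
    // Hypothetical guard: confirm the posted OrganizationID really belongs to the
    // logged-in user, since a hidden field can be edited in the browser before posting.
    if (!_organizationService.BelongsToUser(model.OrganizationID, User.Identity.Name))
    {
        return new HttpStatusCodeResult(System.Net.HttpStatusCode.Forbidden);
    }

    // ... continue with the normal update
    return View(model);
}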

Here's the other side of the conundrum: if we don't round-trip the values, what do we do? Without the hidden fields, the first five properties are never posted back from the client, so the model is only partially replenished. In most cases, these pieces of information might not be needed anyway (with the exception of OrganizationID in our scenario).

The application can certainly requery the information it needs, but then the data gets reloaded on every postback, which can get expensive. The application could also store those values in Session.
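
For illustration, here's a minimal sketch of the Session approach, assuming the GET action loads the model and stashes it before rendering the form (the _contactService loader is hypothetical):

public ActionResult Contact(int contactId)
{
    // Hypothetical loader; keep a server-side copy of the values the user shouldn't edit
    ContactInfoModel model = _contactService.Load(contactId);
    Session["ContactInfo"] = model;
    return View(model);
}

[HttpPost]
public ActionResult Contact(ContactInfoModel model)
{
    // Replenish the non-editable values from Session instead of hidden fields
    var original = (ContactInfoModel)Session["ContactInfo"];
    model.ContactID = original.ContactID;
    model.ContactTypeID = original.ContactTypeID;
    model.OrganizationID = original.OrganizationID;

    // ... validate and save as before
    return View(model);
}

The trade-off is server memory plus the usual Session caveats (expiration, and a web farm needs an out-of-process session store), which is part of why requerying is sometimes the simpler answer.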

I'm not saying "NEVER use hidden fields", but I'm debating the value of a large number of hidden fields for replenishing the model. I probably fall into the camp of storing the relevant information in Session or requerying it from the database, depending on how much information we are talking about. What are your thoughts or preferences? Hidden fields, Session, or the database?

Authenticating over AD with the ADMembershipProvider

Want to authenticate against AD but don't want to use LDAP directly? Would you like to use the AD membership provider, but don't want to be tied into the membership API? No problem: you can programmatically create the membership provider, supply some configuration options, and then make a single call to check the login status. The code is simply the following:

// Assumes: using System.Collections.Specialized; using System.Web.Security;
var providerName = "Provider"; // does not need to be registered in web.config
var connectionStringName = "SomeConnectionString";
var config = new NameValueCollection();

// The connection string with this name, defined in <connectionStrings>, should point
// at the LDAP store, e.g. connectionString="LDAP://yourdomain.example.com" (placeholder)
config.Add("connectionStringName", connectionStringName);

var provider = new ActiveDirectoryMembershipProvider();
provider.Initialize(providerName, config);

var isValidUser = provider.ValidateUser(user, password);

And that's all it takes. The provider name is the name you would normally see in the configuration file; here we're defining it at initialization time instead. Next, we create the configuration options. Our implementation only supplies the connection string name; define a connection string with that name, pointing to the LDAP store, in the connection strings section. Note that none of the usual membership configuration needs to exist in web.config.

Next, we create and initialize the provider, and use it to validate the credentials. Is there an expense to building this up every time? I haven't tested it, but logically it seems there would be. Just plan its use accordingly.
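
If that construction cost ever becomes a concern, one option is to build the provider once and reuse it. This is just a sketch I haven't benchmarked, reusing the same hypothetical provider and connection string names from above:

public static class AdAuthenticator
{
    // Lazy<T> gives build-once, thread-safe initialization on first use
    private static readonly Lazy<ActiveDirectoryMembershipProvider> _provider =
        new Lazy<ActiveDirectoryMembershipProvider>(() =>
        {
            var config = new NameValueCollection();
            config.Add("connectionStringName", "SomeConnectionString");

            var provider = new ActiveDirectoryMembershipProvider();
            provider.Initialize("Provider", config);
            return provider;
        });

    public static bool Validate(string user, string password)
    {
        return _provider.Value.ValidateUser(user, password);
    }
}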

ASP.NET MVC Dynamic Reference Data Loading

MVC, like any other framework, has plenty of capabilities when rendering client markup; one method call can render pages of it. Using a server-side approach can simplify some of that rendering. For instance, here is one common situation where I often use a server-side approach within an application. The application may have a Kendo DropDownList, which depends on a list of data in JSON format. The initialization script looks like the following:

$("DropDownList").kendoDropDownList({
   dataSource [{ .. }],
   dataTextField: "Text",
   dataValueField: "Value"
});

Typically, the data source is lookup data; in most applications, lookup data contains a key and a value. The dataTextField and dataValueField properties map to those fields within each item. It's helpful to have lookup data that's standardized, because then it's easy to create a helper function like the following:

@helper LookupData(IList data)
{
   @Html.Raw(Newtonsoft.Json.JsonConvert.SerializeObject(data))
}

This helper converts data to JSON, which is used in the script below.

$("DropDownList").kendoDropDownList({
   dataSource @LookupData(Model.CustomerTypes),
   dataTextField: "Text",
   dataValueField: "Value"
});

Here we take a collection of reference items and convert it to JSON. On its own, that approach may not be that useful, but it becomes useful if you have a common reference table loader object. With that, the helper function can automatically retrieve a common dataset based on a given type of reference data:

@helper LookupData(Type referenceType)
{
    var container = Framework.Container; // a singleton reference to the dependency injection container
    var referenceData = container.GetInstance<IReferenceTableManager>(); // a utility service that loads common reference data

    @Html.Raw(Newtonsoft.Json.JsonConvert.SerializeObject(referenceData.GetValues(referenceType)))
}

The reference data loader uses an IReferenceTableManager implementation to perform the work. Behind the scenes, this could be anything, including Entity Framework. Below is one possible solution:

public class ReferenceValue
{
    public int Value { get; set; }

    public string Text { get; set; }
}

public interface IReferenceTableManager { 
   IEnumerable<ReferenceValue> GetValues(Type type);
}

public class EFReferenceTableManager : IReferenceTableManager {
  public IEnumerable<ReferenceValue> GetValues(Type type)
  {
     var ctx = new EntityFrameworkContextInstance();

     //From a given type, there are multiple ways to get the data from the DB; I am not
     //aware of an easy way to query by type. There may be a better way to do this, but
     //for the time being, the two ways to query reference data dynamically by type are
     //reflection, or a long if statement with type checking like the following
     //(this assumes each lookup entity exposes ID and Name properties):
     if (type.Equals(typeof(Countries))) {
       return ctx.Set<Countries>()
                 .Select(c => new ReferenceValue { Value = c.ID, Text = c.Name })
                 .ToList();
     }
     else if (type.Equals(typeof(States))) {
       return ctx.Set<States>()
                 .Select(s => new ReferenceValue { Value = s.ID, Text = s.Name })
                 .ToList();
     }
     // ... and so on for the other reference types

     throw new NotSupportedException("Unrecognized reference type: " + type.Name);
  }
}

Not the best implementation; I'm sure there is something better out there, but you get the idea of how this dependency can now be resolved dynamically. However, the question most will have is whether this is scope creep on behalf of the view (that is, whether loading the data is really the controller's responsibility). It's also possible to have the lookup helper examine a given model and extract the data from there. There are multiple ways to solve this problem.
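
As a for-instance of the reflection route mentioned in the comments above, here's a rough sketch. It leans on EF6's non-generic DbContext.Set(Type) overload and assumes every reference entity exposes ID and Name properties, so treat it as illustrative rather than production-ready:

public class ReflectionReferenceTableManager : IReferenceTableManager
{
    public IEnumerable<ReferenceValue> GetValues(Type type)
    {
        // Assumption: each lookup entity has an int ID and a string Name property
        var idProperty = type.GetProperty("ID");
        var textProperty = type.GetProperty("Name");

        var ctx = new EntityFrameworkContextInstance();

        // Enumerating the non-generic DbSet pulls the whole lookup table into memory,
        // which is usually acceptable for small reference tables
        return ((System.Collections.IEnumerable)ctx.Set(type))
            .Cast<object>()
            .Select(entity => new ReferenceValue
            {
                Value = (int)idProperty.GetValue(entity),
                Text = (string)textProperty.GetValue(entity)
            })
            .ToList();
    }
}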

Alternative Tricks To Importing Data From Excel

I often find myself tasked with importing some dataset into a database, and the source of that data is usually Excel. The common approach is to use SQL Server's import tooling to load the spreadsheet directly into a table. While that usually works, there are times when it doesn't, because the data is non-standard or for who knows what reason.

The spreadsheet I'm about to show you would import easily, but let's assume it doesn't for some reason. Suppose we had a spreadsheet like this:

AccountNumber LastName FirstName Active
12345 Smith Tom Yes
12643 Peter Tom Yes
2354365 Smith Amy No

Nothing complicated here. But assuming we can't get it in, one trick I have found myself using is Excel functions that generate SQL statements. For instance, consider the following formula in Excel:

=CONCATENATE("insert into Customers (AccountNumber, LastName, FirstName) values ('", A4, "', '", B4, "', '", C4, "')")

CONCATENATE is an Excel function, very similar to the .NET String.Concat method, that combines each of its arguments into one string. Instead of using a plus sign to concatenate values, each argument, separated by commas, is concatenated together, and an argument may also be a cell reference. The formula above takes the values from cells A2, B2, and C2 (the first data row) and combines them into a templated insert statement.

If we copy this formula down across all of the data rows, it generates a set of insert statements like:

insert into Customers (AccountNumber, LastName, FirstName) values ('12345', 'Smith', 'Tom')
insert into Customers (AccountNumber, LastName, FirstName) values ('12643', 'Peter', 'Tom')
insert into Customers (AccountNumber, LastName, FirstName) values ('2354365', 'Smith', 'Amy')

Again, nothing complicated; however, this can be useful. It can even be used to skip inserts for inactive users; if we update our formula:

=IF(D2="Yes",CONCATENATE("insert into Customers (AccountNumber, LastName, FirstName) values ('",A2,"', '",B2,"', '",C2,"')"),"")

This would only output the two active records:

insert into Customers (AccountNumber, LastName, FirstName) values ('12345', 'Smith', 'Tom')
insert into Customers (AccountNumber, LastName, FirstName) values ('12643', 'Peter', 'Tom')

The inactive record produced no statement. After generating the scripts, we can copy the whole set of statements and paste them into SQL Server Management Studio, which will execute each statement separately. With this approach, we could also do updates; maybe we want to sync the active status (a bit column in this scenario). We could use the following formula:

=CONCATENATE("update dbo.Customers set Active = ", IF(D2 = "YES", "1", "0"), " where AccountNumber = '", A2, "'")

Which generates:

update dbo.Customers set Active = 1 where AccountNumber = '12345'
update dbo.Customers set Active = 1 where AccountNumber = '12643'
update dbo.Customers set Active = 0 where AccountNumber = '2354365'

This trick may not seem broadly useful, but one scenario where it shines is inserting data into a table variable. Data in a table variable can be used to conduct research, massage the data before inserting it into a real table, or whatever else you want. For instance, suppose we had the following script shell in SQL Server:

declare @output table
(
  ROWID int identity(1,1) primary key clustered not null,
  Account varchar(100),
  Last varchar(100),
  First varchar(100),
  Active bit
)

-- paste the templated insert statements here

select count(*) from @output where Account like 'BN%'

select count(*) from @output where Active = 1

select count(*) from @output where Active = 0

select count(*) from @output where Account like '%99'

We use the following formula to generate the inserts into the table:

=CONCATENATE("insert into @output values ('",A2,"', '",B2,"', '",C2,"', ",IF(D2="YES","1","0"),")")

Which produces the scripts:

insert into @output values ('12345', 'Smith', 'Tom', 1)
insert into @output values ('12643', 'Peter', 'Tom', 1)
insert into @output values ('2354365', 'Smith', 'Amy', 0)

And our final script is:

declare @output table
(
  ROWID int identity(1,1) primary key clustered not null,
  Account varchar(100),
  Last varchar(100),
  First varchar(100),
  Active bit
)

insert into @output values ('12345', 'Smith', 'Tom', 1)
insert into @output values ('12643', 'Peter', 'Tom', 1)
insert into @output values ('2354365', 'Smith', 'Amy', 0)


select count(*) from @output where Account like 'BN%'

select count(*) from @output where Active = 1

select count(*) from @output where Active = 0

select count(*) from @output where Account like '%99'

And now we've done analysis on our results without needing to create a SQL table; the total time to create the script may even be faster, depending on how proficient you are with Excel. Excel has a lot of nifty functions helpful for data conversion. Is this dirty? Yes. Is it a one-time throwaway solution? Yes. But it can be handy in a bind, and may simplify a one-time migration later on.

Adding Testability Through Mocking Workarounds

Sometimes when you are developing code, you are bound to run into something that is a roadblock to unit testing. That is why frameworks like TypeMock are so great; TypeMock is a powerful mocking tool that can intercept and override just about any code you feel like mocking, even if that code is internal. If you are using an open-source tool like Moq or RhinoMocks, you don't have the ability to mock non-virtual (non-overridable for the VB crowd) or private methods; TypeMock lets you do it all. As far as open-source or free mocking tools go, I like Moq a lot. The interface is simple and straightforward, and it's as functional as any of the other frameworks out there. (Note this post isn't meant to be a sales pitch for Moq, but it's useful to know since my examples will use it.)

As an architect, you're bound to run into segments of code like the following:

[ExportType(typeof(ICacheLoadingService))]
public class CacheLoadingService : ICacheLoadingService
{
    //A quick wrapper around ASP.NET cache
    private ICache _cache = null;

    //DI constructor injection of relationships
    public CacheLoadingService(ICache cache)
    {
        _cache = cache;
    }
   
    public void Init()
    {
        //instantiate database connector that has a customized constructor that can't be easily mocked
        IDbConnector ctx = new SQLServerDBConnector();
        //Write the commonly used reference data to cache
        _cache.Add(typeof(State), ctx.Get<State>().ToList());
        // ...
    }

}

Here we have a few things. ICache is a simple wrapper around the ASP.NET cache, and it's injected in the constructor of the service (not relevant to this article; just noted for reference). Later on, Init() opens up a database connection, whose creation we have no control over, though it does implement an interface. This is where the challenge for unit testing lies, and we'll see a workaround very soon. Lastly, [ExportType] is from an open-source project I created to make registration of dependencies very simple.
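
For reference, the original code never shows the connector itself, so assume it has roughly this shape. This is a hypothetical sketch; the generic Get<T>() method in particular is an assumption carried through the rest of the examples:

public interface IDbConnector
{
    // Hypothetical shape: a generic query method used by the cache loader
    IEnumerable<T> Get<T>() where T : class;
}

public class SQLServerDBConnector : IDbConnector
{
    // Imagine a constructor that opens a live SQL Server connection,
    // which is what makes this type hard to mock directly
    public IEnumerable<T> Get<T>() where T : class
    {
        // ... real database access would go here
        throw new NotImplementedException();
    }
}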

Back to IDbConnector: with the current setup, even a framework like Moq can't test this class without a live database connection. But with a little help, we can wrap the creation process. In walks our little helper:

public interface ICreator
{
   T Create<T>(Func<T> fn);
}

[ExportType(typeof(ICreator))]
public class ObjectCreator : ICreator
{
   public T Create<T>(Func<T> fn)
   {
      return fn();
   }
}

While this solution seems redundant and pointless, it actually solves the problem above. Because we use an object behind an interface to create the connector, mocking that creation becomes much easier. Changing our Init method from above, we update CacheLoadingService as follows:

[ExportType(typeof(ICacheLoadingService))]
public class CacheLoadingService : ICacheLoadingService
{
    private ICache _cache = null;
    private ICreator _creator = null;

    public CacheLoadingService(ICache cache, ICreator creator)
    {
        _cache = cache;
        //passed in through dependency injection
        _creator = creator;
    }
   
    public void Init()
    {
        //ICreator takes the pain out of testing, which you will see next
        IDbConnector ctx = _creator.Create(() => new SQLServerDBConnector());
        _cache.Add(typeof(State), ctx.Get<State>().ToList());
        // ...
    }

}

Our change is subtle, and sure, it required more code (to create the new class) and a little overhead. But now let's look at how we can test code that touches the DB connection, which wasn't possible earlier. The following is the test logic using Moq:

// Mock the connector that the creator will hand back, returning test data
var dbConnector = new Mock<IDbConnector>();
dbConnector.Setup(i => i.Get<State>()).Returns(stateTestData);

var creator = new Mock<ICreator>();
creator.Setup(i => i.Create(It.IsAny<Func<IDbConnector>>())).Returns(dbConnector.Object);

var cache = new Mock<ICache>();
cache.Setup(i => i.Add(typeof(State), It.IsAny<List<State>>())).Verifiable();

var service = new CacheLoadingService(cache.Object, creator.Object);
service.Init(); // now our DB connection is mocked

cache.Verify();

In this test, we verify that the mocked DB connection returns the test data and that the data actually gets passed to the cache. This is a very simple way to test objects with external dependencies like databases, files, etc. A real database hinders unit testing because it breaks the isolation aspect (though it's fine for integration tests). And our cache.Verify() statement verifies that everything was added to the cache as expected.

In this article, we managed to take code that was hard to test and provide a simple way of testing it. This is not a one-size-fits-all solution, but it adds testability in a lot of places you might not have expected, and it only adds a small overhead to the application.