02 September, 2010

WPF - Navigation

Over the past few weeks, I was scrambling to develop a WPF 4.0 application along with a UI designer who specializes in Expression Blend. Though I had worked on WPF 3.5 projects previously, I was really flummoxed while implementing fundamental portions like navigation, modal pop-ups, “Loading” animations, session management, etc. in WPF. I am not saying that WPF 4.0 is vastly different from WPF 3.5, but I realized that I had never closely worked with XAML and WPF specifics like dependency properties, resource dictionaries, routed events, etc.
Anyway, since I had very limited time, I mostly chose the easiest ways to implement specific areas of the application. In this post, I will describe the simplest method to implement navigation in WPF.

You might have come across WPF architectural frameworks such as PRISM, MVVM, etc. I have some experience with PRISM, but for my current application, I needed a “very, very basic” navigation mechanism. So I will not undermine other WPF frameworks or weigh their pros and cons here.

WPF – Navigation – Basic

1) Create a WPF window called Shell.xaml (or replace MainWindow.xaml).
2) In App.xaml, set the StartupUri property to “Shell.xaml” which is our navigation container.
3) As shown above, you may divide the Shell.xaml into Header, Body and Footer portions.
4) In the Body portion, place a ContentControl (say ccShell) which will host the various user-controls.
5) In Shell.xaml.cs, add the below method:

public void LoadContent(UserControl ucShell)
{
    // ccShell is the x:Name of the ContentControl in the Body portion
    ccShell.Content = ucShell;
}

6) In App.xaml.cs, add the below method:

public static void LoadContent(UserControl ucShell)
{
    Shell shellWindow = (Shell)App.Current.MainWindow;
    shellWindow.LoadContent(ucShell);
}

7) Now, you can create WPF user-controls as needed (say Login.xaml, Home.xaml, etc.)
8) In order to navigate between different user-controls, you need to call App.LoadContent() method and pass in the required user-control instance.

#  You would call the App.LoadContent() method inside the constructor of Shell.xaml.cs to load the initial view like Login.xaml.
#  You could retain the user-control reference in Shell.xaml.cs or create new instances as needed.
#  The Header and Footer portions of Shell.xaml could hide or show elements (like a Log out button) based on the current view. And you can add more methods to App.xaml.cs to achieve this behavior (like App.IsLogoutVisible).
#  If you do not need the Header and Footer portions, simply skip all the steps and use the App.Current.MainWindow.Content property to set the view from any WPF user-control.
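Putting steps 4 to 8 together, the plumbing could look like the sketch below. This assumes the ContentControl is named ccShell and the initial view is Login.xaml; all class names here are illustrative:

```csharp
// Shell.xaml.cs - the navigation container
public partial class Shell : Window
{
    public Shell()
    {
        InitializeComponent();
        // Load the initial view via the instance method;
        // App.Current.MainWindow may not be assigned yet
        // while this constructor is still running.
        LoadContent(new Login());
    }

    // ccShell is the x:Name of the ContentControl in the Body portion
    public void LoadContent(UserControl ucShell)
    {
        ccShell.Content = ucShell;
    }
}

// App.xaml.cs - static helper so any view can trigger navigation
public partial class App : Application
{
    public static void LoadContent(UserControl ucShell)
    {
        Shell shellWindow = (Shell)App.Current.MainWindow;
        shellWindow.LoadContent(ucShell);
    }
}

// From any view, e.g. after a successful login in Login.xaml.cs:
// App.LoadContent(new Home());
```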

Hope this helps. Based on your feedback, I plan to cover my other WPF learnings like modal pop-ups, “Loading” animations and session management in future posts.

26 July, 2010

Open source / Free tools and frameworks (for .NET)

It's been a while now since my last post. In 2009-10, I was out of the country for 7 months on consulting assignments. Got an opportunity to learn exciting technologies and work on tight deadlines, but it drained me quite a bit too! 2010 has been a roller-coaster so far with more emphasis on personal commitments, but now things are in a steady state and I am back to blogging.

As you can see, my blog has undergone a few changes - a new template, comment moderation, links to websites, etc. One feature I could not figure out is "highlighting of author's comments". I tried the approach given here, but it did not work for me. It would be awesome if you could leave me a tip.

Ok, now let's start off with the post. In my current assignment, I am delighted to work with a couple of seasoned .NET technical architects. One of them (will link him once I get his profile) shared the above mind-map of open source and free tools and frameworks for .NET applications in general. The mind map is created using Mind Meister, again a free tool!

The list is a wonderful collation of information and I am impressed by the sheer number of free tools one can use. Visual Studio too is arguably one of the best IDEs available, but only the Developer and Team System editions really provide all the great features. So I am motivated to understand and use these tools and frameworks.

My current area of interest is IoC and DI containers. Planning to do a detailed post soon. Please leave your valuable comments. Hoping you could still help me with the author-comment feature :o)

21 July, 2008

WCF - Large Data Transfer - Best Practices

Transferring large volumes of data over the wire is indeed an architectural challenge for any application. The word “large” is relative and the range could be from MBs to GBs. There are several options for large data transfer, viz. FTP (out-of-band), Streaming, Chunking, etc.

The image above lists the viable options for large data transfer, with their pros and cons and possible usage scenarios.

Options mapped to scenarios:

1) Intranet scenario:


Assume the application is a line-of-business Windows based thick client. It is deployed within the organization intranet and interoperability is low priority.

Further, assume that the application consumes some intranet services which handle medium to high payloads (several MBs to a few GBs). For example, the application might consume an in-built WCF service for uploading large data to a central database. The last assumption is that the payload data is saved in the database as a single entity (in an image column).


FTP is not preferred since it is an out-of-band approach.

Chunking is not applicable since the complete data for the service has to be buffered on the server side and stored into the image column of the database. If the database stored the data as chunks, or if the data were persisted in a file system, chunking could have been leveraged. Also, since the service is intranet-scoped, chunking does not add much value.

Streaming over TCP is the best option for an intranet application especially when the payload data cannot be segmented.

Design considerations:

• In the request data contract, use Stream as the input type so that any kind of stream (FileStream, NetworkStream or MemoryStream) can be passed.

• Use NetTcpBinding, which is optimal for an intranet scenario but not interoperable.

• Set the message encoding to Binary, since interoperability is not required in an intranet scenario.

• Host the service in a console application, since a TCP service cannot be hosted in IIS.

• On the client end, before consuming the service, check whether it is started, since a TCP service host is not self-starting like IIS.
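A minimal sketch of such a streamed service, assuming hypothetical names (IUploadService, UploadService) and a console host, might look like:

```csharp
using System;
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IUploadService
{
    // The Stream must be the only parameter in the message body
    // for WCF streaming; any metadata goes into message headers.
    [OperationContract]
    void Upload(Stream data);
}

public class UploadService : IUploadService
{
    public void Upload(Stream data)
    {
        // Buffer the stream server-side and store it into the
        // image column (single-entity persistence, as assumed above).
    }
}

class Program
{
    static void Main()
    {
        // NetTcpBinding uses binary encoding by default.
        NetTcpBinding binding = new NetTcpBinding();
        binding.TransferMode = TransferMode.Streamed;
        binding.MaxReceivedMessageSize = long.MaxValue; // allow large payloads

        using (ServiceHost host = new ServiceHost(
            typeof(UploadService), new Uri("net.tcp://localhost:9000")))
        {
            host.AddServiceEndpoint(typeof(IUploadService), binding, "upload");
            host.Open();
            Console.WriteLine("Service started. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```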

Advantages of Streaming over TCP:

• NetTcpBinding is optimized for WCF-to-WCF communication and is the fastest mode of data transfer.

• Performance wise, streaming is much better than buffering / chunking.

• For servers, streaming is highly advantageous since it prevents the server from loading individual copies of a file into memory for concurrent requests, something that could quickly lead to out-of-memory exceptions and prevent any requests from being serviced.

• At the client end, streaming is helpful when files are very large, since the client may not have enough memory to buffer the entire file.

Disadvantages of Streaming over TCP:

• No support for interoperability.

• No support for message level security. Supports only transport level security.

• No support for reliable messaging. Since TCP is a stateful protocol, handling connection drops is considerably difficult and one might have to stream the complete data all over again.

• Scalability is an issue for stateful protocols like TCP. If N clients connect to the server, there would be N open connections which might result in a server crash.

2) Internet scenario – High payloads:


Consider a typical ASP.NET web application which needs to consume internet based services involving medium to high payloads (several MBs to a few GBs). For example, the web application needs to upload considerably large files to a central server.


FTP and Streaming are ruled out since FTP is out-of-band and streaming is not scalable in an internet scenario.

A possible option is to use the Chunking Channel (MS community code), which is a custom channel extended from the built-in channels offered by WCF. With this approach, data travels over the network in chunks (or pieces) and the complexity is handled within the channel / binding instead of the client or the server. It might be useful for trivial scenarios. However, there is a significant learning curve required to understand and adopt these custom channels. Another drawback is that advanced features like reliability (connection drop), durability, etc. are not supported out-of-box. So Chunking Channels are not a suitable option.

One more option is to use WCF Durable Services, introduced in WCF 3.5. This feature is a mechanism for developing WCF services with reliability and persistence support. It is a good candidate for long running processes and the construction is easy due to its attribute-based model. However, since the process context is dehydrated to and rehydrated from storage on each call, there is a significant performance overhead (benchmark results suggest up to 35%).

The best approach is to implement a chunking channel at the application level (over Http). In other words, the chunking is handled by the application (client and server) and not the network. This would keep the construction and the resulting APIs simple, while offering features like reliability (handling connection drop), durability, etc.


The application level chunking service can be used for upload / download of large data between client and server. Chunking works only when the client and the server are aware of the chunking mechanism and communicate appropriately. Some common parameters could be chunk size, number of chunks, etc.

Also, two potential storage providers, namely SQL Server and File System, could be configured. Note that for the SQL Server provider, the data should be stored as chunks (separate records in a table), whereas for the File System provider the storage area is just a Windows folder. Uploading and downloading data boils down to copying chunks of data across Windows folders in disparate systems.

Design considerations:

• For the internet scenario, the service could be configured with WSHttpBinding or BasicHttpBinding. If session support is needed, use the corresponding Context Binding.

• Use Text message encoding for interoperability, or MTOM for optimized transfer of binary data.

• Each end-point (based on binding) should be deployed in a separate service host.

• The data access providers (SQL Server and File System) could implement a common interface (the Provider model). The service implementation layer would instantiate the appropriate provider based on configuration settings. This ensures loose coupling and also provides extensibility.

• For SQL Server provider, a file upload means storing the file metadata and its chunks as records in tables. The file metadata could be stored in a master table and its chunks could go into a child table. Consequently, a file download involves reading the table contents and reconstructing the file on the client.

• For either provider, it is important to handle upload requests for files having the same names. A simple solution is to append GUID’s to get unique names. Other implementations might be needed based on the scenario.

• Another good practice is to use an InitializeUpload() or InitializeDownload() method to initialize the data transfer before the actual method calls. These initialization methods could establish the chunk size, number of chunks, status of the server, etc. so that the client and server are in synchronization.

• For testing purposes, it is useful to call the service methods asynchronously (using Callback methods) so that the status of the transfer (and each chunk) can be displayed to the client. However, for performance testing purposes do not use the asynchronous call mechanism.
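As a hypothetical sketch (all names below are illustrative, not from an actual implementation), the application-level chunking contract and the common provider interface might look like:

```csharp
using System.ServiceModel;

// Hypothetical application-level chunking contract.
[ServiceContract]
public interface IChunkedTransferService
{
    // Establishes the chunk size, number of chunks, etc. so that
    // the client and server are in sync before the transfer begins.
    // Returns a transfer identifier for the subsequent calls.
    [OperationContract]
    string InitializeUpload(string fileName, long fileSize, int chunkSize);

    // Uploads one chunk; the index lets the server detect gaps and
    // lets the client resume after a dropped connection (reliability).
    [OperationContract]
    void UploadChunk(string transferId, int chunkIndex, byte[] chunk);

    [OperationContract]
    void CompleteUpload(string transferId);
}

// Provider model: the SQL Server and File System providers implement
// a common interface; the service picks one based on configuration.
public interface IStorageProvider
{
    void SaveChunk(string transferId, int chunkIndex, byte[] chunk);
    byte[] ReadChunk(string transferId, int chunkIndex);
}
```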


References:

• MSDN's Chunking Channel sample with source code.

• Yasser Shohoud's high-level analysis on moving large data.

• Mike Taulty's insightful screencast on Durable Services.

• Jesus Rodriguez's write-up on Durable Services with code snippets.

14 May, 2008

ClickOnce - Deployment and Security aspects

What is ClickOnce?
ClickOnce is a deployment technology used for WPF, Windows and Console applications. A ClickOnce application can be installed and updated from a remote location like a web page, a network share or even a CD, and can be configured to download updates automatically. Further, a ClickOnce application can run in offline mode as well.
Unlike Windows Installer, ClickOnce provides several advanced features viz. update from web, custom permission sets, etc. Enabling or configuring ClickOnce can be done easily via the Security page of the Project Designer, while publishing can be done through the Publish page of the Project Designer.
The core ClickOnce deployment architecture is based on two XML manifest files: an application manifest and a deployment manifest.
The application manifest describes the application itself. This includes the assemblies, the dependencies and files that make up the application, the required permissions, and the location where updates will be available.
The deployment manifest describes how the application is deployed. This includes the location of the application manifest, and the version of the application that clients should run.

Trust Levels
In Partial Trust mode, the permission set can be custom or inherited from zones like Internet and Local Intranet. When specific zones are used, permission elevation (where end user can grant permission for uncommon actions) is supported.
It's quite easy to configure ClickOnce in Partial Trust when the smart client application performs actions such as File IO, Isolated Storage File IO, web service access or SQL database access. Each permission can be included in a custom permission set and tested during debugging itself.
Enterprise Library 3.1 blocks do not support partial trust out of the box. A small tweak is required to enable partial trust. This involves adding the attribute “AllowPartiallyTrustedCallers” in the AssemblyInfo.cs file of the ObjectBuilder source code.
However, SQL Server CE 3.5 does not support partial trust currently and hence full trust mode has to be used.
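The tweak mentioned above boils down to a single assembly-level attribute in the ObjectBuilder project (a minimal sketch of that edit):

```csharp
// AssemblyInfo.cs of the ObjectBuilder project
using System.Security;

// Allows partially trusted callers (such as a ClickOnce application
// running in Partial Trust) to call into this strong-named assembly.
[assembly: AllowPartiallyTrustedCallers]
```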

Suggested Best Practices
1) ClickOnce is a good deployment option, with useful features viz. automatic updates, Partial Trust support, etc.
2) The Calculate Permissions button on the Security page of the Project Designer estimates permissions very conservatively and is not a viable measure of required permissions. It is better to determine the minimum set of permissions manually.
3) When ClickOnce is deployed in Partial Trust mode, it is possible to debug the application in the partially trusted security context. This would identify any issues, even prior to deployment.
4) If EL 3.1 blocks are to be used in Partial Trust, the tweak (see References) would serve you well.
5) SQL Server CE does not support partially trusted callers and requires Full Trust.

References:
1) Detailed information about ClickOnce can be found here.
2) Partial trust support for SQL Server CE is ruled out in this MS forum post.
3) This blog post provides the tweak to run Enterprise Library 3.1 in partial trust mode.
4) Permission sets needed for various EL 3.1 blocks are given here.

09 April, 2008

.NET is dead. Long live .NET!!!!

The title is derived from the historic phrase "The King is dead. Long live the King!" signalling the immediate succession of a monarch. Yeah! .NET Framework 1.0 and 1.1 are on their path towards extinction. Microsoft's mainstream support has ended for .NET 1.0 and will end in October 2008 for .NET 1.1. Not to worry, a new bunch of leaders, read .NET 2.0, .NET 3.0 and .NET 3.5, have come to the fore now.

The focus of this article is to provide a brief history of the various .NET frameworks and related technologies. Major (not all) technology releases in chronological order:

.NET Framework 1.0
Visual Studio .NET 2002 (Rainier)

.NET Framework 1.1
Visual Studio .NET 2003 (Everett)

SQL Server 2005
Visual Studio 2005 (Whidbey)
Team Foundation Server 2005

.NET Framework 2.0
C# 2.0
ASP.NET AJAX 1.0 (initially ATLAS)
ASP.NET AJAX Control Toolkit
.NET Framework 3.0 (WinFX)

.NET Framework 3.5
Visual Studio 2008 (Orcas)
Team Foundation Server 2008

SQL Server 2008 (Katmai)
C# 3.0

My thoughts
Looking at Microsoft's "run-rate", newer technologies are being unleashed at shrinking intervals. Although this mandates a learning curve for the developer (and tester) community, it provides multiple benefits like shorter "time-to-market", increased productivity, more robust applications, etc. However, not all businesses would appreciate migrating to a newer technology every other year, due to the inherent costs and risks. But the possibilities are very positive! The latest technology offerings provide the fastest and most efficient way to develop enterprise-ready applications.
Scott Guthrie, the General Manager of the .NET Framework (and a very good blogger) sells the idea of "multi-targeting" support in his useful post. Do subscribe to his blog to get the latest updates.
Visual Studio Team System 2008 is one of the best IDEs around and offers specific flavors for multiple stakeholders, including database professionals and testers. It is a one-stop shop for project/build management (through TFS), development, testing, database work, performance tuning, deployment, etc. See the feature list in this nice post.
One more welcome move is an affinity towards being "open-source". The Microsoft patterns & practices team allows open source development at CodePlex and this has resulted in great guidance (code/patterns/best practices/etc.) to and from the community.
But, I personally feel that the naming / branding exercise should be done in a more intuitive way, rather than juggling around with numbers (why usage of decimals instead of whole numbers?)! See a discussion against recent naming conventions in .NET.

Note: The launch dates (rather, years) are accurate to the best of my (researched) knowledge; in case of any errors, do let me know and I will correct them.

06 March, 2008

WCF Durable Services

The advent of .NET 3.5 has made two of its core tenets, WCF and WF, interact with each other and become interdependent to a certain extent. One notable product of this association is WCF Durable Services.

What is it?
A Durable Service (new in WCF 3.5) is an implementation of a long-running WCF service, which persists the state of the service and its message contents on a per-client basis. The state can be persisted "out-of-process" and the client can resume execution from the last saved point.
The persisting and depersisting activities (also called dehydrating and rehydrating) are done immediately prior to and after a service method is called. The persistence store is fully configurable (via config file) and could be SQL Server, file system or any custom store.
A Durable Service inherently supports durability and reliability.
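As a rough illustration of the attribute model (all names below are mine, not from an actual project), a durable service could look like the sketch below. Note that durable services require a context-aware binding (such as WSHttpContextBinding), and the persistence store is pointed at via a persistenceProvider element in the service behavior configuration:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IFileTransferService
{
    [OperationContract] void BeginTransfer();
    [OperationContract] void SendChunk(int index, byte[] data);
    [OperationContract] void EndTransfer();
}

// Serializable so the instance state can be dehydrated to the store
// (e.g. SQL Server via SqlPersistenceProviderFactory, set in config).
[Serializable]
[DurableService]
public class FileTransferService : IFileTransferService
{
    private int lastChunkReceived = -1; // survives between calls

    // The first call creates (and persists) the durable instance.
    [DurableOperation(CanCreateInstance = true)]
    public void BeginTransfer() { lastChunkReceived = -1; }

    // State is rehydrated before and dehydrated after each call.
    [DurableOperation]
    public void SendChunk(int index, byte[] data) { lastChunkReceived = index; }

    // The final call removes the instance from the persistence store.
    [DurableOperation(CompletesInstance = true)]
    public void EndTransfer() { }
}
```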

Why is it required?
The most apt use of a Durable Service would be in business scenarios with heavy data-dependent tasks that require per-client persistence and high reliability. One such candidate might be a large file transfer system, where the file is split into multiple smaller chunks and transferred between client and server. If the client loses a session or a chunk, Durable Services would enable the client to resume the download / upload activity without having to redo the entire file transfer all over again.

Bucking the trend?
The general design of a typical service (say web service) is to perform the requested operation and terminate the state of the service immediately. It was always considered a bad practice to persist the state of the service itself. However, Durable Services (are intended to) persist the bare minimum data with which the client can resume the activity.

Pros and Cons
The main advantages of Durable Services are persistence and reliability.
One obvious disadvantage is the performance overhead brought about by the persistence activities in each call.

References:
1) Mike Taulty's informative screencast.
2) Jesus Rodriguez's crisp write-up with code snippets.
3) Udi Dahan's podcast on Durable Services with WCF, WF and NServiceBus.
4) MSDN's calculator sample.
5) I have explored Durable Services to a good depth as part of my work assignments. I have implemented a large file (2 GB or more) download system with Durable Services (WSHttpBinding and SQL Express).

22 January, 2008

Credit Card WOW's!

Is a credit card a woe or a wow? Read on and be surprised by the power of credit cards!

Personally, I own credit cards and have cultivated innovative ways of utilizing them based on my experiences. Below is my list of tips to maximize the benefits of your credit card(s) in India:

1) Repayment (Don’t pay interest):

  • Pay your FULL credit card bill each month. This way, you can completely avoid the interest charges (as high as 36% per year!). Yes, if you pay the entire amount billed in your credit card statement, you do not pay any interest to the company.
  • As a credit card holder, your primary responsibility is to keep your debt manageable. Remember that the interest does not reduce to zero until you repay the full amount at some point.
  • In case you are unable to pay the full amount due to financial constraints, refer to points 2 and 3 below.

2) Balance transfer (Pay in parts, keep it dormant):

  • You can transfer the balance from one credit card to another credit card (of yours, not others) at nominal interest rates (about 0% to 3% up to 6 months). This is a much better option to repay high debt.
  • Once you transfer the balance to your 2nd credit card, do not use it (keep it dormant!) until you clear off the balance (in convenient part payments). This would avoid high interest rates as described in point 1.
  • My advice is to keep at least 2 credit cards so that you can do this effectively. Think of it as a short-term low-interest unsecured loan!

3) EMI (Pay monthly, keep it dormant):

  • If you purchase an expensive item (say digital camera, cell phone, etc.), go for the EMI option. You may choose this option while swiping the card or through customer care. Some companies offer 0% EMI schemes for fresh purchases.
  • Treat EMI similar to a balance transfer and don’t use the card until you clear off all EMI payments.

4) Reward points (Earn and use):

  • Most cards earn reward points on each purchase. Keep track of these points and redeem them against an array of gifts.
  • A simple advice is to pay all your utility bills using credit card and pay the full credit card bill each month. This way, you can analyze your spends, pay 0% interest and still earn reward points!

5) Charges (Be aware):

  • Opt for credit cards with features such as a “free-for-life” option, no annual / joining fees, no non-usage fees, etc.
  • Do not choose the life-insurance option with the card unless you want to pay the premium. Look, understand and fill / sign the application forms.
  • Most companies offer a one-time waiver of late payment fees. Call the customer care and avail this option when needed. Pay your dues before the due date.

6) Special offers (Don’t be tempted):

  • Special offers are announced during festivals and special occasions. Look before you leap here, since such offers may carry hidden charges.

7) Convenience (Click and forget):

  • A credit card allows you to pay most of your bills (power, water, mobile, landline, life-insurance, vehicle-insurance, etc.) from the convenience of your home or office.

8) Security (Better safe than sorry):

  • Minimize the risk of credit card hacks by always using your cards on trusted computers (a cyber café is a big NO-NO!), on SSL-enabled websites, etc.
  • In case of any discrepancy, block the credit card and complain to customer care immediately. You might have to send a written complaint in some cases.
  • Maintain complex passwords and uncommon PIN combinations. Change them frequently, if possible.
  • Get online access and monitor your card usage frequently.

9) Customer care (Call 24*7):

  • It’s a pain to get through to the customer care sometimes. But, we need to be patient and utilize the available services well.
  • You could call at off-peak hours to avoid long wait-times or use email.

Hope the above tips change your opinion about credit cards and influence you into trying them. You may still choose to avoid credit cards due to their inherent misuse (by self or others), but please don’t view me as a blog-marketer of credit cards :).

.NET Reference Source project launched!

Moving away from tradition, Microsoft has launched the .NET Reference Source project. Yeah, Microsoft has gone "open-source". Basically, this new feature would allow developers and enthusiasts to view / debug the .NET Framework source code in Visual Studio 2008 IDE.

What's in it for me?
  • Guidance: The source code would reveal the patterns and practices followed (and preached) by the Microsoft team. This would help architects and developers to follow (or customize) these best practices for their solutions.

  • Defect identification: The source of an application defect can now be traced into the .NET Framework source code (previously, debugging stopped at the user code). System defects can be reported to Microsoft and alternative code flows can be used temporarily.

  • Community feedback: The biggest advantage of "open-source" concept is the feedback received from the community. Microsoft could incorporate developer feedback in their further releases and make the .NET brand more robust.

  • Out-of-the-box: Once the small hotfix (see suggested readings below) is installed, the source code (.NET 2.0/3.0/3.5 and beyond) can be viewed or debugged in Visual Studio 2008 (only). I admit that a few one-time configurations (to download symbols to a local cache) are required. Currently, most of the Framework classes are supported and indications are that more (or all) classes will be added in the future.
Suggested readings

    • Scott Guthrie introduces this concept nicely in his blog.
    • Shawn Burke's blog is a detailed write-up on this concept and illustrates technical details step-by-step.

Hope this post and the references encourage you to appreciate (or utilize) Microsoft's initiative.