Potty Little Details

Just another WordPress.com weblog

Connection Pooling for the SQL Server .NET Data Provider


Pooling connections can significantly enhance the performance and scalability of your application. The SQL Server .NET Data Provider provides connection pooling automatically for your ADO.NET client application. You can also supply several connection string modifiers to control connection pooling behavior (see the section “Controlling Connection Pooling with Connection String Keywords” later in this topic).

Pool Creation and Assignment

When a connection is opened, a connection pool is created based on an exact matching algorithm that associates the pool with the connection string in the connection. Each connection pool is associated with a distinct connection string. When a new connection is opened, if the connection string is not an exact match to an existing pool, a new pool is created.

In the following example, three new SqlConnection objects are created, but only two connection pools are required to manage them. Note that the first and second connection strings differ by the value assigned for Initial Catalog.

SqlConnection conn1 = new SqlConnection();
conn1.ConnectionString = "Integrated Security=SSPI;Initial Catalog=northwind";
conn1.Open();
// Pool A is created.

SqlConnection conn2 = new SqlConnection();
conn2.ConnectionString = "Integrated Security=SSPI;Initial Catalog=pubs";
conn2.Open();
// Pool B is created because the connection strings differ.

SqlConnection conn3 = new SqlConnection();
conn3.ConnectionString = "Integrated Security=SSPI;Initial Catalog=northwind";
conn3.Open();
// The connection string matches pool A.

Once created, connection pools are not destroyed until the active process ends. Maintenance of inactive or empty pools involves minimal system overhead.

Connection Addition

A connection pool is created for each unique connection string. When a pool is created, multiple connection objects are created and added to the pool so that the minimum pool size requirement is satisfied. Connections are added to the pool as needed, up to the maximum pool size.

When a SqlConnection object is requested, it is obtained from the pool if a usable connection is available. To be usable, the connection must currently be unused, have a matching transaction context or not be associated with any transaction context, and have a valid link to the server.

If the maximum pool size has been reached and no usable connection is available, the request is queued. The object pooler satisfies these requests by reallocating connections as they are released back into the pool. If the time-out period (determined by the Connect Timeout connection string property) elapses before a connection object can be obtained, an error occurs.

CAUTION You must always close the Connection when you are finished using it. This can be done using either the Close or Dispose methods of the Connection object. Connections that are not explicitly closed are not added or returned to the pool.
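The C# using statement is the idiomatic way to guarantee this. A minimal sketch (the connection string is a placeholder):

```csharp
// Dispose (and therefore Close) runs even if an exception is thrown,
// so the connection is always returned to the pool.
using (SqlConnection conn = new SqlConnection(
    "Integrated Security=SSPI;Initial Catalog=northwind"))
{
    conn.Open();
    // ... use the connection ...
} // conn is returned to the pool here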

Connection Removal

The object pooler will remove a connection from the pool if the connection lifetime has expired, or if the pooler detects that the connection with the server has been severed. Note that this can be detected only after attempting to communicate with the server. If a connection is found that is no longer connected to the server, it is marked as invalid. The object pooler periodically scans connection pools looking for objects that have been released to the pool and are marked as invalid. These connections are then permanently removed.

If a connection exists to a server that has disappeared, it is possible for this connection to be drawn from the pool even if the object pooler has not detected the severed connection and marked it as invalid. When this occurs, an exception is generated. However, you must still close the connection in order to release it back into the pool.

Transaction Support

Connections are drawn from the pool and assigned based on transaction context. The context of the requesting thread and the assigned connection must match. Therefore, each connection pool is actually subdivided into connections with no transaction context associated with them, and into N subdivisions that each contain connections with a particular transaction context.

When a connection is closed, it is released back into the pool and into the appropriate subdivision based on its transaction context. Therefore, you can close the connection without generating an error, even though a distributed transaction is still pending. This allows you to commit or abort the distributed transaction at a later time.

Controlling Connection Pooling with Connection String Keywords

The ConnectionString property of the SqlConnection object supports connection string key/value pairs that can be used to adjust the behavior of the connection pooling logic.

The following table describes the ConnectionString values you can use to adjust connection pooling behavior.

Connection Lifetime (default: 0)
    When a connection is returned to the pool, its creation time is compared with the current time, and the connection is destroyed if that time span (in seconds) exceeds the value specified by Connection Lifetime. This is useful in clustered configurations to force load balancing between a running server and a server just brought online. A value of zero (0) causes pooled connections to have the maximum time-out.

Connection Reset (default: 'true')
    Determines whether the database connection is reset when it is removed from the pool. For Microsoft SQL Server version 7.0, setting this to false avoids an additional server round trip when obtaining a connection, but be aware that connection state, such as the database context, is then not reset.

Enlist (default: 'true')
    When true, the pooler automatically enlists the connection in the current transaction context of the creation thread, if a transaction context exists.

Max Pool Size (default: 100)
    The maximum number of connections allowed in the pool.

Min Pool Size (default: 0)
    The minimum number of connections maintained in the pool.

Pooling (default: 'true')
    When true, the connection is drawn from the appropriate pool or, if necessary, created and added to the appropriate pool.
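For example, a connection string combining several of these keywords might look like this (the server and database names are placeholders):

```csharp
// Min/Max Pool Size, Connection Lifetime, and Connection Reset are the
// pooling keywords described above; MyServer/northwind are illustrative.
string connString =
    "Data Source=MyServer;Initial Catalog=northwind;Integrated Security=SSPI;" +
    "Min Pool Size=5;Max Pool Size=50;Connection Lifetime=120;Connection Reset=true";

using (SqlConnection conn = new SqlConnection(connString))
{
    conn.Open(); // a pool keyed to this exact string is created on first Open
}
```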

Performance Counters for Connection Pooling

The SQL Server .NET Data Provider adds several performance counters that enable you to fine-tune connection pooling characteristics, detect intermittent problems related to failed connection attempts, and detect problems related to timed-out requests to your SQL Server.

The following table lists the connection pooling counters that can be accessed in Performance Monitor under the “.NET CLR Data” performance object.

SqlClient: Current # of pooled and non pooled connections
    Current number of connections, pooled or not.

SqlClient: Current # pooled connections
    Current number of connections in all pools associated with the process.

SqlClient: Current # connection pools
    Current number of pools associated with the process.

SqlClient: Peak # pooled connections
    The highest number of connections in all pools since the process started. Note: this counter is available only when associated with a specific process instance; the _Global instance always returns 0.

SqlClient: Total # failed connects
    The total number of connection open attempts that have failed for any reason.
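These counters can also be read programmatically through System.Diagnostics. A sketch (the instance name "myapp" is an assumption; enumerate the category's instances to find the one for your process):

```csharp
using System;
using System.Diagnostics;

class CounterDemo
{
    static void Main()
    {
        // Category and counter names as listed above; "myapp" is a
        // hypothetical instance name derived from the process name.
        PerformanceCounter pooled = new PerformanceCounter(
            ".NET CLR Data",
            "SqlClient: Current # pooled connections",
            "myapp");
        Console.WriteLine("Pooled connections: {0}", pooled.NextValue());
    }
}
```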


Written by oneil

September 11, 2008 at 2:54 am

Posted in ADO dot net

Rewriting or Redirecting URLs


Fritz Onion (http://pluralsight.com/blogs/fritz/archive/2004/07/21/1651.aspx) has a great URL redirecting engine, very similar to the URL rewriting module written for DasBlog (http://www.sf.net/projects/dasblogce). There is often confusion between URL redirecting and URL rewriting.

Redirecting is the server's way of informing the client that something has moved. For example, a browser requests http://www.computerzen.com and the server responds with an HTTP 302 status code that points the browser to http://www.hanselman.com/blog. The browser then has to request http://www.hanselman.com/blog itself, and receives an HTTP 200 status code indicating success. During this redirection, the URL in the browser's address bar is updated, so the final successful URL is what ultimately appears there.

Rewriting, on the other hand, occurs entirely on the server side; the browser only requests a page once and the address bar’s displayed URL doesn’t change. For example, if you type http://www.hanselman.com/blog/zenzoquincy.aspx into your browser’s address bar, you’ll get a page showing off my infant son. However, the file zenzoquincy.aspx doesn’t actually exist anywhere on the disk. The only page that does exist is permalink.aspx, the page that my blog engine uses to show all blog posts. The real page is permalink.aspx?guid=cee8aa6e-de46-43ad-8d27-e1c764df30f5. However, that unique post ID isn’t very memorable and certainly not any fun. When the blog engine I run, DasBlog, sees ZenzoQuincy.aspx requested, it looks in its data store to see whether the words “ZenzoQuincy” are associated with a unique blog post ID and then rewrites the requested URL on-the-fly, on the server side, and ASP.NET continues dispatching the request.

URL redirecting and URL rewriting are together the most powerful techniques you have available to control the URL presented to the user, as well as to maintain your site’s permalinks. It is very important to most website content owners that their links remain permanent, hence “permalink.” Netiquette—Internet etiquette—dictates that if the URL does change, then you at least provide a redirect to inform the browser automatically that the resource has moved. As a protocol, HTTP provides two ways to alert the browser: the first is a temporary redirect, or 302, and the second is a permanent redirect, or 301.

To extend the example, my website uses a temporary redirect to send visitors from http://www.computerzen.com to http://www.hanselman.com/blog. It’s temporary because I might change the location of my blog at some point, pointing my top-level domain somewhere else. I use a permanent redirect for my blog’s RSS (Rich Site Summary) feed to inform aggregators and syndicators that I would prefer they always use a specific URL. When aggregators receive a 301, or permanent redirect, they know to update their own data and never visit the original URL again.

Fritz's HttpModule uses a configuration section containing regular expressions that match target URLs to destination URLs via a redirect. Note that Fritz's module, like most rewriting modules, uses regular expressions because they give a concise description of intent. For example, /(fritz|aaron|keith|mike)/rss\.xml matches both the strings /fritz/rss.xml and /mike/rss.xml. Regular expressions are used in both the target and destination URL. The destination URL uses an expression like /blogs/$1/rss.aspx, where $1 is the first parenthesized match, in this case "fritz" or "mike".

A simple hard-coded 301 redirect looks like this within ASP.NET:

Response.StatusCode = 301;
Response.Status = "301 Moved Permanently";
Response.RedirectLocation = "http://www.hanselman.com/blog";
Response.End();
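For comparison, a temporary (302) redirect is what the built-in Response.Redirect emits:

```csharp
// Sends an HTTP 302 with a Location header pointing at the new URL.
Response.Redirect("http://www.hanselman.com/blog");
```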

Here’s an example of how DasBlog uses URL rewriting to service HTTP requests for files that don’t exist on the file system. Within my blog’s web.config file is a custom configuration section that includes regular expressions that are matched against the requested file. For example, the file http://www.hanselman.com/blog/rss.ashx doesn’t exist. There’s no handler for it, and the file doesn’t exist on disk. However, I’d like people to think of it as my main URL for the RSS XML content on my site. I’d like to easily change which service handles it internally with just a configuration change. I add this exception to my web.config custom section:
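A mapping entry of this kind might look roughly like the following sketch. The exact element and attribute names are assumptions; the section is read as a NameValueCollection in which the key is the match expression and the value is the rewrite target:

```xml
<newtelligence.DasBlog.UrlMapper>
  <!-- key: regex matched against the request; value: rewrite target -->
  <add key="rss\.ashx" value="{basedir}SyndicationService.asmx/GetRss" />
</newtelligence.DasBlog.UrlMapper>
```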

Note that it is mapped to “http://www.hanselman.com/blog/SyndicationService.asmx/GetRss” with the {basedir} having expanded. That URL isn’t nearly as friendly as rss.ashx, is it? Remember that the name rss.ashx isn’t special, it’s just unique. I picked it because the extension was already mapped within ASP.NET. It could have been something else like foo.bar, as long as the .bar extension was mapped to ASP.NET within the IIS configuration.

private void HandleBeginRequest(object sender, EventArgs evargs)
{
    HttpApplication app = sender as HttpApplication;
    string requestUrl = app.Context.Request.Url.PathAndQuery;
    NameValueCollection urlMaps = (NameValueCollection)
        ConfigurationSettings.GetConfig("newtelligence.DasBlog.UrlMapper");
    for (int loop = 0; loop < urlMaps.Count; loop++)
    {
        string matchExpression = urlMaps.GetKey(loop);
        Regex regExpression = new Regex(matchExpression,
            RegexOptions.IgnoreCase | RegexOptions.Singleline |
            RegexOptions.CultureInvariant | RegexOptions.Compiled);
        Match matchUrl = regExpression.Match(requestUrl);
        if (matchUrl != null && matchUrl.Success)
        {
            string mapTo = urlMaps[matchExpression];
            // Find {token} placeholders in the destination URL.
            Regex regMap = new Regex("\\{(?<expr>\\w+)\\}");
            foreach (Match matchExpr in regMap.Matches(mapTo))
            {
                Group urlExpr;
                string expr = matchExpr.Groups["expr"].Value;
                urlExpr = matchUrl.Groups[expr];
                if (urlExpr != null)
                {
                    mapTo = mapTo.Replace("{" + expr + "}", urlExpr.Value);
                }
            }
            app.Context.RewritePath(mapTo);
        }
    }
}
It starts by getting the NameValueCollection of URLs from the web.config file. The regular expression for each potential match is run against the request URL, which is pulled from HttpContext.Current.Request.Url.PathAndQuery. If an expression matches, each token in the requested URL is mapped to its spot in the destination URL; a token such as {postid}, for example, is extracted from the request URL and reused in the destination. Any good URL rewriting engine supports this in some fashion, whether by {token} or by numeric position such as $1.

Check out the source code for DasBlog, or one of the other redirecting/rewriting modules I’ve mentioned, for more details and ideas on how you can create more “hackable” URLs for your application. Christopher Pietschmann has a nice VB version for ASP.NET 2.0 at http://pietschsoft.com/blog/post.aspx?postid=762.

Written by oneil

September 10, 2008 at 2:12 pm

Posted in ASP DOT NET

IP Blacklisting


I do not have direct access to the IIS administrative console at my ISP, so when I wanted to block some troublesome IP addresses that were spamming my blog, I needed a software solution that would run in ASP.NET. An HttpModule was the easiest solution to write—it is easily configurable and easily added to my ASP.NET application without recompiling. This module will listen on the BeginRequest event that fires for every HttpRequest that comes into the configured application. Because modules like this listen on every request, you’ll want to be especially diligent about the work you do and get out of the module as soon as possible.
An IP Blacklisting HttpModule
using System;
using System.Text.RegularExpressions;
using System.IO;
using System.Web;
using System.Web.Caching;
using System.Collections.Specialized;

namespace MVPHacks
{
public class IPBlackList : IHttpModule
{
public IPBlackList(){}

void IHttpModule.Dispose(){}

void IHttpModule.Init(HttpApplication context)
{
context.BeginRequest += new EventHandler(this.HandleBeginRequest);
}

const string FILE = "~/blockedips.config";
const string CACHEKEY = "blockedips";

public static StringDictionary GetBlockedIPs(HttpContext context)
{
StringDictionary ips = (StringDictionary)context.Cache[CACHEKEY];
if (ips == null)
{
ips = GetBlockedIPs(GetBlockedIPsFile(context));
context.Cache.Insert(CACHEKEY, ips,
new CacheDependency(GetBlockedIPsFile(context)));
}
return ips;
}

private static string BlockedIPFileName = null;
private static object blockedIPFileNameObject = new object();
public static string GetBlockedIPsFile(HttpContext context)
{
if (BlockedIPFileName != null) return BlockedIPFileName;
lock(blockedIPFileNameObject)
{
if (BlockedIPFileName == null)
{
BlockedIPFileName = context.Server.MapPath(FILE);
}
}
return BlockedIPFileName;
}

public static StringDictionary GetBlockedIPs(string configPath)
{
StringDictionary retval = new StringDictionary();
using (StreamReader sr = new StreamReader(configPath))
{
String line;
while ((line = sr.ReadLine()) != null)
{
line = line.Trim();
if (line.Length != 0)
{
if (retval.ContainsKey(line) == false)
{
retval.Add(line, null);
}
}
}
}
return retval;
}

private void HandleBeginRequest( object sender, EventArgs evargs )
{
HttpApplication app = sender as HttpApplication;
if ( app != null )
{
string IPAddr = app.Context.Request.ServerVariables["REMOTE_ADDR"];
if (IPAddr == null || IPAddr.Length == 0)
{
return;
}

//Block the PHPBB worm and other WGET-based worms
if (app.Context.Request.QueryString["rush"] != null ||
app.Context.Request.RawUrl.IndexOf("wget") != -1)
{
app.Context.Response.StatusCode = 404;
app.Context.Response.SuppressContent = true;
app.Context.Response.End();
return;
}

StringDictionary badIPs = GetBlockedIPs(app.Context);
if (badIPs != null && badIPs.ContainsKey(IPAddr))
{
app.Context.Response.StatusCode = 404;
app.Context.Response.SuppressContent = true;
app.Context.Response.End();
return;
}
}
}
}
}
With this HttpModule in place, I can upload a text file called blockedips.config with one IP address per line to my site, and the changes are recognized immediately. The IP addresses are stored in ASP.NET's cache as a StringDictionary, and the cached object is invalidated if the underlying file is updated.

Notice that this module returns a 404 when an IP address is blocked. I wanted to discourage the spammers as much as possible, so I decided to fool them into thinking my website had no content at all.

The assembly’s qualified name (QN) for this or any HttpModule is added to the httpModules section of the application’s web.config. A QN consists of the full namespace and class name, a comma, and then the actual assembly filename without the .dll extension.
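For this module, assuming the compiled assembly is named MVPHacks.dll (an assumption; use your actual assembly name), the registration might look like this:

```xml
<configuration>
  <system.web>
    <httpModules>
      <!-- type = "Namespace.Class, AssemblyFileNameWithoutExtension" -->
      <add name="IPBlackList" type="MVPHacks.IPBlackList, MVPHacks" />
    </httpModules>
  </system.web>
</configuration>
```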

Written by oneil

September 10, 2008 at 2:10 pm

Posted in ASP DOT NET

Moving ViewState to the Bottom of the Page


If your sites stay in the first few pages of Google’s search results, it is said that you have “Google juice.” Many developers worry that because ViewState’s hidden form field appears very early in the page and almost always before any meaningful content, web bots and spiders such as Google won’t bother looking past a giant glob of ViewState. To get around this problem you may want to move ViewState to the bottom of a rendered page. You likely wouldn’t want to take the performance hit on every page for this hack, but certainly it’s reasonable on occasion. It also gives you a great opportunity to override Render and seriously mess with the resulting HTML to include any other hacks or HTML modifications that you were previously unable to do using standard techniques.

Let’s try this technique first. Override your page’s Render method and call up to the base class’s Render and insist that the page render in its entirety. The downside here, of course, is that by hacking into the render you’re bypassing the buffered writing to the response output and dealing with strings. It’s almost as if you’re saying to the page, “Render yourself … stop! Wait. OK, continue, I’m done messing around.”

Moving ViewState to the bottom of the page—technique 1

C#

protected override void Render(System.Web.UI.HtmlTextWriter writer)
{
    System.IO.StringWriter stringWriter = new System.IO.StringWriter();
    HtmlTextWriter htmlWriter = new HtmlTextWriter(stringWriter);
    base.Render(htmlWriter);
    string html = stringWriter.ToString();
    // Find the hidden __VIEWSTATE input and cut it out of the markup.
    int StartPoint = html.IndexOf("<input type=\"hidden\" name=\"__VIEWSTATE\"");
    if (StartPoint >= 0)
    {
        int EndPoint = html.IndexOf("/>", StartPoint) + 2;
        string viewstateInput = html.Substring(StartPoint, EndPoint - StartPoint);
        html = html.Remove(StartPoint, EndPoint - StartPoint);
        // Re-insert it just before the closing form tag.
        int FormEndStart = html.IndexOf("</form>") - 1;
        if (FormEndStart >= 0)
        {
            html = html.Insert(FormEndStart, viewstateInput);
        }
    }
    writer.Write(html);
}

Moving ViewState to the bottom of the page—technique 2
static readonly Regex viewStateRegex = new Regex(
    @"(<input type=""hidden"" name=""__VIEWSTATE""[^>]*/>)",
    RegexOptions.Multiline | RegexOptions.Compiled);

static readonly Regex endFormRegex = new Regex(
    @"</form>",
    RegexOptions.Multiline | RegexOptions.Compiled);

protected override void Render(HtmlTextWriter writer)
{
//Defensive coding checks removed for speed and simplicity.
//If the expressions fail to match, an exception will be thrown here.
System.IO.StringWriter stringWriter = new System.IO.StringWriter();
HtmlTextWriter htmlWriter = new HtmlTextWriter(stringWriter);
base.Render(htmlWriter);

string html = stringWriter.ToString();
Match viewStateMatch = viewStateRegex.Match(html);
string viewStateString = viewStateMatch.Captures[0].Value;
html = html.Remove(viewStateMatch.Index,viewStateMatch.Length);

Match endFormMatch = endFormRegex.Match(html,viewStateMatch.Index);
html = html.Insert(endFormMatch.Index,viewStateString);
writer.Write(html);
}

Written by oneil

September 9, 2008 at 4:31 pm

Posted in ASP DOT NET

Alternative Storage for ViewState


Storing ViewState within the returned HTML page is convenient, but when ViewState gets to be more than 20K or 30K, it may be time to consider alternative locations for storage. You might want to store ViewState in the ASP.NET Session object, on the file system, or in the database to minimize the amount of data shipped back and forth to the user’s browser.

The most advanced ViewState hacking product that I've seen is the Flesk ViewStateOptimizer (http://www.flesk.net), as it enables you to compress ViewState and move it to a file or to the session. When moving ViewState to the Session object, you have to take into consideration how ViewState is supposed to work. Remember that when ViewState is stored in a page's hidden form field, the ViewState for a page is literally stored along with the page itself. This is an important point, so read it again. When you choose to store ViewState elsewhere, separated from the page, you need a way to correlate the two. Your first reaction might be to think that each user needs a copy of the ViewState for each page they visit. However, it's not as simple as the equation "users * pages = number of ViewState instances," because a user can and will visit a page multiple times. Each page instance needs its own copy of ViewState.

There are many ways to squirrel away ViewState. Some folks think that creating unique files on the file system and then collecting them later is a good technique. Personally, I would always rather add more memory than be bound to the always slower disk. No matter where you store ViewState, if you store it outside the page, then you’ll need some process to later delete the older bits of state. This could take the form of a scheduled task that deletes files or a SQL Server job that removes rows from a database.

Folks who really want to get their ViewState out of the page have tried many ways to solve this problem. Snippet below shows a hack that stores the ViewState value within the user’s ASP.NET Session using a unique key per request. The ViewState string value that is usually sent out in a hidden form field is stored in the Session using the unique key, and then that considerably smaller key is put into a hidden form field instead. Therefore, each page gets its own Guid, as each HTTP request is unique. This Guid is declared as the ViewState “key” and is stored as its own hidden form field. This key is then used to store the ViewState in the Session.
Storing ViewState in the ASP.NET Session

private string _pageGuid = null;
public string PageGuid
{
get
{
//Do we have it already? Check the Form, this could be a post back
if (_pageGuid == null)
_pageGuid = this.Request.Form["__VIEWSTATE_KEY"];
//No? We’ll need one soon.
if (_pageGuid == null)
_pageGuid = Guid.NewGuid().ToString();
return _pageGuid;
}
set
{
_pageGuid = value;
}
}

protected override object LoadPageStateFromPersistenceMedium()
{
return Session[this.PageGuid];
}

protected override void SavePageStateToPersistenceMedium(object viewState)
{
RegisterHiddenField("__VIEWSTATE_KEY", this.PageGuid);
Session[this.PageGuid] = viewState;
}

The load and save methods are very simple, just storing the PageGuid in the Session object and the ViewState object within the Session. The real magic happens in the new PageGuid page-level property. When the PageGuid is requested the first time, the form is checked for the unique key. If it’s not there, a new key is created because it will likely be needed soon after.

Written by oneil

September 9, 2008 at 4:30 pm

Posted in ASP DOT NET

LosFormatter: The Missing Serializer


The limited object serialization (LOS) formatter is designed for highly compact ASCII format serialization. This class supports serializing any object graph, but is optimized for those containing strings, arrays, and hashtables. It offers second-order optimization for many of the .NET primitive types. This is a private format, and needs to remain consistent only for the lifetime of a web request. You are not allowed to persist objects serialized with this formatter for any significant length of time. The LosFormatter fills a gap between the verbose XmlSerializer and the terse BinaryFormatter. You can think of the LosFormatter as a BinaryFormatter "light" that is optimized for objects containing very simple types. It's also a nice convenience that LosFormatter creates an ASCII string representation of your object graph.
Serializing an object with the LosFormatter
string LosSerializeObject(object obj)
{
System.Web.UI.LosFormatter los = new System.Web.UI.LosFormatter();
StringWriter writer = new StringWriter();
los.Serialize(writer, obj);
return writer.ToString();
}
The LosFormatter creates an interestingly formatted string just before Base64-encoding it. For example, the encoded string

aTw1Pg==

looks like this when decoded:

i<5>

This indicates an integer with the value 5. It's not XML even though it uses angle brackets; it's just an encoding scheme. You may think it odd that the encoded form uses more bytes than the value itself, but remember that the "==" at the end of a Base64-encoded string is standard padding, and the relative overhead shrinks as the encoded value grows longer. Usually you won't need to use the LosFormatter, but it's good to know it's available in your toolbox.
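You can confirm the decoding yourself with a couple of lines:

```csharp
using System;
using System.Text;

class DecodeDemo
{
    static void Main()
    {
        byte[] raw = Convert.FromBase64String("aTw1Pg==");
        Console.WriteLine(Encoding.ASCII.GetString(raw)); // prints: i<5>
    }
}
```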

Put an integer value in ViewState programmatically as in the following code. You'll be storing this integer within ViewState as the page loads. The ASP.NET subsystem will serialize everything within ViewState as the page renders, which means that the number 5 in this example will be serialized into the __VIEWSTATE hidden form field. We can then use an inspection utility to examine the contents of the hidden form field and see what it holds.

private void Page_Load(object sender, System.EventArgs e)
{
ViewState["example"] = 5;
}

object RetrieveObjectFromViewState( string serializedObject)
{
System.Web.UI.LosFormatter los = new System.Web.UI.LosFormatter();
return los.Deserialize(serializedObject);
}

Written by oneil

September 9, 2008 at 4:25 pm

Posted in ASP DOT NET

Encrypting the Message Transfer -Dotnet Remoting


Encrypting the Transfer
Even though an asymmetric/symmetric combination such as HTTPS/SSL provides the only real security for encrypting network traffic, in some situations HTTPS isn't an option.

First, .NET Remoting by default only supports encryption when using an HTTP channel and when hosting the server-side components in IIS. If you want to use a TCP channel or host your objects in a Windows service, there’s no default means of secure communication.

Second, even if you use IIS to host your components, callbacks that are employed with event notification will not be secured. This is because your client (which is the server for the callback object) does not publish its objects using HTTPS, but only HTTP.

Essential Symmetric Encryption
Symmetric encryption is based on one key fact: client and server will have access to the same encryption key. This key is not a password as you might know it, but instead is a binary array in common sizes from 40 to 192 bits. Additionally, you have to choose from among a range of encryption algorithms supplied with the .NET Framework: DES, TripleDES, RC2, or Rijndael.

To generate a random key for a specified algorithm, you can use the following code snippet. You will find the key in the byte[] variable mykey afterwards.

String algorithmName = "TripleDES";
SymmetricAlgorithm alg = SymmetricAlgorithm.Create(algorithmName);

int keylen = 128;
alg.KeySize = keylen;
alg.GenerateKey();

byte[] mykey = alg.Key;

Because each algorithm has a limited choice of valid key lengths, and because you might want to save this key to a file, you can use the separate KeyGenerator console application, which is shown below.
A Complete Keyfile Generator

using System;
using System.IO;
using System.Security.Cryptography;
class KeyGen
{
static void Main(string[] args)
{
if (args.Length != 1 && args.Length != 3)
{
Console.WriteLine("Usage:");
Console.WriteLine("KeyGenerator <Algorithm> [<KeyLen> <Outfile>]");
Console.WriteLine("Algorithm can be: DES, TripleDES, RC2 or Rijndael");
Console.WriteLine();
Console.WriteLine("When only <Algorithm> is specified, the program");
Console.WriteLine("will print a list of valid key sizes.");
return;
}

String algorithmname = args[0];

SymmetricAlgorithm alg = SymmetricAlgorithm.Create(algorithmname);

if (alg == null)
{
Console.WriteLine("Invalid algorithm specified.");
return;
}

if (args.Length == 1)
{
// just list the possible key sizes
Console.WriteLine("Legal key sizes for algorithm {0}:", algorithmname);
foreach (KeySizes size in alg.LegalKeySizes)
{
if (size.SkipSize != 0)
{
for (int i = size.MinSize;i<=size.MaxSize;i=i+size.SkipSize)
{
Console.WriteLine("{0} bit", i);
}
}
else
{
if (size.MinSize != size.MaxSize)
{
Console.WriteLine("{0} bit", size.MinSize);
Console.WriteLine("{0} bit", size.MaxSize);
}
else
{
Console.WriteLine("{0} bit", size.MinSize);
}
}
}
return;
}

// user wants to generate a key
int keylen = Convert.ToInt32(args[1]);
String outfile = args[2];
try
{
alg.KeySize = keylen;
alg.GenerateKey();
FileStream fs = new FileStream(outfile,FileMode.CreateNew);
fs.Write(alg.Key,0,alg.Key.Length);
fs.Close();
Console.WriteLine("{0} bit key written to {1}.",
alg.Key.Length * 8,
outfile);

}
catch (Exception e)
{
Console.WriteLine("Exception: {0}", e.Message);
return;
}

}
}

When this key generator is invoked as KeyGenerator.exe without any parameters, it prints a usage message listing the possible algorithms. You can then run KeyGenerator.exe <Algorithm> to get a list of possible key sizes for the chosen algorithm. To finally generate the key, you start KeyGenerator.exe <Algorithm> <KeyLen> <Outfile>. For example, to generate a 128-bit key for a TripleDES algorithm and save it in c:\testfile.key, run KeyGenerator.exe TripleDES 128 c:\testfile.key.

The Initialization Vector
Another basic of symmetric encryption is the use of a random initialization vector (IV). This is again a byte array, but it’s not statically computed during the application’s development. Instead, a new one is generated for each encryption taking place.

To successfully decrypt the message, both the key and the initialization vector have to be known to the second party. The key is determined during the application’s deployment (at least in the following example) and the IV has to be sent via remoting boundaries with the original message. The IV is therefore not secret on its own.

Creating the Encryption Helper
Next I show you how to build this sink in the same manner as the previous CompressionSink, which means that the sink’s core logic will be extracted to a helper class. I call this class EncryptionHelper. The encryption helper will implement two methods, ProcessOutboundStream() and ProcessInboundStream(). The methods’ signatures look like this:

public static Stream ProcessOutboundStream(
Stream inStream,
String algorithm,
byte[] encryptionkey,
out byte[] encryptionIV)

public static Stream ProcessInboundStream(
Stream inStream,
String algorithm,
byte[] encryptionkey,
byte[] encryptionIV)

As you can see in the signatures, both methods take a stream, the name of a valid crypto algorithm, and a byte array containing the encryption key as parameters. The first method is used to encrypt the stream. It also generates the IV internally and returns it as an out parameter. This IV then has to be serialized by the sink and passed to the other party in the remoting call. ProcessInboundStream(), on the other hand, expects the IV to be passed in, so the sink has to obtain this value before calling the method. The implementation of these helper methods can be seen below.

The EncryptionHelper Encapsulates the Details of the Cryptographic Process

using System;
using System.IO;
using System.Security.Cryptography;

namespace EncryptionSink
{

public class EncryptionHelper
{

public static Stream ProcessOutboundStream(
Stream inStream,
String algorithm,
byte[] encryptionkey,
out byte[] encryptionIV)
{
Stream outStream = new System.IO.MemoryStream();

// setup the encryption properties
SymmetricAlgorithm alg = SymmetricAlgorithm.Create(algorithm);
alg.Key = encryptionkey;
alg.GenerateIV();
encryptionIV = alg.IV;

CryptoStream encryptStream = new CryptoStream(
outStream,
alg.CreateEncryptor(),
CryptoStreamMode.Write);

// write the whole contents through the new streams
byte[] buf = new Byte[1000];
int cnt = inStream.Read(buf, 0, buf.Length);
while (cnt > 0)
{
encryptStream.Write(buf, 0, cnt);
cnt = inStream.Read(buf, 0, buf.Length);
}
encryptStream.FlushFinalBlock();
outStream.Seek(0,SeekOrigin.Begin);
return outStream;
}
public static Stream ProcessInboundStream(
Stream inStream,
String algorithm,
byte[] encryptionkey,
byte[] encryptionIV)
{
// setup decryption properties
SymmetricAlgorithm alg = SymmetricAlgorithm.Create(algorithm);
alg.Key = encryptionkey;
alg.IV = encryptionIV;

// add the decryptor layer to the stream
Stream outStream = new CryptoStream(inStream,
alg.CreateDecryptor(),
CryptoStreamMode.Read);

return outStream;
}

}
}
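Outside of any sink, a quick round trip through the helper might look like the following sketch. The hard-coded message and the freshly generated 128-bit TripleDES key are illustrative only; the real sinks read the key from the keyfile generated earlier.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;
using EncryptionSink;

class HelperDemo
{
    static void Main()
    {
        // illustrative key only: generate a 128-bit TripleDES key in memory
        SymmetricAlgorithm alg = SymmetricAlgorithm.Create("TripleDES");
        alg.KeySize = 128;
        alg.GenerateKey();
        byte[] key = alg.Key;

        Stream plain = new MemoryStream(
            Encoding.UTF8.GetBytes("Hello, sinks!"));

        // encrypt; the helper hands back the generated IV
        byte[] iv;
        Stream encrypted = EncryptionHelper.ProcessOutboundStream(
            plain, "TripleDES", key, out iv);

        // decrypt using the same key and the returned IV
        Stream decrypted = EncryptionHelper.ProcessInboundStream(
            encrypted, "TripleDES", key, iv);

        String result = new StreamReader(decrypted).ReadToEnd();
        Console.WriteLine(result); // the original plaintext
    }
}
```

Note that the decrypting side never generates an IV of its own; it must use exactly the one produced during encryption, which is why the sinks transport it in a header.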

Creating the Sinks
The EncryptionClientSink and EncryptionServerSink look quite similar to the previous compression sinks. The major difference is that they have custom constructors, called by their sink providers, that set the specified encryption algorithm and key. For outgoing requests, the sinks set the X-Encrypt header to "yes" and store the Base64-encoded initialization vector in the X-EncryptIV header. The complete client-side sink is shown in Listing 9-8.

Listing 9-8: The EncryptionClientSink

using System;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Messaging;
using System.IO;
using System.Text;

namespace EncryptionSink
{
public class EncryptionClientSink: BaseChannelSinkWithProperties,
IClientChannelSink
{
private IClientChannelSink _nextSink;
private byte[] _encryptionKey;
private String _encryptionAlgorithm;

public EncryptionClientSink(IClientChannelSink next,
byte[] encryptionKey,
String encryptionAlgorithm)
{
_encryptionKey = encryptionKey;
_encryptionAlgorithm = encryptionAlgorithm;
_nextSink = next;
}

public void ProcessMessage(IMessage msg,
ITransportHeaders requestHeaders,
Stream requestStream,
out ITransportHeaders responseHeaders,
out Stream responseStream)
{

byte[] IV;

requestStream = EncryptionHelper.ProcessOutboundStream(requestStream,
_encryptionAlgorithm,_encryptionKey,out IV);

requestHeaders["X-Encrypt"] = "yes";
requestHeaders["X-EncryptIV"] = Convert.ToBase64String(IV);

// forward the call to the next sink
_nextSink.ProcessMessage(msg,
requestHeaders,
requestStream,
out responseHeaders,
out responseStream);

if (responseHeaders["X-Encrypt"] != null &&
responseHeaders["X-Encrypt"].Equals("yes"))
{
IV = Convert.FromBase64String(
(String) responseHeaders["X-EncryptIV"]);
responseStream = EncryptionHelper.ProcessInboundStream(
responseStream,
_encryptionAlgorithm,
_encryptionKey,
IV);
}
}

public void AsyncProcessRequest(IClientChannelSinkStack sinkStack,
IMessage msg,
ITransportHeaders headers,
Stream stream)
{

byte[] IV;

stream = EncryptionHelper.ProcessOutboundStream(stream,
_encryptionAlgorithm,_encryptionKey,out IV);

headers["X-Encrypt"] = "yes";
headers["X-EncryptIV"] = Convert.ToBase64String(IV);

// push onto stack and forward the request
sinkStack.Push(this,null);
_nextSink.AsyncProcessRequest(sinkStack,msg,headers,stream);
}

public void AsyncProcessResponse(IClientResponseChannelSinkStack sinkStack,
object state,
ITransportHeaders headers,
Stream stream)
{
if (headers["X-Encrypt"] != null && headers["X-Encrypt"].Equals("yes"))
{

byte[] IV =
Convert.FromBase64String((String) headers["X-EncryptIV"]);
stream = EncryptionHelper.ProcessInboundStream(
stream,
_encryptionAlgorithm,
_encryptionKey,
IV);
}

// forward the request
sinkStack.AsyncProcessResponse(headers,stream);
}

public Stream GetRequestStream(IMessage msg,
ITransportHeaders headers)
{
return null; // request stream will be manipulated later
}

public IClientChannelSink NextChannelSink {
get
{
return _nextSink;
}
}

}
}

The EncryptionServerSink shown in Listing 9-9 works basically the same way as the CompressionServerSink. It first checks the headers to determine whether the request has been encrypted. If so, it retrieves the initialization vector from the X-EncryptIV header and calls EncryptionHelper to decrypt the stream.

Listing 9-9: The EncryptionServerSink

using System;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Messaging;
using System.IO;

namespace EncryptionSink
{
public class EncryptionServerSink: BaseChannelSinkWithProperties,
IServerChannelSink
{

private IServerChannelSink _nextSink;
private byte[] _encryptionKey;
private String _encryptionAlgorithm;

public EncryptionServerSink(IServerChannelSink next, byte[] encryptionKey,
String encryptionAlgorithm)
{
_encryptionKey = encryptionKey;
_encryptionAlgorithm = encryptionAlgorithm;
_nextSink = next;
}

public ServerProcessing ProcessMessage(IServerChannelSinkStack sinkStack,
IMessage requestMsg,
ITransportHeaders requestHeaders,
Stream requestStream,
out IMessage responseMsg,
out ITransportHeaders responseHeaders,
out Stream responseStream) {

bool isEncrypted=false;

//checking the headers
if (requestHeaders["X-Encrypt"] != null &&
requestHeaders["X-Encrypt"].Equals("yes"))
{
isEncrypted = true;

byte[] IV = Convert.FromBase64String(
(String) requestHeaders["X-EncryptIV"]);
// decrypt the request
requestStream = EncryptionHelper.ProcessInboundStream(
requestStream,
_encryptionAlgorithm,
_encryptionKey,
IV);
}
// pushing onto stack and forwarding the call;
// the flag "isEncrypted" will be used as state
sinkStack.Push(this,isEncrypted);

ServerProcessing srvProc = _nextSink.ProcessMessage(sinkStack,
requestMsg,
requestHeaders,
requestStream,
out responseMsg,
out responseHeaders,
out responseStream);

if (isEncrypted)
{
// encrypting the response if necessary
byte[] IV;

responseStream =
EncryptionHelper.ProcessOutboundStream(responseStream,
_encryptionAlgorithm,_encryptionKey,out IV);

responseHeaders["X-Encrypt"] = "yes";
responseHeaders["X-EncryptIV"] = Convert.ToBase64String(IV);
}

// returning status information
return srvProc;
}

public void AsyncProcessResponse(IServerResponseChannelSinkStack sinkStack,
object state,
IMessage msg,
ITransportHeaders headers,
Stream stream)
{
// fetching the flag from the async-state
bool isEncrypted = (bool) state;

if (isEncrypted)
{
// encrypting the response if necessary
byte[] IV;
stream = EncryptionHelper.ProcessOutboundStream(stream,
_encryptionAlgorithm,_encryptionKey,out IV);

headers["X-Encrypt"] = "yes";
headers["X-EncryptIV"] = Convert.ToBase64String(IV);
}

// forwarding to the stack for further processing
sinkStack.AsyncProcessResponse(msg,headers,stream);
}

public Stream GetResponseStream(IServerResponseChannelSinkStack sinkStack,
object state,
IMessage msg,
ITransportHeaders headers)
{
return null;
}

public IServerChannelSink NextChannelSink {
get {
return _nextSink;
}
}

}
}

Creating the Providers
In contrast to the previous sink, the EncryptionSink expects certain parameters to be present in the configuration file. The first one is "algorithm", which specifies the cryptographic algorithm to use (DES, TripleDES, RC2, or Rijndael). The second, "keyfile", specifies the location of the previously generated symmetric key file. The same file has to be available to both the client-side and the server-side sink.

The following excerpt from a configuration file shows you how the client-side sink will be configured:
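A minimal client-side entry might look like the following sketch, based on the standard .NET Remoting configuration schema and the provider class from Listing 9-10. The assembly name EncryptionSink, the use of an HTTP channel with the SOAP formatter, and the key file path are assumptions.

```xml
<configuration>
  <system.runtime.remoting>
    <application>
      <channels>
        <channel ref="http">
          <clientProviders>
            <formatter ref="soap" />
            <!-- algorithm and keyfile are handed to the
                 EncryptionClientSinkProvider constructor -->
            <provider
              type="EncryptionSink.EncryptionClientSinkProvider, EncryptionSink"
              algorithm="TripleDES"
              keyfile="c:\testfile.key" />
          </clientProviders>
        </channel>
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>
```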

In the following snippet you see how the server-side sink can be initialized:
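A corresponding server-side entry might look like this sketch (the port number, channel type, and assembly name are assumptions). Note that the custom provider is listed before the formatter so that the request stream is decrypted before deserialization:

```xml
<configuration>
  <system.runtime.remoting>
    <application>
      <channels>
        <channel ref="http" port="5555">
          <serverProviders>
            <!-- decrypt incoming requests before the formatter runs -->
            <provider
              type="EncryptionSink.EncryptionServerSinkProvider, EncryptionSink"
              algorithm="TripleDES"
              keyfile="c:\testfile.key" />
            <formatter ref="soap" />
          </serverProviders>
        </channel>
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>
```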

You can access additional parameters in the sink provider’s constructor as shown in the following source code fragment:

public EncryptionClientSinkProvider(IDictionary properties,
ICollection providerData)
{
String encryptionAlgorithm = (String) properties["algorithm"];
}

In addition to reading the relevant configuration file parameters, both the client-side sink provider and the server-side sink provider have to read the specified keyfile and store it in a byte array. The encryption algorithm and the encryption key are then passed to the sink’s constructor.

Listing 9-10: The EncryptionClientSinkProvider

using System;
using System.IO;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting;
using System.Collections;

namespace EncryptionSink
{
public class EncryptionClientSinkProvider: IClientChannelSinkProvider
{

private IClientChannelSinkProvider _nextProvider;

private byte[] _encryptionKey;
private String _encryptionAlgorithm;

public EncryptionClientSinkProvider(IDictionary properties,
ICollection providerData)
{
_encryptionAlgorithm = (String) properties["algorithm"];
String keyfile = (String) properties["keyfile"];

if (_encryptionAlgorithm == null || keyfile == null)
{
throw new RemotingException("'algorithm' and 'keyfile' have to " +
"be specified for EncryptionClientSinkProvider");
}

// read the encryption key from the specified file
FileInfo fi = new FileInfo(keyfile);

if (!fi.Exists)
{
throw new RemotingException("Specified keyfile does not exist");
}

FileStream fs = new FileStream(keyfile, FileMode.Open);
_encryptionKey = new Byte[fi.Length];
fs.Read(_encryptionKey, 0, _encryptionKey.Length);
fs.Close();
}

public IClientChannelSinkProvider Next
{
get {return _nextProvider; }
set {_nextProvider = value;}
}

public IClientChannelSink CreateSink(IChannelSender channel, string url,
object remoteChannelData)
{
// create other sinks in the chain
IClientChannelSink next = _nextProvider.CreateSink(channel,
url, remoteChannelData);

// put our sink on top of the chain and return it
return new EncryptionClientSink(next,_encryptionKey,
_encryptionAlgorithm);
}
}
}

Listing 9-11: The EncryptionServerSinkProvider

using System;
using System.IO;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting;
using System.Collections;

namespace EncryptionSink
{
public class EncryptionServerSinkProvider: IServerChannelSinkProvider
{
private byte[] _encryptionKey;
private String _encryptionAlgorithm;

private IServerChannelSinkProvider _nextProvider;

public EncryptionServerSinkProvider(IDictionary properties,
ICollection providerData)
{
_encryptionAlgorithm = (String) properties["algorithm"];
String keyfile = (String) properties["keyfile"];

if (_encryptionAlgorithm == null || keyfile == null)
{
throw new RemotingException("'algorithm' and 'keyfile' have to " +
"be specified for EncryptionServerSinkProvider");
}

// read the encryption key from the specified file
FileInfo fi = new FileInfo(keyfile);

if (!fi.Exists)
{
throw new RemotingException("Specified keyfile does not exist");
}

FileStream fs = new FileStream(keyfile, FileMode.Open);
_encryptionKey = new Byte[fi.Length];
fs.Read(_encryptionKey, 0, _encryptionKey.Length);
fs.Close();
}

public IServerChannelSinkProvider Next
{
get {return _nextProvider; }
set {_nextProvider = value;}
}

public IServerChannelSink CreateSink(IChannelReceiver channel)
{
// create other sinks in the chain
IServerChannelSink next = _nextProvider.CreateSink(channel);
// put our sink on top of the chain and return it
return new EncryptionServerSink(next,
_encryptionKey,_encryptionAlgorithm);
}

public void GetChannelData(IChannelDataStore channelData)
{
// not yet needed
}

}
}

When you include the sink providers in your configuration files as presented previously, the transfer is encrypted, as the following TCP trace of the HTTP traffic shows.

A TCP-trace of the encrypted HTTP traffic
You can, of course, also chain the encryption and compression sinks together to obtain an encrypted and compressed stream.
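A client-side chain that compresses and then encrypts could be sketched as follows. The CompressionClientSinkProvider type and both assembly names are assumptions carried over from the earlier compression example; compression is listed before encryption because well-encrypted data is essentially random and would no longer compress:

```xml
<clientProviders>
  <formatter ref="soap" />
  <!-- compress first, then encrypt the compressed stream -->
  <provider
    type="CompressionSink.CompressionClientSinkProvider, CompressionSink" />
  <provider
    type="EncryptionSink.EncryptionClientSinkProvider, EncryptionSink"
    algorithm="TripleDES"
    keyfile="c:\testfile.key" />
</clientProviders>
```

The server-side serverProviders section has to mirror this order in reverse (decrypt first, then decompress) before the formatter runs.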

Written by oneil

September 9, 2008 at 3:51 pm

Posted in C#