Saturday, December 25, 2010

C# 4.0 Features

C# 4.0 provides access to the Dynamic Language Runtime (DLR) through the new ‘dynamic’ keyword in .NET 4, allowing dynamic languages like Ruby and Python to expose their objects to C#. Apart from consuming objects from dynamic languages, this feature also lets us implement our own dynamically dispatched objects. This can be done by implementing the IDynamicMetaObjectProvider interface, which is usually done by inheriting from the abstract DynamicObject class and overriding its member-lookup and invocation methods; the interface allows us to interoperate with the DLR and implement our own binding behavior.
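As a sketch of that idea, here is a minimal DynamicObject subclass (the class and method names are hypothetical) whose TryInvokeMember override handles any method call at runtime:

```csharp
using System;
using System.Dynamic;

// Every method call on an Echo instance is intercepted by TryInvokeMember
// instead of being bound by the compiler.
public class Echo : DynamicObject
{
    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
        // Report which call was dispatched to us at runtime.
        result = binder.Name + "/" + args.Length;
        return true; // true = we handled the call
    }
}

public class EchoDemo
{
    public static string Call()
    {
        dynamic e = new Echo();
        return e.AnyMethodName(1, 2, 3); // resolved by the DLR, not the compiler
    }

    public static void Main()
    {
        Console.WriteLine(Call()); // AnyMethodName/3
    }
}
```

The compiler happily accepts a call to AnyMethodName, which does not exist anywhere; the DLR routes it to our override at runtime.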

Before we can look at what the dynamic type is and how it behaves, we need a basic understanding of the DLR and its components, so we’re going to spend a bit of time investigating this new feature.

Dynamic Language Runtime (DLR)

The DLR is a new API in .NET Framework 4 that implements dynamic programming and serves as a common runtime for dynamic languages. The C# runtime binder is built on top of the DLR in order to provide dynamic typing. The figure below illustrates the block diagram of the DLR and its internal components.

Fig 1: C# 4.0 dynamic programming, and how the DLR works (block diagram of the Dynamic Language Runtime).

The languages with dynamic capabilities (such as C# 4.0 and VB 10.0) are built on top of the DLR which, as we can see above, has three main components at its core:

1. Expression Trees
2. Dynamic Dispatch
3. Call Site Caching

An expression tree represents code in the form of a tree, which allows languages to be translated into a common representation on which the DLR can operate. These are the same kind of expression trees that were introduced with LINQ in C# 3.0, but they have since been extended to support statements. Once code is in tree form, the DLR can take the tree and use it to generate CLR code.
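A short sketch of that pipeline (the names here are illustrative): build the tree for x => x * 2 by hand, then let the runtime compile it into callable CLR code:

```csharp
using System;
using System.Linq.Expressions;

public class TreeDemo
{
    // Assemble the expression tree for x => x * 2 node by node.
    public static Func<int, int> MakeDoubler()
    {
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        Expression<Func<int, int>> lambda =
            Expression.Lambda<Func<int, int>>(
                Expression.Multiply(x, Expression.Constant(2)), x);
        return lambda.Compile(); // expression tree -> executable CLR code
    }

    public static void Main()
    {
        Console.WriteLine(MakeDoubler()(21)); // 42
    }
}
```

This is the same mechanism the DLR uses internally: binders produce trees like this one, and the DLR compiles them.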

Dynamic dispatch is the process of mapping calls to the appropriate code at runtime. This is the way that the system lets the binders decide on the target method: code generated for dynamic invocations is routed to the appropriate language binder.

Call site caching is used to avoid repeated calls into the binder. Normally the binder returns an expression tree which the DLR compiles, but this step can be skipped at a given call site if the argument types are the same as on a previous call.

Speaking of the binders, these sit beneath the DLR and are responsible for communicating with the respective environments of different technologies. For example, the Object binder allows communication with .NET objects, the JavaScript binder allows communication with JavaScript in Silverlight, the Python and Ruby binders allow communication with their respective languages, and the COM binder allows communication with Office/COM objects.

All of this is wrapped up in the DLR, which C# 4.0 provides access to using the new ‘dynamic’ keyword. This permits data types to be decided dynamically at runtime, as opposed to statically at compile time, by redirecting any calls involving a value of type dynamic through the DLR. The dynamic type signifies to the compiler that all operations on that type should (unsurprisingly) be resolved dynamically, and instructs the compiler to skip compile-time checking for that type.

The C# compiler now allows for calling a method with any name and any arguments on dynamically created object types. Consider the code below.

dynamic d = GetNum();
d.divide(40,5); // allows us to call any method with any signature..

As d is declared as dynamic, the compiler will not report a compile-time error for the above call; it does not resolve the target of the call until runtime, so the actual object being referred to, and the method invoked on it, are determined dynamically. As mentioned in the code comment above, the compiler allows calling any method with any signature on a dynamic object.


Because C# is a statically typed language, the ‘dynamic’ type informs the compiler that it is dealing with a dynamic invocation, so the normal compile-time checking is skipped for that type. However, this also means that illegal operations (if any) will only be detected at runtime.

The Difference between Var and Dynamic

People often confuse var and dynamic, so I would like you to understand exactly what is meant by each keyword.

If the ‘var’ keyword is used, the data type is still determined by the compiler at compile time. On the other hand, when the ‘dynamic’ keyword is used, member and method lookups are deferred until runtime. In addition, dynamic can be used as the return type of a method, whereas var cannot.

Similarly, code that calls a method that does not exist on a dynamic variable will still compile, whereas the same call on a var-declared variable will never allow the application to compile.
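A minimal sketch of the contrast (the member name NoSuchMember is deliberately invented and exists on neither type):

```csharp
using System;
using Microsoft.CSharp.RuntimeBinder;

public class VarVsDynamic
{
    public static string Probe()
    {
        var s = "text";        // compiler infers string; members are checked at compile time
        // s.NoSuchMember();   // would be a compile-time error

        dynamic d = "text";
        try
        {
            d.NoSuchMember();  // compiles fine, but the runtime binder rejects it
            return "no error";
        }
        catch (RuntimeBinderException)
        {
            return "runtime binder error, Length=" + s.Length;
        }
    }

    public static void Main()
    {
        Console.WriteLine(Probe()); // runtime binder error, Length=4
    }
}
```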

An Example of Dynamic

static void Main(string[] args)
{
    var d = new Math();
    Console.WriteLine(d.Add(10, 20));                         // int addition
    Console.WriteLine(d.Add("abc", "def"));                   // string concatenation
    Console.WriteLine(d.Add((object)10, (object)20));         // runtime types are int
    Console.WriteLine(d.Add((object)"abc", (object)"def"));   // runtime types are string
    Console.WriteLine(d.Add((dynamic)10, (dynamic)20));       // int addition
    Console.WriteLine(d.Add((dynamic)"abc", (dynamic)"def")); // string concatenation
    Console.WriteLine(d.Multiply(10, 20));
    Console.ReadLine();
}

public class Math
{
    /// <summary>
    /// Returns a dynamic result, so objects of any data type can be passed.
    /// The parameters are added and the result produced at runtime, because
    /// dynamic operands are resolved at runtime.
    /// </summary>
    public dynamic Add(dynamic a, dynamic b)
    {
        return a + b;
    }

    /// <summary>
    /// Multiplies two numbers. We need not declare separate methods for
    /// different data types (integer and double); the dynamic operands are
    /// resolved at runtime.
    /// </summary>
    public dynamic Multiply(dynamic a, dynamic b)
    {
        return a * b;
    }
}

In the above example, parameters a and b are dynamic, so their runtime data types are used to resolve each operation. For subtraction, multiplication and division too, the dynamic type really saves time, as we do not need to create separate methods for each data type, such as integer and double. However, make sure that you pass values of valid types, otherwise runtime errors will still be thrown. For example, you cannot use strings with the division operator, as this is clearly illegal.

Although dynamic is really just hiding reflection-like dispatch under the hood, it produces code equivalent to what the compiler would create, and so has an advantage over hand-written reflection when you need dynamic access to objects at runtime. For example, consider the code below, which gets an author from a publisher and uses reflection to invoke the GetAuthor() method:

//Before dynamic
object publisher = GetPublisher();
Type pubType = publisher.GetType();
object pubObj = pubType.InvokeMember("GetAuthor", BindingFlags.InvokeMethod,
    null, publisher, new object[] { }); // use reflection to invoke the method
string author = pubObj.ToString();

This can now be written as the code below, using the dynamic keyword:

//With dynamic:

dynamic publisher = GetPublisher();
string author= publisher.GetAuthor();

The dynamic type is also very helpful when interoperating with the Office automation APIs, as it saves you from casting everything from object. Finally, to finish off this look at the capabilities of the dynamic type, bear in mind that it can be applied not only to method calls, but also to several other operations:

  • Field and property accesses,
  • Indexer and operator calls,
  • Delegate invocations and constructor calls.
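A small sketch covering a few of those operation kinds (the class and method names here are illustrative):

```csharp
using System;
using System.Collections.Generic;

public class DynamicOps
{
    public static int Sum()
    {
        dynamic list = new List<int> { 1, 2, 3 };
        int first = list[0];           // indexer access dispatched at runtime
        int count = list.Count;        // property access dispatched at runtime
        dynamic total = first + count; // operator applied dynamically
        return total;                  // implicit dynamic -> int conversion
    }

    public static void Main()
    {
        Console.WriteLine(Sum()); // 4
    }
}
```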

Limitations

  • The dynamic keyword cannot be used as the base type of a class.
  • The dynamic keyword cannot be used with the typeof operator.
  • Extension methods cannot be invoked dynamically. Extension methods become available by bringing the namespace of the assembly that contains them into scope via a using clause. This information is available at compile time for method resolution, but not at runtime; hence, dispatching to an extension method at runtime is not supported.
  • LINQ relies completely on extension methods to perform query expression operation, but extension methods cannot be resolved at runtime due to the lack of information in the compiled assembly. Hence, using LINQ Queries over dynamic objects is problematic.
  • Anonymous functions cannot be used as arguments to a dynamic operation, as the compiler cannot bind an anonymous function without knowing the type it is being converted to.
  • A lambda expression cannot be passed in extension methods as an argument to a dynamic operation.
  • A dynamic object's type is not inferred at compile time, so any error in an operation will only be identified at runtime. Static (strong) typing is not maintained in the case of dynamic, and the introduction of dynamic opens the door for duck typing in C#.
  • Additionally, the result of any dynamic operation is itself of type dynamic, with the two exceptions:
    • The type of a dynamic constructor call is the constructed type; for example, the type of demo in the following declaration is Demo, not dynamic.

var demo = new Demo(d);

    • The type of a dynamic implicit or explicit conversion is the target type of the conversion.
Optional Parameters

Microsoft’s co-evolution of the C# and VB languages has made this feature possible. Optional parameters are declared with a default value in the method signature, and allow arguments to be omitted from member invocations. The below example describes the syntax:

private void CreateNewStudent(string name, int studentid = 0, int year = 1)

Note: optional parameters must be placed after the required parameters, or else the C# compiler will report a compile-time error.

There are a few limitations to the optional parameters feature, and we’ll look at them after we’ve considered the new Named Arguments feature, as the two are very useful when deployed together.

Named Arguments

Named arguments are a way to provide an argument using the name of the desired parameter, instead of depending on the parameter’s position in the parameter list. If we want to omit the studentid argument in the CreateNewStudent method above, but still specify the year argument, the new named-arguments feature can be used:

CreateNewStudent(name: "Demo", year: 2);
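A fuller sketch combining optional parameters and named arguments, reusing the CreateNewStudent signature from above (the method body is invented for illustration):

```csharp
using System;

public class StudentDemo
{
    // studentId and year are optional; only name must be supplied.
    public static string CreateNewStudent(string name, int studentId = 0, int year = 1)
    {
        return string.Format("{0}|{1}|{2}", name, studentId, year);
    }

    public static void Main()
    {
        Console.WriteLine(CreateNewStudent("Ann"));                // Ann|0|1 - defaults used
        Console.WriteLine(CreateNewStudent("Ann", year: 3));       // Ann|0|3 - studentId skipped
        Console.WriteLine(CreateNewStudent(year: 2, name: "Bob")); // Bob|0|2 - any order
    }
}
```

Named arguments are what make it possible to skip over studentId while still supplying year.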

4. Generic Variance

The term "variance" refers to the ability to use one type where another was specified, and in that context there are 3 terms we need to become familiar with:

Invariant: A type parameter is invariant if the runtime type must exactly match the declared type. For invariant parameters, neither covariance nor contravariance is permitted.

Covariant: A type parameter is covariant if a more derived type can be substituted for it, so a derived class instance can be used where a parent class instance was expected. Covariance is the conversion of a type from more specific to more general; for example, treating an object of type Car as type Automobile.

Contravariant: Contravariance is exactly the opposite of covariance, i.e. it is the conversion of a type from more general to more specific. A type parameter is contravariant if a less derived type can be substituted for it, so a base class instance can be used where a subclass instance was expected.

  • Variance is a property of operators that act on types. It is the concept of specifying in and out parameters on generic types and allowing assignments where it is safe.
  • Variant type parameters can be declared for interfaces and delegate types.

Generic parameters in interfaces are invariant by default, so we need to explicitly specify whether we want a particular generic parameter to be covariant or contravariant. The example below demonstrates both covariance and contravariance support in C# 4.0:

class Fruit { }

class Apple : Fruit { }

class Program
{
    delegate T Func<out T>();
    delegate void Action<in T>(T a);

    static void Main(string[] args)
    {
        // Covariance
        Func<Apple> apple = () => new Apple();
        Func<Fruit> fruit = apple;

        // Contravariance
        Action<Fruit> fru1 = (fru) => { Console.WriteLine(fru); };
        Action<Apple> app1 = fru1;
    }
}

Covariant parameters can only be used in output positions (method return values, and get-only properties or indexers), and contravariant parameters can only occur in input positions (method parameters, and set-only properties or indexers).

Additionally, the generic variance feature also allows, for example, the assignment of an object of type IEnumerable&lt;string&gt; to a variable of type IEnumerable&lt;object&gt;.
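A minimal sketch of that assignment (illustrative only), which compiles in C# 4.0 because IEnumerable&lt;T&gt; declares its type parameter as out T:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class CovarianceDemo
{
    public static int CountAsObjects()
    {
        IEnumerable<string> strings = new List<string> { "a", "b" };
        IEnumerable<object> objects = strings; // legal in C# 4.0: IEnumerable<out T>
        return objects.Count();
    }

    public static void Main()
    {
        Console.WriteLine(CountAsObjects()); // 2
    }
}
```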

Limitations

  • Due to a limitation in the CLR4, variant type parameters can only be declared on interfaces and delegates.
  • Variance only applies when there is a reference conversion. Eric Lippert's blog can be consulted for more syntax options and to explore variance in depth.

5. COM Interoperability

The above features (the dynamic type, and named and optional arguments) all undoubtedly improve the experience of interoperating with COM APIs, such as Office automation and PIAs. There are also interesting enhancements in C# 4.0 related specifically to COM interop development, which greatly improve productivity.

Compiling without PIAs

Primary Interop Assemblies (PIAs) are huge .NET assemblies generated from COM interfaces to assist strongly typed interoperability. They provide excellent support at design time, where the interop experience is the same as if the types were really defined in .NET. However, at runtime these large assemblies can easily cause trouble for the program, and we may also get versioning issues, because the assemblies are distributed independently of the application.

C#4.0’s embedded-PIA feature allows the use of PIAs at design time without having them around at runtime; the C# compiler will pull the small part of the PIA that a program actually uses directly into its assembly. So, in reality, the PIA does not have to be loaded at runtime, and there’s no need to deploy the PIAs; COM component developers only require them to build with.

Omitting References

Most COM APIs contain a lot of reference parameters, because they need to support different programming models. It is now no longer necessary to create temporary variables just to pass arguments by reference. For COM methods, the compiler now:

  • Allows the declaration of the method call by passing the arguments by value,
  • Automatically generates the necessary temporary variables to hold the values, in order to pass them by reference,
  • Will discard these values after the call returns.

Consider this method call in C#3.0:

object fileName = "SimpleTalkCS.NET4.docx";
object missing = Missing.Value;

document.SaveAs(ref fileName,
ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing,
ref missing, ref missing, ref missing);

In C# 4.0, it can now be written as follows:

document.SaveAs("SimpleTalkCS.NET4.docx",
Missing.Value, Missing.Value, Missing.Value,
Missing.Value, Missing.Value, Missing.Value,
Missing.Value, Missing.Value, Missing.Value,
Missing.Value, Missing.Value, Missing.Value,
Missing.Value, Missing.Value, Missing.Value,
Missing.Value, Missing.Value, Missing.Value,
Missing.Value, Missing.Value, Missing.Value);

Moreover, because all the parameters that receive Missing.Value have default values, the call can even be reduced to this:

document.SaveAs("SimpleTalkCS.NET4.docx");

From the point of view of the programmer, the arguments are being passed by value.

Dynamic Import

In C# 3.0, many COM methods accept and return variant types, which are represented in the PIAs as object. It can be awkward to call these methods, as we need to cast their return values. To make the developer’s life easier, it is now possible to import the COM APIs with PIA embedding in such a way that variants are instead represented using the type dynamic. Since the COM signatures now have occurrences of dynamic instead of object, it is easy to access members directly from a returned object, or to assign it to a strongly typed local variable, without having to cast explicitly.

Previously you might have written the code as below, casting explicitly to enter a value in an Excel cell:

((Excel.Range)excel.Cells[2, 3]).Value = "Microsoft";

However, now we can directly assign the cell value, without explicit type casting:

excel.Cells[2, 3].Value = "Microsoft";
Excel.Range range = (Excel.Range)excel.Cells[2, 3];

In fact, the last piece of the above code can be rewritten as:

Excel.Range range = excel.Cells[2, 3];

Of course, without the Dynamic type, the value returned from excel.Cells[2, 3] is of type Object, which must be cast to the Range type before its value property can be accessed. However, when producing a Runtime Callable Wrapper (RCW) assembly for a COM object, any use of VARIANT in the COM method is actually converted to dynamic (a process called dynamification). So excel.Cells[2,3] is of type dynamic, and now we don’t have to explicitly cast it to the Range type before its Value property can be accessed. As you can see, dynamification can greatly simplify code that interoperates with COM objects.

Just to make sure I’m getting the point across, here is some simple code to demonstrate. This code creates an Excel workbook and adds some text to a specified cell:

static void Main(string[] args)
{
    Application excel = new Application();
    excel.Visible = true;
    excel.Workbooks.Add(Type.Missing);

    // C# 4.0: no explicit cast needed
    excel.Cells[2, 3].Value = "Microsoft";
}

Indexed and Default Properties

Finally, since the COM interface can be accessed dynamically, C# will now allow the declaration of indexed properties. So, instead of:

o.set_P(i+1, o.get_P(i) * 2);

we can now write:

o.P[i+1] = o.P[i] * 2;

Limitations

These default and indexed properties on COM interfaces are available when we access COM objects dynamically, but statically typed C# code still does not recognize them.

Summary

C# has evolved from managed code and generics, to LINQ, and now to dynamic programming. As you will have seen in my previous article, Visual Basic 2010 already allows reference parameters to be omitted and exposes indexed properties, and PIA embedding and variance are being introduced to VB and C# at the same time, thanks to the languages’ co-evolution. Whereas parallel programming used to require the help of PLINQ, the latest iterations of both VB and C#, together with .NET 4, make it far easier to write code for multi-core processors. All in all, C# 4.0 is an impressive improvement to the language, opening up new avenues for C# "old hands" to explore, and bringing familiar features to VB or dynamic-language users who are C#-curious. No matter which camp you fall into, your ability to write elegant and powerful code has just been leveled up. As I’m exploring the latest advances in .NET, my next article is going to look at ASP.NET 4 enhancements, so stay tuned.

Uploading images to a database using ASP.NET and C#

Uploading images to a SQL Server database is extremely easy using ASP.NET and C#. A couple of months ago I wrote a similar article using VB.NET. This article will show you how to upload images (or any binary data) to a SQL Server database using ASP.NET and C#. Part II, Retrieving Images from a Database (C#), will show you how to extract images from a database.
Building the Database Table
We start out by building our database table. Our image table is going to have a few columns describing the image data, plus the image itself. Here is the SQL required to build our table in SQL Server or MSDE.
CREATE TABLE [dbo].[image] (
[img_pk] [int] IDENTITY (1, 1) NOT NULL ,
[img_name] [varchar] (50) NULL ,
[img_data] [image] NULL ,
[img_contenttype] [varchar] (50) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [dbo].[image] WITH NOCHECK ADD
CONSTRAINT [PK_image] PRIMARY KEY NONCLUSTERED
(
[img_pk]
) ON [PRIMARY]
GO
I'm a great fan of having a single-column primary key and making that key an identity column; in our example, that column is img_pk. The next column is img_name, which is used to store a friendly name for our image, for example "Mom and Apple Pie". img_data is our image data column, where we will store the binary image data. img_contenttype will be used to record the content type of the image, for example "image/gif" or "image/jpeg", so we will know what content type to output back to the client, in our case the browser.
Building our Webform
Now that we have a warm, fuzzy place to store our images, let's build a webform to upload our images into the database.
<form id="frmUpload" method="post" enctype="multipart/form-data" runat="server">
    Enter A Friendly Name
    <input id="txtImgName" type="text" runat="server" />
    Select File To Upload:
    <input id="UploadFile" type="file" runat="server" />
    <asp:Button id="UploadBtn" runat="server" Text="Upload" OnClick="UploadBtn_Click" />
</form>
The first interesting point about our webform is the "enctype" attribute. Enctype tells the browser and server that we will be uploading some type of binary data, which needs to be parsed using a different mechanism from our normal text data. The next control of interest is the type=file control, which presents the user with an upload-file dialog box so they can browse for the file they want to upload.
Working with the Uploaded Image
Once the user posts the data, we have to parse the binary data and send it to the database. Along with the main body of the code, we use a helper function called SaveToDB() to achieve this.
private int SaveToDB(string imgName, byte[] imgbin, string imgcontenttype)
{
//use the web.config to store the connection string
SqlConnection connection = new SqlConnection(ConfigurationSettings.AppSettings["DSN"]);
SqlCommand command = new SqlCommand( "INSERT INTO Image (img_name,img_data,img_contenttype) VALUES ( @img_name, @img_data,@img_contenttype )", connection );

SqlParameter param0 = new SqlParameter( "@img_name", SqlDbType.VarChar,50 );
param0.Value = imgName;
command.Parameters.Add( param0 );

SqlParameter param1 = new SqlParameter( "@img_data", SqlDbType.Image );
param1.Value = imgbin;
command.Parameters.Add( param1 );

SqlParameter param2 = new SqlParameter( "@img_contenttype", SqlDbType.VarChar,50 );
param2.Value = imgcontenttype;
command.Parameters.Add( param2 );

connection.Open();
int numRowsAffected = command.ExecuteNonQuery();
connection.Close();

return numRowsAffected;
}
In this function we pass in three parameters:
imgName - the friendly name we want to give our image data
imgbin - the binary (byte array) form of our data
imgcontenttype - the content type of our image, for example image/gif or image/jpeg

Each of the three values is wrapped in a SqlParameter, which defines its database type. Our first SqlParameter is @img_name, defined as a VarChar with a length of 50. The second parameter, @img_data, carries the binary data and is defined with a data type of Image. The last parameter, @img_contenttype, is defined as a VarChar with a length of 50 characters. The remainder of the function opens a connection to the database and executes the command by calling command.ExecuteNonQuery().
Calling our Functions
Ok, now that we have our worker function written, let's go ahead and get our image data.
Stream imgStream = UploadFile.PostedFile.InputStream;
int imgLen = UploadFile.PostedFile.ContentLength;
string imgContentType = UploadFile.PostedFile.ContentType;
string imgName = txtImgName.Value;
byte[] imgBinaryData = new byte[imgLen];
We need to access three important pieces of data for our example. We need the image:
Name (imgName)
Content-Type (imgContentType)
and the image data (imgBinaryData)
First we access the image stream, which we get via the property UploadFile.PostedFile.InputStream. (Remember, UploadFile was the name of our upload control on the webform.) We also need to know how long the byte array we are going to create must be; we get this number by calling UploadFile.PostedFile.ContentLength and storing its value in imgLen. Once we have the length of the image, we create a byte array with byte[] imgBinaryData = new byte[imgLen];. We access the content type of the image through the ContentType property of UploadFile.PostedFile. Lastly, we need the friendly name we are going to use for the image.
The Good Stuff
Ok, we know how to connect to the database, we know how to insert data into it, and we have access to the uploaded image's properties. But how do we pass the stream of the image to SaveToDB()? Again, .NET comes to the rescue: with one line of code we are able to read the image stream into a byte array.
int n = imgStream.Read(imgBinaryData,0,imgLen);
The stream object provides a method called Read(), which takes three parameters:
buffer - an array of bytes; a maximum of count bytes are read from the current stream and stored in buffer.
offset - the byte offset in buffer at which to begin storing the data read from the current stream.
count - the maximum number of bytes to be read from the current stream.
So we pass in our byte array, imgBinaryData; the place to start, 0; and the number of bytes we want to read, imgLen. Read() returns the number of bytes actually read, which we store in n.
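One caveat worth noting: Read() is not guaranteed to fill the whole buffer in a single call, so more robust code loops until it has everything. A sketch of that loop, using an in-memory stream as a stand-in for the uploaded file:

```csharp
using System;
using System.IO;

public class ReadFullyDemo
{
    // Read() may return fewer bytes than requested, so we loop until the
    // buffer is full or the stream ends.
    public static byte[] ReadFully(Stream stream, int length)
    {
        byte[] buffer = new byte[length];
        int total = 0;
        while (total < length)
        {
            int read = stream.Read(buffer, total, length - total);
            if (read == 0) break; // end of stream
            total += read;
        }
        return buffer;
    }

    public static void Main()
    {
        var data = new byte[] { 1, 2, 3, 4, 5 };
        byte[] copy = ReadFully(new MemoryStream(data), data.Length);
        Console.WriteLine(copy[4]); // 5
    }
}
```

For small uploads the single Read() call in the article usually works, but the loop is the safe general pattern.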
Extending Beyond Images
Because we are able to access the binary stream of data, images are not the only object we can store in the database. Other candidates might be streaming video, COM objects, or sound clips. As an example, I also uploaded a streaming AVI into my database and ran a select query to show the results.
Conclusion
So there we have it, ASP.NET provides us some easy functionality for uploading images into a database. In Part II, we will actually look at pulling these images out of a database and sending them to a browser. The complete code used for this article can be found below.
--------------------------------------------------------------------------------------------------------------------------------------
Image SQL
CREATE TABLE [dbo].[image] (
[img_pk] [int] IDENTITY (1, 1) NOT NULL ,
[img_name] [varchar] (50) NULL ,
[img_data] [image] NULL ,
[img_contenttype] [varchar] (50) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [dbo].[image] WITH NOCHECK ADD
CONSTRAINT [PK_image] PRIMARY KEY NONCLUSTERED
(
[img_pk]
) ON [PRIMARY]
GO
UploadImage.aspx
<%@ Page language="c#" Src="UploadImage.aspx.cs" Inherits="DBImages.UploadImage" %>
<html>
<head>
    <title>The ASPFree Friendly Image Uploader</title>
</head>
<body>
    <h3>The ASPFree Friendly Image Uploader</h3>
    <form id="frmUpload" method="post" enctype="multipart/form-data" runat="server">
        Enter A Friendly Name
        <input id="txtImgName" type="text" runat="server" />
        <asp:RequiredFieldValidator id="RequiredFieldValidator1" runat="server"
            ControlToValidate="txtImgName" ErrorMessage="Please enter a friendly name" />
        <br />
        Select File To Upload:
        <input id="UploadFile" type="file" runat="server" />
        <br />
        <asp:Button id="UploadBtn" runat="server" Text="Upload" OnClick="UploadBtn_Click" />
    </form>
</body>
</html>
UploadImage.aspx.cs ( codebehind file)
using System;
using System.Configuration;
using System.Collections;
using System.ComponentModel;
using System.Data;
using System.Data.SqlClient;
using System.Drawing;
using System.Web;
using System.IO;
using System.Web.SessionState;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.HtmlControls;

namespace DBImages
{
public class UploadImage : System.Web.UI.Page
{
protected System.Web.UI.WebControls.Button UploadBtn;
protected System.Web.UI.WebControls.RequiredFieldValidator RequiredFieldValidator1;
protected System.Web.UI.HtmlControls.HtmlInputText txtImgName;
protected System.Web.UI.HtmlControls.HtmlInputFile UploadFile;

public UploadImage() { }

private void Page_Load(object sender, System.EventArgs e){ }

public void UploadBtn_Click(object sender, System.EventArgs e)
{
if (Page.IsValid) //save the image
{
Stream imgStream = UploadFile.PostedFile.InputStream;
int imgLen = UploadFile.PostedFile.ContentLength;
string imgContentType = UploadFile.PostedFile.ContentType;
string imgName = txtImgName.Value;
byte[] imgBinaryData = new byte[imgLen];
int n = imgStream.Read(imgBinaryData,0,imgLen);

int RowsAffected = SaveToDB( imgName, imgBinaryData,imgContentType);
if ( RowsAffected>0 )
{
Response.Write("The Image was saved");
}
else
{
Response.Write("An error occurred uploading the image");
}

}
}


private int SaveToDB(string imgName, byte[] imgbin, string imgcontenttype)
{
//use the web.config to store the connection string
SqlConnection connection = new SqlConnection(ConfigurationSettings.AppSettings["DSN"]);
SqlCommand command = new SqlCommand( "INSERT INTO Image (img_name,img_data,img_contenttype) VALUES ( @img_name, @img_data,@img_contenttype )", connection );

SqlParameter param0 = new SqlParameter( "@img_name", SqlDbType.VarChar,50 );
param0.Value = imgName;
command.Parameters.Add( param0 );

SqlParameter param1 = new SqlParameter( "@img_data", SqlDbType.Image );
param1.Value = imgbin;
command.Parameters.Add( param1 );

SqlParameter param2 = new SqlParameter( "@img_contenttype", SqlDbType.VarChar,50 );
param2.Value = imgcontenttype;
command.Parameters.Add( param2 );

connection.Open();
int numRowsAffected = command.ExecuteNonQuery();
connection.Close();

return numRowsAffected;
}
}
}
Web.Config
<configuration>
  <appSettings>
    <add key="DSN" value="server=localhost;uid=sa;pwd=;Database=aspfree" />
  </appSettings>
  <system.web>
    <customErrors mode="Off" />
  </system.web>
</configuration>






ASP.Net mail going to spam instead of inbox

Make sure that the subject line does not contain special characters; the link below may be helpful to you.

Model View Controller MVC

MVC stands for Model-View-Controller.

Every application can be broken into three parts: the presentation layer (UI), the core business logic (aka domain logic), and the persistence layer (which can be an RDBMS, a directory server, etc.). MVC aims to separate the presentation layer from the business logic layer (assume persistence to be part of the BL layer for the time being).

The Model represents the business logic, and the View represents the presentation layer. As for the Controller, it is the component which bridges the gap between these layers. I can make this clearer with an example. Let's say I am building a desktop application which contains a button, and on its click I store some data. The UI and the button click will be part of the View, and the method doing the storing will be part of the Model.

However, the UI will not call this method directly. The UI will call a method of the controller, which will in turn call the storing method. Such a methodology isolates the business logic from the UI logic. The advantage of this isolation is that migrating my desktop application to a web application would require only my UI code to be rewritten.

Why would I want to do this through a controller? One case may be doing formatting logic in the controller so that the BL layer focuses on the domain logic. Another case: the BL may be a web service, in which case the controller can contain the code to set up the connection and call the web service methods, and the UI can call the controller methods without even knowing that it is going through a web service. The possibilities are limitless.
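A minimal sketch of that separation (all class and method names hypothetical): the view's click handler talks only to the controller, which does any formatting and then delegates to the model:

```csharp
using System;
using System.Collections.Generic;

// The Model owns persistence; nothing here knows about the UI.
public class DataModel
{
    private readonly List<string> _store = new List<string>();
    public void Save(string data) { _store.Add(data); }
    public int Count { get { return _store.Count; } }
}

// The Controller bridges View and Model, and is a natural home for
// formatting/validation logic.
public class DataController
{
    private readonly DataModel _model;
    public DataController(DataModel model) { _model = model; }

    public void OnSaveClicked(string raw) { _model.Save(raw.Trim()); }
}

public class MvcDemo
{
    public static void Main()
    {
        var model = new DataModel();
        var controller = new DataController(model);
        controller.OnSaveClicked("  hello  "); // simulated button click in the View
        Console.WriteLine(model.Count); // 1
    }
}
```

Swapping the desktop View for a web View touches neither DataController nor DataModel, which is exactly the migration benefit described above.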

Here's a good example which shows a live MVC - http://www.codeproject.com/KB/tips/ModelViewController.aspx.

However coming to .NET - WinForms use a slightly modified version of MVC called MVP - http://en.wikipedia.org/wiki/Model_View_Presenter and for WPF another version called MVVM - http://en.wikipedia.org/wiki/Model_View_ViewModel.

Lambda expression to get the names of customers who placed orders in the month of May

IEnumerable filtered = oCustomer.Where(n => n.Orders [0].Month.Equals("May"))

.Select(n => n.Name);
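A self-contained version of this query, assuming for illustration a Customer class whose Orders expose a string Month property (the post does not show these types):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Order
{
    public string Month;
}

public class Customer
{
    public string Name;
    public Order[] Orders;
}

public class Program
{
    public static void Main()
    {
        var oCustomer = new List<Customer>
        {
            new Customer { Name = "Alice", Orders = new[] { new Order { Month = "May" } } },
            new Customer { Name = "Bob",   Orders = new[] { new Order { Month = "April" } } }
        };

        // Filter on the month of the first order, then project just the names.
        IEnumerable<string> filtered = oCustomer
            .Where(n => n.Orders[0].Month.Equals("May"))
            .Select(n => n.Name);

        Console.WriteLine(string.Join(", ", filtered)); // Alice
    }
}
```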

To display 10 records per page in SSRS 2008, when the dataset contains 100 records in total:

Add a group to your report and use the following expression to group on:

=Floor((RowNumber(Nothing) - 1) / 10)


Also set the 'Page break at end' property for the group.
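The expression works because RowNumber(Nothing) is 1-based, so Floor((row - 1) / 10) yields the same group number for each run of 10 rows: group 0 for rows 1-10, group 1 for rows 11-20, and so on. The same arithmetic sketched in C# (integer division plays the role of Floor):

```csharp
using System;

public class Program
{
    public static void Main()
    {
        // Mirrors =Floor((RowNumber(Nothing) - 1) / 10) for the first row of each page.
        for (int row = 1; row <= 100; row += 10)
        {
            int group = (row - 1) / 10;
            Console.WriteLine("Rows {0}-{1} -> page group {2}", row, row + 9, group);
        }
    }
}
```

With 100 records this produces 10 groups, hence 10 pages of 10 records each.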

Test Connection failed because of an error in initializing provider.


Solution:

In the Start menu, under the SQL Server 2005/2008 program group, there is a "Configuration Tools" menu, and within it a "Configuration Manager" item. In Configuration Manager, click on the SQL Server 2005/2008 Network Configuration node to find the instance name of the SQL Server. Check whether TCP/IP is enabled or disabled; if it is disabled, enable it and that should solve the problem.

Thursday, May 13, 2010

Performance Comparison of WCF vs. ASMX

1. Introduction

Windows Communication Foundation (WCF) is a distributed communication technology that ships as part of the .NET Framework 3.0. This article concentrates on comparing the performance of WCF with existing .NET distributed communication technologies. A prerequisite for this article is a sufficient understanding of WCF. For an architectural overview of WCF please read "Windows Communication Foundation Architecture Overview", and to learn how to build services using WCF standard bindings please read "Introduction to Building Windows Communication Foundation Services" at http://msdn.microsoft.com/en-us/library/.

2. Goals

The goal of this article is to provide performance comparisons between WCF and other existing .NET distributed communication technologies. These technologies are:

  • ASP.NET Web Services (ASMX)
  • Web Services Enhancements (WSE)
  • .NET Enterprise Services (ES)
  • .NET Remoting

The scenarios and data presented in this article quantify the underlying cost of the different technologies. This data is useful in understanding the relation between these technologies and can be helpful in planning migrations between them. However, care should be taken in the conclusions drawn from the data presented in this article. The limiting performance factor in a well-designed Service Oriented Architecture (SOA) solution is most likely the service implementation itself and not the cost of the underlying communication technology. One must measure each application to determine its performance characteristics. Note that this article does not address performance best practices when using WCF. Rather, it endeavors to provide sufficient information to enable you to make informed performance decisions when you are using existing .NET distributed communication technologies as a basis.

3. Comparisons

All data presented in this article was collected using the same hardware configuration: four 2-way client systems were used to drive a server that was configured as a uni or quad processor. Two 1 Gbps network cards were employed to guarantee that the network was not the bottleneck for any of the scenarios. See Figure 14 for details of the topology employed.

The number of client processes used on the client systems was sufficient to ensure that the CPU on the server was completely saturated. The data collected and presented reflects the average of multiple convergent runs, and care was taken to make sure all data was highly repeatable and sustainable.

All the comparisons in this article are throughput comparisons, so the higher the value achieved, the better. In all the graphs, higher bars reflect better performance.

This article focuses on the server throughput of the .NET distributed communication technologies, defined as the number of operations per second that these technologies can sustain. An operation is a request and reply message with little processing done by the service. As mentioned in the introduction, but reiterated here as it is critically important, the business logic is expected to dominate the cost of a service in a well-constructed SOA solution. By leaving out business logic processing at the service, only the cost of the messaging infrastructure is measured.

The message payloads used are different based on the comparison scenario and are explained for each comparative technology.

3.1 ASP .NET Web Services (ASMX)

In this section the performance of ASP.NET Web services is compared with the performance of WCF. The scenario is request/reply between the client and the service. This is the typical message exchange pattern for both technologies. The request message in this scenario is required to send an integer. The reply message is comprised of an array of 1, 10 or 100 objects, each object being approximately 256 bytes long. The WCF object is an instance of a strongly typed data contract.

The function used to generate the message payload (objects) at the service is shown below:

Order[] GetOrders(int numOrders)
{
    Order[] orders = new Order[numOrders];
    for (int i = 0; i < numOrders; i++)
    {
        Order order = new Order();
        OrderLine[] lines = new OrderLine[2];
        lines[0] = new OrderLine();
        lines[0].ItemID = 1;
        lines[0].Quantity = 10;
        lines[1] = new OrderLine();
        lines[1].ItemID = 2;
        lines[1].Quantity = 5;
        order.orderItems = lines;
        order.CustomerID = 100;
        order.ShippingAddress1 = "012345678901234567890123456789";
        order.ShippingAddress2 = "012345678901234567890123456789";
        order.ShippingCity = "0123456789";
        order.ShippingState = "0123456789012345";
        order.ShippingZip = "12345-1234";
        order.ShippingCountry = "United States";
        order.ShipType = "Courier";
        order.CreditCardType = "XYZ";
        order.CreditCardNumber = "0123456789012345";
        order.CreditCardExpiration = DateTime.UtcNow;
        order.CreditCardName = "01234567890123456789";
        orders[i] = order;
    }
    return orders;
}
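The Order and OrderLine types themselves are not shown in the article. A minimal sketch of what the strongly typed data contracts might look like (the member names follow the code above; the attribute placement is an assumption, not taken from the article):

```csharp
using System;
using System.Runtime.Serialization;

[DataContract]
public class OrderLine
{
    [DataMember] public int ItemID;
    [DataMember] public int Quantity;
}

[DataContract]
public class Order
{
    [DataMember] public OrderLine[] orderItems;
    [DataMember] public int CustomerID;
    [DataMember] public string ShippingAddress1;
    [DataMember] public string ShippingAddress2;
    [DataMember] public string ShippingCity;
    [DataMember] public string ShippingState;
    [DataMember] public string ShippingZip;
    [DataMember] public string ShippingCountry;
    [DataMember] public string ShipType;
    [DataMember] public string CreditCardType;
    [DataMember] public string CreditCardNumber;
    [DataMember] public DateTime CreditCardExpiration;
    [DataMember] public string CreditCardName;
}
```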

3.1.1 IIS Hosted Interoperable Basic Profile 1.0 Web Service

This section compares the performance of ASMX and WCF while they are hosted in IIS 6.0. In both cases, no security is used. The WCF binding used is the BasicHttpBinding. This standard binding uses HTTP as the transport protocol. The Basic Profile specification can be found at http://www.ws-i.org/Profiles/BasicSecurityProfile-1.0.html. ASP.NET 2.0, part of the .NET Framework 2.0, provides CLR attributes to ensure conformance to the Basic Profile. For WCF the BasicHttpBinding provides the same level of guarantees.

As shown in Figure 1, WCF has improved performance over ASMX. Three different operation signatures (payloads) are shown in the graph. In each case an integer is passed from the client to the server and an array of objects (1, 10 or 100) is passed back to the client. WCF outperforms ASMX by 27%, 31% and 48% for 1, 10 and 100 objects in a message, respectively.

The graph in Figure 2 shows the throughput comparison of WCF and ASMX for the same scenario as Figure 1 but running on a quad processor. The throughput performance of WCF is better than ASMX by 19%, 21% and 36% for 1, 10 and 100 objects in a message, respectively. Note that the software used was not modified between the two configurations and a single service was exposed on the server. Comparing the data in the preceding two charts, the inherent scalability of the technologies can be noticed.

http://i.msdn.microsoft.com/Bb310550.wcfperform01(en-us,MSDN.10).gif

Figure 1

http://i.msdn.microsoft.com/Bb310550.wcfperform02(en-us,MSDN.10).gif

Figure 2

3.1.2 IIS Hosted Interoperable Basic Profile 1.0 Web Service using Transport Security

In this section, the performance of WCF is compared with the performance of ASMX, with both operating over HTTPS. WCF uses the BasicHttpBinding for this scenario. Figure 3 shows that the performance of WCF is better than ASMX when using transport-level security. WCF outperforms ASMX by 16%, 18% and 26% for 1, 10 and 100 objects in a message, respectively.

Figure 4 shows that the performance of WCF is better than ASMX by 5%, 12% and 13% for 1, 10 and 100 objects in a message respectively for a quad processor scenario.

http://i.msdn.microsoft.com/Bb310550.wcfperform03(en-us,MSDN.10).gif

Figure 3

http://i.msdn.microsoft.com/Bb310550.wcfperform04(en-us,MSDN.10).gif

Figure 4

3.2 Web Services Enhancements (WSE)

In this section, the throughput of WCF is compared with the throughput of Web Services Enhancements. The comparison in this case is with WSE 2.0 but it should be noted that the performance of WSE 2.0 and 3.0 are similar for this payload. The method signatures and payload used for this scenario are identical to that employed in the ASMX scenarios (shown in Section 3.1).

3.2.1 IIS Hosted Interoperable Web Service using WS-Security

In this section, message-level security using X.509 certificates as the security credential is used. The WSHttpBinding, which implements the WS-Security 1.1 specification, is used in WCF. The transport protocol used is HTTP and the message exchange pattern remains request/reply.

Figure 5 shows WCF is much more efficient than WSE. The throughput of WCF is nearly 4 times better than that of WSE. The main reason for this is that WSE uses the System.Xml.XmlDocument class for message-level parsing, thereby loading the full message into memory at once, while WCF uses the streaming System.Xml.XmlReader class, which improves performance significantly.

Figure 6 compares the throughput of WCF and WSE 2.0 for quad processors. These results are similar to the performance gain achieved by WCF over WSE in the single processor scenario: WCF is nearly 4 times faster than WSE with full message security.

http://i.msdn.microsoft.com/Bb310550.wcfperform05(en-us,MSDN.10).gif

Figure 5

Figure 5 also illustrates the performance of another mechanism for securing messages in WCF: transport security with message credentials. This configuration combines transport-level security (HTTPS) and message-level credentials (for example, credentials in the SOAP message). To deploy this, the WCF (Message Credentials) workload uses the BasicHttpBinding. The chart shows that the single processor performance of WCF (Message Credentials) is better than WCF with WS-Security by 129%, 166% and 277% for 1, 10 and 100 messages, respectively. The corresponding numbers for the quad processor scenario are even better, showing an improvement of 126%, 156% and 248% for 1, 10 and 100 messages, respectively, over WCF with WS-Security.

As can be seen from the chart, transport security with message credentials provides improved performance while still allowing rich message-level credentials. The message-level credentials include timestamp processing, canonicalization and signature processing. The message protection (signature, encryption, replay detection and other protection mechanisms) is still done at the transport byte stream level, below the individual message boundaries. There is a WS-Security header, but it contains only a timestamp, a security token and a signature, using that security token, over the timestamp. In contrast, when WCF uses full WS-Security, the message protection is done as a message-level transformation, with signing and encryption applied to the XML fragments for the headers and body. Also, the WS-Security header contains all the required security metadata as XML constructs. This extra XML-aware security processing and the larger size of the security header account for the performance difference. You have to consider the tradeoff between performance and the security features available for the specific application in which you might want to use this WCF setting.

http://i.msdn.microsoft.com/Bb310550.wcfperform06(en-us,MSDN.10).gif

Figure 6

3.3 .NET Enterprise Services (ES)

In this section, the throughput of Enterprise Services (ES) is compared with WCF using two different service operation signatures and payloads, referred to as the primitive and order messages. The primitive message is of a primitive type, which allows ES to exercise its fast serialization path. The order message is a typical scenario that imitates an online book order and is approximately 512 bytes. The request/reply message exchange pattern is used for these comparisons.

The signature of the primitive payload is as follows:

string TransferFunds(int source, int destination, Decimal amount);

Here the service just returns a string "successful" or "failure".

For the order message, the following service code is used:

static public ProductInfo CreateProductInfo(int count)
{
    ProductInfo productInfo = new ProductInfo();
    productInfo.TotalResults = count.ToString();
    productInfo.TotalPages = "1";
    productInfo.ListName = "Books";
    productInfo.Details = new Details[count];
    for (int x = 0; x < count; x++)
    {
        productInfo.Details[x] = GetDetail();
    }
    return productInfo;
}

static Details GetDetail()
{
    Details details = new Details();
    details.Url = "http://www.abcd.com/exec/obidos/ASIN/043935806X/qid=1093918995/sr=ka-1/ref=pd_ka_1/103-9470301-1623821";
    details.Asin = "043935806X";
    details.ProductName = "Any Book Available";
    details.Catalog = "Books";
    details.ReleaseDate = "07/01/2003";
    details.Manufacturer = "Scholastic";
    details.Distributor = "Scholastic";
    details.ImageUrlSmall = "http://images.abcd.com/images/P/043935806X.01._PE60_PI_SZZZZZZZ_.jpg";
    details.ImageUrlMedium = "http://images.abcd.com/images/P/043935806X.01._PE60_PI_MZZZZZZZ_.jpg";
    details.ImageUrlLarge = "http://images.abcd.com/images/P/043935806X.01._PE60_PI_LMZZZZZZZ_.jpg";
    details.ListPrice = "29.99";
    details.OurPrice = "12.00";
    details.UsedPrice = "3.95";
    details.Isbn = "043935806X";
    details.MpaaRating = "";
    details.EsrbRating = "";
    details.Availability = "Usually ships within 24 hours";
    return details;
}

In these scenarios, the WCF service is self hosted and employs the NetTcpBinding.

Note: You can use IIS 7.0, which ships with Windows Vista, for hosting TCP services. In that case, the performance achieved is slightly lower than in the self-hosted case.

3.3.1 Self-Hosted Request/Reply TCP Application

This section compares WCF with ES for the two payloads previously discussed, without any security. Figure 7 shows that sometimes ES is faster while other times WCF is faster. The performance of ES is better by 21% for the primitive message payload, where its fast serializer can be used (possible only for a handful of primitive types such as integers), but WCF outperforms it by 149% for the order message payload.

Figure 8 shows the same benchmark and payload comparison on a quad processor. As WCF scales better than ES, WCF is faster than ES by 7% for primitive message even though ES can utilize its fast serialization path. For the order message, WCF is faster than ES by 104%.

http://i.msdn.microsoft.com/Bb310550.wcfperform07(en-us,MSDN.10).gif

Figure 7

http://i.msdn.microsoft.com/Bb310550.wcfperform08(en-us,MSDN.10).gif

Figure 8

3.3.2 Self-Hosted Secure Request/Reply TCP Application

In this section, the performance of WCF and ES is compared for the same message payloads as in the previous section (Section 3.3.1), with security enabled. Specifically, transport-level SSL security is employed and an ASP.NET role principal is used for authorization. Figure 9 shows that on a uni processor, ES is faster than WCF by 24% for the primitive message type, while for the order message type WCF is faster than ES by 69%.

Figure 10 shows that for quad processor, ES is better than WCF by 16% for the primitive message type and for the order message type WCF is faster by 37%.

http://i.msdn.microsoft.com/Bb310550.wcfperform09(en-us,MSDN.10).gif

Figure 9

http://i.msdn.microsoft.com/Bb310550.wcfperform10(en-us,MSDN.10).gif

Figure 10

3.3.3 Secure Transacted Request/Reply TCP Application

In the previous two sections (Sections 3.3.1 and 3.3.2), the service did little more than create the objects that were returned to the client. In this section, the throughput of WCF is compared with .NET ES when the service implementations use a database transaction. Note that the transaction is not flowed from the client but is created and used within the service. The purpose of this scenario is to demonstrate that any substantial service implementation dominates the cost of the infrastructure, independent of the technology used to deploy it. Hence the comparison is done only for the single processor scenario and only for the primitive message type.

In Figure 11, WCF performance is compared with the performance of .NET Enterprise Service for a primitive message payload. As expected, the throughput of this scenario is significantly lower than the previous scenario because transactions are being used. Also as expected, the performance of the two technologies is nearly identical with WCF having slightly better performance.

http://i.msdn.microsoft.com/Bb310550.wcfperform11(en-us,MSDN.10).gif

Figure 11

3.4 .NET Remoting

This section compares the performance of WCF and .NET Remoting when communication is required across processes on the same machine. Three different-sized payloads, each an array of bytes, are used for this comparison. The following interface illustrates the service operation signature:

[ServiceContract]
public interface IRemoteObject
{
    [OperationContract]
    byte[] GetRBytes(int NumBytes);
}

The size of the message payload returned is determined by "NumBytes", which for the data below is 128 bytes, 4k or 256k. The NetNamedPipeBinding is employed without any security for this scenario.

3.4.1 Request/Reply Named Pipe Application

The cross-process named pipe is used as the transport, with request/reply as the message exchange pattern. As seen in Figure 12, WCF outperforms .NET Remoting by 29% and 30% for message payloads of 128 bytes and 4k bytes, respectively. As the payload grows in size, the performance of the technologies converges, so that for the 256k byte array the performance is nearly identical.

In Figure 13, the corresponding data for quad processors is shown. The throughput of WCF is better by 38%, 18% and 28% for message payloads of 128 bytes, 4k bytes and 256k bytes, respectively.

http://i.msdn.microsoft.com/Bb310550.wcfperform12(en-us,MSDN.10).gif

Figure 12

http://i.msdn.microsoft.com/Bb310550.wcfperform13(en-us,MSDN.10).gif

Figure 13

4. Conclusion

When distributed applications written with ASP.NET Web Services, WSE, .NET Enterprise Services and .NET Remoting are migrated to WCF, performance is at least comparable to that of the existing Microsoft distributed communication technologies. In most cases, WCF performs significantly better than the existing technologies. Another important characteristic of WCF is that its throughput scales well from a uni processor to a quad processor.

To summarize the results, WCF is 25% to 50% faster than ASP.NET Web Services, and approximately 25% faster than .NET Remoting. The comparison with .NET Enterprise Services is load dependent: in one case WCF is nearly 100% faster, but in another scenario it is nearly 25% slower. For WSE 2.0/3.0 implementations, migrating to WCF provides the most significant performance gain, of almost 4x.

5. Appendix

5.1 Description of Bindings

System-provided bindings are used to specify the transport protocols, encoding, and security details required for clients and services to communicate with each other. The system-provided WCF bindings are listed in the table. More details on the bindings can be found in the WCF documentation. WCF also allows you to define your own custom bindings.

BasicHttpBinding: A binding that is suitable for communication with WS-Basic Profile conformant Web Services, such as ASMX-based services. This binding uses HTTP as the transport and Text/XML as the message encoding.

WSHttpBinding: A secure and interoperable binding that is suitable for non-duplex service contracts.

WSDualHttpBinding: A secure and interoperable binding that is suitable for duplex service contracts or communication through SOAP intermediaries.

WSFederationHttpBinding: A secure and interoperable binding that supports the WS-Federation protocol, enabling organizations that are in a federation to efficiently authenticate and authorize users.

NetTcpBinding: A secure and optimized binding suitable for cross-machine communication between WCF applications.

NetNamedPipeBinding: A secure, reliable, optimized binding that is suitable for on-machine communication between WCF applications.

NetMsmqBinding: A queued binding that is suitable for cross-machine communication between WCF applications.

NetPeerTcpBinding: A binding that enables secure, multi-machine communication.

MsmqIntegrationBinding: A binding that is suitable for cross-machine communication between a WCF application and existing MSMQ applications.
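As a sketch of how a binding is chosen in code (the service contract, address and class names here are illustrative, not taken from the article), a minimal self-hosted service using BasicHttpBinding might look like this:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) { return text; }
}

public class Program
{
    public static void Main()
    {
        // Self-host with BasicHttpBinding: HTTP transport, Text/XML encoding,
        // interoperable with Basic Profile clients such as ASMX proxies.
        using (var host = new ServiceHost(typeof(EchoService),
                                          new Uri("http://localhost:8000/echo")))
        {
            host.AddServiceEndpoint(typeof(IEchoService), new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("Service running; press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```

Switching to NetTcpBinding or NetNamedPipeBinding is essentially a one-line change to the AddServiceEndpoint call (plus a matching base address scheme), which is what allows the comparisons in this article to use largely the same service code across transports.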

5.2 Performance Test Machine Configuration

http://i.msdn.microsoft.com/Bb310550.wcfperform14(en-us,MSDN.10).gif

Figure 14

Figure 14 shows the machine configuration used: a single server and four client machines connected over two 1 Gbps Ethernet network interfaces. The server is a quad processor AMD 64 2.2 GHz x86 machine running Windows Server 2003 SP1. Each of the client machines is a dual processor AMD 64 2.2 GHz x86 machine running the same operating system as the server. The system CPU utilization is maintained at nearly 100%. All the scenarios that required hosting used an Internet Information Services (IIS) 6.0 server. For the single processor scenarios, the server was booted as a single processor machine.

Disclaimer: The information furnished here is taken from the Microsoft white paper on WCF performance.