SOA has been quite popular in recent years and is something to strive for. However, there are many technology options for creating and hosting services, and there exist many SOA definitions and ways of implementing SOA standards. We have bumped our heads many times using WCF. Below are my opinions on WCF and two of the other main technologies used to build services.
WCF requires the creation of DataContracts and OperationContracts and follows an RPC-style architecture. It has for many years been the de facto standard for implementing services on the Microsoft stack.
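To show what those contracts look like, here is a minimal sketch of a DataContract/OperationContract pair (the Customer/ICustomerService names are my own, purely illustrative):

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Data contract: defines the wire format for a customer.
[DataContract]
public class Customer
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// Operation contract: an RPC-style service interface.
[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    Customer GetCustomer(int id);
}
```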
Pros:
- Huge support in the .NET community, as it is included by default in Visual Studio templates.
- Big support from Microsoft.
- Lots of resources available.
- Add Service Reference is very easy to use.
- Many features
Cons:
- Inflexible and brittle
- Difficult configuration and setup
- Huge learning curve
- Difficult to debug and host
- Difficult to add REST and JSON support
- Very limited content negotiation support
- Lots of configuration required
- Add Service Reference causes many problems with code re-use.
- As a developer, I hate working with this technology.
ASP.NET Web API is Microsoft’s answer to supporting REST. It uses controllers to set up REST endpoints and is used within a web project only.
Pros:
- Built-in Visual Studio templates.
- Similar programming style to MVC controllers.
- Easy to use and reasonably easy learning curve.
- Lots of resources available.
- Supports JSON, XML and FormData content types out of the box.
Cons:
- Very new technology.
- Complicated Route configuration.
- Dependency Injection is difficult.
- Only REST is supported – you need to maintain separate code bases to support REST and SOAP.
- No code re-use between MVC types (controllers) and WebAPI types.
- Limited hosting options.
- Difficult to add custom content negotiation types.
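For reference, the controller style being compared here looks roughly like this (controller name and route are illustrative; by convention the method name maps to the HTTP verb):

```csharp
using System.Web.Http; // ASP.NET Web API

// Hypothetical controller: GET api/customers/5 is routed to Get(5).
public class CustomersController : ApiController
{
    public string Get(int id)
    {
        // A real controller would fetch data; this just echoes the id.
        return "Customer " + id;
    }
}
```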
ServiceStack requires a bit of a different mindset, as it is built using HTTP handlers. It is an open source project (stable since 2008) built around SOA and integration best practices. It is built around message contracts (rather than OperationContracts), an approach many big companies – such as Amazon and Google – adhere to.
Benefits of message contracts: https://github.com/ServiceStack/ServiceStack/wiki/Advantages-of-message-based-web-services
For more info: https://github.com/ServiceStack/ServiceStack/wiki
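As a sketch of the message-based style: the request and response DTOs themselves are the contract, and a service class handles the request DTO. The names below are my own, and the `Any` method naming follows the newer ServiceStack API – exact base types vary by version:

```csharp
// Request DTO: the message itself is the contract.
public class GetCustomer
{
    public int Id { get; set; }
}

// Response DTO.
public class GetCustomerResponse
{
    public string Name { get; set; }
}

// ServiceStack wires a request DTO to a service class for you;
// this plain class just shows the shape of that handler.
public class CustomerService
{
    public GetCustomerResponse Any(GetCustomer request)
    {
        // Lookup is stubbed out for the sketch.
        return new GetCustomerResponse { Name = "Customer " + request.Id };
    }
}
```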
Pros:
- Metadata page that exposes all services and usages.
- Supports content negotiation
- Built-in data and content negotiation types: JSON, XML, CSV, JSV, SOAP, ProtoBuf, HTML, Text, byte, Stream, IStreamWriter
- Easily add custom content negotiation types.
- Multiple hosting options – Website, Console App, Windows Service
- One of the fastest serializers in .NET.
- One of the fastest ORMs in .NET.
- Plug-in model that allows the addition of features on global or service level (pipe and filter model).
- Support for most clients and formats.
- OSS since 2008 – free, with many contributions
- Strong security model – built in support for OpenId 2.0 providers.
Cons:
- OSS – most corporations prefer paid support structures.
- Lack of built-in Visual Studio templates – need to use NuGet
My conclusion is that ServiceStack looks very promising, and as a development team we would love to throw away WCF because of all the issues we have experienced over the years. We are currently building some prototypes, and so far it looks good.
From listening to and watching the guys from DNR (.NET Rocks) and DNR TV, I have decided to try out the DevExpress CodeRush tools, which come with Refactor! Pro. I must say I am quite impressed. It has all the refactoring features of ReSharper, but with some added benefits.
It adds some nice visuals to the refactoring capabilities, which help a developer while coding. One thing that stands out is the CodeRush tool panel. I normally dock this on the left-hand side (luckily I have a wide screen). It lets you see which shortcut keys and refactoring commands are available at any given time, and as you type it filters the results. This is especially helpful when learning to use CodeRush.
They say that after about five days of using it you can increase productivity by about 25%. Must say, it's worth a shot.
CodeRush supports refactoring for a large number of technologies including:
- C#, VB.NET, C++
- XML (especially nice for config files and WCF configurations)
It comes with a 30-day trial.
I have a couple of hobbies that I pursue. For those of you that like games (PC or Xbox360) please check out my other blog at:
Hope you guys have some fun with it.
Wow. I just watched a DNR TV show (http://www.dnrtv.com/default.aspx?showNum=115) by Billy Hollis in which he illustrates how to build great user experiences using WPF.
This is quite a mind shift from my usual black-and-white screens (and some will know me for my orange ones :)) using Microsoft Sans Serif fonts. The combination of layout, animations, design and patterns is mind-blowing. You need to see it to realize how far technology has come from traditional Windows Forms and ASP.NET forms.
You simply cannot focus purely on integration and business logic anymore. We will need to consider UI integration and patterns as well. I think the MVVM pattern will be a good fit here, as will WPF's powerful routing capabilities and the power of layouts (especially list boxes).
We will seriously need to consider these kinds of principles in all new projects going forward that use Silverlight or WPF.
I might have become a MEF head… and I know what you are thinking, and the answer is no – I have not resorted to drugs to fix my coding problems. MEF stands for Managed Extensibility Framework.
I was watching an interesting DNR TV show yesterday (http://www.dnrtv.com/default.aspx?showNum=130) on MEF. Unfortunately Glenn Block did not provide a very good code example of how to implement MEF, but he did get the concepts across. I must say, I like the idea.
Basically, MEF allows you to “extend” existing sealed applications by dropping in DLLs. When an application needs a piece of functionality, it signals that it needs an assembly that does ‘X’, and MEF will go and find all assemblies that can fulfill that requirement.
This is how it works: let's say you have an application that needs to perform some sort of comparison (of strings, Int32s, business objects, etc.). Instead of hard-coding those comparisons into your application, you can use the Specification pattern and inject an interface (let's say IComparison) into the class that needs to use that comparison. This can also be done using a Service Locator.
Thus, a shared assembly can be used to store the common interfaces you need across your applications. One or many assemblies can reference this interface assembly and provide implementations. Let's call one of these assemblies ‘Extensions’. This assembly provides two implementations of IComparison – StringComparison and Int32Comparison. These two classes implement the IComparison interface and its methods (like Compare(object1, object2)).
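To make that concrete, here is a minimal sketch of the shared interface and the two implementations. The names follow the post; the int return value (negative/zero/positive, mirroring IComparer semantics) is my assumption, since the post does not specify one:

```csharp
// Shared assembly: the contract every extension implements.
public interface IComparison
{
    int Compare(object object1, object object2);
}

// Extensions assembly: concrete implementations.
public class StringComparison : IComparison
{
    public int Compare(object object1, object object2)
    {
        return string.Compare((string)object1, (string)object2);
    }
}

public class Int32Comparison : IComparison
{
    public int Compare(object object1, object object2)
    {
        return ((int)object1).CompareTo((int)object2);
    }
}
```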
MEF resides in the System.ComponentModel.Composition namespace. In the Extensions assembly you add a reference to it and mark the classes you want MEF to discover with an [Export] attribute. This attribute has various overloads:
- Just [Export()] makes this specific class discoverable.
- [Export(typeof(IComparison))] makes this class discoverable through the IComparison interface.
- [Export(“SomeCategory”)] makes this class discoverable via a custom string contract name.
A class is not limited to one category, however; you can assign multiple [Export] attributes to the same class if you require, giving it different categories.
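Applying those attributes to the Int32Comparison class from the earlier example (the interface is repeated here so the snippet stands alone):

```csharp
using System.ComponentModel.Composition;

public interface IComparison
{
    int Compare(object object1, object object2);
}

// Discoverable through the IComparison contract...
[Export(typeof(IComparison))]
// ...and also through a custom string contract name.
[Export("SomeCategory")]
public class Int32Comparison : IComparison
{
    public int Compare(object object1, object object2)
    {
        return ((int)object1).CompareTo((int)object2);
    }
}
```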
Now, the calling assembly (that needs to use the comparison) will have a reference to the shared assembly (where IComparison resides). It DOES NOT need a reference to the Extensions assembly.
MEF uses Catalogs to define where your assemblies reside – you can add them manually, use a DirectoryCatalog, etc. – all of these are built in, so you can simply tell MEF that your assemblies reside in some directory and it will do the loading for you. You then instruct MEF that you require an implementation of IComparison, and MEF loads those implementations (in our case, two of them) into a variable you have marked with an [Import] attribute ([ImportMany] when you expect a collection). You can then use these references as you would any class.
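The consuming side, sketched with a DirectoryCatalog (the directory path and class names are illustrative; [ImportMany] is used here because we expect multiple implementations):

```csharp
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IComparison
{
    int Compare(object object1, object object2);
}

public class ComparisonConsumer
{
    // MEF fills this with every discovered IComparison implementation.
    [ImportMany]
    public IEnumerable<IComparison> Comparisons { get; set; }

    public void Compose()
    {
        // Point MEF at a directory of extension assemblies (path is illustrative).
        var catalog = new DirectoryCatalog(@".\Extensions");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this); // satisfies the [ImportMany] above
    }
}
```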
As you can see from this example, MEF can be very useful for Dependency Injection/Inversion of Control, where you want a Service Locator to fetch an implementation of an interface for you. MEF lets you do this without worrying about the details of dynamically loading DLLs, using reflection, etc. to get those implementations.
Check it out at http://www.codeplex.com/MEF
I read a very interesting article yesterday on event sourcing. We are busy with a POC on this and will probably implement it on one of our big upcoming projects. At its heart is an event store:
- It persists all events it receives (in order) to some repository (in memory, a database, etc.).
- It generates events based on the incoming events and forwards them to the EventProcessor.
- On startup, it fetches all persisted events from the repository and reconstitutes the state of the domain.
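A toy in-memory version of that event store – my own sketch, not from the article, with the EventProcessor reduced to a callback:

```csharp
using System;
using System.Collections.Generic;

// Minimal in-memory event store: append in order, replay on startup.
public class InMemoryEventStore
{
    private readonly List<object> _events = new List<object>();

    // Persist an incoming event in arrival order, then forward it on.
    public void Append(object evt, Action<object> eventProcessor)
    {
        _events.Add(evt);     // persisted (in order)
        eventProcessor(evt);  // forwarded to the EventProcessor
    }

    // On startup: replay every persisted event to rebuild domain state.
    public void Replay(Action<object> apply)
    {
        foreach (var evt in _events)
            apply(evt);
    }
}
```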
When developers think of code generation, they generally think of ORM tools like LINQ to SQL or Entity Framework. My feelings on these technologies are, putting it mildly, “crap”.
I was listening to an interesting podcast this morning on DotNetRocks with Peter Vogel (http://www.dotnetrocks.com/default.aspx?showNum=453) in which he explains other uses of code generation in .NET.
An interesting point he made (and one I think is very applicable) was avoiding writing redundant code. He was not referring to ORM, but rather to other chores like reading information from a web.config or app.config file. The example he gave was generating code for connection string settings in an app.config file: every time he saves an app.config file, his code generation tool kicks off and generates a ConfigurationManager class that exposes the settings in the config file.
An example is having multiple connection strings in the config file – for instance, DevConnectionString and StagingConnectionString. His code generation tool then allows him to access these properties with type safety and early binding. Thus, in code, he can call ConfigurationManager.ConnectionStrings.DevConnectionString or ConnectionStrings.StagingConnectionString, avoiding the annoying problem of misspelling a connection string name.
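The generated wrapper would look something like this – written by hand here to show the shape a generator might emit. The class name TypedConnectionStrings is mine, chosen to avoid clashing with System.Configuration.ConfigurationManager:

```csharp
using System.Configuration;

// One typed property per <connectionStrings> entry in app.config,
// so a misspelled name becomes a compile error, not a runtime surprise.
public static class TypedConnectionStrings
{
    public static string DevConnectionString
    {
        get { return ConfigurationManager.ConnectionStrings["DevConnectionString"].ConnectionString; }
    }

    public static string StagingConnectionString
    {
        get { return ConfigurationManager.ConnectionStrings["StagingConnectionString"].ConnectionString; }
    }
}
```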
Maybe something to take a look at and invest in. He mentioned a variety of code generation tools in .NET, including T4.
Request/Reply: An example is needing customer information that resides in a different CRM system. A request message is sent to the CRM system requesting the information, and the CRM system sends a reply containing it.
One-way: An example is an event that occurred in your system that you want to broadcast to other systems. These systems do not have to acknowledge or reply to your broadcast.
Messaging consists of a couple of main concepts:
Channels: Messaging systems transmit data through a Message Channel – a virtual pipe that connects a sender and a receiver.
Messages: A message is an atomic packet of data that can be transmitted on a channel.
Pipes and Filters: Break up the various operations between sender and receiver by chaining operations together using pipes and filters.
Routing: A router receives a message on an input channel, determines how to navigate the channel topology and directs the message to the final receiver.
Transformation: When applications do not agree on the format for a piece of data, use a transformer to map one message type to another.
Endpoints: An endpoint bridges the gap between how the application works and how the messaging system works.
To illustrate, we can use the request/reply example. An ordering system needs information stored in a CRM system when placing orders. The ordering system creates a request message (a CustomerId=1 request) and places it on a channel (MSMQ, for example). The router receives this message off the queue, determines the type of message and routes it to the CRM system endpoint. The endpoint receives the message and uses a Channel Adapter (discussed in a future post) to act as an anti-corruption layer and transform the message into an object the CRM system understands. The CRM system processes the message, retrieves the customer information and sends it via a response message to the Reply Address specified in the original request message. The router picks up the response message, performs any needed operations on it and sends it off to the ordering-system endpoint.
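The request/reply flow above boils down to a pair of messages like these (field names are illustrative): the request carries a reply address so the CRM side knows which channel to answer on, and the reply carries the customer id back as a correlation identifier.

```csharp
// Request message placed on the channel by the ordering system.
public class CustomerInfoRequest
{
    public int CustomerId { get; set; }
    public string ReplyAddress { get; set; } // e.g. an MSMQ queue name
}

// Reply message sent by the CRM system to the reply address.
public class CustomerInfoReply
{
    public int CustomerId { get; set; }      // correlates reply to request
    public string CustomerName { get; set; }
}
```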
In further posts I will go into detail on each of these topics, illustrating why they are important and when and where you will use them. I will also show a couple of different flavors of each.