Archives
-
LINQ over OOXML: Loving it
In a previous blog post I showed how to efficiently iterate over a WordprocessingML document's content when creating an object model.
One of the interesting quirks of WordprocessingML is that a section's content (a section defines information like page size and orientation), instead of being nested inside a section element, is determined by a marker section element in the section's final paragraph. On top of that, the final section of a document is handled completely differently: its section element instead appears on the body, at the very end of the document.
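To make that concrete, here is a trimmed-down sketch of the markup (the element names are real WordprocessingML; everything else is elided):
<w:body>
  <w:p>
    <w:pPr>
      <!-- this sectPr marks the END of the first section and holds its page setup -->
      <w:sectPr> ... </w:sectPr>
    </w:pPr>
  </w:p>
  <w:p> ... </w:p>
  <!-- the final section's properties sit directly on the body, after all content -->
  <w:sectPr> ... </w:sectPr>
</w:body>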
While I suspect there are good reasons for doing this (my guess is the aim was to minimize the change to the document's XML structure when making updates; I'd love to hear from anyone who knows), it does make parsing the document and splitting content into sections more difficult. Fortunately, with the power of LINQ we can solve this problem in just a couple of statements.
Example
private List<Section> CreateSections(List<Content> content, SectionProperties finalSectionProperties)
{
    var sectionParagraphs =
        content.Select((c, i) => new { Paragraph = c as Paragraph, Index = i })
            // only want paragraphs with section properties
            .Where(o => o.Paragraph != null && o.Paragraph.Properties.SectionProperties != null)
            // get the section properties
            .Select(o => new { SectionProperties = o.Paragraph.Properties.SectionProperties, o.Index })
            // add the final section properties plus end index to the result
            .Union(new[] { new { SectionProperties = finalSectionProperties, Index = content.Count - 1 } })
            .ToList();

    List<Section> sections = new List<Section>();

    int previousSectionEndIndex = -1;
    foreach (var sectionParagraph in sectionParagraphs)
    {
        List<Content> sectionContent =
            content.Select((c, i) => new { Content = c, Index = i })
                .Where(o => o.Index <= sectionParagraph.Index && o.Index > previousSectionEndIndex)
                .Select(o => o.Content)
                .ToList();

        Section section = new Section(this, sectionContent, sectionParagraph.SectionProperties);
        sections.Add(section);

        previousSectionEndIndex = sectionParagraph.Index;
    }

    return sections;
}
The first LINQ statement queries the document content and gets all the paragraphs that have associated SectionProperties, along with their positions within the content, returning that information in an anonymous type. The position comes from the Select method, which has an overload that returns each item's position as well as the item itself. Since the final section properties object lives outside the content, and therefore not in a paragraph, it is unioned onto the end of the result with the end of the content collection as its position.
Now that we have all of the sections and their end positions, we loop over the query's result and create a Section object, passing it the SectionProperties and the document content that lies between that section's end index and the previous section's end index, again using the Select overload that returns an element's position to find the wanted content.
And that is pretty much it. There is nothing here that couldn't be achieved in C# 2.0, but between LINQ, lambda expressions, anonymous types and type inference, C# 3.0 has probably halved the amount of code that would otherwise be required, and made what remains much more concise and understandable. I'm definitely looking forward to using LINQ more in the future.
-
LINQ to XML over large documents
I have been parsing WordprocessingML OOXML over the past week using LINQ to XML and it has been a great learning experience. LINQ to XML is a clean break from the somewhat antiquated DOM that we all know and tolerate, and the new API provides many improvements over the DOM based XmlDocument.
Probably the most talked about change is the functional approach that LINQ to XML encourages. Building a document and querying data can often be accomplished in a single statement compared to the sprawling imperative code that XmlDocument would require.
A less visible and less talked about improvement in LINQ to XML is its ability to work with large documents. DOM is notorious for the amount of memory it consumes when loading large amounts of XML. A typical solution in the past was to use the memory efficient XmlReader object but XmlReader is forward only and frankly a pain in the butt to use in nontrivial situations.
LINQ to XML memory usage and XElement streaming
LINQ to XML brings two notable improvements to working with large documents. The first is a reduction in the amount of memory required. LINQ to XML stores XML documents in memory more efficiently than DOM.
The second improvement is LINQ to XML's ability to combine the document approach with the XmlReader approach. Using the static method XNode.ReadFrom, which takes an XmlReader and returns an XNode from the reader's current position, we can create nodes one by one and work on them as needed. You still aren't loading the entire document into memory, but you get the ease of use of working with a document. The best of both worlds! Let's see an example...
Example: Streaming LINQ to XML over OOXML
WordprocessingML is split into about a dozen separate files, most of which are quite small and can easily be loaded in their entirety into an XDocument. The exception is the main document file. This file stores the content of the document and has the potential to grow from just a few kilobytes for Hello World to 100MB+ in the case of a document with 10,000 pages. Obviously we want to avoid loading a huge document like this if we can help it. This example compares code that loads the entire document at once with code that loads it one piece at a time.
Before:
public void LoadMainDocument(TextReader mainDocumentReader)
{
    if (mainDocumentReader == null)
        throw new ArgumentNullException("mainDocumentReader");

    XDocument mainDocument = XDocument.Load(mainDocumentReader);

    XNamespace ns = Document.Main;

    XElement bodyElement = mainDocument.Root.Element(ns + "body");

    List<Content> content = DocumentContent.Create(this, bodyElement.Elements());
    SectionProperties finalSectionProperties = new SectionProperties(this, bodyElement.Element(ns + "sectPr"));

    // divide content up into sections
    _sections.AddRange(CreateSections(content, finalSectionProperties));
}
Here you can see the entire document being loaded into an XDocument with XDocument.Load(mainDocumentReader). The problem, of course, is that the document could potentially be quite large and cause an OutOfMemoryException. XNode.ReadFrom to the rescue...
After:
public void LoadMainDocument(TextReader mainDocumentReader)
{
    if (mainDocumentReader == null)
        throw new ArgumentNullException("mainDocumentReader");

    XNamespace ns = Document.WordprocessingML;

    List<Content> content = null;
    SectionProperties finalSectionProperties = null;

    using (XmlReader reader = XmlReader.Create(mainDocumentReader))
    {
        while (reader.Read())
        {
            // move to body content
            if (reader.LocalName == "body" && reader.NodeType == XmlNodeType.Element)
            {
                content = DocumentContent.Create(this, GetChildContentElements(reader));

                if (reader.LocalName != "sectPr")
                    throw new Exception("Final section properties element expected.");

                finalSectionProperties = new SectionProperties(this, (XElement)XNode.ReadFrom(reader));
            }
        }
    }

    // divide content up into sections
    _sections.AddRange(CreateSections(content, finalSectionProperties));
}

private IEnumerable<XElement> GetChildContentElements(XmlReader reader)
{
    // move to the first child
    reader.Read();

    while (true)
    {
        // skip whitespace between elements
        reader.MoveToContent();

        // break on the end of document section
        if (reader.LocalName == "sectPr")
            yield break;

        yield return (XElement)XNode.ReadFrom(reader);
    }
}
Now, rather than loading the entire document into an XDocument, XElements are created from the XmlReader as needed. Each one can be used and queried before falling out of scope and becoming available for garbage collection, avoiding the danger of running out of memory. Even though this is quite different to what we were doing previously, no code outside of this method needed to be modified!
-
Getting started with OOXML
It can be hard knowing where to get started when working with a new technology. I recently commenced work on a project heavily involving OOXML, and I thought I'd share the websites and resources I found most useful with other people just starting out.
Open XML Developer - http://openxmldeveloper.org/
Open XML Developer is the best all-in-one OOXML site on the web. It features OOXML news, articles, examples and an active community. If you have questions that the articles don't answer, the site also has forums on just about every OOXML topic you could think of.
Microsoft SDK for Open XML Formats - http://msdn2.microsoft.com/en-us/library/bb448854.aspx
This is Microsoft's SDK for working with Open XML. Right now the name is slightly confusing, as the SDK only provides an API over the OOXML package, not the OOXML file formats themselves. You are able to read a docx, for example, and pick out all the individual style, formatting and document parts; but the actual part contents are still XML that you must read and write yourself. The SDK is still in preview at the moment, so I'm sure that support for the markup languages will improve as time goes on.
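As a rough illustration of what that means in practice, here is a minimal sketch using System.IO.Packaging from .NET 3.0 (requires a reference to WindowsBase; the file name is made up). You can pull out the main document part easily enough, but its contents are just WordprocessingML for you to parse yourself:
using System;
using System.IO;
using System.IO.Packaging;
using System.Xml;
using System.Xml.Linq;

class PartDump
{
    static void Main()
    {
        // a docx package is just a zip archive of parts
        using (Package package = Package.Open("example.docx", FileMode.Open, FileAccess.Read))
        {
            // the main document part lives at a well-known location
            PackagePart part = package.GetPart(new Uri("/word/document.xml", UriKind.Relative));

            // the part contents are raw XML that you still read and write yourself
            using (XmlReader reader = XmlReader.Create(part.GetStream()))
            {
                XDocument document = XDocument.Load(reader);
                Console.WriteLine(document.Root.Name);
            }
        }
    }
}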
Open XML Explained - http://openxmldeveloper.org/articles/1970.aspx
Open XML Explained is the first book on Open XML development and is freely available to download. The book is 128 pages long and provides a good high level introduction to OOXML and the three main markup languages: WordprocessingML, SpreadsheetML and PresentationML.
Ecma Office Open XML specification - http://www.ecma-international.org/publications/standards/Ecma-376.htm
If you really want to dig into the details of OOXML, the specification is the best place to look. Although there has been much rending of garments and gnashing of teeth over the specification's 6000 page length, that page count includes introductions and primers. Also, the markup reference document, which is by far the largest of the specification documents, is padded out significantly by many elements and attributes being described a number of times.
- Part 1: Fundamentals (174 pages) - Gives an overview of Open XML packages and the parts that make up the markup languages.
- Part 2: Open Packaging Conventions (129 pages) - Goes into more detail on the Open XML package conventions.
- Part 3: Primer (472 pages) - Describes the markup languages and how they work. Recommended as a good introduction to OOXML.
- Part 4: Markup Language Reference (5219 pages) - Provides descriptions of every element and attribute. There is a lot of detail in this document, but repetition also contributes to its size. I have found using the document's links is a good way to navigate the content and find what you are looking for.
- Part 5: Markup Compatibility and Extensibility (43 pages) - Describes how additional markup can be added to the format while still conforming to the specification.
WinRAR - http://www.rarlab.com/
I’m sure there are better tools for this, but I have been using WinRAR to explore existing OOXML packages. Since the packages are just zip archives any zip tool will let you view the contents.
If you know of any other OOXML resources I’d love to hear about them.
-
Mindscape releases LightSpeed 1.1
The industrious people over at Mindscape have released version 1.1 of their LightSpeed domain modeling/ORM framework. I used LightSpeed while it was in beta with my first Silverlight application and it has definitely matured a lot since then.
There are a bunch of new features in the 1.1 release, including my pet feature request: default ordering of entities and child entity collections [:)] Now if you have a Purchase entity, its PurchaseLine collection can be explicitly ordered by line number when it loads. Cool!
You can find out more about LightSpeed here.
-
Utilities.NET 1.0 released
Utilities.NET is a collection of helper classes and components for quickly solving common .NET programming tasks.
The library is pretty large: currently 117 classes, 300-ish unit tests and many, many methods. When I'm developing I have a habit of throwing anything generic into helper classes, and Utilities.NET is the combination of many of those helpers from nearly 5 years of .NET development.
The code in Utilities.NET is a rather eclectic mix. A lot of it is the result of other projects I've worked on (e.g. the reflection stuff comes mostly from implementing a serializer for Json.NET) while other parts come from experimenting with .NET features (such as a lot of what is under threading [:)])
There is no way I could summarize the functionality of every class and static method in one post, so I recommend you download it and take a look for yourself. I plan to write some posts looking more closely at specific classes that I think are useful or interesting. In the meantime here is a brief summary of what Utilities.NET covers:
- Collections
- Configuration
- Type converters
- Database
- Files and streams
- Events
- Validation
- Reflection
- Resources
- Services
- Testing
- Text
- Threading
- Web and ASP.NET
- Xml
Documentation for Utilities.NET is currently quite light, with XML comments over about half of the library. In the future I plan to use Sandcastle to generate help files, but for now you are largely on your own [:)]
Utilities.NET CodePlex Project
Utilities.NET 1.0 Download - Utilities.NET source code and binaries
-
Sitemaps 1.1 released
GoogleSitemap.NET has been renamed and re-released as Sitemaps.NET 1.1. When I originally wrote it in 2005, XML sitemaps were a Google-only feature, but since then Microsoft, Yahoo and Google have gotten together to create a standard, hence the rename.
There are a couple of changes and bug fixes in 1.1, but nothing major.
- New feature - Ability to ignore URLs in your ASP.NET sitemap by adding the attribute sitemapsIgnore="true" (see the sketch after this list)
- Change - Renamed from GoogleSitemap.NET to Sitemaps.NET to reflect the new standard at http://www.sitemaps.org
- Change - Namespace updated to http://www.sitemaps.org/schemas/sitemap/0.9
- Bug fix - Fixed culture issues related to writing the decimal point in a URL's priority
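For example, a Web.sitemap using the new attribute might look like this minimal sketch (the URLs and titles are made up):
<siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0">
  <siteMapNode url="~/Default.aspx" title="Home">
    <!-- this node will be left out of the generated XML sitemap -->
    <siteMapNode url="~/Admin.aspx" title="Admin" sitemapsIgnore="true" />
  </siteMapNode>
</siteMap>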
While I was putting Visual Studio's rename refactoring through its paces I also changed the license, created a Sitemaps.NET page on my blog here and created a CodePlex project containing the source. Links below.
-
Json.NET 1.3 + New license + Now on CodePlex
Since the last release of Json.NET, JSON's popularity and usage has continued to grow, with many people taking the time to report bugs and make suggestions. This latest version addresses all known bugs and adds a few of the most requested features.
- New feature - Added JsonPropertyAttribute. This attribute lets you define the name of a JSON property when serializing and deserializing. It is equivalent to XmlElementAttribute for XML serialization.
- New feature - JsonSerializer now supports deserializing read only collections and has better support for custom collections.
- Change - Improved JavaScript string escaping.
- Bug fix - XmlNodeConverter now correctly writes integers, floats, booleans and dates in elements.
- Bug fix - JSON text that ends with whitespace no longer causes the JsonReader to enter an infinite loop.
- Bug fix - Fix for culture issues with DateTimes and doubles when deserializing.
What's New
The most notable new feature is the ability to control, via an attribute, the name of the JSON property that a .NET property will serialize to and from. For example, if you want the BillingAddress property on your Person .NET class to be serialized as billing_address on your JSON object, just add the JsonPropertyAttribute to BillingAddress in .NET and set the PropertyName to billing_address. This feature is equivalent to the functionality XmlElementAttribute provides for XML serialization.
public class Person
{
    private Guid _internalId;
    private string _firstName;

    [JsonIgnore]
    public Guid InternalId
    {
        get { return _internalId; }
        set { _internalId = value; }
    }

    [JsonProperty("first_name")]
    public string FirstName
    {
        get { return _firstName; }
        set { _firstName = value; }
    }
}
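Serializing an instance shows both attributes at work. A quick sketch (the values are made up):
Person person = new Person();
person.InternalId = Guid.NewGuid();
person.FirstName = "James";

// InternalId is skipped thanks to JsonIgnore; FirstName is written as first_name
string json = JavaScriptConvert.SerializeObject(person);
// json == {"first_name":"James"}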
New license
I have changed the license Json.NET is provided under to the MIT license. Json.NET previously used a Creative Commons license, which is more commonly applied to media than software. Also, since Creative Commons licenses aren't compatible with the GPL, I regularly received emails asking whether it is OK to use Json.NET in a GPL licensed project, which I have no problem with.
I chose the MIT license because it is the least restrictive license in common use that I could find. Json.NET is yours to do with as you want [:)]
CodePlex
Finally, I've created a CodePlex project for Json.NET and uploaded the source code plus this release. I've downloaded many projects off CodePlex since it was created, but this is the first time I've set one up. Hopefully CodePlex will be beneficial to Json.NET (plus a good learning experience [:)])
Json.NET 1.3.1 Download - Json.NET source code and binaries
Update:
There was a bug in 1.3's JavaScript string escaping. It has been corrected in Json.NET 1.3.1 which is now on CodePlex.
-
One year at Intergen
It is nearly a year to the day since I started at Intergen, and it is fitting that I'm giving a personal presentation to the office (sort of an Intergen tradition) at the same time.
Below are the highlights of my year at Intergen:
Highlights
- Meeting a lot of cool people, both at Intergen and in the wider MS software development community. At my previous job I was pretty isolated from the local software scene and working at Intergen has definitely changed that.
- Working with a lot of new technologies: SQL Server 2005, WCF, WF, SSIS, Silverlight.
- Learning that writing good code isn't just about getting it to work: it also has to be maintainable and extensible. I had already done some work using unit tests with Json.NET, but while at Intergen I've learnt a lot about unit testing larger software projects and designing for testability.
- Working in larger teams and combining TFS, build servers and unit testing. Continuous integration seems so natural now that I'm surprised we're the exception, not the norm.
- Having a great time at TechEd 2007. TechEd is something every developer needs to experience at least once. Bug your boss (or come work for Intergen [;)])
- Making great strides in my Halo skills (really looking forward to Halo 3 [:)])
-
Change of location - http://james.newtonking.com/
This blog is moving to http://james.newtonking.com/.
To handle links and traffic going to the old location I've set up a brilliant little HttpModule called UrlRewritingNet.UrlRewrite. It redirects all traffic to the new address, which should make the transition fairly painless.
-
Delicious
A red letter day in the history of the Internets: I now have over 500 bookmarks on del.icio.us.
About
del.icio.us, for those of you who aren't familiar with it, is an online social bookmarking website. Essentially, del.icio.us lets you save your bookmarks on its website instead of locally on your computer like a browser does.
Another cool feature is tagging bookmarks with words you define. For example you might tag www.nunit.org with .NET, Testing and Tool because it relates to .NET development, it is used in testing and it is a tool you download. These tags become very useful when searching for a bookmark. You can use them to filter your bookmarks, which is quite handy when you have over 500 [:)]
Why I Like It
I started using del.icio.us about 2 1/2 years ago, and while it hasn't revolutionized my life and brought fame and fortune (yet...), it has improved the way I work. When I encounter a problem I know I've solved in the past, I always check my bookmarks. Chances are I googled the issue and then saved a webpage with the answer. If a webpage is there, tag filtering makes finding it a breeze.
del.icio.us is also useful when you work on multiple computers. Bookmarks created on my work computer are available on my home computer and vice versa. Using del.icio.us also means you don't need to worry about losing bookmarks. If my computer needs to be rebuilt I can just reimport my bookmarks from the del.icio.us website once I am back up and running.
Browser Integration
A number of Firefox extensions let you access del.icio.us bookmarks directly from the browser UI. del.icio.us has an official plugin that completely replaces Firefox's built-in bookmarks with del.icio.us bookmarks. Personally I prefer a third party extension called Foxylicious, which is a lot less invasive and works side by side with any existing Firefox bookmarks. You point it at a bookmark folder, it creates a folder inside for each of your tags, and it fills each folder with the bookmarks carrying that tag. Foxylicious also adds a menu item to Firefox for quickly adding new bookmarks.
Final del.icio.us tip
If you can't remember where the dots go in http://del.icio.us, try http://www.delicious.com instead.
-
TechEd.Close();
Time flies when you're having fun. The past three days have been busy, as I went to pretty much every session I could in between running the hands-on labs with fellow Intergenites. Interestingly, it was the sessions I chose at the last minute and knew nothing about that I tended to enjoy the most. Greg Low's talk on Visual Studio Team System for Database Professionals stands out; it has some really useful features that I knew nothing about. If you work with databases (who doesn't?) I recommend you take a look at what it has to offer.
The hands-on labs themselves went really smoothly. We put a lot of work in before TechEd started, testing the labs and making sure the manuals were easy to follow, and that definitely paid off.
Finally here I am striking a pose in the Intergen TechEd 07 gear:
Bright yellow camo pants not in view unfortunately [:)]
Update: Expert at work...
-
TechEd 2007
TechEd is starting in a couple of days and I'm heading up with some fellow Intergenites to run the hands-on labs. I tutored and ran lab workshops while at university, so it should be a fun blast from the past. Look out for us in the yellow camo pants.
As well as the HOL crew, a number of people from Intergen are giving presentations this year: Andrew Tokeley will be talking about the new dynamic data controls in ASP.NET, Mark Orange will be talking about SharePoint document management and content management, and Chris Auld will be talking about developing applications with Office and this ActionThis thing.
See you there!
-
LightSpeed iPod
I'd like to thank the Mindscape team for the iPod they gave me. Mindscape ran a competition for feedback given during LightSpeed's beta and my name came out of the hat.
I had a good time using LightSpeed with my Silverlight project, and it was great seeing them implement some of my suggestions.
-
First Silverlight Project - TrafficWarden
For the past few weeks I've been working on my first Silverlight project, an application called TrafficWarden. Essentially TrafficWarden lets you view and report on traffic information gathered from a router. For example if you want to find out how much bandwidth a local user has used, or see what times of the day have heavy internet usage, TrafficWarden shows you.
I started working on it after buying a new router that could publish traffic statistics and finding that none of the available applications did exactly what I wanted. At the same time Chris Auld, a fellow Intergenite, started an internal Silverlight competition after returning from Mix07, and Silverlight with its WPF support seemed like it would be perfect for the UI. Finally, the three amigos at Mindscape had just started a beta test for their new domain model library, LightSpeed, and I wanted a project that would let me give it a try.
TrafficWarden comprises four parts: a Windows service, a database, web services and a Silverlight front end.
Windows Service
The traffic information is sent from the router over UDP in the Netflow format, so I needed a Windows service to capture and save the data. This was the most time-consuming part of the application to build. I had never used .NET's sockets functionality before, and the Netflow packets from the router arrive as raw bytes which had to be decoded in accordance with the Netflow V5 spec.
The decoded bytes are used to create the LightSpeed domain model representation of the Netflow information, with the majority of the important information in the FlowRecord class. The FlowRecords are then mapped, using a number of rules, to the application that caused the traffic before being saved to the database. Despite LightSpeed being in beta at the time, overall I found it to be pretty solid. The Mindscape guys were very responsive to posts in their forum and added a number of suggestions I put forward.
Database
There isn't much to say here. The database follows LightSpeed's convention-over-configuration naming standard, but the table and column names are pretty much what I'd normally use anyway, so that was no problem.
The only thing I can think of is that perhaps the duplicated source and destination columns in the FlowRecord table could be normalized out to a new FlowLocation table. Maybe next version.
Web services
The 1.1 Alpha added web service support to Silverlight. TrafficWarden uses web services to pull the traffic usage information down to the Silverlight application for the graph, and for the realtime traffic usage. Currently Silverlight only supports web services in the JSON format, so the web services website uses the Microsoft ASP.NET AJAX library. ASP.NET AJAX has a built-in JSON serializer and automatically JSON-ifies the ASMX web services for you.
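On the server side that just means a normal ASMX web service marked with the ASP.NET AJAX ScriptService attribute. A rough sketch; the service, method and summary class here are made-up stand-ins for TrafficWarden's real ones:
using System;
using System.Web.Services;
using System.Web.Script.Services;

// the ScriptService attribute tells ASP.NET AJAX to expose the service as JSON
[WebService(Namespace = "http://example.com/trafficwarden/")]
[ScriptService]
public class TrafficService : WebService
{
    [WebMethod]
    public UsageSummary[] GetUsage(DateTime start, DateTime end)
    {
        // query the saved FlowRecords here; the return value is
        // serialized to JSON automatically for the Silverlight client
        return new UsageSummary[0];
    }
}

public class UsageSummary
{
    public string Name;
    public long TotalBytes;
}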
Off topic: my Json.NET library is being used by the Mono team in their implementation of ASP.NET AJAX, where it will be doing exactly this JSON serialization. Pretty cool now that I've seen it in action [:)]
Silverlight
Creating the Silverlight front end was easily the most daunting part of the project for me. I've never done anything with previous versions of WPF/E, WPF or even WinForms! I guess you could call me a child of the Internet. All hail the mighty request/response! [:)]
Having said that, apart from some initial trepidation I found developing with Silverlight to be quite simple. The part of the Silverlight front end I found most difficult was the pie graph, which I started porting from a Silverlight 1.0 example but essentially had to rewrite. Overall the learning curve definitely isn't as steep or as long as ASP.NET's, and the XAML markup language makes banging out shapes and animations quite simple. Unfortunately, due to the dynamic nature of what I'm doing with Silverlight, I didn't get much of a chance to use XAML and ended up doing a lot of the work in the code-behind.
Consuming the JSON web services from Silverlight was a breeze. Just add the web reference using the service's URL and a strongly typed proxy is automatically generated for you to use from Silverlight. The only gotcha with the 1.1 Alpha is that currently only web service calls within the same domain are supported. This wasn't a problem for me but it is something to keep in mind (you can always call your own server and have it make the cross domain call).
Adding my router's 'realtime' usage (current kilobytes per second and total kilobytes) was an interesting experience. I wanted it to update asynchronously every 5 seconds, but while attempting to do this on a new thread I quickly found out that a lot of operations, such as modifying the WPF UI or accessing the HTML DOM, must be done on the primary thread. In the end I got it working using the Completed event on the Storyboard object, which is described here. Microsoft has said that multithreading is something they are working on, so hopefully it will be better in the final version.
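For the curious, the pattern looks roughly like this sketch. _pollTimer is assumed to be an empty Storyboard with a five second Duration defined in the XAML, and UpdateRealtimeUsage stands in for the actual web service call:
// wire the storyboard up once, then kick it off
private void StartPolling()
{
    _pollTimer.Completed += new EventHandler(PollTimer_Completed);
    _pollTimer.Begin();
}

private void PollTimer_Completed(object sender, EventArgs e)
{
    // Completed fires on the UI thread, so it is safe to update the UI here
    UpdateRealtimeUsage();

    // restart the storyboard so it fires again in another five seconds
    _pollTimer.Begin();
}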
Silverlight also has support for LINQ. I've always enjoyed SQL and I'm really excited about being able to do queries within C#. At one point in my Silverlight client I needed to work out what fraction of the total bytes a particular slice represented. What would usually take a new class and a number of foreach loops instead took one LINQ query. Cool!
var slices = from u in usages
             select new
             {
                 u.Name,
                 SliceSize = (double)u.TotalBytes / usages.Sum(uu => uu.TotalBytes)
             };
Finally, I was also really impressed with the WPF support and what you can do with it. Adding scaling to the application, so that the Silverlight app resizes as the client's browser resizes, required just one ScaleTransform and took about 15 minutes. Give resizing a try, it is quite cool (double clicking makes the app go full screen).
Final Words
I've learnt a lot on this project and had a lot of fun, but TrafficWarden isn't completely finished just yet. As well as a general clean up of some of the code, I'd like to add a login screen for security and a line graph showing usage over time.
Overall I'm really impressed with the Silverlight platform. Silverlight 1.1 is still in alpha and definitely has some weak areas, particularly around the built-in controls, but Microsoft has said that they're working on it. Flash had better watch out, because Silverlight can only get better.
Source Code & Demo
For a short while you can find a demo of TrafficWarden hosted here.
Source code can be found here.
-
Json.NET and Mono
Following on from the recent post on Json.NET in MonoRail, Json.NET is now also going to be used in Mono (no relation). It will form the JSON backend of Mono's upcoming System.Web.Extensions implementation, the assembly behind ASP.NET AJAX.
Again I think it is awesome to see Json.NET being used in projects of this size (!!!) and popularity.
-
Enterprise Library Logging vs log4net
I did a presentation today to the Intergen development team on the Microsoft Enterprise Library. As someone who doesn't have much experience with giving presentations I thought it went really well. I've had a lot of positive feedback from co-workers which is nice as I was sure it was going to be a disaster. (Interesting note: During one of my interviews for Intergen, in a moment of foolishness, I gave public speaking as the answer to "What is your biggest weakness?". Just as a reference to all future potential employers: my real biggest weakness is that I just work too gosh darn hard.)
One part of my EntLib presentation that I thought was worth sharing is a comparison of the Enterprise Library Logging Application Block versus log4net. All the comparisons of the two I found on the net were out of date; they often pitted log4net against the logging in EntLib 1.1, which is quite different from EntLib 2.0 & 3.0.
Here are what I believe are the strengths of each:
Enterprise Library Logging Application Block
- More actively developed - The recently released Enterprise Library 3.0 contained a number of new features for logging, including WCF support. The last release of log4net, on the other hand, was over a year ago.
- Takes advantage of and supports newer technologies - The 3.0 release included support for WCF.
- Integrates with other application blocks - The Policy Injection Block and Exception Handling Block both include logging handlers to automatically invoke the Logging Application Block.
- Very extensible - Pretty much all aspects of the Logging Block can be extended and customized. You can create your own TraceListeners, Filters and Formatters quite easily.
- Configuration tool - Enterprise Library comes with a great tool for creating and modifying its configuration sections. This takes away a lot of the pain and guess work of configuring the EntLib application blocks.
log4net
- Works with .NET 1.0 & 1.1 - The much improved logging of EntLib 2.0 and above is only available if your application is running on .NET 2.0 or greater. log4net however works on all versions of .NET.
- Simpler install - When using the Enterprise Library there are some services you really should install. Installing them is as simple as running a bat file included with EntLib, but it does complicate your deployment process.
- Slightly faster - log4net was significantly quicker than EntLib 1.1's logging. EntLib 2.0 onwards has improved performance, but log4net remains slightly faster. A benchmark I found while researching my presentation had EntLib taking approximately 5-6 seconds to log 100,000 entries while log4net took about 3 seconds. Does the speed difference matter? Probably not. However, log4net does support...
- Appender buffering - Buffering support in some appenders lets log4net queue up log entries and write them in a single go. If you are writing entries to the database then buffering is a good way to improve performance (see the configuration sketch after this list).
- More platforms - Enterprise Library does not support the .NET Compact Framework while log4net does.
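To give an idea of the buffering mentioned above, here is a rough sketch of log4net's AdoNetAppender configured to buffer; the connection and command details are elided:
<appender name="DatabaseAppender" type="log4net.Appender.AdoNetAppender">
  <!-- queue up 100 log entries and write them to the database in one go -->
  <bufferSize value="100" />
  <connectionString value="..." />
  <commandText value="..." />
</appender>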
If you have any other thoughts on the Logging Application Block or log4net I'd love to hear them.
-
Community Server 2007
Although not much appears different on the outside, last night I updated this blog to Community Server 2007. I'm always wary of updating something the size and complexity of CS, but like before I found the upgrade process to be relatively pain-free.
One of the nice new features that aided in the update is the concept of override config files. Rather than updating the dauntingly large Community Server .configs, you can put your changes in a new file, i.e. communityserver_override.config. No more trial and error merging.
Community Server's new skinning system is also something I looked at while upgrading, and I spent a couple of hours porting my old skin to the new system. Overall I found it much nicer to use than what CS previously had. One thing that jumped out was the reduction in files: a skin was previously spread over a ridiculous number of files, and half the battle was just finding the one you wanted to edit. In CS 2007 that is down from 55 files to a much more manageable 8, according to the documentation.
Also new is blog 'file storage', which you can read about here. Rather than entering this post using the CS control and manually uploading an image like I have previously done, this post was written in Windows Live Writer. Windows Live Writer not only automatically adds the post to CS, but also uploads the image (the one right below the title) into the specified blog storage directory and links everything up for me. Cool!
Finally I noticed that Community Server has FeedBurner support which I've signed up for. My FeedBurner URL is http://feeds.newtonking.com/jamesnewtonking.
Good job Community Server people.
-
Json.NET and MonoRail
I learnt today that MonoRail is using Json.NET for its JSON support. The license I release my free projects under is pretty permissive, so I don't often get a chance to learn what they're being used in. To hear that Json.NET is being used in something the size and quality of MonoRail is pretty cool [:)]
-
Sitemaps.org
I'm surprised I missed this: sitemaps.org
Google, Microsoft and Yahoo are collaborating on a sitemap XML format. Because the format is essentially the same as what Google was already using, GoogleSitemap.NET can now be used with all the major search engines [:)]
-
Five Things Meme
The five things meme continues to consume the blogosphere, devouring all in its path. JD has tagged me, so in the interests of playing along I will now post 5 exclusive, never-heard-before things about me. Please make sure your tray is upright and locked, and assume the edge-of-your-seat position.
- Apart from playing around with HTML and a little JavaScript at college, I didn't really start programming until my second year of university. The paper that introduced me was on VB 6.0, and I enjoyed it so much I got an A+ and then came back to tutor it the year after.
- I can't scull. I just can't. Alas, my Fear Factor/Survivor career is over before it ever began. I know I'll never pass the "The person who drinks the glass of {disgusting substance} fastest wins" challenge.
- My favourite 80s TV program is DuckTales. That one is for you TY [:)]
- I have a fairly large burn scar on my left bicep, caused by a hot drink and an overly inquisitive baby (me). I was only 3 at the time and I don't remember it happening. The scar has also faded over the years to the point where no one notices unless I point it out. I actually quite like it because I usually win when guys start comparing.
- My blood type is O-negative, universal donor. If you're stranded on a strange island by a plane crash and you need a blood transfusion, I'm your guy.
My victims are: Porges, Cynos, Jerms, Brendan and Tyler Cowen and Alex Tabarrok of Marginal Revolution fame. Go meme go!
-
Json.NET 1.2 released
An update! A blog post! Hope everyone had a good Christmas [:)]
This long overdue release of Json.NET is mostly the result of feedback from users. Thanks for all the suggestions.
- New feature - Added JsonIgnoreAttribute. This is the equivalent to XmlIgnoreAttribute for XML serialization.
- New feature - Added generic DeserializeObject<T> methods to JavaScriptConvert.
- New feature - Added AspNetAjaxDateTimeConverter. Converts DateTimes to and from the ASP.NET AJAX format, e.g. "@1229083932012@".
- Change - Improved many of the library's exception messages to provide more detail.
- Bug fix - Fixed issues around read-only and write-only properties when serializing.
- Bug fix - Fixed typo in XmlNodeConverter.
What's New
A number of people have emailed that properties with a getter but not a setter, and vice versa, could potentially cause an error when serializing or deserializing. This was something I had overlooked, and it is fixed in this release. Special thanks to those who took the time to email the source of the fix they made to their own copies.
Also requested was an attribute to make the serializer ignore a member, similar to XmlIgnoreAttribute in .NET XML serialization. These requests could be related to the previous bug, but it is a quick and simple addition and a good feature to have, so JsonIgnoreAttribute is included in this release.
Finally, the new converter, AspNetAjaxDateTimeConverter, provides a means to output dates that conform to the JSON 'standard' as well as interoperate with Microsoft's ASP.NET AJAX serializer. You can read more about the ASP.NET AJAX date format here.
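One more quick example: the new generic DeserializeObject<T> overload saves a cast when going the other way. A minimal sketch, assuming a Person class with a FirstName property:
string json = "{\"FirstName\":\"James\"}";

// the generic overload returns a strongly typed object, no cast required
Person person = JavaScriptConvert.DeserializeObject<Person>(json);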
Download Json.NET - Json.NET dll and C# source code.