Tuesday, February 17, 2009

Adventures while building a Silverlight Enterprise application part #7

Today I want to point out something that may seem obvious to some of you, but can easily catch others out: always make sure that any web service you call from Silverlight is thread safe, even within the same session.

Here is what can happen. Some UI action, for example the click of a button, triggers a call to your web service. Inside that web service you use a non-thread-safe object for some operation, for example a SqlConnection to read data from your database. All is well, but as you build the web service you may be tempted to share this SqlConnection, because you need to run two queries within one call.
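To make that concrete, here is a hedged sketch of the kind of sharing I mean. The service, the queries and the connection string are made up for illustration; the point is that the connection is kept in a static field, so besides being reused for two queries in one call it is also shared across concurrent calls, which is where the trouble starts.

public class CustomerService : System.Web.Services.WebService
{
    // A single SqlConnection kept around and shared by every request: not thread safe.
    private static readonly SqlConnection _connection =
        new SqlConnection("Data Source=.;Initial Catalog=Shop;Integrated Security=True");

    [WebMethod]
    public string GetCounts()
    {
        if (_connection.State != ConnectionState.Open)
        {
            _connection.Open();   // two simultaneous requests can both end up in here
        }

        // Two queries reusing the same connection within one call...
        int customers;
        using (SqlCommand command = new SqlCommand("SELECT COUNT(*) FROM Customer", _connection))
        {
            customers = (int)command.ExecuteScalar();
        }

        int orders;
        using (SqlCommand command = new SqlCommand("SELECT COUNT(*) FROM [Order]", _connection))
        {
            orders = (int)command.ExecuteScalar();
        }

        return customers + " customers, " + orders + " orders";
    }
}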

That's where things start going wrong. At first sight everything seems to work fine, until you start hammering that button and all of a sudden your Silverlight application crashes mysteriously in the completed event of your service call.
After some debugging you may find out that your SqlConnection failed for some reason. Some more debugging may tell you that the SqlConnection is actually closed, or still in the process of opening, or it may simply throw an exception telling you to clear the pending results and try again. All of these symptoms come from using a single SqlConnection for more than one operation at a time.

"But what about performance?" I hear you ask. Well, what about it? There is never, and I mean never, a reason to reuse an instance of SqlConnection. The reason is that Microsoft provided us with a great feature called a connection pool, which will just cycle trough a set of already created connections for each request it gets. All you have to do is make sure you open the connection as late as possible and close it as soon as possible. This way you never hold on to a connection to long and the change of a connection pool running out of connections is as small as possible.

But what if you are using some other non-thread-safe object that would hurt performance if you re-instantiated it for every call? One thing I would probably do is disable the control that triggers the action as soon as the action starts, and only enable it again once the action has completed, or maybe even later if that's appropriate. This way a single user cannot run multiple operations at the same time, which should already improve things.
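A rough sketch of that idea in the Silverlight code-behind. SearchButton, SearchTextBox and SearchServiceClient are made-up names; the generated WCF proxy exposes an async method plus a Completed event along these lines.

private void SearchButton_Click(object sender, RoutedEventArgs e)
{
    SearchButton.IsEnabled = false;          // block further clicks while the call is running

    SearchServiceClient client = new SearchServiceClient();
    client.SearchCompleted += (s, args) =>
    {
        // ... process args.Result here ...
        SearchButton.IsEnabled = true;       // enable the button again once the call completes
    };
    client.SearchAsync(SearchTextBox.Text);
}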

Another solution may be to rethink the design of your service. Maybe you need a more extensive model than a standard web service. For example, you could build an actual Windows Service with a WCF interface. This would allow you to queue up any incoming calls and have them handled by other threads. This model would be useful for providing, say, a search service.
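A very rough sketch of that model, assuming a singleton WCF service (the service name, the operation and the actual search work are all made up): the operation only enqueues the request, and a background worker thread works through the queue.

[ServiceContract]
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
public class SearchService
{
    private readonly Queue<string> _requests = new Queue<string>();

    public SearchService()
    {
        Thread worker = new Thread(ProcessQueue);
        worker.IsBackground = true;
        worker.Start();
    }

    [OperationContract]
    public void QueueSearch(string searchText)
    {
        lock (_requests)
        {
            _requests.Enqueue(searchText);
            Monitor.Pulse(_requests);        // wake up the worker thread
        }
    }

    private void ProcessQueue()
    {
        while (true)
        {
            string searchText;
            lock (_requests)
            {
                while (_requests.Count == 0)
                {
                    Monitor.Wait(_requests);
                }
                searchText = _requests.Dequeue();
            }

            // Do the actual (thread safe) search work here, on the worker thread.
        }
    }
}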

Hope this was helpful to at least some of you and I'm looking forward to reading your comments.

Random hints and tips on Silverlight

Lately I ran into a few quirks while doing some prototyping in Silverlight and I thought it would be nice to share them with you.

Silverlight Application Name
Actually this is more about the name of the website that VS2008 creates for you when you create a Silverlight application. If the name of the Silverlight application is relatively long, it may well produce a name that is too long for the ASP.NET website, which results in an error when that project is created.

IntelliSense with Silverlight assemblies
If you build a user control in a Silverlight class library and reference it from a Silverlight application, IntelliSense will not show the class while you're typing your namespace alias declaration in XAML. You have to compile the class library before it shows up.

Getting a template applied on your custom control
This may seem obvious, but it had me struggling for quite a bit. Whenever you add a template in generic.xaml to apply to some custom control, be aware that the file has to live in a folder called Themes, otherwise the template won't be found. This has only been the case since the release version and is not well documented.
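One more thing worth double checking in that situation: the control has to point at its own type as its default style key, otherwise the template in Themes/generic.xaml won't be applied either. A minimal example, with FancyButton as a made-up control name:

public class FancyButton : Control
{
    public FancyButton()
    {
        // Ties this control to its default template in Themes/generic.xaml.
        DefaultStyleKey = typeof(FancyButton);
    }
}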

Refactoring doesn't affect XAML
Personally I'd say this is an oversight in Visual Studio 2008 SP1. Whenever you use refactoring to rename a property that is used in XAML, it won't change the name in the XAML, breaking any references you had to the property. I guess it's back to renaming those by hand.

Tab navigation on a popup doesn't work by default
Whenever you use a Popup control to display some controls, tab navigation doesn't work by default. You'll need to add a ContentControl to host this functionality. Jeff Handley explains it on his blog.

Always make sure IsolatedStorageSettings.Save() is called at some point
Although Save is called automatically whenever a Silverlight application ends, it can actually fail without you noticing. If, for some reason, saving the settings throws an exception, you will never know, because the application has already ended. But if you call Save yourself, the exception is thrown at that point and you can resolve any issues.
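A small sketch of what that could look like, for example when the user saves their preferences (the setting name, the screenName variable and the message are made up):

try
{
    IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings;
    settings["LastSelectedScreen"] = screenName;
    settings.Save();   // surfaces an IsolatedStorageException while you can still handle it
}
catch (IsolatedStorageException exception)
{
    // For example the quota was exceeded or isolated storage is disabled.
    MessageBox.Show("Your settings could not be saved: " + exception.Message);
}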

Make sure your webservices are thread safe
This may not seem like a Silverlight tip, but I put it here because in most cases it's the Silverlight programmer who is confronted with the resulting exceptions. If you use a non-thread-safe object inside your web service, make sure you do not share instances throughout your code. Create new ones as needed and dispose of them as soon as you're done. Read more about it here.

You may want to check back on this post, as chances are I'll add new tips from time to time.

Monday, February 9, 2009

Adventures while building a Silverlight Enterprise application part #6

Last time in this series we discussed projects and how to set up your large application. One point that dictates project layout in our application is the fact that we load XAP files as needed. Recently I got some interesting input from our functional specialist. He not only wanted multiple modules on one screen, he also wanted users, or at least administrators, to be able to configure which modules would be shown on which screen.

The first alarm bells started ringing at this point. It could all end up in a performance nightmare. In the initial phase of the project we figured we would load the modules on a single screen from a single XAP file, all at once. This would greatly reduce any overhead involved in loading XAP files dynamically. But now we would have to load multiple modules from multiple XAP files, up to fifteen at a time. So at that moment I told the functional specialist, "No, we can't do that." I explained why this was likely to cause problems and his response was, "Well, can't you build these XAP files dynamically as you need them?" Well, wouldn't that be a heroic solution and a hell of a blog post :-).

I told him that would complicate things a great deal and we decided that for the most part we would decide which modules would end up on which screens, but users would be able to configure extra screens, with the knowledge that these screens could end up being slower than others.

In the two weeks since, I've come up with some alternative solutions to our problem. First of all, you don't have to put an assembly in a XAP file in order to load it, BUT... be warned! If you decide to load an assembly on its own, you won't load anything it references: no other assemblies you might need, and no resource files either. You would have to resolve these references yourself, or have some kind of common set of references that every module is restricted to. In our case that would be troublesome at best, so I discarded this option.
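For completeness, loading a bare assembly without a XAP can be done with AssemblyPart. A hedged sketch (the assembly URI is just an example), with the caveat from above that nothing the assembly references comes along:

private void LoadSingleAssembly()
{
    WebClient client = new WebClient();
    client.OpenReadCompleted += (sender, e) =>
    {
        if (e.Error != null)
        {
            return;
        }

        AssemblyPart part = new AssemblyPart();
        Assembly assembly = part.Load(e.Result);   // loads only this one assembly
        // Any referenced assembly or resource file still has to be resolved by hand.
    };
    client.OpenReadAsync(new Uri("MyModule.dll", UriKind.Relative));
}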

I did figure out we would need some kind of meta database that would contain the settings of all these modules and who wants to see what where. Also this database would have to include security information on all the modules.

Then my colleague came up with the following solution. We do build everything we need to dynamically place any module on any screen as this is configured, but we don't solve the performance issue in the code. As customers experience issues, we simply build a new XAP file with the required modules and deploy it. We add it to our meta database and Bob's your uncle.
This does mean we have to build an algorithm that can figure out the optimal set of XAP files from what's available, but that is actually doable if you design a decent entity structure.

After this was sorted out I concluded that we didn't even have code yet that could load multiple modules from a single XAP file. So on I went and built it. It's basically a variation on the code I showed you earlier in this post. Unfortunately I won't be able to upload all the code for this, but I can show you some highlights.


public void LoadModules(Collection<ModuleInfo> modules)
{
    _modules = modules;

    // Create a tab for every module up front.
    foreach (ModuleInfo module in modules)
    {
        GenerateTab(module);
    }

    // Group the modules by XAP file, so each file is only downloaded once.
    var result = from ModuleInfo module in modules
                 group module by module.XapFileName;

    foreach (IGrouping<string, ModuleInfo> xapFileGroup in result)
    {
        string xapFileName = xapFileGroup.Key;
        WebClient client = new WebClient();
        client.OpenReadCompleted += new OpenReadCompletedEventHandler(client_OpenReadCompleted);
        client.OpenReadAsync(new Uri(xapFileName, UriKind.Relative), xapFileGroup);
    }
}

void client_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
{
    if (e.Error != null)
    {
        return;
    }

    // The grouping was passed along as the user state of the async call.
    IGrouping<string, ModuleInfo> xapFileGroup = e.UserState as IGrouping<string, ModuleInfo>;
    if (xapFileGroup == null)
    {
        return;
    }

    var assemblyNames = (from ModuleInfo module in xapFileGroup
                         select module.AssemblyFileName).Distinct();
    string[] stringAssemblyNames = assemblyNames.ToArray();

    var classNames = (from ModuleInfo module in xapFileGroup
                      select module.ClassName).Distinct();
    string[] stringClassNames = classNames.ToArray();

    // Load all the requested UIElements from the downloaded XAP stream.
    Collection<UIElement> modules = XapHelper.LoadUIElementsFromXap(e.Result, stringAssemblyNames, stringClassNames);

    foreach (UIElement module in modules)
    {
        PlaceModule(module);
    }
}

This covers kicking off the loading of the modules and receiving the stream. In the LoadModules method I get a collection of ModuleInfo structs, which tell me which XAP file each module lives in, which assembly, and which class. A LINQ query groups the modules by XAP file and then I kick off a download for each group.
In the completed event I pick up the grouping from the LINQ query and use it to get to the right assembly and class names. I then use an adjusted method from the XapHelper class I wrote earlier to load the UIElements I need from the XAP file:

public static Collection<UIElement> LoadUIElementsFromXap(Stream xapStream, string[] assemblyNames, string[] classNames)
{
    Collection<UIElement> elements = new Collection<UIElement>();

    // Read the application manifest to find the assembly parts inside the XAP file.
    string applicationManifest = ReadApplicationManifest(xapStream);
    XElement deploymentElement = XDocument.Parse(applicationManifest).Root;
    IEnumerable<XElement> deploymentParts = from assemblyParts in deploymentElement.Elements().Elements()
                                            select assemblyParts;

    Collection<Assembly> assemblies = LoadAssemblies(xapStream, assemblyNames, deploymentParts);

    // Instantiate every requested class from every loaded assembly.
    foreach (Assembly assembly in assemblies)
    {
        foreach (string className in classNames)
        {
            UIElement element = assembly.CreateInstance(className) as UIElement;
            if (element != null)
            {
                elements.Add(element);
            }
        }
    }

    return elements;
}

As you may notice, the only thing I changed is what I do with each assembly and class as I load them.
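The ModuleInfo struct itself isn't shown here; a minimal version matching the members used above could look something like this (the member names are taken from the queries, the rest is an assumption):

public struct ModuleInfo
{
    public string XapFileName { get; set; }      // relative URI of the XAP file to download
    public string AssemblyFileName { get; set; } // name of the assembly inside that XAP
    public string ClassName { get; set; }        // full name of the UIElement class to create
}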

I hope this post was helpful for you and once again, if you have any questions or comments, please let me know.

Wednesday, February 4, 2009

Ole and accessing files embedded in Access part #2

This series seems to be plagued by delays. Unfortunately I don't have much time lately. Work is really busy and I'm moving to a new home! So please be patient with me. I'll try to post as often as I can.

In the last post in this series I talked about some of the theoretical background of opening files that were stored by Access. We looked at the different headers and at metafilepict blocks. Today it's time to look at some code.

Before we dive into the code, I would like to point out that part of this code relies on the library written by Eduardo Morcillo. My code wouldn't run without it.

First order of business is to have structs that can hold my header information:

internal struct PackageHeader
{
    public short Signature;
    public short HeaderSize;
    public uint ObjectType;
    public short FriendlyNameLen;
    public short ClassNameLen;
    public short FrNameOffset;
    public short ClassNameOffset;
    public int ObjectSize;
    public string FriendlyName;
    public string ClassName;
}

internal struct OleHeader
{
    public uint OleVersion;
    public uint Format;
    public int ObjectTypeNameLen;
    public string ObjectTypeName;
}

As you can see we have two structs that contain information about the package header and the OLE header, based on the information we gathered in the last article. The types are based on the number of bytes each entry can store.

The next order of business is to define some constant values we need during the process.

private const int FixedPackageHeaderSize = 20;
private const int FixedOleHeaderSize = 12;
private const int MetaFileHeaderSize = 45;
private const int BufferSize = 1024;
private const string ContentsEntryName = "CONTENTS";
private const string WorkBookEntryName = "Workbook";
private const string MSPhotoFriendlyName = "MSPhotoEd.3\0";

As you can see, some numbers are there for the fixed sizes of the headers, there's a buffer size (which is arbitrary), and some string constants we need to identify what type of data we are dealing with.
Finally we also need some private fields in our class that can hold some data for us:

private System.IO.Stream _input;
private long _endOfHeaderPosition;
private int _dataLength;
private PackageHeader _packageHeader;
private OleHeader _oleHeader;

The constructor of our class takes a parameter of type Stream, which is the input for the class, and from the constructor a method called ReadHeader is invoked.
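The constructor itself isn't shown in the post; a minimal sketch, with OleFieldReader as a made-up class name, would simply store the stream and parse the header right away:

public OleFieldReader(System.IO.Stream input)
{
    _input = input;
    ReadHeader();
}

The ReadHeader method itself looks like this: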

private void ReadHeader()
{
    if (_input.Position > 0 && _input.CanSeek)
    {
        _input.Seek(0, SeekOrigin.Begin);
    }

    // Read the fixed part of the package header.
    byte[] fixedPackageHeaderData = new byte[FixedPackageHeaderSize];
    _input.Read(fixedPackageHeaderData, 0, FixedPackageHeaderSize);

    PackageHeader packageHeader = new PackageHeader();
    packageHeader.Signature = CalcShortFromBytes(new byte[] { fixedPackageHeaderData[0], fixedPackageHeaderData[1] });
    packageHeader.HeaderSize = CalcShortFromBytes(new byte[] { fixedPackageHeaderData[2], fixedPackageHeaderData[3] });
    packageHeader.ObjectType = CalcUIntFromBytes(new byte[] { fixedPackageHeaderData[4], fixedPackageHeaderData[5], fixedPackageHeaderData[6], fixedPackageHeaderData[7] });
    packageHeader.FriendlyNameLen = CalcShortFromBytes(new byte[] { fixedPackageHeaderData[8], fixedPackageHeaderData[9] });
    packageHeader.ClassNameLen = CalcShortFromBytes(new byte[] { fixedPackageHeaderData[10], fixedPackageHeaderData[11] });
    packageHeader.FrNameOffset = CalcShortFromBytes(new byte[] { fixedPackageHeaderData[12], fixedPackageHeaderData[13] });
    packageHeader.ClassNameOffset = CalcShortFromBytes(new byte[] { fixedPackageHeaderData[14], fixedPackageHeaderData[15] });
    packageHeader.ObjectSize = CalcIntFromBytes(new byte[] { fixedPackageHeaderData[16], fixedPackageHeaderData[17], fixedPackageHeaderData[18], fixedPackageHeaderData[19] });

    // The friendly name and class name have variable lengths, given by the fixed part.
    byte[] friendlyNameData = new byte[packageHeader.FriendlyNameLen];
    _input.Read(friendlyNameData, 0, packageHeader.FriendlyNameLen);
    packageHeader.FriendlyName = Encoding.UTF8.GetString(friendlyNameData);

    byte[] classNameData = new byte[packageHeader.ClassNameLen];
    _input.Read(classNameData, 0, packageHeader.ClassNameLen);
    packageHeader.ClassName = Encoding.UTF8.GetString(classNameData);

    _packageHeader = packageHeader;

    // Next up is the OLE header, again a fixed part followed by a variable-length name.
    byte[] fixedOleHeaderData = new byte[FixedOleHeaderSize];
    _input.Read(fixedOleHeaderData, 0, FixedOleHeaderSize);

    OleHeader oleHeader = new OleHeader();
    oleHeader.OleVersion = CalcUIntFromBytes(new byte[] { fixedOleHeaderData[0], fixedOleHeaderData[1], fixedOleHeaderData[2], fixedOleHeaderData[3] });
    oleHeader.Format = CalcUIntFromBytes(new byte[] { fixedOleHeaderData[4], fixedOleHeaderData[5], fixedOleHeaderData[6], fixedOleHeaderData[7] });
    oleHeader.ObjectTypeNameLen = CalcIntFromBytes(new byte[] { fixedOleHeaderData[8], fixedOleHeaderData[9], fixedOleHeaderData[10], fixedOleHeaderData[11] });

    byte[] objectTypeNameData = new byte[oleHeader.ObjectTypeNameLen];
    _input.Read(objectTypeNameData, 0, oleHeader.ObjectTypeNameLen);
    oleHeader.ObjectTypeName = Encoding.UTF8.GetString(objectTypeNameData);

    _oleHeader = oleHeader;

    // Skip eight bytes we don't need, then read the length of the data block.
    for (int index = 0; index < 8; index++)
    {
        _input.ReadByte();
    }

    byte[] lengthData = new byte[4];
    _input.Read(lengthData, 0, 4);
    _dataLength = BitConverter.ToInt32(lengthData, 0);

    _endOfHeaderPosition = _input.Position;
}

This method reads the header and decomposes it into entries. This allows us to get to the variable-length bits and read them correctly as well. It also gives us some important information, namely the length of the data block and the position where the header ends.
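The CalcShortFromBytes, CalcIntFromBytes and CalcUIntFromBytes helpers aren't listed in the post; presumably they just wrap BitConverter for the little-endian values in the header, roughly like this:

private static short CalcShortFromBytes(byte[] data)
{
    return BitConverter.ToInt16(data, 0);
}

private static int CalcIntFromBytes(byte[] data)
{
    return BitConverter.ToInt32(data, 0);
}

private static uint CalcUIntFromBytes(byte[] data)
{
    return BitConverter.ToUInt32(data, 0);
}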

So on to the business end of this class, its GetStrippedStream method:

public System.IO.Stream GetStrippedStream()
{
    // Position the stream right after the header.
    if (_input.Position != _endOfHeaderPosition && _input.CanSeek)
    {
        _input.Seek(_endOfHeaderPosition, SeekOrigin.Begin);
    }
    // MS Photo Editor objects carry a metafilepict block, which we skip entirely.
    if (_packageHeader.ClassName.Equals(MSPhotoFriendlyName, StringComparison.OrdinalIgnoreCase))
    {
        _input.Seek(_dataLength + MetaFileHeaderSize, SeekOrigin.Current);
    }

    // Copy the remaining data to a temp file, so Eduardo's library can work with it.
    string tempFileName = Path.GetTempFileName();
    FileStream tempFileStream = File.OpenWrite(tempFileName);

    byte[] buffer = new byte[BufferSize];
    int loadedBytes = _input.Read(buffer, 0, BufferSize);
    while (loadedBytes > 0)
    {
        tempFileStream.Write(buffer, 0, loadedBytes);
        loadedBytes = _input.Read(buffer, 0, BufferSize);
    }
    tempFileStream.Close();

    System.IO.Stream outputStream;
    bool isCompoundFile = Storage.IsCompoundStorageFile(tempFileName);
    if (isCompoundFile)
    {
        // Look for a CONTENTS or Workbook stream inside the structured storage.
        Storage storage = new Storage(tempFileName);
        Storage.StorageElementsCollection elements = storage.Elements();
        var result = from StatStg element in elements
                     where (element.Name.Equals(ContentsEntryName, StringComparison.OrdinalIgnoreCase)
                            || element.Name.Equals(WorkBookEntryName, StringComparison.OrdinalIgnoreCase))
                           && element.Type == StatStg.ElementType.Stream
                     select element;
        if (result.Any())
        {
            outputStream = storage.OpenStream(result.First().Name);
        }
        else
        {
            storage.Close();
            outputStream = File.OpenRead(tempFileName);
        }
    }
    else
    {
        outputStream = File.OpenRead(tempFileName);
    }
    return outputStream;
}

As you can see, we first set the position of the stream based on the header information. We then write the stream to a temp file, which is what Eduardo's library works with. We use it to determine whether the file is actually a structured storage and, if so, we extract only the stream we need. If it is a Microsoft Word document it will not have the elements we look for in the structured storage, so in that case we return the complete file. If it is not a structured storage at all, we return the complete file as well.
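To round things off, a hedged usage sketch: the byte array is assumed to come from the OLE Object column of an Access table (GetOleObjectColumnValue is a hypothetical data access call), and OleFieldReader is the made-up class name used above.

byte[] oleFieldData = GetOleObjectColumnValue();

using (MemoryStream input = new MemoryStream(oleFieldData))
using (Stream output = new OleFieldReader(input).GetStrippedStream())
using (FileStream target = File.Create(@"C:\Temp\embedded.doc"))   // target path just for illustration
{
    byte[] buffer = new byte[1024];
    int read = output.Read(buffer, 0, buffer.Length);
    while (read > 0)
    {
        target.Write(buffer, 0, read);
        read = output.Read(buffer, 0, buffer.Length);
    }
}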

Warning:
The code may suggest this works for Microsoft Excel as well, but unfortunately it doesn't. The reason is that whenever Access embeds an Excel file, it changes the structured storage so thoroughly that the original file cannot be recovered. I have managed to get the data out of it, but not to produce a properly working Excel file. If anyone can provide me with more insight into this, I would be very grateful.

Below you can find the .cs file with the complete class.


This concludes the series on Ole. I hope you have found it helpful. Please leave any comments and/or questions below. I'm always happy to read and reply.

Update April 5th 2012: Reembedded the download as it was broken.