Thursday, December 9, 2010

Adventures while building a Silverlight Enterprise application part #39

In this post we look at an issue we had while implementing a specific form of data entry for our application. Basically there was an oversight in our reasoning about CRUD operations that needed fixing. It also shows the power of GUIDs.

The situation

We are currently in the final sprint towards delivering our first new product release to a group of customers. As one might expect, we are in the process of testing the functionality that was built so far and we ran into some issues. One issue we encountered was buried deep down in the framework and masked by another bug in the Silverlight UI.

To get a better understanding of the issue we found, it’s important to have some basic knowledge about how our framework works, especially in terms of creating and updating records. We decided early on that creating a record is basically the same as updating one. The only difference in our service call is that we provide an empty GUID instead of an existing GUID. The framework should then detect that this is a new record, create a GUID for it and insert it into the Entity Framework context, and all works well (except for the occasional problem with an insert).

Recently we added two modules to our UI, each of which consists of a data grid. The data grid is filled with a default set of rows from one table and, once the user fills in certain values, records are created in another table. To make this work, there is a view added to the Entity Framework model to provide the correct rows. We then have some code in the service layer to make sure we don’t get duplicate rows when records are already available in the second table.

The bug

A bug occurred when we entered a value into a record that was not yet available in the second table, saved the module’s data and then changed that value. We ended up with two records in the second table instead of one. A short debugging session revealed the cause: both update actions were sent with an empty GUID, so both were interpreted as inserts.

The solution

Solving it was not as easy. To prevent this kind of bug, we already return an updated (or new, for that matter) business object from the service. However, for new records there is no known id (it’s empty), so we have nothing to match on, meaning we can’t update the record in the data grid, which leads to our problem.

To fix it, we would want to create a new GUID in the Silverlight client, send it to the server and use it there as well; however, our service would then no longer interpret the call as an insert. What we can do is send this new GUID in the actual key property of the business object we want to update. This did require an update of the framework: in case of an insert (the passed GUID is empty) we now check whether the business object already has a GUID available as its key, and if so we use that GUID instead of generating a new one.

So now we can generate a GUID in the client, send it with the insert and know which object was inserted as soon as the response arrives in the service callback. This allows us to match the rows in our data grid to the business object and update them accordingly, meaning no more duplicate records.
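
For illustration, this is roughly what the client side could look like. It’s a simplified sketch with made-up names (businessObject, KeyId, SaveAsync and the row dictionary are not our actual framework types), but it shows the idea of generating the key up front and matching on it in the callback:

// Generate the key up front, so the callback can be matched to the grid row
Guid newKey = Guid.NewGuid();
businessObject.KeyId = newKey;   // the key property carries the new GUID
_rowsByKey[newKey] = currentRow;

// Passing an empty GUID still tells the service this is an insert
service.SaveAsync(Guid.Empty, businessObject, result =>
{
    // The returned business object carries our GUID, so we can update
    // the existing row instead of ending up with a duplicate
    _rowsByKey[result.KeyId].Update(result);
});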

In the end, this is the code we created to make sure we use a GUID if one is already available:

idPropertyName = string.Format("{0}{1}", idPropertyName, IdSuffix);
PropertyInfo idProperty = dataType.GetProperty(idPropertyName);
Guid id = Guid.NewGuid();
if (idProperty != null)
{
    if (changedData.PropertyDictionary.ContainsKey(idPropertyName))
    {
        PropertyObject idPropertyObject = (PropertyObject)changedData.PropertyDictionary[idPropertyName];
        Guid requestedId = (Guid)idPropertyObject.CurrentValue;
        if (!requestedId.Equals(Guid.Empty))
        {
            // The client already supplied a key, so use that instead of the generated one
            id = requestedId;
        }
    }
    idProperty.SetValue(data, id, null);
}
this.Id = id;
SetProperty(this.IdProperty, id);


Tuesday, November 16, 2010

Adventures while building a Silverlight Enterprise application part #38

In this post I want to look into a problem most of us will encounter while building a Line Of Business application in Silverlight. It involves UI paradigms combined with the asynchronous communication model.

The problem

We have all run into it at some point. You have a form that a user needs to fill out, which contains a couple of fields and something like a save button. When the user clicks the save button you validate their input and if the validation fails, you block the save action.

All is well. The user can correct the input and try again at will. However, validation has a downside: it’s limited to either true or false, with nothing in between.

Let’s consider a scenario where you would like to have confirmation from the user. Now you could hack this into the validation system by using a flag: if the user doesn’t change the value and saves a second time, it must be right. However, I don’t think this is a very good solution, and it would confuse users.

I need a dialog!

So we want some other way of asking the user for input. The usual way of doing this is to present a dialog where the user can press a button to choose whether or not it’s ok to continue. We have worked with dialogs for many years and they are intuitive to use.

In Silverlight, however, they pose a problem. Of course you can use the dialog included in the framework, but this uses the browser to present the dialog and it’s not a good looking thing. More importantly, it’s very limited in its functionality. Fortunately it’s not that big of a deal to just build a user control that provides the functionality we want. We can even make it modal by placing a Canvas over the entire UI and then putting the user control in a Popup (or use a ChildWindow for that matter).
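
Just to sketch the idea (the names and sizing below are illustrative, not our actual implementation), such a modal user control boils down to something like this:

private Popup _popup;

public void ShowModal(UserControl dialogContent)
{
    // A semi-transparent Canvas sized to the plug-in area blocks the UI below
    Canvas overlay = new Canvas
    {
        Background = new SolidColorBrush(Color.FromArgb(0x7F, 0x00, 0x00, 0x00)),
        Width = Application.Current.Host.Content.ActualWidth,
        Height = Application.Current.Host.Content.ActualHeight
    };

    Grid container = new Grid();
    container.Children.Add(overlay);        // blocks clicks on everything behind it
    container.Children.Add(dialogContent);  // the actual dialog content on top

    _popup = new Popup { Child = container, IsOpen = true };
}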

Then the issues start. If you build a user control to display dialogs, like we have done, you most likely implement it using a callback to signal the application that the user has clicked a button in your dialog, and you’re done. Although this is a classic asynchronous model, working with it can become problematic.

Splitting code

Let’s go back to our scenario. We have a form with some fields. Some fields get validated and some can lead to a dialog. The steps we would take to save our data before introducing the dialog look like this:

  1. Validate the input
  2. If all fields are correct start saving the data
  3. Once the save is completed, refresh the current data

Now if we introduce a single dialog that allows the user to choose between stop and continue the steps are like this:

  1. Validate the input
  2. If all fields are correct, check if the dialog is needed
  3. If the dialog is needed, show the dialog
  4. Once the user has made a choice and wants to continue, start saving the data. If the user does not want to continue, stop.
  5. Once the save is completed, refresh the current data

Looking at these steps, we go from one event handler for the save completion, to adding a callback for the dialog. So instead of having two methods with code for saving, we now have three methods. And with every dialog we add, we add another callback, thus another method containing code for our save action. You can see how this gets out of hand quickly.
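
To make that concrete, here is a rough sketch of the shape this takes (the dialog, service and validation helpers are made-up names): one confirmation dialog already spreads the save logic over three methods.

private void saveButton_Click(object sender, RoutedEventArgs e)
{
    if (!ValidateInput())
        return;

    if (ConfirmationNeeded())
        _dialog.Show(DialogClosedCallback);          // continue in yet another callback
    else
        _service.SaveAsync(SaveCompletedCallback);
}

private void DialogClosedCallback(bool userWantsToContinue)
{
    if (userWantsToContinue)
        _service.SaveAsync(SaveCompletedCallback);   // save logic repeated here
}

private void SaveCompletedCallback()
{
    RefreshData();   // once the save is completed, refresh the current data
}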

So make it synchronous…

Well, that would solve our problem, however… Silverlight only has one dispatcher for the UI thread. This means that as soon as you block the UI thread, your application becomes unresponsive (basically it just hangs until you unblock it). While displaying a dialog, that is not a very practical thing to do, as the user would not be able to click a button (defeating the whole purpose, right?).

In effect this means there is no straightforward way to provide synchronous UI interaction. It’s going to be event based and thus asynchronous. There is a way around this, however. You can’t block the UI thread, but you can block any other thread, so if we move all the code involved in saving to a background thread, we can eliminate a callback for the dialog. Here is some example code to show you how calling such a dialog would look:

private void saveButton_Click(object sender, RoutedEventArgs e)
{
    // Make sure to create your dialog while still running on the UI thread
    _dialog = new DialogWindow();

    // Pass off the save logic to another thread
    ThreadPool.QueueUserWorkItem(SaveData, this);
}

private void SaveData(object stateInfo)
{
    if (DialogWindow.ShowDialog(_dialog, stateInfo))
    {
        // Actually save the data
    }
}

So basically you run anything involved with the save action (except for validation, perhaps) on a background thread. I also pass a reference to the calling UI control. The reason for that will become clear soon.

While running in the background thread, you can call my DialogWindow.ShowDialog method and it will block your thread. This means you can use its result right away.

Run code on the UI thread

In order for this to work we obviously need to be able to run some code on the UI thread and then block our own thread while waiting for a result. Here is the code for the ShowDialog method:



public static bool ShowDialog(DialogWindow window, object stateInfo)
{
    bool result = false;
    DependencyObject dependencyObject = stateInfo as DependencyObject;

    // If we didn't get a dependency object, or we have access to its
    // dispatcher (meaning we are on the UI thread), we can't block
    if (dependencyObject == null || dependencyObject.Dispatcher.CheckAccess())
    {
        window.Show();
        return false;
    }

    Action action = new Action(window.Show);
    // Run the Show method on the UI thread
    dependencyObject.Dispatcher.BeginInvoke(action);
    // Block this thread until we have a result
    while (!window.DialogResult.HasValue)
    {
        Thread.Sleep(50);
    }
    result = window.DialogResult.Value;

    return result;
}

First thing we do is check if we are actually on the UI thread. If we are, we don’t want to block. Be aware that the CheckAccess method is not shown by IntelliSense for some reason. It compiles without any problems though.

If we are not on the UI thread, we can use the dispatcher of the dependency object passed to us to run the Show method on the UI thread. Then we simply wait for the result and pass it back.
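
For completeness, the only thing the dialog itself has to do is set DialogResult from its button handlers (which run on the UI thread); the polling loop above then picks the value up. A minimal sketch, assuming DialogWindow has OK and Cancel buttons and a Close method that closes the popup:

public bool? DialogResult { get; private set; }

private void okButton_Click(object sender, RoutedEventArgs e)
{
    DialogResult = true;
    Close();
}

private void cancelButton_Click(object sender, RoutedEventArgs e)
{
    DialogResult = false;
    Close();
}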

Conclusion

In order to have a synchronous dialog, we simply introduced a background thread to run our code on, which dispatches the dialog back to the UI thread and waits for it to complete. This should make it easier for us to compose more complex flows in our code.

You can download the code here.

Thursday, November 4, 2010

What’s the fuss around Silverlight about?

Since day one of PDC10 there has been a lot of press attention for what appears to be a change in strategy for Microsoft around Silverlight. As I’ve gotten several questions from people concerning the stories published by several respectable sources and how this would impact the business, I decided to write down my take on this.

The PDC10 debacle

So what are people on about? It basically comes down to two things. One is the “lack” of attention for Silverlight at PDC10. The other is Microsoft announcing that they completely back HTML5. This was elaborated on by saying that there has been a shift in the strategy around platform-independent web technology. Also, because there has not been an announcement on Silverlight 5, people are scared into believing there will not be a Silverlight 5.

Most people interpreted this by concluding that Microsoft will limit their investments in Silverlight to the Windows Phone platform. Let’s assume they are right in their conclusion. If that is true, then it’s devastating for all those developers and companies who have invested heavily in Silverlight in the past years.

My analysis on Silverlight

So how do I look at this? Well, there is a number of things I see differently from most people. Let’s have a look at some of those.

HTML5 vs Silverlight

A lot of people see HTML5 and think it replaces Silverlight for the browser completely. Having worked with both HTML4 and Silverlight and having looked into what HTML5 brings to the table so far (it’s not done yet), I have to disagree.

First of all most people that make this statement think of Silverlight as a platform for media and games, similar to Flash. However there is a lot more to Silverlight. For one it is a lot better suited for Line Of Business (LOB) applications. Another major advantage is that everything works and looks exactly the same in different browsers and on the desktop.

Another major advantage of using Silverlight for web development is that you can use one programming language across different tiers, complemented by one markup language. Compare that to HTML and you immediately spot the problem. Not only do you need HTML for markup, you need CSS for styling and JavaScript for client interaction. To top it all off you need some server side language to work with data. That’s four languages for something that Silverlight does with two. Are the decision makers of the IT world paying attention? Less is definitely more in this case.

Because of all this, and a great IDE in the Visual Studio / Blend combination, productivity in Silverlight is much higher and more focused on business logic than in classic web application development. This allows developers to provide more value more quickly, making more money for the companies and their customers.

But Microsoft has given up on Silverlight, right??

Well, did they? Let’s look at some facts here.

First of all, this statement is all based on Microsoft communicating their support for HTML as the only true platform-independent technology. People think that this is a change in strategy, but in fact it’s not. Let’s face it, that is just reality. The downside of Silverlight is that there is no complete support for all platforms. I’m not only talking about Linux here; it’s also about a lot of mobile platforms. And it’s hardly realistic to expect to get support for all these platforms, as it’s not going to be cost effective to do so. In my book, this statement is stating the obvious and it would be stupid to say anything else.

But will Microsoft stop investing heavily in Silverlight because of HTML5? I don’t think so. First of all, HTML5 is far from complete. It will take at least another ten years to make it to the mainstream in its full glory. Second of all, HTML5 will have trouble adapting, because it is a global standard and everyone wants to have their say about it. Silverlight doesn’t have this issue.

And then there is the business side of things. Microsoft has invested so heavily in Silverlight, it would not make sense for them to stop now. They have cranked out three major releases of Silverlight in the past two years, including tooling for Visual Studio and support in Blend. They have invested in the Silverlight Toolkit. They brought Silverlight to the Windows Phone platform. And recently they introduced Visual Studio LightSwitch, which can generate complete Silverlight applications. Really, they have hundreds of millions of dollars invested in Silverlight, with great success as a result. And frankly, there is no reason for them to stop doing so, for all of the reasons mentioned before. As long as it keeps selling software for them, they’ll keep working on it.

Conclusion

So we can conclude that Microsoft will not stop pushing Silverlight forward in the foreseeable future. However, it is up to decision makers to stick with the facts and not go with the press buzz, or they will be investing in the wrong technologies, losing a lot of money in the process. And it’s up to developers to stand by their choice of technology. There really is no need to suddenly change anything.

Just remind yourselves, what made you choose Silverlight as a platform in the first place? Exactly.

Friday, October 15, 2010

Adventures while building a Silverlight Enterprise application part #37

In this post we are looking at Entity Framework and specifically at some performance quirks.

The story

Recently we were testing part of our new software. One thing we noticed is that, besides the first data load being slow because of services starting up and Entity Framework loading metadata, the first time we save some data is also slow.

We decided that it was worth investigating why this was happening. The goal was to at least come up with an explanation and some more insight into when this was occurring.

Analysis

Some preliminary analysis I made:

  • This was not the same thing as starting up the service and Entity Framework loading metadata because of that. I knew that because I had just loaded data from the same service and model that I was now saving to.
  • It was caused by something in the service. I could tell because we have a busy indicator active whenever a service request is pending completion.
  • It was not caused by any specific code for one of our business objects as it didn’t matter which object I would save. It was always slow the first time.

As you may know if you have been following this series for some time, we have a bunch of complex generic code that’s involved when either loading or saving data. I decided my first task was to identify exactly what line or section of code was causing the delay the first time around. To accomplish this I placed some debug code which would log the exact time to the debug output. This way I could easily calculate where a delay occurred.

By following this strategy I quickly found out that the delay for the first save occurred when calling SaveChanges on our model. However, reflecting the code for that method did not reveal any obvious cause. I searched the internet to find out if anyone had already cracked this problem, only to find that a handful of people had seen the same issue, but none of them had actually resolved it.

As I still had not pinpointed the exact cause, I decided to build a test application to demonstrate the problem and hopefully get a better understanding. As we are currently still using version 2.0 of Entity Framework and running on .NET Framework 3.5, I used Visual Studio 2008 to build my test application on top of the AdventureWorks database. I needed it to do basically two things:

  1. Load a single object from the model to initialize the Entity Framework model
  2. Save a change to a single object from the model and time the operation
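
In code, step 2 boils down to something like the sketch below (simplified; the entity, property and listbox names are illustrative, and the context comes from the model loaded in step 1):

// Time a single SaveChanges call; the first one pays the metadata cost
Stopwatch stopwatch = Stopwatch.StartNew();

customer.ModifiedDate = DateTime.Now;   // a trivial change so there is something to save
context.SaveChanges();

stopwatch.Stop();
resultsListBox.Items.Add(string.Format("Save took {0} ms", stopwatch.ElapsedMilliseconds));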

To make my tests more flexible I decided on a WPF UI that allows me to control the above steps and it presents me with the results instantly. Check out the screenshot below:

[Screenshot: EFPerformanceTest]

The times displayed are in milliseconds. Let me explain what you see here. On top are two rows of buttons, one for the full AdventureWorks model and one for a model with only the Customers table (both models do connect to the same database). Below that is a listbox with performance results for each save action. At the bottom there is a general log to see what is happening.

Basically what you see in the performance logs is this. The first save action is done after loading a customer from the complete model. It takes a whopping 177ms. The next save action, after loading the same record again, takes only 2ms! This reproduces the issue we’ve been having.

Now the next step is doing exactly the same thing with a model that contains only the Customers table, so there is a lot less metadata to be loaded. It should be noted that both models are placed in separate assemblies to make sure the metadata of one model does not interfere with the other. The time differences are a lot smaller between the first save (28ms) and the second save (4ms). Note that as times get this low they can fluctuate because of other processes on the system, which can account for the 2ms increase between the large model and the small model.

Because of this we can now conclude that Entity Framework loads some metadata when the first save occurs. Unfortunately we cannot do much about that, but at least we now have an explanation.

Taking the next step

But the research didn’t end there. One of my colleagues suggested testing this with .NET 4.0 and Entity Framework 4.0 to see if there would be an improvement there. So I converted the project to Visual Studio 2010 and recreated both models using Entity Framework 4.0. The times are like this:

  • Saving the first time on the complete model: 82ms
  • Saving the second time on the complete model: 7ms
  • Saving the first time on the single table model: 36ms
  • Saving the second time on the single table model: 3ms

So work has been done in Entity Framework 4.0 to improve performance while loading metadata. Please note that my measurements were made on a system that is not exactly high performance and thus the measurements are not very accurate, however the differences between EF 2.0 and EF 4.0 with the first save are still significant enough.

Downloads

You can download the source of both versions of my test application:

Please be aware that the first project is a Visual Studio 2008 project and the second one is a Visual Studio 2010 project. You can however upgrade the first project, without changing the results.

Conclusion

So, although we can’t improve the performance of the first save action on an Entity Framework model right away, at least we can now explain what happens and what the actual impact is. As this only happens when IIS recycles the application pool for our web service, the impact is minimal. Also, we can soften the pain as soon as we migrate our service to Entity Framework 4.0.

I hope you have learned as much as I have writing this post. If you have any questions, please leave a comment below and I’ll be happy to help you out.

Friday, October 8, 2010

Developers 42 around for two years!

Today we celebrate the second anniversary of the Developers 42 Blog. In this article we look back at the past years and we look forward into the future.

Looking back

Wow, it’s been two years. Part of me feels like it’s been forever and part of me feels like I started yesterday. If I think about what I started this blog for, I guess it comes down to sharing my experiences as a developer with anyone who’s interested. However, if I now look back at what I got from this, I feel I got a lot more from building and maintaining Developers 42.

A learning experience

I guess I was a little ignorant when I first started out. I figured it would mostly come down to discipline to maintain a reasonable blog, but I found that it takes knowledge and skill to promote a blog and to keep everything running smoothly.

But I also found out that writing for a technical blog like this one means you have to adapt your workflow to facilitate blog posts. One thing I find myself doing over and over is generalizing scenarios I run into in my day to day work. It can be very helpful in getting a better feel for the problems you work on, but it can also be a time consuming challenge to come up with good examples of something that is very specific to the field I work in.

Another thing I learned that surprised me is the way referrals work. I never thought that putting a link in a forum signature would generate the results it did. Nor did I think republishing to CodeProject would bring me many hits, but it does bring reasonable results.

Opportunity

Another thing I got from running Developers 42 is opportunity. Because of running a blog I get the occasional job offer, including offers to join interesting startups (unfortunately all kinds of reasons prevented me from jumping on to any of these so far). I also got invited to write for SilverlightShow.com, which I still enjoy doing from time to time.

I’ve also met new people and got the opportunity to help people from around the globe with issues they encountered, which is rewarding in itself.

Setbacks

Obviously it wasn’t all a big party. I’ve had to take a step back in posting on more than one occasion. Personal life and work life both interfered with my goals for Developers 42. This resulted in me only posting 18 times (not including this post) over the past year.

Defeat was also part of the deal. I decided to scrap the CodeEmbed4Web project, as it was clearly not going to be a success. In fact I have lots of unfinished projects lying around which I’m not likely to continue.

But enough of that. Let’s move on to more positive things.

The numbers

In fact the numbers might not all seem so positive. There was a reduction in both visitors and page views of around 27.5%. But if you compare that to the reduction in posts (around 35% less content), this suddenly becomes promising. If I had posted the same amount as last year, hits could have doubled!

Looking at the number of subscribers reflects this positive image. There was actually an increase in subscribers over the past year. Check out the graph below!

[Graph: RssGraph2010]

I’m really happy with the progress made in this area. It is really motivating to know that people are interested enough to actually subscribe to my RSS feed.

Looking to the future

So what about the future? Well, if there is one thing I learned in the past year, it is that you can be too ambitious. I do have some plans for new things, but overall my main goal is to keep posting regularly. However, if there is a topic you would like me to write about, even if it is only remotely connected to software development, I’d love to hear about it.

Furthermore, I hope to welcome more new readers in the coming year, both as subscribers and as casual visitors.

Thank you!

The last thing that remains is to thank you, the reader of this blog, for sticking with me here. Without you this wouldn’t be much fun. I hope you have enjoyed yourself this past year as much as I have and that you have learned as much as I have.

A special thank you goes out to Dave Campbell from Wynapse, for so patiently following and linking to my blog. Another thanks goes out to Svetla from SilverlightShow.net for helping me out with getting published on their website.

I really value your feedback, especially if there is anything you would like to see different or added, or if you have any suggestions about topics to discuss. I hope to see you around this coming year and I hope we can share some insights.

Tuesday, September 21, 2010

Adventures while building a Silverlight Enterprise application part #36

This article is about appreciation. It may sound a bit out there, but please hang on. It is about my job of being a lead developer.

The story

Roughly two months ago we were confronted with a deadline, when one of the product managers told us he had scheduled a demo for a large group of our customers. He included a list of the functionality he needed to demo in order for it to be successful. He also mentioned that some of the functionality currently planned would not be needed.

I immediately started planning towards this demo, to see if it would be manageable to reach the deadline. I ended up with a plan so tight, I felt it was doomed from the start. I discussed this with management and we agreed that in order to get this to work, the whole team would have to work structural overtime for four weeks to have a chance of making it. All in all I was worried, but figured that working hard and keeping the right focus would be crucial in achieving the goal set for us.

The next thing I did was take a week off right after the deadline (including the day of the demo). You might think I’m crazy for doing this, but let me explain. The demo would take place on Tuesday. I needed to make sure that by Friday we would have a working demo. If not we should fall back to a PowerPoint presentation. To force that GO/NO GO moment, I figured simply not being there would be a good way.

After working completely focused on this deadline for six weeks and the whole team doing overtime for at least four of them, we managed to deploy a working demo on Friday late in the afternoon. It was a complex piece of work which integrated our Silverlight application with various services, Excel and Reporting Services using a complex flow of events.

So what’s wrong, then?

So far so good, but as I checked my email on Sunday night after a week of vacation, I noticed something missing. No email reporting on the presentation, no note to the team telling them they did a good job, or a horrible one for that matter. I was disappointed. The whole team had worked so hard to achieve something great, and managed to do so against the odds, and it appeared to go unnoticed.

Obviously something must be wrong here. I talked to several people and found out that the product manager became ill the day after the demo. My manager and my manager’s manager (yes, I know it sounds silly, but it does work) both didn’t consider sending an email. I also found out that the total presentation, of which the demo was only a small part, didn’t go as planned, which unexpectedly reduced the impact of the demo. It makes it understandable that none of the managers involved thought to send out a note, as they were disappointed about the presentation, but it doesn’t form an excuse in my book and they should have sent out an email.

Why is that important?

So what’s the big deal? We all have to work overtime every now and then, right? Well, the reason for doing overtime is important. If you failed to deliver on schedule because you misjudged the amount of work, or you messed something up, it’s fine to spend some of your private time fixing that, so you don’t get behind. However, if someone imposes a nearly impossible deadline, things should not be taken for granted.

Sending out a simple email stating your appreciation for the effort and flexibility of a team of people can make all the difference. This is especially true the next time around (yes, there is always a next time), but it also keeps people motivated to keep at it under normal circumstances.

What are your thoughts on this? Let us know by leaving a comment.

Thursday, August 19, 2010

Microsoft introduces LightSwitch: Game changer or gadget showoff?

At VSLive 2010 Microsoft announced Visual Studio LightSwitch. In this article I want to take a short look at what it is, based on the information provided by Microsoft so far. Then I’ll share some of my thoughts with you about how this might affect us as developers.

So what is this LightSwitch thingy about?

Well, basically it is a new edition of Visual Studio 2010 that enables people to quickly build data oriented applications (are there any other types?). ‘Oh,’ I hear you say, ‘another Oracle Forms clone. That’s not going to be any good.’ Well, first of all I’d say go and take a look at the VSLive 2010 Keynote. Also there is a great introduction video here. And for more of a deep dive you can check out this video.

‘Ok, so it’s just the next Microsoft Access?’, I hear you ask. Well, not quite either. The big difference with the previously mentioned platforms is the fact that you get an application with a multi-tier architecture and you get to choose what your client is and what your data source is. You can even have multiple data sources and link them together. And if you need to take your application to the next level, there are all kinds of extensibility points in both the underlying framework and the IDE itself.

On the UI end they have figured out a way to do better as well. In the IDE you don’t get a WYSIWYG editor, but a sort of tree view, so you can better see and understand what you’re doing. The generated UI is also based on templates, so you can easily update the look and feel of your application in one place. And when you do want a WYSIWYG editor, you just start up your application, click a button and you’re on your way.

All in all it seems that Microsoft has succeeded in building a tool that a lot of companies have attempted and failed to build.

The holy grail of database oriented systems?

For decades a lot of companies have attempted to build a system where you would simply define a data model, press a button and presto, there is your new software. Most companies failed and some partly succeeded, but no one was able to build a system that would actually work in a broad spectrum of cases and still provide enough customizability to implement the very specific features needed in very specific applications.

Most of the time failure occurred because the software would be too generic and the resulting applications would not meet enough of the users’ needs. And when teams would attempt to implement code to satisfy those needs, they would either become too specific or too expensive.

But is it really that simple? I don’t think so. Most database oriented systems I’ve worked on or with always have some form of extra processing, connect to external systems in special ways or need some very specific UI. In practice I think that most applications that professional developers work on will not suddenly be built with LightSwitch. There may be some, but not many. There will be many reasons not to build an application in LightSwitch, but the main reason will be that developers feel they might not have enough control to actually build what is required. Another important reason is that you don’t get to choose what technology is used to build your application.

‘So it’s not the holy grail, then? It must fail miserably’ I hear you say. Well, no.

LightSwitch as a game changer

LightSwitch will be a game changer. We will see less complicated systems being built in LightSwitch all over the place. For Microsoft, I guess it is all about getting the smaller companies to go with them, instead of going open source. And LightSwitch does give those companies an important advantage in allowing them to cheaply and easily build their own software and still keep that software scalable. So in that respect it is a game changer.

Another effect will not be seen until we have moved on a couple of years. Right now we see a lot of applications built in Microsoft Access, which have reached their limits and need a ‘professional’ rebuild. Usually that means bringing in a team of developers to get the job done. Chances are these applications will not be rebuilt in LightSwitch. However, what will happen with the LightSwitch applications in a couple of years, when they have reached their limits? Exactly, developers will need to extend those systems. Mind you, I do think this will greatly reduce the number of application rebuilds, as I expect Microsoft to come up with a good upgrade strategy for future versions of LightSwitch, making sure they are always running on the latest and greatest technology.

And finally, I think we will be seeing more and more ‘secondary’ applications. What I mean by that is applications which are used to fill in some gaps in our current processes. For example, if you are building a LOB application and you have some tables in your databases that your company needs to maintain for your customers, why not build the UI for that in LightSwitch? Or if you need to collect metadata for generating code, why not do that in LightSwitch? And there will be many other uses like this for LightSwitch, where it can add to existing systems as well.

Conclusion

As with a lot of the new technologies that we’ve seen in the past decade, LightSwitch provides us with another useful tool in our toolbox. It again takes away some of the repetitive work we as developers do, so we can focus on what makes our software unique. This will be a game changer, just not in the way a lot of people expect.

So what are your thoughts on LightSwitch? Let us know by leaving a comment.

Friday, August 6, 2010

Adventures while building a Silverlight Enterprise application part #35

In this post we’ll look into something I ran into while doing rework on our framework. It involves the order in which code gets executed when using events and such.

The situation

We’re currently in the process of getting our first deliverable out the door on time. One of the things that needed to be addressed before we can actually start testing this code properly is to get a backlog of issues out of the way. I decided to do that myself as it gives me a better feel for what the common problems are and the quality of work we as a team put out. I also realized that issues may have arisen because of wrinkles in the framework architecture, where we’ve had to take shortcuts to get to our first deliverable, and I wanted to be able to address these as needed.

Some architecture insight

For you to make sense of what I did to solve one of the problems I encountered, it is important to have some understanding of how our application works (at least on the client side). We host several ‘applications’ inside our Silverlight client. In the end we will probably have about six of these applications. Each application follows the same design. On the left there is navigation, consisting of a ‘menu’ and some UI to select records. Depending on the type of record the user selects and whatever authorization the user has, we load modules to work with the data.

To create new records, we use a concept we call a task. This is basically a wizard-like UI, which runs as a popup on top of the other UI. It consists of some of the same modules as used when the user navigates to a record, only in a different visual state. This makes for some interesting scenarios, as it is likely that there are two instances of the same module using and updating resources that only have a single instance.

One of these scenarios is updating the navigation after creating a record through a task. As the task does not have any knowledge of what the module does, it is the module’s responsibility to update navigation after creating a new record. To do that, it updates something we named ViewState, which in turn updates both the navigation and the modules that are needed to present this new record to the user.

However, creating the record is usually done in the first module of the task, before other modules can save their data (because they might need to create a relationship with the new record). This means that whenever the modules get the signal to load new data, most of them can’t, because the data is not there yet. It has everything to do with the order in which things are executed, especially because we use events to directly update our modules.

A design oversight

The fix is actually simple once you realize we had a design oversight in our ViewState. Why should you update navigation and load data into modules, when you have a popup presented to the user and the user can’t interact with the newly loaded data? It doesn’t make any sense. So the fix was to get the ViewState to hold any updates to the rest of the application if a popup is being presented and trigger the events once the popup is closed.

The first step to do that was to create a generic way to trigger the events in the ViewState. This is what I came up with:

private void InvokeEvent<T>(MulticastDelegate target, T args) where T : EventArgs
{
    if (target != null)
    {
        if (PopupBridge.Bridge.IsPopupOpen)
        {
            EventHandlerQueueItem<EventArgs> queueItem = new EventHandlerQueueItem<EventArgs>();
            queueItem.EventHandler = target;
            queueItem.EventHandlerArguments = args;
            _eventQueue.Enqueue(queueItem);
        }
        else
        {
            target.DynamicInvoke(new object[] { this, args });
        }
    }
}

The InvokeEvent<T> method takes a MulticastDelegate (which represents the actual event instance) and an args argument based on T, where T should always be EventArgs or derive from it. To make this as easy to use as possible, I first check to see if there is actually a valid target (which is null if no handlers are attached). Next I check to see if there is a popup open. In our case there is a PopupBridge that has that information for us. Now, if there is no popup open, I simply call DynamicInvoke on the target and pass in a reference to the ViewState and the args argument.



If there is a popup open, though, I create an EventHandlerQueueItem. This is how that struct looks:





internal struct EventHandlerQueueItem<T> where T : EventArgs
{
    public MulticastDelegate EventHandler;
    public T EventHandlerArguments;
}

As you can see, it is a simple struct that holds both a MulticastDelegate and the arguments to pass off to any handlers. Once we had that, we could declare a Queue<T> to hold these instances. As we can’t write something like Queue<EventHandlerQueueItem<T>>, I decided to use covariance and go with Queue<EventHandlerQueueItem<EventArgs>>.
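
With that in place, raising an event from the ViewState always goes through InvokeEvent<T>. As a hypothetical example of how an event on the ViewState would use it (the RecordChanged event is made up for illustration):

private readonly Queue<EventHandlerQueueItem<EventArgs>> _eventQueue =
    new Queue<EventHandlerQueueItem<EventArgs>>();

public event EventHandler<EventArgs> RecordChanged;

private void OnRecordChanged(EventArgs args)
{
    // Queued while a popup is open, invoked immediately otherwise
    InvokeEvent(RecordChanged, args);
}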



Now all that remains is to trigger the events once the popup is closed. In our case the PopupBridge knows about this and triggers an event for it. The handler looks like this:





void Bridge_PopupClosing(object sender, PopupClosingEventArgs e)
{
    if (!PopupBridge.Bridge.IsPopupOpen)
    {
        while (_eventQueue.Count > 0)
        {
            EventHandlerQueueItem<EventArgs> item = _eventQueue.Dequeue();
            if (item.EventHandler != null)
            {
                item.EventHandler.DynamicInvoke(new object[] { this, item.EventHandlerArguments });
            }
        }
    }
}

As there can be multiple popups open I still need to check if there is a popup open at that point. If they are all closed, we can just keep calling Dequeue and invoke the handlers until the queue is empty.



Now that we have solved this design oversight, modules that need to load data that was created in a task, will always be able to get that data from the server and everything works as expected.



I hope you find these pieces of code useful. If you have any questions or comments, please leave them below.

Monday, July 26, 2010

Adventures while building a Silverlight Enterprise application part #34

In this post we’ll look into analyzing performance in a Silverlight Line Of Business application across multiple service layers, and we come across a common problem when using ADO.NET Entity Framework, including a not so obvious solution.

The story

At the moment we are working on getting out a deliverable to a group of customers as soon as possible. However, one of the issues that keeps coming up is performance. A general response by a lot of developers is to just scour the web for performance tips on different technologies used in the stack and apply as many of them as they possibly can. However I find it much more useful to analyze the problem first.

Getting some data

To get a better feel for the problem, I first needed to get some numbers. When starting out with a performance issue, the first question to answer is “where is most time being spent?” This usually leads to the point where most time is being wasted as well.

At first I started out by profiling our application with Visual Studio 2010, but this wasn’t very useful to us, as it didn’t show us any data on communication with the service. I do feel compelled to link to some useful information on profiling your Silverlight application, as it isn’t available through the IDE just yet and it requires you to jump through some hoops to get it to actually show you any performance data on your code. You can find a decent post by Oren Nachman on this here.

I should mention that profiling our application revealed one small performance issue with creating dynamic objects, which we use to instantiate modules in our application based on authorization and application settings. However, looking at the big picture, it is hardly relevant to us, and it would be hard to remedy in a good way for our application.

The next step for me was to actually get data on communication with our services. As we designed our application so that our service connections are abstracted into separate generic classes and we use generic service calls to get to our data, the easiest way for me was to simply record DateTime.Now.Ticks. To keep the impact of this as low as possible in any production scenario, I decided to write this information to the VS debug window through Debug.WriteLine. Also, to make sure this can be turned off easily to allow for other verbose debugging information, I introduced a conditional compilation symbol called DEBUGGINGPERFORMANCE and wrapped all my recording code with it. The code for me looked something like this:

#if DEBUGGINGPERFORMANCE
            Debug.WriteLine("BusinessLayerConnection.GetDataAsync<{0}>({1}): {2}", typeof(T).Name, id, DateTime.Now.Ticks);
#endif


Running the application with this debugging code gave me some interesting data, pointing me towards a specific operation which gets all the data for a particular module at once. Within that set of modules, one stood out that was taking far longer than the other modules, so I decided to investigate it further. I decided to use Fiddler to make sure none of this was actually caused by bandwidth issues. The statistics clearly showed me that the problem was with the service taking up a lot of time before actually starting to send a response.

This led me to put similar code in the webservice. Note that Debug.WriteLine doesn’t support the format approach by default and you should wrap that in string.Format. To actually make for a better performance test, I decided to build a small test tool to just make the same service calls several times in a row. This makes sure we’re working with correct numbers instead of chasing some random external factor.
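
On the service side that wrapper looks something like this (a sketch; the label and variable are made up):

#if DEBUGGINGPERFORMANCE
            // No format overload available here, so compose the message first
            Debug.WriteLine(string.Format("SaveData({0}): {1}", businessObjectName, DateTime.Now.Ticks));
#endif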

As I started testing like this, I found out there were a lot of messages in the debug output stating that ‘A first chance exception of type ‘ had occurred. In this case a large part of these exceptions were related to another service we use, which couldn’t access one of our databases. This was obviously an easy fix and it did improve performance considerably.


Fixing a common Entity Framework problem

After getting one exception out of the way, I found there were some more exceptions being thrown. The exact message that was displayed was: A first chance exception of type 'System.NotSupportedException' occurred in mscorlib.dll. Performance data indicated that this took roughly 3 seconds for six exceptions in each request!

I found it weird to see an exception like this. A web search didn’t return anything useful, except that it could be related to Entity Framework somehow. A forum thread pointed to this bug report about dynamic assemblies, which in itself wasn’t very useful information either.

I also found people asking questions about this message all over the internet, but without any real solutions. I turned on breaking on System.NotSupportedException in Visual Studio, attached the debugger to the service and ran my test again. As expected, it halted in the debugger at the point where the exception was thrown. It halted in mscorlib.dll, in this case in the System.Reflection.Emit.AssemblyBuilder class in the method GetManifestResourceNames(). This does confirm that it is actually caused by the bug I mentioned earlier, however it doesn’t really help us with a solution yet. The question now is, “why is someone calling this method on a dynamic assembly?”.

Browsing up the call stack makes it very clear that this code is being called from the Entity Framework. In fact it is called from the System.Data.Objects.ObjectContext constructor, but why? Taking a closer look at the call stack, there is some indication that it must have something to do with initializing the metadata needed by the Entity Framework. It also reveals that the exception occurred in the System.Data.EntityClient.EntityConnection class in a method called SplitPaths. Looking at the code in that method and the state it was in as the exception occurred actually showed me what caused the problem. In the SplitPaths method there is a string[] called results (what’s in a name, right?) which contained three strings: res://*/Model.csdl, res://*/Model.ssdl and res://*/Model.msl.

Look familiar? That’s because these URLs are found in the default connection string for Entity Framework and they should point to the three files that make up the Entity Framework metadata. For some more documentation on that look here.

What was actually happening here was that a dynamic assembly was asked for its resources, as the Entity Framework code was looking for a metadata file in all the loaded assemblies, as indicated by its connection string. The dynamic assembly that was loaded in this case was the binary for the WCF web service that hosts our DAL. The fix was simple: just point to the exact assembly by its full name in the connection string for each metadata file and it’s fixed. This is how the metadata entry in our connection string looks now (obviously with some name changes):


metadata=res://OurApp.Data, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null/OurModel.csdl|res://OurApp.Data, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null/OurModel.ssdl|res://OurApp.Data, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null/OurModel.msl;


I hope you found this search for this fix as useful as I did and it helps you out. Please leave me a comment if you have any questions.

30-07-2010 EDIT: Added the metadata part of the connection string to make the solution more clear. Thanks to greg-joyal for pointing that out to me.

Sunday, July 18, 2010

We’re getting it done

You might wonder what I mean by that. I’m referring to us moving into our new house. After about a month of hard labor, we’re finally seeing the results. We’re down to the details indoors, and outdoors will be progressing in the coming week. I now have a decent place to work, with a large desk and my PC right there.

That means it’s time to start going back to the normal routine and blog a little. Last week I saw this keynote recording where Scott Hanselman talks about why all developers need a blog and how to make a blog suck less. Although I can come up with reasons why not all developers should have a blog, I do agree with him that you should engage in some form of social networking over the internet in order to be successful as a developer these days. And the pointers he gives on improving your blog inspired me to finally handle the spam issue I’ve been having here at Developers 42.

I already announced this earlier, but today I have finally turned on comment moderation. This will surely prevent any spam from reaching you as a reader. Unfortunately this also means having a delay between you posting an awesome comment and someone else being able to read that comment. I guess that’s just the way things are, the good having to suffer because of the evil spammers.

Still I hope this doesn’t prevent you from leaving your comments here so we can all interact and learn from one another.

Tuesday, May 25, 2010

Adventures while building a Silverlight Enterprise application part #33

In this post I want to look back and reflect. We’ve run into some issues with our project, so it seems like a good moment to learn some stuff from what we have and have not done.

Some housekeeping first

Before getting into all that though, I need to share some other stuff with you, dear reader. As you might have noticed, I haven’t been posting as much as I used to. There are some reasons for that. One of them is that I’m busy building a new house (it’s almost finished) and it takes up a lot of time getting everything organized, etc.

Another is the changes that went on in our company. I’m now heading up a team of developers and it takes more time and effort, leaving me with less time and energy to spend on writing posts.

And finally there is SilverlightShow.net. I’ve been writing some specific Silverlight articles for them. You can find all of them here. Writing for SilverlightShow means I have a narrower format and there are some restrictions as to what I can write, but I do have a larger audience over there. This made me decide to write my specific Silverlight content for SilverlightShow and keep my other content here at Developers 42.

Something else I wanted to run by you all is that I’m planning on enabling comment moderation. The reason being that I’m receiving spam through the comments feature on a regular basis now. I do need to wait until I have moved into my new house and have a decent internet connection, so I can moderate any comments in a timely fashion.

Project issues

On to more important matters. Obviously I’m not going to go into detail about our internal affairs concerning the project. The goal here is to share some more generic pitfalls with you and think about what we need to do differently in the future to prevent it from happening again.

So what kind of issues are we running into? Let’s name some:

  1. Longer than anticipated Time-To-Market
  2. Organizational changes impacting the project
  3. Framework features don’t always map to implementation features

Let’s see how these issues manifested themselves and how we deal with them.

Longer than anticipated Time-To-Market

To have a better understanding of how this happened, it’s important to know the history of the company and the team of people involved in the project. The company I work for has existed for over forty years. Obviously, in the IT business this is forever. There are people who have worked here for over twenty years (including developers). This company went from mainframe software to client-server and now to a service oriented architecture. How is this relevant?

They calculated the amount of time it takes to build a client-server application with current tools, instead of looking into the more complex service oriented architecture and the extended timelines that come with it. This made for an unfounded estimate that was far from realistic.

Another issue contributed to this. As I started working for the company, there was no one dedicated to the project and priority was given to the current product. This meant that people with intimate knowledge of the current product were unavailable at critical points during the early stages of the project. To overcome this, I should have done more investigation into getting the scope of the project right, instead of taking other people’s word for it.

Changes have been implemented to reduce the problems caused by our project running for a longer time. At first there were plans to release the entire product suite at once and migrate our customers over a short period of time after that. It has become clear that this scenario will not work, and we are working on a method for customers to migrate to part of the software, running old and new next to each other (on the same data). Also, a higher priority was given to our project so we can make more progress. And finally we’ve implemented some changes in how the project is organized, which leads me to the next topic.

Organizational changes impacting the project

Now, this wasn’t all bad, but it started out with some negative impact. Unfortunately we felt the impact of the financial crisis, which resulted in the loss of several developers. Most were reassigned to other departments, but some had to leave the company.

Also the project was reorganized several times, before the company reorganized altogether, impacting the project once more. Now we have a structure that works well, where we have only a couple of teams working on different parts of the product suite.

The problem with reorganizing is that it requires new procedures and reinventing the way people work together in a project. You need new forms of meetings and it takes time to get used to that. It can also create situations that take time to resolve. We’re now in the process of refining the current structure. So far we feel we can go forward with how everything works now.

Framework features don’t always map to implementation features

Because of the problems at the start of the project, we now have some mismatches between framework features and implementation features. This does slow us down. So why not just fix them? Well, we could do this, but right now priority is given to getting the first deliverable version out to a small customer group. This means most of the improvements we would like to make to the framework will have to wait until after we’ve delivered this first piece.

So what should we do differently to prevent this? Well, getting a better understanding of what the project entails before actually starting to design a framework helps. Also, a better analysis of how the bulk of the functionality is going to be built would help here. We’re seeing several cases where using the tools we developed is not working as we’d hoped. I guess knowing the users of a framework is every bit as important as knowing what features to include.

So did we do things right?

Absolutely! One of the things that keeps presenting itself is the way we have designed our generic business classes and how we have abstracted our service communication from the rest. This works great. As features are added to the generic code, we keep seeing how powerful this concept really is. The same goes for how we abstracted parts of the UI. We can add features in places we couldn’t have imagined as we started building the framework.

Conclusion

Just a short recap on what to look for when starting work on a framework:

  • Know the scale (both in terms of functionality as well as in data) of the software you’re building the framework for
  • Know the users of your framework
  • Analyze the way people need to be working in order to use your framework
  • Make sure you consult different types of users before committing to a certain work method
  • Make sure all the stakeholders understand what kind of effort is needed to build a decent framework

I hope you found it useful to read some of our experiences online. Please leave any comments below. They are greatly appreciated.