Wednesday, January 12, 2011

WebSite vs WebApplication

The only similarity between a web site and a web application is that they both deliver HTML documents using the HTTP protocol over an internet or intranet connection. However, there are some differences, which I shall attempt to identify in the following matrix:
1. Web site: Will usually be available on the internet, but may be restricted to an organisation's intranet.
   Web application: Will usually be restricted to the intranet owned by a particular organisation, but may be available on the internet for employees who travel beyond its reach.

2. Web site: Can never be implemented as a desktop application.
   Web application: May have exactly the same functionality as a desktop application; it may in fact be a desktop application with a web interface.

3. Web site: Can be accessed by anybody.
   Web application: Can be accessed by authorised users only.

4. Web site: Can contain nothing but a collection of static pages. Although it is possible to pull the page content from a database, such pages are rarely updated after they have been created, so they can still be regarded as static.
   Web application: Contains dynamic pages which are built using data obtained from a central data store, usually an RDBMS.

5. Web site: May be updatable by a single person, with everyone else having read-only access. For example, a web site which shows a pop star's schedule can only be updated by that star's agent, but anyone can visit the site and view the schedule.
   Web application: Any authorised user may submit updates, subject to his or her authorisation level, and these updates immediately become available to all other users.

6. Web site: May have parts of the system which can only be accessed after passing through a logon screen.
   Web application: No part of the system can be accessed without passing through a logon screen.

7. Web site: Users may be able to self-register in order to pass through the logon screen.
   Web application: Users can only be registered by a system administrator.

8. Web site: All users may have access to all pages in the web site, meaning that there may be no need for any sort of access control.
   Web application: May cover several aspects of an organisation's business, such as sales, purchasing, inventory and shipping, in which case users will usually be restricted to their own particular area. This requires some sort of access control system, such as a Role-Based Access Control (RBAC) system.

9. Web site: May need URLs that can be bookmarked so that users can quickly return to a particular page.
   Web application: Bookmarks are not used, as each user must always navigate through the logon screen before starting a session.

10. Web site: May need special handling to deal with search engines.
    Web application: As no URLs can be bookmarked (see above), all aspects of Search Engine Optimisation (SEO) are irrelevant.

11. Web site: Has no problems with the browser's BACK and FORWARD buttons.
    Web application: The browser's BACK and FORWARD buttons may cause problems, so code may be needed to detect their use and redirect to a more acceptable URL.

12. Web site: It is not possible to have more than one browser window open at a web site and maintain separate state for each window. State is maintained in session data on the server, and the session identity is usually held in a cookie. Because multiple browser windows on the same PC share the same session cookie by default, they automatically share the same session data and cannot be independent of one another.
    Web application: It may be beneficial to allow separate windows to have separate state, as this follows the standard behaviour of most desktop applications, which allow multiple instances, each with different state, to exist at the same time. The user can then work with one part of the application in one window and another part in another window (see the sketch after this table).

13. Web site: Execution speed may need to be tuned so that the site can handle a high number of visitors/users.
    Web application: As the number of users is limited to those who are authorised, execution speed should not be an issue. Here the speed, and therefore cost, of application development matters more; in other words, the focus should be on developer cycles, not CPU cycles.
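
Item 12 is worth a quick illustration. Below is a minimal sketch, assuming classic ASP.NET Web Forms (System.Web); the "wid" query-string parameter, the "cart_" key prefix and the OrderPage/LoadCartFor names are all hypothetical. It keys a slice of the shared session by a window id carried in the URL, so each browser window gets its own state:

using System;
using System.Web.UI;

public partial class OrderPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Each window carries its own id in the URL; the session cookie
        // stays shared, but the session data is partitioned per window.
        string windowId = Request.QueryString["wid"];
        if (string.IsNullOrEmpty(windowId))
        {
            windowId = Guid.NewGuid().ToString("N");
            Response.Redirect(Request.Path + "?wid=" + windowId);
        }

        Session["cart_" + windowId] = LoadCartFor(windowId);
    }

    private object LoadCartFor(string windowId)
    {
        return new object();   // placeholder for real per-window state
    }
}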

Wednesday, January 5, 2011

Are cloud storage providers good for primary data storage?

Why not use a cloud storage provider?
The most persuasive argument against using cloud storage for primary storage is application performance. Application performance is highly sensitive to storage response times. The longer it takes for the application's storage to respond to a read or write request, the slower that application performs. 


Public cloud storage, by definition, resides in a location geographically distant from your physical location when measured in cable distance. Response time for an application is measured as round-trip time (RTT), and numerous factors add to that RTT. One is speed-of-light latency, which there is no getting around today. Another is TCP/IP latency. Then there is a little thing called packet loss, which can really gum up response time because of retransmissions. It is easy to see that for the vast majority of SMB (small and mid-sized business) primary applications, public cloud storage performance will be unacceptable.
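
To see the difference for yourself, here is a minimal sketch using the Ping class from System.Net.NetworkInformation; the two host names are placeholders for your LAN storage device and your cloud provider's endpoint:

using System;
using System.Net.NetworkInformation;

class RttCheck
{
    static void Main()
    {
        // Placeholder host names -- substitute real ones.
        string[] hosts = { "local-nas", "storage.example-cloud.com" };

        using (Ping ping = new Ping())
        {
            foreach (string host in hosts)
            {
                // Send() throws on an unresolvable host; fine for a sketch.
                PingReply reply = ping.Send(host);
                Console.WriteLine("{0}: {1} ms ({2})",
                                  host, reply.RoundtripTime, reply.Status);
            }
        }
    }
}

On a typical connection the LAN round trip comes back in a millisecond or two, while a distant endpoint often takes tens of milliseconds, and every storage I/O pays that toll.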


When do cloud storage services make sense?
If an SMB is using cloud computing services such as Google Docs, Microsoft Office 365, or Salesforce.com, then it makes sense to store the data from those apps in a cloud storage service. In those cases, the data storage is collocated with the applications. Response time between the application and storage is the same as if the application and storage were in the SMB's location. The key issue here is the response time between the cloud application and the SMB user. In this scenario, the collocated storage is not the bottleneck to user response time. Therefore, if the cloud application performance is adequate, so too is the cloud storage.
If the cloud storage and the application that's using it are collocated, then it makes sense to use cloud storage as SMB primary storage. Otherwise, slow application performance would make using a cloud data storage provider a poor choice for your SMB environment.

Sunday, December 26, 2010

Encapsulation: Local change - Local effect principle

One of the central principles of object-oriented programming is encapsulation. Encapsulation states that the implementation details of an object are hidden behind the methods that provide access to its data. But why is encapsulation a good idea? Why bother to do it in the first place? Just stating that it's "good OO design" isn't sufficient justification.

There is one primary justification for encapsulation. It's a principle I call "Local Change - Local Effect": if you change code in one spot, it should only require changes in a small neighborhood surrounding the original change. When used properly, encapsulation allows software to change gradually without requiring bulk changes throughout the system (when a change of code in one place requires code changes in many other places, it is known as the Domino Effect).

Encapsulation helps follow this principle by allowing changes in the representation of an object's state. The methods for the object may be affected, but callers of those methods shouldn't be. The effects of the change are localized.
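
A minimal sketch of the idea (the Temperature class is purely illustrative): the internal representation switches from Fahrenheit to Celsius, and only this one class has to change.

public class Temperature
{
    // v1 stored degrees Fahrenheit; v2 stores Celsius instead.
    // Callers depend only on the methods, so none of them noticed.
    private double celsius;

    public void Set(double fahrenheit)
    {
        celsius = (fahrenheit - 32) * 5.0 / 9.0;
    }

    public double Get()
    {
        return celsius * 9.0 / 5.0 + 32;   // still answers in Fahrenheit
    }
}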

Polymorphism helps by allowing us to add new kinds of objects without changing existing code to know about them: you only need to add the new classes and their methods; existing code stays untouched.
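
For example (Shape and AreaReport are illustrative names), the total-area routine below never changes when a new shape class is added:

using System;
using System.Collections.Generic;

public abstract class Shape
{
    public abstract double Area();
}

public class Circle : Shape
{
    private readonly double radius;
    public Circle(double radius) { this.radius = radius; }
    public override double Area() { return Math.PI * radius * radius; }
}

public class Square : Shape
{
    private readonly double side;
    public Square(double side) { this.side = side; }
    public override double Area() { return side * side; }
}

public static class AreaReport
{
    // Adding a Triangle class later requires no change here.
    public static double Total(IEnumerable<Shape> shapes)
    {
        double total = 0;
        foreach (Shape s in shapes) total += s.Area();
        return total;
    }
}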

Inheritance helps by providing one place to put code that is common to many similar objects. Changes to this code can be isolated in the superclass and may require no changes to the subclasses at all.
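
A short sketch of that idea (Record and CustomerRecord are illustrative names):

public abstract class Record
{
    // Common workflow lives once, here; a fix in Save() or WriteToStore()
    // reaches every subclass without touching any of them.
    public void Save()
    {
        Validate();       // subclass-specific rules
        WriteToStore();   // shared plumbing, defined once
    }

    protected abstract void Validate();

    private void WriteToStore()
    {
        // shared persistence code would go here
    }
}

public class CustomerRecord : Record
{
    protected override void Validate()
    {
        // customer-specific validation only
    }
}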

There are many coding practices that tend to work against the local change/local effect principle. They include:
  • Copy-and-paste code - by making more copies of code, you have more things that need to be changed for any change in design.
  • Public instance variables - by making instance variables public, more code can use them directly, requiring more changes if you need to change the representation (see the sketch after this list).
  • Manifest types - the type information for variables and parameters often causes domino-effect changes. When you change the type that a method accepts, you may have to change its callers, and their callers, and so forth.
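
A minimal before-and-after sketch of the second point (the Account names are illustrative):

// Before: every caller touches the representation directly, so changing
// it means changing all of them.
public class Account
{
    public double Balance;
}

// After: the representation can switch (say, to integral cents to avoid
// rounding errors) while callers keep compiling unchanged.
public class SafeAccount
{
    private long cents;

    public double Balance
    {
        get { return cents / 100.0; }
        set { cents = (long)(value * 100); }
    }
}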
In any software system, the one thing you can count on is change. The local change/local effect principle makes change possible. Without it, as a system gets larger, it becomes more brittle and eventually becomes unmaintainable.
Think about your design principles. If they don't support local change/local effect, you may be building a system that will become too brittle to ever change again.

courtesy: http://www.simberon.com/domino.htm

Monday, October 11, 2010

List of processes running on Remote/Local Computer using C#

using System.Collections;
using System.Diagnostics;

ArrayList alist = new ArrayList();

// From the local machine
Process[] processes = Process.GetProcesses();

// From a remote machine, pass the machine name instead:
// Process[] processes = Process.GetProcesses("RemoteComputerName");

foreach (Process process in processes)
{
    alist.Add(process.ProcessName);
}

Friday, October 8, 2010

Limitations of COM Interop

Following is the list of some shortcomings:

  • Static/shared members: COM objects are fundamentally different from .NET types. One of the differences is the lack of support for static/shared members.
  • Parameterized constructors: COM types don't allow parameters to be passed into a constructor. This limits the control you have over initialization and rules out overloaded constructors (see the sketch after this list).
  • Inheritance: One of the biggest issues is the limitations COM objects place on the inheritance chain. Members that shadow members in a base class aren't recognizable, and therefore aren't callable or usable in any real sense.
  • Portability: Operating systems other than Windows don't have a registry, so reliance on the Windows registry limits the number of environments to which a .NET application can be ported.
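
To illustrate the constructor point, a hedged sketch; ComWidget and Init are hypothetical stand-ins for a real interop type and its initialization method:

// COM coclasses expose only a default constructor, so initialization
// typically moves into a separate method call made after construction.
// "ComWidget" and "Init" are hypothetical names for illustration.
ComWidget widget = new ComWidget();   // no constructor parameters allowed
widget.Init("settings.xml", 42);      // initialise after construction instead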

Why Visual Studio hangs

Every once in a while, VS seems to take forever to display a screen, to the point that it seems to hang. Most of the time it hangs while accessing the Fonts and Colors page in the Tools/Options dialog. The issue is not that some weird code executes very slowly. It happens that this page is implemented using .NET components. The majority of VS is built with native code, and during most of its execution the CLR is never loaded. However, when the user accesses one of these features, the CLR must be loaded before the relevant IL can begin executing. It is this process that is time-consuming and annoying to the user. There are two problems for the user here: first, there is no feedback while the CLR is loading; second, the problem can occur multiple times within a single VS session.


I am trying to figure out the reason for this second issue. Let me know if any of you knows it.

Optional Parameter issue with COM and C#/VB

As we all know, C# doesn't support optional parameters (up to .NET Framework 3.5; they arrive with C# 4.0), whereas VB does. In the same way, COM components don't support parameter overloading, so for each value in a parameter list we've got to pass in something, even if it does nothing. Moreover, COM parameters are always passed by reference, which means we can't pass NULL as a value.


In VB 2005 this is not really an issue, because it supports optional parameters and we can just leave them out. But C# doesn't support this, so one has to create object variables and pass them in.


See following code sample:
using Microsoft.Office.Core;
using Microsoft.Office.Interop.Excel;  // Must have Office installed

Application NewExcelApp = new Application();
NewExcelApp.Worksheets.Add();       // This will not compile


So, as a workaround, the Type.Missing field can be used: pass it in from the C# code and the application will work as expected.


Check it in the code snippet below:


using System;
using Microsoft.Office.Core;
using Microsoft.Office.Interop.Excel;  // Must have Office installed

private Object OptionalParamHandler = Type.Missing;

Application NewExcelApp = new Application();
NewExcelApp.Worksheets.Add(OptionalParamHandler, OptionalParamHandler,
                           OptionalParamHandler, OptionalParamHandler);

This approach allows your code to work in C# :)
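
(For completeness: C# 4.0 on .NET Framework 4.0 adds optional parameters and lets the compiler supply Type.Missing for omitted COM arguments, so there the first snippet compiles as written:)

// C# 4.0 onwards -- no Type.Missing workaround needed
Application NewExcelApp = new Application();
NewExcelApp.Worksheets.Add();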

Wednesday, September 15, 2010

Making assembly visible to a COM component

The following steps are necessary to make an assembly visible to a COM component (a short code sketch follows the list):

  • Set the Register for COM interop option under the project's build configuration
  • Set the ComVisible attribute to true for each class you want exposed
  • Set the ComVisible attribute to false for any class members you want hidden
  • Set the ComVisible attribute to true for any class members that you want visible
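
A minimal sketch of the attribute usage (PriceCalculator and its members are illustrative names):

using System.Runtime.InteropServices;

[ComVisible(true)]                 // expose this class to COM
public class PriceCalculator
{
    [ComVisible(true)]             // callable from COM clients
    public double GetTotal(double unitPrice, int quantity)
    {
        return unitPrice * quantity;
    }

    [ComVisible(false)]            // hidden from COM clients
    public void ResetCache()
    {
    }
}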

Friday, September 10, 2010

Which is fastest: while, do-while or foreach?

Foreach will usually be at least as fast, as it maintains no explicit counter the way while and do-while do. Foreach essentially says "do this to everything in this set", rather than "do this x times". This avoids potential off-by-one errors and makes the code simpler to read.
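
A small illustration of the difference in intent:

using System;

class LoopDemo
{
    static void Main()
    {
        int[] values = { 1, 2, 3 };

        // while: explicit counter, bounds that are easy to get wrong
        int i = 0;
        while (i < values.Length)
        {
            Console.WriteLine(values[i]);
            i++;
        }

        // foreach: "do this to everything in this set", no counter at all
        foreach (int v in values)
        {
            Console.WriteLine(v);
        }
    }
}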

Using a frequently built assembly in other applications (.NET)

Assume that you are creating a strong-named assembly, MyAssembly, that will be used in several applications. The assembly will be rebuilt frequently during the development cycle, and you must ensure that every time MyAssembly is rebuilt it still works properly with all the applications that use it. To achieve this, configure the computer on which you develop the assembly so that each application uses the latest build of MyAssembly. To accomplish this, take the following actions:


  • Create a DEVPATH environment variable that points to the build output directory for the strong-named assembly
  • Add the following element to the machine configuration file: <developmentMode developerInstallation="true"/>. This element tells the CLR to use DEVPATH to locate assemblies (see the placement sketch below).
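
For placement, the element belongs under the runtime section of machine.config; a minimal sketch:

<configuration>
  <runtime>
    <!-- Tells the CLR to probe the directories listed in DEVPATH -->
    <developmentMode developerInstallation="true"/>
  </runtime>
</configuration>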