Saturday, May 14, 2011

Improving application start-up time with signed assemblies

I guess all are aware that signed assemblies need verification against the certification authority (CA) that issued the publisher's certificate. This verification can create panic when the certificate cannot be validated locally, because the runtime then needs internet access to contact the CA. The situation is more problematic if there is no internet or network access on that machine: in the absence of network access, the .NET runtime may block until the request times out, delaying application start-up. This issue can be avoided by adding the following setting in machine.config:
<configuration>
    <runtime>
        <generatePublisherEvidence enabled="false"/>
    </runtime>
</configuration>

Tuesday, April 19, 2011

Security concerns with serialization

Serialization can allow other code to see or modify object instance data that would otherwise be inaccessible. Therefore, code performing serialization requires the SecurityPermission attribute (from the System.Security.Permissions namespace) with the SerializationFormatter flag specified. The GetObjectData method of ISerializable should be explicitly protected this way to help protect your data.
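As a minimal sketch (the Account type and its field are invented for illustration), a class implementing ISerializable might guard both serialization paths like this. Note that these Code Access Security attributes apply to the .NET Framework; they are ignored on .NET Core / .NET 5+:

```csharp
using System;
using System.Runtime.Serialization;
using System.Security.Permissions;

// Both the serialization method and the deserialization constructor demand
// SerializationFormatter permission, so only trusted code can reach the state.
[Serializable]
public class Account : ISerializable
{
    private readonly string _owner;

    public Account(string owner) { _owner = owner; }

    // Deserialization constructor, also guarded.
    [SecurityPermission(SecurityAction.Demand, SerializationFormatter = true)]
    protected Account(SerializationInfo info, StreamingContext context)
    {
        _owner = info.GetString("owner");
    }

    [SecurityPermission(SecurityAction.Demand, SerializationFormatter = true)]
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("owner", _owner);
    }
}
```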

Sunday, April 17, 2011

Inner workings of Deserialization

Within the runtime, deserialization can be a complex process. The runtime proceeds through the deserialization process sequentially, starting at the beginning and working its way through to the end. The process gets complicated if an object in the serialized stream refers to another object.

If an object references another object, the Formatter queries the ObjectManager to determine whether the referenced object has already been deserialized (a backward reference), or whether it has not yet been deserialized (a forward reference). If it is a forward reference, the Formatter registers a fixup with the ObjectManager. A fixup is the process of finalizing an object reference after the referenced object has been deserialized. Once the referenced object is deserialized, the ObjectManager completes the reference.
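You can see the fixup mechanism directly with the ObjectManager API. The sketch below (with an invented Node type) registers object #1, records a forward reference to the not-yet-deserialized object #2, and lets DoFixups complete the reference, which is roughly what a Formatter does internally. Note that ObjectManager is marked obsolete in recent .NET versions:

```csharp
using System;
using System.Runtime.Serialization;

// A contrived Node type whose Friend field will be fixed up.
public class Node
{
    public Node Friend;
}

public static class FixupDemo
{
    public static Node Run()
    {
        var manager = new ObjectManager(null, new StreamingContext());

        // Object #1 arrives first; its Friend member refers to object #2,
        // which has not been deserialized yet (a forward reference).
        var first = new Node();
        manager.RegisterObject(first, 1);
        manager.RecordFixup(1, typeof(Node).GetField("Friend"), 2);

        // Object #2 arrives later in the stream.
        var second = new Node();
        manager.RegisterObject(second, 2);

        // The formatter asks the ObjectManager to complete all references.
        manager.DoFixups();
        return first;   // first.Friend now points at the second object
    }
}
```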

Limiting Threads in a ThreadPool

The ThreadPool class supports methods for setting the minimum and maximum number of threads in the thread pool. In most circumstances, the number of threads in the pool is already set at optimum values. If you find that your application is being constrained by the threads in the thread pool, you can set the limits yourself.
There are two situations in which you will want to change the thread pool's limits: thread starvation and thread startup speed.

In a thread-starvation scenario, your application is using the thread pool but is being hampered because you have too many work items and you are reaching the maximum number of threads in the pool. To set the high-water mark of threads for your application, you can simply use ThreadPool.SetMaxThreads.

In cases where the startup costs of using the thread pool are expensive, increasing the minimum number of threads can improve performance. The minimum number of threads dictates how many threads are created immediately and set to wait for new work. Typically, the ThreadPool limits the rate at which new threads are created during the running of a process to two per second. If your application needs more threads created faster, you can increase this minimum. Setting the minimum number of threads can be done by using ThreadPool.SetMinThreads.
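A minimal sketch of reading and adjusting these limits (the increments are arbitrary examples, not recommendations):

```csharp
using System;
using System.Threading;

public static class PoolTuning
{
    public static void Show()
    {
        // Read the current limits for worker and I/O completion threads.
        ThreadPool.GetMaxThreads(out int maxWorkers, out int maxIo);
        ThreadPool.GetMinThreads(out int minWorkers, out int minIo);
        Console.WriteLine($"max: {maxWorkers}/{maxIo}, min: {minWorkers}/{minIo}");

        // Raise the ceiling if work items are queuing up (thread starvation)...
        ThreadPool.SetMaxThreads(maxWorkers + 50, maxIo);

        // ...or raise the floor so more threads are ready immediately at startup.
        ThreadPool.SetMinThreads(minWorkers + 4, minIo);
    }
}
```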

Deploying COM-Enabled Assemblies

Although an assembly can be made visible to COM, you should follow the guidelines below to ensure that things work as planned:
  • All classes must use a default constructor with no parameters.
  • Any type that is to be exposed must be public.
  • Any member that is to be exposed should be public.
  • Abstract classes cannot be consumed from COM.
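A minimal ComVisiblePerson.cs satisfying these guidelines might look like the following sketch (the members and the GUID are made up for illustration):

```csharp
using System.Runtime.InteropServices;

// A COM-visible type: public, non-abstract, with a public parameterless
// constructor, and only public members exposed.
[ComVisible(true)]
[Guid("A77D12C1-9A7E-4BE3-8F2B-0C1D5E6F7A8B")]   // an invented GUID
[ClassInterface(ClassInterfaceType.AutoDual)]
public class ComVisiblePerson
{
    public ComVisiblePerson() { }   // default constructor, required for COM

    public string Name { get; set; }
    public int Age { get; set; }

    public string Describe() => $"{Name} ({Age})";
}
```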
After these criteria are met, the assembly is essentially ready to be exported. There are two mechanisms to do so: Visual Studio or a command-line utility (TlbExp.exe). First you need to compile the type, either through Visual Studio's build mechanism or through the command-line compiler:
              csc /t:library ComVisiblePerson.cs
Next you need to use Type Library Exporter Utility. This should be done from VS command prompt:
              tlbexp ComVisiblePerson.dll /out:ComVisiblePersonlib.tlb
Next you need to create a resource script (ComVisiblePerson.res) with the following Interface Definition Language (IDL) definition:
              IDR_TYPELIB1 typelib "ComVisiblePersonlib.tlb"
Then recompile the application with the new resource file added:
              csc /t:library ComVisiblePerson.cs /win32res:ComVisiblePerson.res


Tuesday, January 18, 2011

Moving a type (i.e. class) to another assembly

In .NET, one often refers to other assemblies that contain specific modules to use in an application. Say you reference a DLL that contains some classes that you will use in your application, and suppose the application is deployed. Now suppose you want to move one class of the DLL to another assembly. What can you do in this situation with the old coding methodologies? The old methodologies say:
  • Remove the class from the existing DLL.
  • Create another DLL (assembly) using that class.
  • Recompile both the DLLs.
  • From your application, add a reference to the new DLL.
  • Recompile the application.
  • Re-deploy the whole thing.
Wouldn't it be nice to leave the deployed application untouched, and make whatever changes are needed in the DLLs? Obviously, that would be nicer. That's where the TypeForwardedTo attribute comes into the scene. By using it, you can move your necessary classes out to a new assembly. Now, when your application looks for the class in the old DLL, the old DLL tells your application (the JIT compiler, to be precise), "Well, the person (class) who lived here has moved to another location; here is the address." It hands over the address, your application follows it, finds the class there, and things go on as before.
So, simply create another assembly (DLL) and move the class into it. Then compile the new DLL and add a reference to it from the previous DLL. Next, add a TypeForwardedTo attribute in the previous DLL to indicate that the specified type has been forwarded to another assembly. Then recompile the old DLL; as a result, you will have two DLLs in the release folder of the previous DLL. Now, just place both DLLs in the root of the deployed application (or wherever the old DLL is).
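As a sketch, the attribute in the old DLL looks like the following. In the real scenario you would forward your own moved class (a hypothetical Person, say); here System.String is forwarded purely so that the snippet is self-contained and compilable:

```csharp
using System;
using System.Reflection;
using System.Runtime.CompilerServices;

// In the real scenario this line would read, e.g.:
//   [assembly: TypeForwardedTo(typeof(Person))]
// after Person has been moved to the new DLL and that DLL is referenced.
// The compiler turns this into an exported-type entry in the metadata,
// which is the "forwarding address" the runtime follows.
[assembly: TypeForwardedTo(typeof(string))]

public static class ForwardDemo
{
    // Checks the assembly metadata for the forwarded type (.NET Core 2.1+ API).
    public static bool IsForwarded() =>
        Array.Exists(Assembly.GetExecutingAssembly().GetForwardedTypes(),
                     t => t == typeof(string));
}
```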

Article taken from here

Saturday, January 15, 2011

Running one process on multiple processors

A thread in a process can migrate from processor to processor, with each migration reloading the processor cache. Under heavy system loads, specifying which processor should run a specific thread can improve performance by reducing the number of times the processor cache is reloaded. The association between a processor and a thread is called the processor affinity.

Each processor is represented as a bit. Bit 0 is processor one, bit 1 is processor two, and so forth. If you set a bit to the value 1, the corresponding processor is selected for thread assignment. When you set the ProcessorAffinity value to zero, the operating system's scheduling algorithms set the thread's affinity. When the ProcessorAffinity value is set to any nonzero value, the value is interpreted as a bitmask that specifies those processors eligible for selection.

The following table shows a selection of ProcessorAffinity values for an eight-processor system.

Bitmask     Binary value          Eligible processors
0x0001      00000000 00000001     1
0x0003      00000000 00000011     1 and 2
0x0007      00000000 00000111     1, 2 and 3
0x0009      00000000 00001001     1 and 4
0x007F      00000000 01111111     1, 2, 3, 4, 5, 6 and 7
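As a sketch, a mask can be decoded into the processor numbers used in the table above and applied via System.Diagnostics.Process (per-thread affinity can likewise be set through ProcessThread.ProcessorAffinity); the helper names here are invented:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

public static class Affinity
{
    // Returns the 1-based processor numbers selected by the mask
    // (bit 0 = processor 1, bit 1 = processor 2, and so on).
    public static int[] EligibleProcessors(long mask)
    {
        var selected = new List<int>();
        for (int bit = 0; bit < 64; bit++)
            if ((mask & (1L << bit)) != 0)
                selected.Add(bit + 1);
        return selected.ToArray();
    }

    // Restricts the current process to the selected processors
    // (supported on Windows and Linux).
    public static void Apply(long mask)
    {
        Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)mask;
    }
}
```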

Wednesday, January 12, 2011

WebSite vs WebApplication

The only similarity between a web site and a web application is that they both access HTML documents using the HTTP protocol over an internet or intranet connection. However, there are some differences which I shall attempt to identify in the following matrix:
1. Web site: Will usually be available on the internet, but may be restricted to an organisation's intranet.
   Web application: Will usually be restricted to the intranet owned by a particular organisation, but may be available on the internet for employees who travel beyond the reach of that intranet.
2. Web site: Can never be implemented as a desktop application.
   Web application: May have exactly the same functionality as a desktop application. It may in fact be a desktop application with a web interface.
3. Web site: Can be accessed by anybody.
   Web application: Can be accessed by authorised users only.
4. Web site: Can contain nothing but a collection of static pages. Although it is possible to pull the page content from a database, such pages are rarely updated after they have been created, so those pages can still be regarded as static.
   Web application: Contains dynamic pages which are built using data obtained from a central data store, which is usually an RDBMS.
5. Web site: May be updatable by a single person, with everyone else having read-only access. For example, a web site which shows a pop star's schedule can only be updated by that star's agent, but anyone can visit the site and view the schedule.
   Web application: Any authorised user may submit updates, subject to his/her authorisation level, and these updates would immediately be available to all other users.
6. Web site: May have parts of the system which can only be accessed after passing through a logon screen.
   Web application: No part of the system can be accessed without passing through a logon screen.
7. Web site: Users may be able to self-register in order to pass through the logon screen.
   Web application: Users can only be registered by a system administrator.
8. Web site: All users may have access to all pages in the web site, meaning that there may be no need for any sort of access control.
   Web application: The application may cover several aspects of an organisation's business, such as sales, purchasing, inventory and shipping, in which case users will usually be restricted to their own particular area. This will require some sort of access control system, such as a Role Based Access Control (RBAC) system.
9. Web site: May need URLs that can be bookmarked so that users can quickly return to a particular page.
   Web application: Bookmarks are not used, as each user must always navigate through the logon screen before starting a session.
10. Web site: May need special handling to deal with search engines.
    Web application: As no URLs can be bookmarked (see above), all aspects of Search Engine Optimisation (SEO) are irrelevant.
11. Web site: Has no problems with the browser's BACK and FORWARD buttons.
    Web application: The use of the browser's BACK and FORWARD buttons may cause problems, so the application may need code to detect their use and redirect to a more acceptable URL.
12. Web site: It is not possible to have more than one browser window open at a web site and maintain separate state for each window. State is maintained in session data on the server, and the session identity is usually maintained in a cookie. As multiple browser windows on the same PC will by default share the same session cookie, they will automatically share the same session data and cannot be independent of one another.
    Web application: It may be beneficial to allow separate windows to have separate state, as this follows the standard behaviour of most desktop applications, which allow multiple instances, each with different state, to exist at the same time. This allows the user to access one part of the application in one window and another part in another window.
13. Web site: Execution speed may need to be tuned so that the site can handle a high number of visitors/users.
    Web application: As the number of users is limited to those who are authorised, execution speed should not be an issue. In this case the speed, and therefore cost, of application development is more important. In other words, the focus should be on developer cycles, not CPU cycles.

Wednesday, January 5, 2011

Are cloud storage providers good for primary data storage?

Why not use a cloud storage provider?
The most persuasive argument against using cloud storage for primary storage is application performance. Application performance is highly sensitive to storage response times. The longer it takes for the application's storage to respond to a read or write request, the slower that application performs. 


Public cloud storage by definition resides in a location geographically distant from your physical storage when measured in cable distance. Response time for an application is measured in round-trip time (RTT). There are numerous factors that add to that RTT. One is speed-of-light latency, which there is no getting around today. Another is TCP/IP latency. Then there is a little thing called packet loss, which can really gum up response time because of retransmissions. It is easy to see that for the vast majority of SMB (small and midsize business) primary applications, public cloud storage performance will be unacceptable.


When do cloud storage services make sense?
If an SMB is using cloud computing services such as Google Docs, Microsoft Office 365, or SalesForce.com, then it makes sense to store the data from those apps in a cloud storage service. In those cases, the data storage is collocated with the applications. Response time between the application and storage is the same as if the application and storage were in the SMB's location. The key issue here is the response time between the cloud application and the SMB user. In this scenario, the collocated storage is not the bottleneck to user response time. Therefore, if the cloud application performance is adequate, so too is the cloud storage.
If the cloud storage and the application that's using it are collocated, then it makes sense to use cloud storage as SMB primary storage. Otherwise, slow application performance would make using a cloud data storage provider a poor choice for your SMB environment.

Sunday, December 26, 2010

Encapsulation: Local change - Local effect principle

One of the central principles of object oriented programming is Encapsulation. Encapsulation states that the implementation details of an object are hidden behind the methods that provide access to that data. But why is encapsulation a good idea? Why bother to do it in the first place? Just stating that it's "good OO design" isn't sufficient justification.

There is one primary justification for encapsulation. It's a principle I call "Local Change - Local Effect": if you change code in one spot, it should only require changes in a small neighborhood surrounding the original change. When used properly, encapsulation allows software to change gradually without requiring bulk changes throughout the system (when a code change in one place forces code changes in many other places, that is known as the domino effect).

Encapsulation helps follow this principle by allowing changes in the representation of an object's state. The methods for the object may be affected, but callers of those methods shouldn't be. The effects of the change are localized.
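A minimal sketch (the Temperature class is invented): its internal representation changes from one version to the next, yet callers of the public methods need no changes:

```csharp
// "Local change - local effect": v1 of this class stored degrees Fahrenheit
// internally; v2 stores Celsius. Only this class had to change -- the public
// surface is identical, so every caller compiles and behaves as before.
public class Temperature
{
    private double _celsius;   // v1: private double _fahrenheit;

    public void SetFahrenheit(double f) => _celsius = (f - 32) * 5.0 / 9.0;
    public double GetFahrenheit() => _celsius * 9.0 / 5.0 + 32;
    public double GetCelsius() => _celsius;
}
```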

Polymorphism helps by allowing us to add new objects without changing existing code to know about them. You only need to add the new classes and new methods. You shouldn't need to change existing code.

Inheritance helps by providing one place to put common code for many similar objects. Changes to this code can be isolated to the superclass and may require no changes to subclasses in order to make them work.

There are many coding practices that tend to work against the local change/local effect principle. They include:
  • Copy-and-paste code - by making more copies of code, you have more things that need to be changed for any change in design.
  • Public instance variables - by making instance variables public, more people can use them directly and require more changes if you need to change the representation.
  • Manifest types - the type information for variables and parameters often causes domino effect changes. When you change the type that a method accepts, you may have to change its callers and their callers and so forth.
In any software system, the one thing you can count on is change. The local change/local effect principle makes change possible. Without it, as a system gets larger, it becomes more brittle and eventually becomes unmaintainable.
Think about your design principles. If they don't support local change/local effect, you may be building a system that will become too brittle to ever change again.

courtesy: http://www.simberon.com/domino.htm