Saturday, December 31, 2011

Problem with Primary Key as an Integer

We all know that any good database design has a unique primary key. The question is how to decide whether that primary key should be an integer. An integer primary key works well on local systems, is easy to use, and is convenient when writing SQL statements by hand. But what if you are not working on a local system and instead work in a distributed environment where you have to deal with replication scenarios? There, integers make a poor primary key: they carry state (a sequence), and a predictable sequence can also become a security concern. The system now demands something unique beyond an integer, and this is where the GUID comes into the picture, providing a globally unique identifier.

You might be wondering whether it is a good choice to use a 16-byte primary key instead of a 4-byte one. My answer is a definite yes, but only when synchronization between databases is required. Using a GUID as a row identity feels truly unique, and database guru Joe seems to agree. Performance issues do arise in various scenarios, though; just wait a while for my next post on the performance impacts.


Advantages of using GUIDs:

  • Unique across every server, every database, every table
  • GUIDs can be generated from anywhere, without a round trip to the database (see the sketch below)
  • Easy merging and distribution of databases among multiple servers
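To make the "generated from anywhere" point concrete, here is a minimal C# sketch; the Customer type and its properties are hypothetical and stand in for whatever entity you persist:

using System;

// Hypothetical entity used for illustration only.
class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

class Program
{
    static void Main()
    {
        // The key is generated on the client with no round trip to the
        // database, and it will not collide when databases are merged.
        var customer = new Customer
        {
            Id = Guid.NewGuid(),
            Name = "Sample customer"
        };

        Console.WriteLine(customer.Id);
    }
}

Because the identifier exists before the row ever reaches the database, the same code works unchanged whether the record is inserted locally or queued for replication.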

Thursday, November 24, 2011

Finally I'm back to blogging world

Oh, finally I am back. First, I would like to apologize for the lack of recent posts. Now that I am finally settled, I'll start posting again within a week or two.

Sunday, June 5, 2011

Reducing flicker and blinking in DataGridView

One of my project requirements was to create an output window similar to the one in Visual Studio. For that I used a DataGridView. But when I started the application, I found there was a lot of blinking and flickering. After banging my head against Google, I found a very easy fix: we just need to create an extension method for DataGridView and it's all done:

using System.Reflection;
using System.Windows.Forms;

public static class DataGridViewExtensions
{
    public static void DoubleBuffered(this DataGridView dgv, bool setting)
    {
        // DoubleBuffered is a protected property on DataGridView,
        // so it has to be set through reflection.
        PropertyInfo pi = dgv.GetType().GetProperty("DoubleBuffered",
            BindingFlags.Instance | BindingFlags.NonPublic);
        pi.SetValue(dgv, setting, null);
    }
}
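With the extension in place, enabling it is a one-liner. A sketch of how it might be wired up, assuming a designer-generated form with a DataGridView named outputGrid (both names are made up):

using System.Windows.Forms;

public partial class OutputWindow : Form
{
    public OutputWindow()
    {
        InitializeComponent();

        // Turn on double buffering for the grid that acts as the output window.
        outputGrid.DoubleBuffered(true);
    }
}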

Saturday, May 14, 2011

Overview of CDN

CDN (Content Delivery Network)
  • A computer network that stores multiple copies of data at different points in the network.
  • The end user connected to a CDN accesses the data from the nearest (middle) server instead of connecting to a central server.
  • Applications include media distribution, multiplayer gaming, and distance learning.
  • The end user can be a wired or wireless device that tries to access the content.
The middle servers (or clusters of several servers) store images of the content from the central (main) server. They are located at the edge of the ISP network and may be geographically separated from each other.

Elements of CDN

  • Request: A request for specific content (e.g. a web page) is made by the end user and is redirected to the nearest image server. This is done using a protocol known as the Web Cache Communication Protocol (WCCP).
  • Distribution: Once the request is received, a distribution element in the CDN forwards the request based on the point of origin, content availability, location and the servers' global load.
  • Delivery: This element delivers the requested content using routing and switching protocols.

Algorithms/Protocols used in Request Routing
  • A variety of algorithms are used for this purpose, including Global Server Load Balancing, DNS-based request routing, and dynamic metafile generation.
  • Global Server Load Balancing (GSLB) obtains content from a server pool in a sequential, round-robin manner and redirects the request when a server's sessions are inactive (see the sketch after this list).
  • DNS-based request routing: When a request (URL) is made, the local DNS server provides the IP address of the nearest matching CDN node. If the local DNS cannot resolve the URL, it forwards the request to the root DNS server, which then provides the nearest available CDN server IP.
  • Dynamic metafile generation creates a metafile containing an ordered hierarchy of CDN domains connected to a main server, which helps balance the load across the CDN nodes connected to it.
  • ICAP (Internet Content Adaptation Protocol), OPES (Open Pluggable Edge Services) and ESI (Edge Side Includes) are protocols used for accessing data through a request in CDNs.
  • ICAP is a high-level protocol that helps generate HTTP requests and deliver content from the CDN servers.
  • OPES uses a processor to share content with end users. This processor duplicates the content at each CDN node, traces the route followed by each request made by the user, and notifies the user once the content is found.
  • ESI avoids back-end processing delays and thus provides dynamic content with ease. It breaks web content into fragments and delivers dynamic content to end users.
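To make the round-robin idea concrete, here is a small C# sketch of sequential server selection that skips inactive servers; the class and method names are mine, not part of any GSLB product:

using System.Collections.Generic;
using System.Linq;

// Toy illustration of round-robin server selection; real GSLB devices
// also factor in health checks, load and geography.
class RoundRobinPool
{
    private readonly List<string> servers;
    private readonly HashSet<string> inactive = new HashSet<string>();
    private int next;

    public RoundRobinPool(IEnumerable<string> servers)
    {
        this.servers = servers.ToList();
    }

    public void MarkInactive(string server)
    {
        inactive.Add(server);
    }

    public string NextServer()
    {
        // Walk the pool once, skipping servers whose sessions are inactive.
        for (int i = 0; i < servers.Count; i++)
        {
            string candidate = servers[next];
            next = (next + 1) % servers.Count;
            if (!inactive.Contains(candidate))
                return candidate;
        }
        return null; // no active server available
    }
}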

Benefits of CDNs

  • Accelerates web-based applications
  • Low connectivity latency
  • Optimization of capacity per user
  • Faster and reliable access to contents
  • Low network loads

Improving application start-up time with signed assemblies

I guess we are all aware that signed assemblies need verification from a certificate authority (CA). This verification can cause trouble when the certification authority is not present on the same machine; in that case the assemblies require internet access. The situation can be more problematic if there is no internet or network access on that machine at all: a .NET thread might hang until it times out waiting to connect. This issue can be avoided by adding the following setting to machine.config:
<configuration>
    <runtime>
        <generatePublisherEvidence enabled="false"/>
    </runtime>
</configuration>

Tuesday, April 19, 2011

Security concerns with serialization

Serialization can allow other code to see or modify object instance data that would otherwise be inaccessible. Therefore, code performing serialization requires the SecurityPermission attribute from the System.Security.Permissions namespace with the SerializationFormatter flag specified. The GetObjectData method should be explicitly protected with this demand to help protect your data.
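As a rough sketch of what that protection might look like on an ISerializable type (the Person class and its ssn field are hypothetical):

using System;
using System.Runtime.Serialization;
using System.Security.Permissions;

[Serializable]
public class Person : ISerializable
{
    private string ssn;

    public Person(string ssn)
    {
        this.ssn = ssn;
    }

    // Deserialization constructor.
    protected Person(SerializationInfo info, StreamingContext context)
    {
        ssn = info.GetString("ssn");
    }

    // Only callers granted SerializationFormatter permission may extract the data.
    [SecurityPermission(SecurityAction.Demand, SerializationFormatter = true)]
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("ssn", ssn);
    }
}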

Sunday, April 17, 2011

Inner workings of Deserialization

Within the runtime, deserialization can be a complex process. The runtime proceeds through the deserialization process sequentially, starting at the beginning of the stream and working its way through to the end. The process gets complicated if an object in the serialized stream refers to another object.

If an object references another object, the Formatter queries the ObjectManager to determine whether the referenced object has already been deserialized (a backward reference) or whether it has not yet been deserialized (a forward reference). If it is a forward reference, the Formatter registers a fixup with the ObjectManager. A fixup is the process of finalizing an object reference after the referenced object has been deserialized. Once the referenced object is deserialized, the ObjectManager completes the reference.
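A small sketch that exercises this (the Node type is made up): two objects that reference each other guarantee that whichever reference the formatter meets first is a forward reference and needs a fixup:

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
class Node
{
    public string Name;
    public Node Next;
}

class Program
{
    static void Main()
    {
        // Two nodes that reference each other form a cycle in the stream.
        var a = new Node { Name = "A" };
        var b = new Node { Name = "B", Next = a };
        a.Next = b;

        var formatter = new BinaryFormatter();
        using (var stream = new MemoryStream())
        {
            formatter.Serialize(stream, a);
            stream.Position = 0;

            var copy = (Node)formatter.Deserialize(stream);

            // The cycle has been restored: the forward reference was fixed up
            // once the second node finished deserializing.
            Console.WriteLine(copy.Next.Next == copy); // True
        }
    }
}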

Limiting Threads in a ThreadPool

The ThreadPool class supports methods for setting the minimum and maximum number of threads in the thread pool. In most circumstances, the number of threads in the pool is already set at an optimal level. If you find that your application is being constrained by the threads in the thread pool, you can set the limits yourself.
There are two types of situations where you will want to change the thread pool limits: thread starvation and startup thread speed.

In a thread-starvation scenario, your application is using the thread pool but is being hampered because you have too many work items and you are reaching the maximum number of threads in the pool. To set the high watermark of threads for your application, you can simply use ThreadPool.SetMaxThreads.

In cases where the startup costs of using the thread pool are expensive, increasing the minimum number of threads can improve performance. The minimum number of threads dictates how many threads are created immediately and set waiting for new work. Typically, the ThreadPool limits the creation of new threads during the running of a process to two per second. If your application needs more threads created faster, you can increase this value. Setting the minimum number of threads is done with ThreadPool.SetMinThreads.
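A minimal sketch of both calls (the figures 10 and 50 are purely illustrative; the completion-port values are simply preserved):

using System;
using System.Threading;

class ThreadPoolTuning
{
    static void Main()
    {
        int workerThreads, completionPortThreads;

        // Raise the minimum so more threads are ready immediately at startup.
        ThreadPool.GetMinThreads(out workerThreads, out completionPortThreads);
        Console.WriteLine("Current minimum worker threads: {0}", workerThreads);
        ThreadPool.SetMinThreads(10, completionPortThreads);

        // Cap the maximum if too many queued work items are starving the process.
        ThreadPool.GetMaxThreads(out workerThreads, out completionPortThreads);
        Console.WriteLine("Current maximum worker threads: {0}", workerThreads);
        ThreadPool.SetMaxThreads(50, completionPortThreads);
    }
}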

Deploying COM-Enabled Assemblies

Although an assembly can be made visible to COM, you should follow the guidelines below (illustrated in the sketch that follows) to ensure that things work as planned:
  • All classes must have a public default constructor with no parameters.
  • Any type that is to be exposed must be public.
  • Any member that is to be exposed should be public.
  • Abstract classes cannot be consumed from COM.
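A minimal sketch of a type that satisfies these guidelines; the ComVisiblePerson name matches the file used in the commands below, while its members are made up for illustration:

using System;
using System.Runtime.InteropServices;

[ComVisible(true)]
public class ComVisiblePerson
{
    // Public parameterless constructor, required for COM activation.
    public ComVisiblePerson()
    {
    }

    public string Name { get; set; }

    public int Age { get; set; }
}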
After these criteria are met, the assembly is essentially ready to be exported. There are two mechanisms for doing so: Visual Studio or a command-line utility (TlbExp.exe). First, you need to compile the type through Visual Studio's build mechanism or through the command-line compiler:
              csc /t:library ComVisiblePerson.cs
Next, you need to use the Type Library Exporter utility. This should be done from the Visual Studio command prompt:
              tlbexp ComVisiblePerson.dll /out:ComVisiblePersonlib.tlb
Next, you need to create a resource script (ComVisiblePerson.res) with the following Interface Definition Language (IDL) definition:
              IDR_TYPELIB1 typelib "ComVisiblePersonlib.tlb"
Then recompile the application with the new resource file added:
              csc /t:library ComVisiblePerson.cs /win32res:ComVisiblePerson.res


Tuesday, January 18, 2011

Moving a type (i.e. class) to another assembly

In .NET, one often references other assemblies that contain specific modules to use in an application. Say you reference a DLL that contains some classes your application will use, and suppose the application has been deployed. Now suppose you want to move one class of the DLL to another assembly. What can you do in this situation with the old coding methodologies? The old methodologies say:
  • Remove the class from the existing DLL.
  • Create another DLL (assembly) using that class.
  • Recompile both the DLLs.
  • From your application, add a reference to the new DLL.
  • Recompile the application.
  • Re-deploy the whole thing.
Wouldn't it be nice to leave the deployed application untouched and make whatever changes are needed in the DLLs? Obviously, that would be nicer. That's where the TypeForwardedTo attribute comes into the scene. By using this, you can move the necessary classes out to a new assembly. Now, when your application looks for the class in the old DLL, the old DLL tells your application (the JIT compiler, to be precise), "Well, the person (class) who lived here has moved to another location; here is the address." It hands over the address, your application follows the address, finds the class there, and things go on as before.
So, simply put, you will create another assembly (DLL) and move the class to the new assembly. Then you compile the new DLL and add a reference to it from the previous DLL. Then you add a TypeForwardedTo attribute in the previous DLL to say that the specified type has been forwarded to some other assembly. Then you recompile the old DLL and, as a result, you will have two DLLs in the release folder of the previous DLL. Now you just have to place both DLLs in the root of the deployed application (or wherever the old DLL is).
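The attribute itself is a single assembly-level line in the old DLL. A sketch, assuming a hypothetical Person class that has been moved to the new (referenced) assembly:

// Placed in the old DLL, which now references the new assembly that
// actually defines Person.
using System.Runtime.CompilerServices;

[assembly: TypeForwardedTo(typeof(Person))]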

Article taken from here

Saturday, January 15, 2011

Running one process on multiple processors

A thread in a process can migrate from processor to processor, with each migration reloading the processor cache. Under heavy system loads, specifying which processor should run a specific thread can improve performance by reducing the number of times the processor cache is reloaded. The association between a processor and a thread is called the processor affinity.

Each processor is represented as a bit. Bit 0 is processor one, bit 1 is processor two, and so forth. If you set a bit to the value 1, the corresponding processor is selected for thread assignment. When you set the ProcessorAffinity value to zero, the operating system's scheduling algorithms set the thread's affinity. When the ProcessorAffinity value is set to any nonzero value, the value is interpreted as a bitmask that specifies those processors eligible for selection.

The following table shows a selection of ProcessorAffinity values for an eight-processor system.

Bitmask    Binary value            Eligible processors
0x0001     00000000 00000001       1
0x0003     00000000 00000011       1 and 2
0x0007     00000000 00000111       1, 2 and 3
0x0009     00000000 00001001       1 and 4
0x007F     00000000 01111111       1, 2, 3, 4, 5, 6 and 7
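A short sketch of setting the affinity from code, using the 0x0003 mask from the table (the example sets it for the current process; individual threads can be restricted similarly through ProcessThread.ProcessorAffinity):

using System;
using System.Diagnostics;

class AffinityExample
{
    static void Main()
    {
        Process process = Process.GetCurrentProcess();

        // Restrict the process to processors 1 and 2 (bitmask 0x0003).
        process.ProcessorAffinity = (IntPtr)0x0003;

        Console.WriteLine("Affinity mask: 0x{0:X}", (long)process.ProcessorAffinity);
    }
}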

Wednesday, January 12, 2011

WebSite vs WebApplication

The only similarity between a web site and a web application is that they both access HTML documents using the HTTP protocol over an internet or intranet connection. However, there are some differences which I shall attempt to identify in the following matrix:
1. Web site: Will usually be available on the internet, but may be restricted to an organisation's intranet.
   Web application: Will usually be restricted to the intranet owned by a particular organisation, but may be available on the internet for employees who travel beyond the reach of that intranet.
2. Web site: Can never be implemented as a desktop application.
   Web application: May have exactly the same functionality as a desktop application. It may in fact be a desktop application with a web interface.
3. Web site: Can be accessed by anybody.
   Web application: Can be accessed by authorised users only.
4. Web site: Can contain nothing but a collection of static pages. Although it is possible to pull the page content from a database, such pages are rarely updated after they have been created, so those pages can still be regarded as static.
   Web application: Contains dynamic pages which are built using data obtained from a central data store, which is usually an RDBMS.
5. Web site: May be updatable by a single person with everyone else having read-only access. For example, a web site which shows a pop star's schedule can only be updated by that star's agent, but anyone can visit the site and view the schedule.
   Web application: Any authorised user may submit updates, subject to his/her authorisation level, and these updates would immediately be available to all other users.
6. Web site: May have parts of the system which can only be accessed after passing through a logon screen.
   Web application: No part of the system can be accessed without passing through a logon screen.
7. Web site: Users may be able to self-register in order to pass through the logon screen.
   Web application: Users can only be registered by a system administrator.
8. Web site: All users may have access to all pages in the web site, meaning that there may be no need for any sort of access control.
   Web application: The application may cover several aspects of an organisation's business, such as sales, purchasing, inventory and shipping, in which case users will usually be restricted to their own particular area. This will require some sort of access control system, such as a Role-Based Access Control (RBAC) system.
9. Web site: May need URLs that can be bookmarked so that users can quickly return to a particular page.
   Web application: Bookmarks are not used as each user must always navigate through the logon screen before starting a session.
10. Web site: May need special handling to deal with search engines.
    Web application: As no URLs can be bookmarked (see above), all aspects of Search Engine Optimisation (SEO) are irrelevant.
11. Web site: Has no problems with the browser's BACK and FORWARD buttons.
    Web application: The use of the browser's BACK and FORWARD buttons may cause problems, so it may need code to detect their use and redirect to a more acceptable URL.
12. Web site: It is not possible to have more than one browser window open at a web site and to maintain separate state for each window. State is maintained in session data on the server, and the session identity is usually maintained in a cookie. As multiple browser windows on the same PC will by default share the same session cookie, they will automatically share the same session data and cannot be independent of one another.
    Web application: It may be beneficial to allow separate windows to have separate state, as this follows the standard behaviour of most desktop applications, which allow multiple instances, each with different state, to exist at the same time. This will allow the user to access one part of the application in one window and another part in another window.
13. Web site: Execution speed may need to be tuned so that the site can handle a high number of visitors/users.
    Web application: As the number of users is limited to those who are authorised, execution speed should not be an issue. In this case the speed, and therefore cost, of application development is more important. In other words, the focus should be on developer cycles, not CPU cycles.

Wednesday, January 5, 2011

Are cloud storage providers good for primary data storage?

Why not use a cloud storage provider?
The most persuasive argument against using cloud storage for primary storage is application performance. Application performance is highly sensitive to storage response times. The longer it takes for the application's storage to respond to a read or write request, the slower that application performs. 


Public cloud storage, by definition, resides in a location geographically distant from your physical storage when measured in cable distance. Response time for an application is measured as round-trip time (RTT), and numerous factors add to that RTT. One is speed-of-light latency, which there is no getting around today. Another is TCP/IP latency. Then there is a little thing called packet loss, which can really gum up response time because of retransmissions. It is easy to see that for the vast majority of SMB (small and midsized business) primary applications, public cloud storage performance will be unacceptable.


When do cloud storage services make sense?
If an SMB is using cloud computing services such as Google Docs, Microsoft Office 365, or SalesForce.com, then it makes sense to store the data from those apps in a cloud storage service. In those cases, the data storage is collocated with the applications. Response time between the application and storage is the same as if the application and storage were in the SMB's location. The key issue here is the response time between the cloud application and the SMB user. In this scenario, the collocated storage is not the bottleneck to user response time. Therefore, if the cloud application performance is adequate, so too is the cloud storage.
If the cloud storage and the application that's using it are collocated, then it makes sense to use cloud storage as SMB primary storage. Otherwise, slow application performance would make using a cloud data storage provider a poor choice for your SMB environment.