Monday, April 23, 2012

Finalize in .Net


We implement the Finalize method to release unmanaged resources. First, let's see what managed and unmanaged resources are. Managed resources are the ones we create in .NET languages; their memory is managed by the CLR. But when we use code written outside .NET, such as a VB 6 component, a Windows API call, or a COM component, we call it unmanaged, and there is a very high chance that an application uses some Win API or COM component. Since unmanaged resources are not managed by the CLR, we need to handle them on our own. So, once we are done with unmanaged resources, we need to clean them up, and this cleanup and release is done in Finalize(). If your class is not using any unmanaged resources, then you can forget about Finalize(). But the problem is, we can't call Finalize() directly; we have no control over it. Then who is going to call it? The garbage collector (GC) does.
One more thing to remember: there is no Finalize keyword that we write and implement. We define Finalize by defining a destructor, which is used to clean up unmanaged resources. When you put a ~ sign in front of the class name, the method is treated as a destructor. So, when the code is compiled, the compiler converts the destructor into a Finalize method, and the garbage collector will later add the object to the finalization queue. Let's take this sample code:

class A
{
    public A()
    { Console.WriteLine("I am in A"); }
    ~A()
    { Console.WriteLine("Destructor of A"); }
}

class B : A
{
    public B()
    { Console.WriteLine("I am in B"); }
    ~B()
    { Console.WriteLine("Destructor of B"); }
}

class C : B
{
    public C()
    { Console.WriteLine("I am in C"); }
    ~C()
    { Console.WriteLine("Destructor of C"); }
}

Now, using Reflector, we will see whether the destructor is really converted to Finalize:










And wow, it's really done. Here we can see that there is nothing like a destructor in the compiled code; the destructor is actually overriding the Finalize method.
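Conceptually, for a class like C above, the compiler emits roughly the following (a sketch of the generated pattern, not the exact decompiled output):

```csharp
class C : B
{
    // The destructor ~C() compiles into an override of Object.Finalize.
    // The finally block guarantees the base class finalizer also runs,
    // which is why the destructors of B and A execute after C's.
    protected override void Finalize()
    {
        try
        {
            Console.WriteLine("Destructor of C");
        }
        finally
        {
            base.Finalize();
        }
    }
}
```

Note that you cannot compile this by hand in C#; the compiler reserves Finalize and forces you to use the destructor syntax instead.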

Hope it helps !!!

Saturday, April 21, 2012

Memory Leak Analysis for .Net application


Memory leaks in .NET applications have always been a nightmare for developers. Many times we get an "OutOfMemoryException", which is often caused by nothing but a memory leak. There are many things that lead to a memory leak: for example, forgetting to release unmanaged resources, not disposing heavy objects (e.g., drawing objects), or even holding references to managed objects longer than necessary.
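As a quick illustration of that last point, here is a sketch of a classic managed leak (the class and event names are made up for the example): a long-lived static event keeps every subscriber alive, so forms that are closed but never unsubscribed can never be collected.

```csharp
public static class AppEvents
{
    // A static event lives for the whole process lifetime.
    public static event EventHandler SettingsChanged;
}

public class ReportForm : Form
{
    public ReportForm()
    {
        // The static event now holds a reference to this form.
        // If we never unsubscribe, the form (and everything it
        // references) stays reachable even after it is closed.
        AppEvents.SettingsChanged += OnSettingsChanged;
    }

    private void OnSettingsChanged(object sender, EventArgs e) { /* ... */ }
}
```

The fix is to unsubscribe (for example, in the form's Closed handler) so the GC can reclaim the form.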

So, if the application is small, one can analyze the code and figure out which object is causing the memory leak. But when it comes to a large application, it is not at all possible to figure this out manually. In that case, we need a tool that can help us find the area or object causing the leak. So, today I surfed the internet and came across a tool called .NET Memory Profiler, which can do the analysis for us and give us statistics for all the instances.

Ok, instead of getting more into theory, let's jump quickly to the demo. I have a Windows Forms application named "MemoryLeakAnalysis". Now I open the memory profiler, which comes up with the screen below. The profiler can run in two different modes: interactive (the normal mode with the UI shown below) and non-interactive (used only for automated testing as part of a script; it will not show any window).
Click on ‘Profile application’ and select the exe of your application, as shown below. If required, command-line arguments can also be provided.
On clicking next, you can choose the profiling level: Very low, Low, Medium, High, etc. Moving further, you can also decide whether you want to enable the unmanaged resource tracker (it collects information about handles, GDI handles, etc.), and finally click on Start. Clicking Start will launch your application (here my application name is Test Leakage).
On the right-hand side, you can see various options: Collect snapshot, Stop profiling, and Show real-time data. And just below that, we have ‘Investigate memory leaks’. On clicking ‘Investigate memory leaks’, you will get the list of major steps that need to be taken in order to analyze the leakage.





















Now the actual investigation starts.
1)     Perform initial operation - Perform the operation you suspect is leaking memory (e.g., open a document, work with it, and then close it). Performing an initial operation makes sure that instances that are created only once are not included in the memory leak investigation. In my case, I'll click on the ‘Start Memory Leak’ button and, after a while, click on ‘Stop Memory Leak’.
2)     Collect base snapshot - The base snapshot will be used as a reference when looking for unexpected new instances created by the operation. Once the snapshot is taken, we get the screen below, with some statistics.
















3)     Perform operation again - We perform the suspect operation again, because this gives us a new snapshot for comparison. In my application, I will again click on the ‘Start Memory Leak’ button:









4)     Collect primary snapshot - The primary snapshot will be used when investigating new instances that might be part of a memory leak.
5)     Identify the types with New instances - The instances shown under the Overview tab (highlighted) are the ones that have not been garbage collected.








6)   Identify the types which are not expected to have New instances - For those instances, we will find that the value of the New column is 0, which clearly states that the object has already been collected by the GC.
7)    Investigate root path - The root path can be extremely useful for identifying memory leaks. The shortest path provides information about why instances are not garbage collected. You can use the browse buttons to locate a root path that you'd like to investigate further.












8)     Determine whether the root path instance is part of a memory leak - The instance graph and the allocation call stack provide information about how the instance is used, why it has not been garbage collected, and how it was created. This information can be used to determine whether the instance is part of a memory leak or not.













9)     Steps 6 to 8 can be repeated to analyze other types.

So, looking at the instance graph and the red arrows shown above helps us identify where exactly the leak is happening.




Saturday, March 31, 2012

Computer performance & Clock speed


Many people use clock speed as a measure of a computer’s total computing power, but that term can be very misleading for a couple of reasons.

The computer keeps all its devices synchronized by using its clock. This isn't a regular clock; it's a "clock in a chip," which keeps highly accurate time and ticks much more rapidly than a wall clock. The faster the computer's clock ticks, the more quickly the device can move on to a new task. The central processing unit needs a certain number of clock ticks to execute each of its instructions. Therefore, the faster the clock ticks (that is, the higher the "clock speed"), the more instructions the CPU can execute per second.

However, that's not the end of the story. Different processors use different instruction sets, each of which can require a different number of ticks. That means different kinds of processors may execute different numbers of instructions per second, even if they have the same clock speed. You can use clock rate to compare two processors of the same kind (for example, a 2.93-gigahertz (GHz) Intel Pentium 4 and a 3.0-GHz Intel Pentium 4), but not to make an accurate comparison between two processors of different types (for example, a 3.0-GHz Intel Pentium 4 and a 3.0-GHz AMD Athlon II).
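To make the arithmetic concrete, here is a small sketch (the ticks-per-instruction values are invented for illustration, not measurements of real chips): throughput is roughly clock speed divided by the average number of ticks each instruction needs, so two processors at the same clock can differ widely.

```csharp
// Illustrative only: the ticks-per-instruction numbers are made up.
double clockHz = 3.0e9; // both CPUs run at 3.0 GHz

double cpuA_TicksPerInstruction = 1.5;
double cpuB_TicksPerInstruction = 2.5;

// instructions per second = ticks per second / ticks per instruction
double cpuA_Ips = clockHz / cpuA_TicksPerInstruction; // 2.0 billion
double cpuB_Ips = clockHz / cpuB_TicksPerInstruction; // 1.2 billion

Console.WriteLine($"CPU A: {cpuA_Ips} instructions/sec");
Console.WriteLine($"CPU B: {cpuB_Ips} instructions/sec");
```

Same clock speed, very different throughput, which is why clock rate only compares like with like.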

Even if you could figure out which processor executes more instructions per second, that figure alone doesn't necessarily tell you which computer will be faster for your program. Many programs - most, in fact - are limited by factors other than sheer processor speed, including the amount and speed of memory, disk space, network speed, graphics or floating-point processor speeds, and bus speed.

Many modern computers have multiple processors or multiple cores (execution areas within a processor), so they can perform more than one task at the same time. Whether the computer gets a significant benefit from multiple cores depends on whether the tasks it is performing can be easily split into separate pieces, and whether the program was written to take advantage of multi-core hardware. Many programs are limited by disk drive speed. Disk drives spin at anywhere from 3,000 RPM to 15,000 RPM (speeds between 4,200 RPM and 7,200 RPM are most typical), so the time it takes to read and write data can vary dramatically.

Which of these factors is most important for your application depends on what that application does. If your program uses a local database heavily, disk speed will be a big factor. If the database is on a remote server, then the speed of the server and the network’s speed are probably bigger performance factors for your application than the speed of your local CPU.

To get an idea ahead of time of how well a program will run, focus on the system's overall performance by running a wide variety of tests, rather than looking just at clock speed. To see one set of tests in the most recent versions of Microsoft Windows, open the computer's Start menu, right-click the Computer entry, and select Properties to see the basic information display shown in the figure:




To get more detail, click the Windows Experience Index link to see the display shown in Figure below. This display shows performance scores for several different system features.



From the figure, we can see that the graphics scores are the lowest, so this system may not give the best performance for high-end graphics programs, such as three-dimensional games. But the processor, RAM, and disk scores are higher, so this computer may be just fine for applications that are not graphics-intensive. The Windows Experience Index still doesn't consider your program's particular needs. For example, it doesn't know what kinds of instructions your program will perform the most (such as integer calculations, floating-point calculations, string operations, and so on), and it doesn't consider network bandwidth, but at least it provides a reasonably consistent value that can help you compare different systems.





Friday, March 30, 2012

Laptops vs Notebooks vs Netbooks vs Tablets


A laptop is a computer that is intended to run anywhere, as it is portable. Laptops have integrated screens and keyboards and run on batteries. Heavy use of some hardware, such as the GPU or DVD drive, can quickly drain the batteries. Laptops have a touchpad, pointing stick, trackball, or other pointing device. Nowadays, we can also add external devices like a mouse, keyboard, etc.

Notebooks are stripped-down laptops. They are thin, ultra-light, and have relatively small screens. They rarely have CD-ROM or DVD drives and have very limited graphics capabilities. As notebooks don't have external media (DVD, etc.), they typically have integrated network connection hardware so one can load software onto them. The network hardware can also be used to access the internet.

Netbooks are even more stripped down than notebooks. They typically have less powerful processors and are primarily used for networked applications such as web browsers, where most of the processing happens on a remote server.

A tablet is similar to a laptop but uses a touch screen or stylus as its primary input device. Tablets may display virtual keyboards on their screens and may use handwriting recognition for text input.

Sunday, March 4, 2012

WPF: Significance of x:Key attribute


Each object declared as a resource must set the x:Key property. This property is used by other elements to access the resource. But there is an exception for Style objects: a Style that sets the TargetType property does not need to set x:Key explicitly, because it is set implicitly behind the scenes.

Scenario1: When x:Key is defined
<Style x:Key="myStyle" TargetType="Button">
      <Setter Property="Background" Value="Yellow"/>
</Style>
In the above example, the x:Key property is used, so the style is applied to a Button only when the Style property is set explicitly on the element, as shown in the snippet below:
<Button Style="{StaticResource myStyle}" Width="60" Height="30" />

Scenario2:When x:Key is not defined
<Style TargetType="Button">
    <Setter Property="Background" Value="Yellow"/>
</Style>
In the above example, the style is applied by default to all buttons (due to TargetType), as no x:Key is defined in the Style resource. The code for the button is shown below:
<Button Name="btnShow" Width="60" Height="30" />

Saturday, March 3, 2012

WPF: StaticResource vs DynamicResource


Logical resources allow you to define objects in XAML which are not part of the visual tree but can be used in your user interface. One example of a logical resource is a Brush, which is used to provide a color scheme. Generally, objects that are used by multiple elements of the application are defined as resources.

   <Window.Resources>
        <RadialGradientBrush x:Key="myGradientBrush">
            <GradientStop Color="Green" Offset="0"/>
            <GradientStop Color="Blue" Offset="2"/>
        </RadialGradientBrush>      
    </Window.Resources>

Now, the resource declared above can be used as either a static or a dynamic resource. One point to remember is that a static resource must be defined in the XAML before the element that refers to it. Static and dynamic resources can be used as:

<Grid Background="{StaticResource myGradientBrush}"></Grid>
or
<Grid Background="{DynamicResource myGradientBrush}"></Grid>

The difference between StaticResource and DynamicResource lies in how the resources are retrieved by the referencing elements. A StaticResource is retrieved only once by the referencing element and used for the entire life of the resource. On the other hand, a DynamicResource is acquired every time the referenced object is used.
Putting it more simply: if the color property of the RadialGradientBrush is changed in code to Orange and Pink, the change will be reflected on the elements only when the resource is used as a DynamicResource. Below is the code to change the resource at runtime:

RadialGradientBrush radialGradientBrush = new RadialGradientBrush( Colors.Orange, Colors.Pink);
this.Resources["myGradientBrush"] = radialGradientBrush;

The demerit of DynamicResource is that it reduces application performance, because resources are retrieved every time they are used. The best practice is to use StaticResource until there is a specific reason to use DynamicResource.

Saturday, December 31, 2011

Problem with Primary key as an Integer

We all know that any good database design has a unique primary key. The question is how to decide whether our primary key should be an integer or not. Well, an integer primary key works well on local systems, is easy to use, and works great when writing manual SQL statements. But what if one is not working on a local system, but in a distributed environment where one has to deal with replication scenarios? In such scenarios, integers can't be the primary key, as they have state (sequencing) and can become a major security threat. Here the system demands something unique apart from integers, and here GUIDs come into the picture, providing a globally unique id. You might be wondering whether it is a good choice to use a 16-byte primary key instead of 4 bytes; my answer is a definite YES, but only when syncing is required. Using a GUID as a row identity feels more truly unique, and database guru Joe seems to agree. But again, performance issues arise in various scenarios. Just wait a while for my next post on more performance impacts.


Advantages of using GUID:

  • Unique across every server, every database, every table
  • GUIDs can be generated from anywhere, without a round trip to the database
  • Provide easy merging and distribution of databases among multiple servers
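The second point is easy to see in code: a client can mint its own key before ever talking to the database. A minimal sketch (the table and column names are made up for illustration):

```csharp
// A GUID can be generated locally, with no database round trip.
Guid orderId = Guid.NewGuid();
Console.WriteLine(orderId);

// It can then be used directly as the primary key value of a new row.
string sql = $"INSERT INTO Orders (OrderId, Amount) VALUES ('{orderId}', 100)";
```

(In real code you would pass the GUID as a query parameter rather than concatenating it into the SQL string.)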

Thursday, November 24, 2011

Finally I'm back to blogging world

Oh, finally I am back. First, I would like to apologize for the lack of recent posts. Now that I am finally settled, I'll start posting again in a week or two.

Sunday, June 5, 2011

Reducing flicker, blinking in DataGridView

One of my project requirements was to create an output window similar to Visual Studio's. For that I used a DataGridView. But when I started my application, I found there was a lot of blinking, flickering, and pulling. After badly hitting my head against Google, I found a very easy way: we just need to create an extension method for DataGridView, and it's all done:

// Requires: using System.Reflection; using System.Windows.Forms;
public static class DataGridViewExtensions
{
    public static void DoubleBuffered(this DataGridView dgv, bool setting)
    {
        // DoubleBuffered is a protected property, so we reach it via reflection
        PropertyInfo pi = typeof(DataGridView).GetProperty("DoubleBuffered",
              BindingFlags.Instance | BindingFlags.NonPublic);
        pi.SetValue(dgv, setting, null);
    }
}

Saturday, May 14, 2011

Overview of CDN

CDN (Content Delivery Network)
  • A computer network which has multiple copies of data stored at different points of the network.
  • The end user connected to a CDN accesses the data from the nearest (middle) server instead of connecting to a central server.
  • A few of the applications include media distribution, multiplayer gaming, and distance learning.
  • The end user can be a wired or wireless unit which tries to access the content.
The middle servers (or clusters of several servers) store images of the content from the central (main) server. They are located at the edge of the ISP network and may be geographically separated from each other.

Elements of CDN

  • Request: A request for specific content (e.g., a webpage) is made by the end user and is redirected to the nearest image server. This is done by the use of a protocol known as the Web Cache Communication Protocol (WCCP).
  •  Distribution: Once the request is received, a distribution element in the CDN forwards the request based on the point of origin, content availability, location and servers' global load.
  •  Delivery: Delivery of the requested content is made by this element by using routing and switching protocols.
Algorithms/Protocols used in Request Routing
  • A variety of algorithms are used for this purpose. These include Global Server Load Balancing, DNS-based request routing, dynamic metafile generation, etc.
  • Global Server Load Balancing (GSLB) enables the content to be obtained from a server pool in a sequential manner using the round-robin method, and redirects the request in case of inactive server sessions.
  • DNS-based request routing: Here when a request is made (URL), the local DNS server provides the IP address of the nearest matching CDN node. If the Local DNS is not able to resolve the URL, it forwards the request to the Root DNS server, which then provides the nearest possible CDN server IP.
  • Dynamic metafile generation includes creation of a metafile, which has an ordered hierarchy of CDN domains connected to a Main server and helps in the load balancing on each of CDN nodes connected to it.
  • ICAP (Internet Content Adaptation Protocol), OPES (Open Pluggable Edge Services) and ESI (Edge Side Includes) are the protocols used for accessing data through a request in CDNs.
  • ICAP is a high level protocol that helps in generating http requests and delivers contents from the CDN servers.
  • OPES uses a Processor in order to share contents to the end users. This processor duplicates the content at each CDN node and traces the route followed by each request made by the user and notifies the user once the content is found.
  • ESI avoids back end processing delays hence providing dynamic contents with ease. It breaks web content into fragments and delivers dynamic contents to end users.
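The round-robin selection used by GSLB can be sketched in a few lines (a simplified illustration; real GSLB also weighs server load and health checks, and the server names here are made up):

```csharp
public class RoundRobinBalancer
{
    private readonly string[] servers;
    private int next; // index of the next server to hand out

    public RoundRobinBalancer(string[] servers) { this.servers = servers; }

    // Returns servers in a rotating sequence.
    public string NextServer()
    {
        string server = servers[next];
        next = (next + 1) % servers.Length;
        return server;
    }
}

// Usage: requests cycle through the edge servers in order.
var balancer = new RoundRobinBalancer(new[] { "edge1", "edge2", "edge3" });
// Successive NextServer() calls return edge1, edge2, edge3, edge1, ...
```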

Benefits of CDNs

  • Accelerates web-based applications
  • Low connectivity latency
  • Optimization of capacity per user
  • Faster and more reliable access to content
  • Lower network loads