Managed Debugging Assistant !!!

The loader lock is a synchronization object that provides mutual exclusion during DLL loading and unloading. It helps prevent a DLL from being re-entered before it is completely initialized [in its DllMain].

When DLL load code executes, the loader lock is acquired, and it is released only after initialization completes. But there is a possibility of deadlock when threads do not properly synchronize on the loader lock. This mostly happens when threads call other Win32 APIs [LoadLibrary, GetProcAddress, FreeLibrary etc.] that also require the loader lock. Often this shows up in mixed managed/unmanaged code, where it is not intentional, but the CLR may have to call those APIs anyway, for example during a platform invoke call to one of the Win32 APIs listed above.

For instance, if an unmanaged DLL’s DllMain entry point tries to CoCreate a managed object that has been exposed to COM, then it is an attempt to execute managed code inside the loader lock.
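To make the pattern concrete, here is a minimal sketch of the kind of DllMain the MDA complains about; CLSID_ManagedThing is a purely illustrative CLSID standing in for some COM-exposed managed class.

#include <windows.h>
#include <objbase.h>

// Purely illustrative CLSID standing in for a COM-visible managed class.
static const CLSID CLSID_ManagedThing =
    { 0x11111111, 0x2222, 0x3333,
      { 0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44 } };

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    if (fdwReason == DLL_PROCESS_ATTACH)
    {
        // DANGEROUS: DllMain runs while the loader lock is held. Creating a
        // COM-visible managed object pulls the CLR in, and the CLR may itself
        // need the loader lock (e.g. to load assemblies) -> possible deadlock.
        IUnknown* pUnk = NULL;
        CoCreateInstance(CLSID_ManagedThing, NULL, CLSCTX_INPROC_SERVER,
                         IID_IUnknown, (void**)&pUnk);
        if (pUnk != NULL)
            pUnk->Release();
    }
    return TRUE;
}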

MDA – Managed Debugging Assistant, a facility available in .NET 2.0/VS 2005, detects this situation while debugging and pops up a dialog box. We can then break into the code, have a look at the stack trace and resolve it. The feature can be disabled if not needed.

So what could be the effect of this deadlock? The MDA saved me a whole lot of time and effort when such a dialog popped up in my project; I do not know if I would have found the reason on my own. If the thread that deadlocks happens to be the GC thread, or any thread that loads and unloads my assemblies, I do not have to explain the disastrous effect any further. And a programmer like me, new to the .NET environment and not yet past its fascinating external features, would not think to dig into the internals.


Do not delete [] a scalar pointer !!!

Recently I got tangled up in this problem in my code – calling the vector delete (delete[]) on a scalar pointer. We all know that it is illegal to do that. For example, if we allocate something like this:-

OurClass *p = new OurClass();

and try to delete like this:-
delete []p;

then we are going to end up in trouble. Of course we know that we will end up in trouble. But I had really not given a thought to HOW?

When we allocate an array of items, e.g. OurClass *pa = new OurClass[5], the compiler actually allocates the necessary amount of memory, calls the ctor for each allocated object, and also prefixes the block of memory holding the 'n' allocated items with the number of items allocated.

NumItems | OurClassObject1 | OurClassObject2 | …… | OurClassObjectn

But pa always points to the first item in the allocation, so the item count prefix remains hidden. When we call delete[] pa, the compiler uses the item count prefix to call the dtors and free the allocated objects.
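In other words, each form of new must be matched with the corresponding form of delete; OurClass is the same illustrative type used above.

OurClass *p  = new OurClass();     // scalar new  -> use scalar delete
OurClass *pa = new OurClass[5];    // vector new  -> use vector delete[]
                                   // (the block is prefixed with the count 5)

delete   p;     // destroys one object and frees the scalar block
delete[] pa;    // reads the hidden count, runs 5 dtors, frees the array block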

Now I think I don't need to explain any further what happens when I use delete []p, and what junk value the generated code will read from the memory location just before p, believing it to be the item count.

I learnt this interesting information from the OldNewThing blog, where Raymond Chen has explained it well with the compiler-generated assembly and a bit of excellent code for the dtor.

And what if we do a scalar delete on a vector pointer? There is less harm: you do not deallocate the memory completely, and you leave behind remnants of your allocated memory which you cannot reclaim.

Either way, it is better to be disciplined while programming.

Where do you QueryInterface ???

For an ATL class, QueryInterface is implemented in CComObject. The figure below shows the inheritance hierarchy for a wizard-generated class representing an ATL COM object.

CComObjectRootBase has an InternalQueryInterface method, which uses the interface map built by the BEGIN_COM_MAP macro to resolve IID -> interface pointer. The BEGIN_COM_MAP macro also defines a method _InternalQueryInterface, which passes the map on to InternalQueryInterface. CComObject implements QueryInterface, and calls _InternalQueryInterface.

NOTE:
CComObjectRootEx: Provides methods to handle object reference count management for both nonaggregated and aggregated objects.

CComObject: Implements IUnknown for a nonaggregated object. It is a template class that takes a class like CSomeClass derived from CComObjectRootEx.

CComObjectNoLock: Implements IUnknown for a nonaggregated object, but does not increment the module lock count in the constructor. ATL uses CComObjectNoLock internally for class factories.

CComCoClass: Defines the object’s default class factory and aggregation model.
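To tie these pieces together, here is a rough sketch of how a wizard-generated class wires them up; CSomeClass, ISomeInterface and CLSID_SomeClass are illustrative names, and the interface is assumed to come from the project's IDL-generated header.

// Sketch only: assumes an ATL project where ISomeInterface and CLSID_SomeClass
// are defined in the IDL-generated header.
class ATL_NO_VTABLE CSomeClass :
    public CComObjectRootEx<CComSingleThreadModel>,   // ref-count management
    public CComCoClass<CSomeClass, &CLSID_SomeClass>, // default class factory
    public ISomeInterface
{
public:
    // BEGIN_COM_MAP builds the IID -> interface-pointer map and defines
    // _InternalQueryInterface, which hands the map to
    // CComObjectRootBase::InternalQueryInterface.
    BEGIN_COM_MAP(CSomeClass)
        COM_INTERFACE_ENTRY(ISomeInterface)
    END_COM_MAP()

    // ISomeInterface methods go here...
};

// QueryInterface itself is implemented by CComObject<CSomeClass>, which is
// what actually gets instantiated:
//   CComObject<CSomeClass>* pObj = NULL;
//   CComObject<CSomeClass>::CreateInstance(&pObj);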

Use Of Class Factories !!!

To put it simply, class factories are the factory classes that create COM objects. A class factory may be responsible for creating one or more COM objects. In the case of COM out-of-proc servers, the server registers the class factories for the objects it can create in a system-global table using CoRegisterClassObject. Whenever a client calls CoGetClassObject for a CLSID, the COM run-time looks it up in the system-global table and returns the factory instance. The case with in-proc servers is similar, but goes through the DLL's exported DllGetClassObject.

The point here is that class factories are required [irrespective of how they exist physically in the server, either as separate instances or as the COM object itself behaving as a factory for objects of its type]; they abstract the creation of the COM object through IClassFactory::CreateInstance.
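From the client side, the flow looks roughly like this; CLSID_SomeServer is an illustrative placeholder, and CoInitialize is assumed to have been called already.

#include <windows.h>
#include <objbase.h>

// Illustrative CLSID of some registered COM server.
static const CLSID CLSID_SomeServer =
    { 0xaaaaaaaa, 0xbbbb, 0xcccc,
      { 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd } };

void CreateViaClassFactory()
{
    // COM resolves the CLSID to a factory: via the system-global table for
    // out-of-proc servers, or via the DLL's DllGetClassObject for in-proc ones.
    IClassFactory* pFactory = NULL;
    HRESULT hr = CoGetClassObject(CLSID_SomeServer, CLSCTX_ALL, NULL,
                                  IID_IClassFactory, (void**)&pFactory);
    if (SUCCEEDED(hr))
    {
        // The factory abstracts the actual object creation.
        IUnknown* pUnk = NULL;
        hr = pFactory->CreateInstance(NULL, IID_IUnknown, (void**)&pUnk);
        if (SUCCEEDED(hr))
        {
            // ... query for the interfaces you need and use the object ...
            pUnk->Release();
        }
        pFactory->Release();
    }
}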

Unsafe Operations with STL !!!

It is UNSAFE to perform any operation on an STL container that modifies its size while holding a reference to one of its existing elements. What could happen is: when you do an operation, say push_back on a vector, it determines whether there is enough space available to add a new element. If there is not sufficient space, it allocates a new buffer for the whole data structure, copies the elements over, and deletes the old buffer. At that point, any reference to one of its elements obtained prior to the push_back is left dangling.

For example, the following code is dangerous when it all happens within a single scope.

SomeClass &sc = m_Vector.back();
m_Vector.push_back(someotherobject);
.
.
.
sc.SomeMethodCall(); // Code might crash here.
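One safe way around it, assuming the same illustrative names, is to hold an index (or re-fetch the reference) instead of keeping a reference across the size-changing call:

// Remember the position, not the reference; indices survive reallocation.
const std::size_t idx = m_Vector.size() - 1;
m_Vector.push_back(someotherobject);   // may reallocate and move the elements
m_Vector[idx].SomeMethodCall();        // safe: looked up after the push_back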

Consoles for Mr.GUI !!!

Learnt something new; a small thing, but very useful.
Many times I have seen GUI applications accompanied by console windows that show logs or trace information of the application. How do we do that for our application ?

Any GUI application can create its own console window just by calling the AllocConsole Win32 API. Actually, any process can use that API to allocate a new console. And the application must also be disciplined enough to call FreeConsole afterwards. Ok, fine. I used that in my small MFC application and was happy to see the console. But I did not see anything displayed on it. As we know, each process has its own stdin, stdout and stderr. So we have to redirect our application's standard output to the new console. How do we do that?

Use the FILE *freopen(const char *path, const char *mode, FILE *stream); API. The freopen function closes the file currently associated with stream and reassigns stream to the file specified by path. So call freopen as follows:-

FILE *fpStdOut = freopen("CONOUT$", "w", stdout);

This means that I want to reassociate the standard output stream stdout with the console's output device, CONOUT$. So any printf calls will now print their characters on the console. Cool !!!
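Putting the two steps together, a minimal sketch (the function name is mine) could look like this:

#include <windows.h>
#include <cstdio>

// Give this GUI process a console and route stdout to it.
void AttachDebugConsole()
{
    if (AllocConsole())
    {
        // Reassociate the CRT's stdout with the console's output device.
        FILE *fpStdOut = freopen("CONOUT$", "w", stdout);
        if (fpStdOut != NULL)
            printf("Hello from the GUI application's console!\n");
    }
}

// When the console is no longer needed:
//   fclose(stdout);
//   FreeConsole();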

Setting Environment Variables !!!

I needed to change or set the value of an environment variable programmatically, without having to restart or log off the machine. I needed the change to be reflected for all processes, i.e., I needed to change the global environment value and not the one in the PEB [Process Environment Block] of a single process. Frustrated with setting the value of an environment variable !!!

For getting the set of environment variables, or the value of a single environment variable, from your C# program, there are the GetEnvironmentVariables/GetEnvironmentVariable APIs in the System.Environment class. But there is no API there for setting the value of an environment variable.

The system environment variables are stored in the registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Environment.

The [current] user environment variables are stored in the registry under HKEY_CURRENT_USER\Environment.

When the system boots up, the environment is built from these lists in the registry. If we change a value directly in the registry, the change does not take effect immediately. For example, change the value of the TEMP variable, which specifies the temporary files directory, in the registry and then check with the set command in a command prompt: you won't see the change you made. Or just create a new entry under one of the above-mentioned registry paths: you won't see the change either. You can also verify that programmatically with the GetEnvironmentVariable API.

But the changes you made will be reflected after a log off or restart. After some research, I found the Win32 SDK API SetEnvironmentVariable. Unfortunately, it just changes the variable value in the PEB of the calling process alone; it does not affect the global environment values. Pathetic.

There is definitely a solution for this simple and basic problem. All we have to do is update the registry as discussed before, and also notify everyone that the global environment variable list has been modified. Ok, how do we do that?

Simple, one line of code.

// Broadcast the WM_SETTINGCHANGE message for "Environment"

DWORD_PTR dwReturnValue = 0;
SendMessageTimeout(HWND_BROADCAST, WM_SETTINGCHANGE, 0,
                   (LPARAM) TEXT("Environment"),
                   SMTO_ABORTIFHUNG,
                   5000, &dwReturnValue);

Of course, this is C++ code. Not a big deal to do that in C# or whatever.
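For completeness, here is a hedged sketch putting both halves together for a per-user variable. RegSetKeyValueW needs a reasonably recent Windows (RegOpenKeyEx/RegSetValueEx is the older route), and the function and variable names below are mine:

#include <windows.h>
#include <wchar.h>

// Persist a per-user environment variable in the registry, then broadcast
// WM_SETTINGCHANGE so running applications know the environment changed.
bool SetGlobalUserEnvVar(const wchar_t *name, const wchar_t *value)
{
    // 1. Write the value under HKEY_CURRENT_USER\Environment.
    LSTATUS status = RegSetKeyValueW(
        HKEY_CURRENT_USER, L"Environment", name, REG_SZ,
        value, (DWORD)((wcslen(value) + 1) * sizeof(wchar_t)));
    if (status != ERROR_SUCCESS)
        return false;

    // 2. Notify everyone that the environment variable list was modified.
    DWORD_PTR dwReturnValue = 0;
    SendMessageTimeoutW(HWND_BROADCAST, WM_SETTINGCHANGE, 0,
                        (LPARAM)L"Environment",
                        SMTO_ABORTIFHUNG, 5000, &dwReturnValue);
    return true;
}

// Usage (illustrative): SetGlobalUserEnvVar(L"MY_SETTING", L"SomeValue");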