Debugging. The CLR exposes debugging hooks that can be used to debug or profile your assemblies.
Performance and Scalability Issues
This section is designed to give you a high‐level overview of the major issues that can impact the performance and scalability of managed
code. Subsequent sections in this chapter provide strategies, solutions, and technical recommendations to prevent or resolve these issues.
There are several main issues that impact managed code performance and scalability:
Memory misuse. If you create too many objects, fail to properly release resources, preallocate memory, or explicitly force garbage
collection, you can prevent the CLR from efficiently managing memory. This can lead to increased working set size and reduced
performance.
Resource cleanup. Implementing finalizers when they are not needed, failing to suppress finalization in the Dispose method, or
failing to release unmanaged resources can lead to unnecessary delays in reclaiming resources and can potentially create resource
leaks (see the Dispose sketch after this list).
Improper use of threads. Creating threads on a per-request basis and not sharing threads using thread pools can cause
performance and scalability bottlenecks for server applications. The .NET Framework provides a self-tuning thread pool that should
be used by server-side applications (see the thread pool sketch after this list).
Abusing shared resources. Creating resources per request can lead to resource pressure, and failing to properly release shared
resources can cause delays in reclaiming them. This quickly leads to scalability issues.
Type conversions. Implicit type conversions and mixing value and reference types lead to excessive boxing and unboxing
operations. This impacts performance (see the boxing sketch after this list).
Misuse of collections. The .NET Framework class library provides an extensive set of collection types. Each collection type is
designed to be used with specific storage and access requirements. Choosing the wrong type of collection for specific situations
can impact performance.
Inefficient loops. Even the slightest coding inefficiency is magnified when that code is located inside a loop. Loops that access an
object's properties are a common culprit of performance bottlenecks, particularly if the object is remote or the property getter
performs significant work (see the loop sketch after this list).
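The resource cleanup issue typically comes down to implementing the Dispose pattern correctly. The following sketch is illustrative only: the class name and the unmanaged handle it wraps are hypothetical, but it shows Dispose releasing the resource and then calling GC.SuppressFinalize so the finalizer does not also run.

using System;

// Illustrative wrapper for a hypothetical unmanaged handle.
public class ResourceHolder : IDisposable
{
    private IntPtr handle = IntPtr.Zero;  // hypothetical unmanaged resource
    private bool disposed = false;

    public void Dispose()
    {
        Dispose(true);
        // The resource has already been released, so finalization
        // is no longer needed.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!disposed)
        {
            if (disposing)
            {
                // Release managed resources here (for example, call
                // Dispose on contained objects).
            }
            // Release the unmanaged resource. The actual release call
            // depends on the resource and is omitted from this sketch.
            handle = IntPtr.Zero;
            disposed = true;
        }
    }

    // The finalizer runs only if Dispose was never called.
    ~ResourceHolder()
    {
        Dispose(false);
    }
}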
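For the threading issue, the key is to queue short-lived work to the shared CLR thread pool rather than creating a new Thread for each request. A minimal sketch, assuming a hypothetical per-request work item:

using System;
using System.Threading;

public class RequestProcessor
{
    // Queue the work to the self-tuning CLR thread pool instead of
    // creating a dedicated thread for each request.
    public void Process(object request)
    {
        ThreadPool.QueueUserWorkItem(new WaitCallback(HandleRequest), request);
    }

    private void HandleRequest(object state)
    {
        // Hypothetical per-request work goes here.
        Console.WriteLine("Processing: {0}", state);
    }
}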
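The boxing and unboxing cost is easiest to see in a small sketch. Assigning a value type to an object reference, or storing it in a collection of object such as ArrayList, allocates a box on the managed heap, and reading the value back requires an unboxing cast:

using System;
using System.Collections;

public class BoxingExample
{
    public static void Main()
    {
        int i = 42;

        // Boxing: the int is copied into a new object on the managed heap.
        object boxed = i;

        // Unboxing: the value is cast back out of the box.
        int j = (int)boxed;

        // Collections of object, such as ArrayList, box every value type
        // added to them and require an unboxing cast on retrieval.
        ArrayList list = new ArrayList();
        list.Add(i);              // boxes
        int k = (int)list[0];     // unboxes

        Console.WriteLine("{0} {1} {2}", i, j, k);
    }
}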
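For loops, hoisting work out of the loop body avoids repeating it on every iteration. A sketch, under the assumption that the collection does not change while the loop runs, so its Count property can be read once up front:

using System.Collections;

public class LoopExample
{
    public static int Sum(ArrayList values)
    {
        int total = 0;

        // Read the Count property once rather than on every iteration.
        // This matters most when the property getter is expensive or the
        // object is remote.
        int count = values.Count;

        for (int i = 0; i < count; i++)
        {
            total += (int)values[i];
        }
        return total;
    }
}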
Design Considerations
The largest contributing factor to application performance is the application architecture and design. Make sure performance is a
functional requirement that your design and test process takes into account throughout the application development life cycle.
Application development should be an iterative process. Performance testing and measuring should be performed between iterations and
should not be left to deployment time.
This section summarizes the major considerations to keep in mind when you design managed code solutions:
Design for efficient resource management.
Reduce boundary crossings.
Prefer single large assemblies rather than multiple smaller assemblies.
Factor code by logical layers.
Treat threads as a shared resource.
Design for efficient exception management.
Design for Efficient Resource Management
Avoid allocating objects and the resources they encapsulate before you need them, and make sure you release them as soon as your
code is completely finished with them. This advice applies to all resource types including database connections, data readers, files,
streams, network connections, and COM objects. Use finally blocks or Microsoft Visual C#® using statements to ensure that resources
are closed or released in a timely fashion, even in the event of an exception. Note that the C# using statement can be used only with
resources that implement IDisposable, whereas finally blocks can be used for any type of cleanup operation.
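The following sketch shows both approaches. The connection type is just one example of an IDisposable resource; the point is that both the using statement and the finally block guarantee that the resource is released even if an exception is thrown.

using System;
using System.Data.SqlClient;

public class ResourceUsageExample
{
    // using statement: Dispose is called automatically when the block
    // exits, even if an exception is thrown. Works only for types that
    // implement IDisposable.
    public static void WithUsing(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // ... use the connection ...
        } // conn.Dispose() runs here.
    }

    // try/finally: works for any kind of cleanup, not just IDisposable.
    public static void WithFinally(string connectionString)
    {
        SqlConnection conn = new SqlConnection(connectionString);
        try
        {
            conn.Open();
            // ... use the connection ...
        }
        finally
        {
            conn.Close(); // Runs even if Open or the work above throws.
        }
    }
}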
Reduce Boundary Crossings
Aim to reduce the number of method calls that cross remoting boundaries because this introduces marshaling and potentially thread
switching overhead. With managed code, there are several boundaries to consider:
Cross application domain. This is the most efficient boundary to cross because it is within the context of a single process. Because
the cost of the actual call is so low, the overhead is almost completely determined by the number, type, and size of parameters
passed on the method call.
Cross process. Crossing a process boundary significantly impacts performance. You should do so only when absolutely necessary.
For example, you might determine that an Enterprise Services server application is required for security and fault tolerance reasons.
Be aware of the relative performance tradeoff.
Cross machine. Crossing a machine boundary is the most expensive boundary to cross, due to network latency and marshaling
overhead. While marshaling overhead impacts all boundary crossings, its impact can be greater when crossing machine