
Content provided by OSR Open Systems Resources, Inc.
Oplocks on Windows NT
This article was originally written based on information from Windows NT 3.51. As such, it does not
refer to the newer "filter oplocks" which were added in NT 4.0.
A standard technique in developing operating systems has been to cache data in order to improve access
times. This approach works well because data that has been accessed recently is likely to be accessed again soon (temporal locality).
For a system where there is only ever one program accessing data, caching works fine. However, once a
second program attempts to access the same data the issue of cache consistency becomes critical.
Because there are now potentially many copies of the data – each one in a different cache – there must be
some mechanism to keep these copies in sync with one another. Otherwise, different users of the data will
see different views: many copies of the data now exist, each potentially different from the others.
When accessing files across a network, the simplest scheme is to always store all data back on the file
server. Thus, whenever an application program reads data, that read request is satisfied from the file server
– across the network. This ensures that there is a consistent view of data since there is only a single copy of
the data in existence – on the file server.
Unfortunately, the performance characteristics of such systems are not ideal. While file servers can be built
to be very fast, there are numerous bottlenecks between the client accessing the data and the file server
storing the data, not to mention the added latency necessary to fetch the data from the file server each time.
Studying this problem at length reveals that the majority of data being retrieved from the file server is never
modified – it is only read. Data that is being modified is almost always being modified by a single program
– like a word processing document. Data that is being modified by multiple programs represents a tiny
fraction of the total data traffic. In spite of this, users do expect their file systems to ensure
that any data they access is correct – not most of the time, but all of the time.
The typical access characteristics for such data make network file system clients ideal candidates to cache
data – and many of them do so, using a variety of different techniques to ensure cache consistency. For
example, the venerable NFS file system protocol utilizes a scheme of checking file system time stamps on
the remote file server to detect when data in its cache may have become stale. While this solution is not a
perfect one – there is a window during which old data may be cached by the NFS client – it has worked well
for many years.
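The timestamp-checking idea can be sketched in a few lines. The following is an illustrative model, not the actual NFS client implementation: cached file data is reused only while the file's modification time (standing in for the result of a GETATTR request to the server) is unchanged.

```python
import os


class TimestampValidatedCache:
    """Sketch of attribute-based cache validation in the style the
    article attributes to NFS: reuse cached data only while the
    file's modification time is unchanged. Hypothetical class name;
    not part of any real NFS client."""

    def __init__(self):
        self._cache = {}  # path -> (mtime_ns, data)

    def read(self, path):
        # Fetching the timestamp stands in for asking the file server
        # for the file's attributes.
        mtime = os.stat(path).st_mtime_ns
        entry = self._cache.get(path)
        if entry is not None and entry[0] == mtime:
            return entry[1]            # timestamps match: cache hit
        with open(path, "rb") as f:    # stale or missing: refetch
            data = f.read()
        self._cache[path] = (mtime, data)
        return data
```

Note that this sketch exhibits exactly the window described above: a writer that modifies the file within the timestamp's granularity can leave stale data in the cache undetected.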
For Windows NT, network file system caching is implemented by the LanManager redirector (the "client")
and file server (the "server"). In order to ensure correctness of the cached data, LanManager implements a
basic cache consistency scheme which covers the entire file contents. Where files are being simultaneously
accessed across the network by multiple users for both read and write access, caching is disabled – clients
must fetch data from the file server each time it is read, and must store it back immediately each time it is
written. However, in the vast majority of cases, the client will cache data locally. This minimizes network
traffic and vastly improves performance for most file access on Windows NT.
This is implemented by Windows NT using a cache consistency scheme known as opportunistic locking.
An opportunistic lock is known as an "oplock" in the parlance of Windows NT file systems. Further, the
implementation of oplocks by Microsoft impacts both their network and local file systems. Because the
details of the local implementation are tightly coupled to how oplocks are used by network file systems, we
describe the network implementation initially and then return to discussing issues associated with their
local implementation for NT file systems.
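The consistency rule described above – a client may cache file data while it is the sole accessor, and loses that right when a second accessor appears – can be modeled in miniature. This is a toy sketch of the policy, with hypothetical names throughout; it is not the SMB wire protocol or the NT kernel's oplock package.

```python
class OplockModel:
    """Toy model of the opportunistic-locking policy: the first opener
    of a file is granted an exclusive "oplock" and may cache; a second
    opener breaks the oplock, disabling caching for everyone."""

    def __init__(self):
        self._holders = {}  # path -> set of clients with the file open
        self._cacher = {}   # path -> client allowed to cache, or None

    def open(self, path, client):
        holders = self._holders.setdefault(path, set())
        if not holders:
            # First opener: grant the oplock, permitting local caching.
            self._cacher[path] = client
        else:
            # Second opener: break the oplock. In the real protocol the
            # holder must flush cached writes back to the server before
            # the break completes; here we simply revoke caching.
            self._cacher[path] = None
        holders.add(client)

    def close(self, path, client):
        self._holders.get(path, set()).discard(client)

    def may_cache(self, path, client):
        return self._cacher.get(path) == client
```

The model captures why the scheme is called "opportunistic": caching is granted on the optimistic assumption that no second accessor will appear, and revoked only in the rare case that one does.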