Page File


Windows works with virtual memory – the system's commit limit is the sum of the physical RAM installed and the current size of the pagefile(s) in use.

The Memory Manager is constantly checking whether the pages in process working sets, the kernel's working set and paged pool allocations have been used recently – if they have not, they can be considered to be taking up space that could be used by something else.

The “something else” could be a request for dynamic memory from a driver or a process, or it could be Superfetch preloading file sections based on usage patterns – increasing the memory used by the system cache.

This is in an effort to reduce intermittent disk I/O when memory is suddenly needed (demand paging), and to use idle time so that data which might be requested (including .EXE, .DLL and .SYS images) is already available for provision by the Cache Manager.
Least recently used (LRU) pages are written out to the pagefile to free up physical memory; by default this is C:\PAGEFILE.SYS, which is dynamically sized (and reset to its minimum size at system startup).

There have been many recommendations about the location and size of the pagefile(s), and even questions about if it should be present at all.
Size… is it important?

The classic “pagefile should be 1.5x the amount of physical RAM installed” statement is from many years ago and makes no sense given the amount of memory available in machines today (even workstations) – in fact the more RAM you have, the less need you have for the pagefile to be huge.

However, this is still the default “recommendation” seen from within the virtual memory configuration in Windows – as it can’t determine what you are going to use the system for in order to calculate the most appropriate size for the pagefile.
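If you decide to manage the pagefile size yourself rather than let Windows do it, this can also be scripted from an elevated command prompt using WMIC – a rough sketch, where the 2048MB initial/maximum sizes are purely illustrative and not a recommendation:

// Turn off automatic pagefile management (Vista and later)
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False

// Set a fixed size (in MB) for the pagefile on C:
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=2048,MaximumSize=2048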
Location, location, location

It is argued that moving the pagefile to a separate physical disk, or placing onto a striped RAID set will dramatically improve performance.

In my opinion, if you notice a significant change by relocating the pagefile this way, then you are using that pagefile way more than you ought to be, implying you need more RAM.

Firstly, if the pagefile is being accessed due to a demand for more physical memory by a running process, then the disk I/O is unavoidable and it makes no difference where it is located.

If the demand for physical memory comes from a disk read operation, then there will be a slight improvement by having the disk read and paging operations able to work concurrently because they are on separate devices (ideally separate channels too).
However, this performance gain will be for the duration of the disk read, it won’t affect the performance of the applications or OS (unless the application is designed to do intensive disk I/O, in which case it will be down to its design to utilize its memory correctly – such as bypassing the cache if it is just scanning data).
“I have loads of RAM – can I run without any pagefile at all?”

Windows can operate without a pagefile configured, but then virtual memory == physical memory, so once it is all committed (and cache usage does not count as committed) you will run into problems.

Also, some applications will check for the presence of a pagefile and refuse to launch without one.
“What is the downside of not having a pagefile, or changing the defaults?”

If your system bugchecks, you NEED:
1. a pagefile on the boot drive with a minimum size at least as large as RAM+50MB
2. the same amount of free disk space on the boot drive

If you don’t have this, you get no dump file generated.
The bugcheck procedure is as follows:

1. A bugcheck occurs (KeBugCheckEx is invoked)
> This could be a critical process terminating, a memory access violation in kernel mode, an NMI button press, etc.

2. Processor IRQL is raised to the highest level (31 on x86, 15 on x64)
> This prevents every thread from executing and masks out all other possible interrupts, bar NMIs (hence disabling ASR is essential)

3. The dump settings are read from the registry to determine the level of debugging information to attempt to write
> HKLM\System\CurrentControlSet\Control\CrashControl\CrashDumpEnabled (REG_DWORD)

4. Disk subsystem drivers are bypassed and the pagefile.sys has the relevant populated physical pages written to it
> CrashDumpEnabled==1 – complete memory dump: this is every valid page in RAM, so it includes user-mode pages
> CrashDumpEnabled==2 – kernel memory dump: just the pages mapped by the kernel
> This is why you need the boot volume pagefile minimum size to be large enough to hold all the relevant valid pages, and a little extra for a header (see the example reg.exe commands after this list)

[As the disk is being written to without any optimized drivers, this process can take a while and if it is interrupted, the dump is corrupted]

5. A registry value (REG_DWORD) is set
> This records the fact that a bugcheck occurred so that it can be handled during the next startup

6. HKLM\System\CurrentControlSet\Control\CrashControl\AutoReboot (REG_DWORD) is checked – if this is 1 then the system restarts
> This is a warm boot, not a graceful system shutdown

7. During startup, the boot manager checks to see if the pagefile was cleanly closed at the end of the previous session – if not, the boot menu is presented
> This covers the situations where Windows is crashing during startup, bugchecking during operation or simply has the power pulled

8. If the pagefile on the boot volume contains a valid memory dump, it is renamed and a new pagefile reinitialized

9. winlogon.exe reads the registry value written in (5) and starts process savedump.exe to extract the memory contents from the renamed pagefile to a temporary file (DUMPxxxx.tmp) on the same drive
> This is why you need enough free disk space on the boot drive to create this temporary file

10. Once the dump is extracted, the temporary file is moved to the final destination specified in the registry (default is %systemroot%\MEMORY.DMP)
> HKLM\System\CurrentControlSet\Control\CrashControl\DumpFile (REG_EXPAND_SZ)

11. The registry value set in (5) is deleted
> This avoids checking the pagefile for a dump file on the next startup

12. An event is logged in the System event log (once the service starts) to indicate the dump file was saved following the bugcheck

13. If the corresponding tickbox is checked in the Startup and Recovery settings, an administrative alert is sent

14. The boot process continues
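The CrashControl values referenced in steps 3, 6 and 10 can be checked and changed from an elevated command prompt with reg.exe – a quick sketch (a restart may be needed for changes to take effect):

// View the current crash dump configuration
reg query HKLM\SYSTEM\CurrentControlSet\Control\CrashControl

// Ask for a kernel memory dump and an automatic restart after a bugcheck
reg add HKLM\SYSTEM\CurrentControlSet\Control\CrashControl /v CrashDumpEnabled /t REG_DWORD /d 2 /f
reg add HKLM\SYSTEM\CurrentControlSet\Control\CrashControl /v AutoReboot /t REG_DWORD /d 1 /f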

 

“What about ReadyBoost?”

ReadyBoost is purely an extension for system cache, it is not a replacement for, or addition to the pagefile – nor is it “extra RAM”.

It is used to hold file sections that have been read from disk (slow) into RAM (fastest), storing them in a cache that is accessible over a fast, independent bus (USB 2.0) so they can be retrieved again without going back to disk (the pagefile, or a re-read from the original location).

As USB devices can be removed without warning (“surprise removal”), we cannot have those devices holding data that has not been committed or does not exist elsewhere – so ReadyBoost cannot be used for paging out memory contents.

If the ReadyBoost device is removed, there is no data loss as the original copy of the data is still available – the disk.

In short, cache memory should be treated as volatile and should cause no unrecoverable error if it gets flushed.


Capturing network traffic in Windows 7 / Server 2008 R2


Previously a capture filter driver had to be loaded in order to intercept and record all the packets passing through network interfaces (think WinPcap & NetMon filter drivers).

Now the ability to create a network trace is in-box with Windows 7 & Server 2008 R2, without even a reboot required!

It is covered in detail over at the Network Monitor blog, but I will cover the key bits here as it’s so simple…

 

In the most basic form, this is how you start capturing all network traffic on the machine with the default settings:
netsh trace start capture=yes

An example of the output from this command:
Trace configuration:
——————————————————————-
Status:             Running
Trace File:         C:\Users\padams\AppData\Local\Temp\NetTraces\NetTrace.etl
Append:             Off
Circular:           On
Max Size:           250 MB
Report:             Off

As you can see, the default here is a 250MB circular buffer, and the file is stored in a temp folder in the user profile.

 

To later stop recording:
netsh trace stop

This performs some cleanup operations and then reports something like this:
Correlating traces … done
Generating data collection … done
The trace file and additional troubleshooting information have been compiled as
“C:\Users\padams\AppData\Local\Temp\NetTraces\NetTrace.cab”.
File location = C:\Users\padams\AppData\Local\Temp\NetTraces\NetTrace.etl
Tracing session was successfully stopped.

The .CAB file produced contains various configuration diagnostics files, and the .ETL file is the trace file… with a little extra.

 

NetMon 3.2 and later can open the .ETL file, but in order to make sense of the data you need to tweak a couple of things…

With NetMon installed, download the Network Monitor Open Source Parsers package and install it.

Launch NetMon, then click on Tools / Options and select the Parser tab.

Select the Windows parser, click the Stubs button (to toggle “Stub” to “Full”).

Click the up arrow then the down arrow, then click Save and Reload Parsers, then click OK.

 

Now you can load your .ETL files created with netsh and the conversations should be readable – if you want to save the file as a regular NetMon .CAP file, you can of course do so.

The ETL format trace will give you a system configuration summary in the first conversation, and the process name and PID associated with each frame, so it provides more than just a pure traffic trace and takes some of the guesswork out of network trace analysis.

If you need to take a trace of the system starting up, you can add “persistent=yes” on the netsh line starting the trace – as soon as you log on you can stop the trace and save the file.
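For example, to start a capture that persists across a reboot with otherwise default settings:
netsh trace start capture=yes persistent=yes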

Hyper-V Virtual Networks


The most common questions that I get on Hyper-V setups relate to the networking configuration, and it seems to be a common thing to get wrong, so I’ll try to go through the 3 types of virtual network we have, and how they differ.

 

A private network can only be used by the child partitions, so consider this a “switch for a purely virtual environment”.

An internal network is the same as a private network, except that the parent partition (physical host) acquires a virtual network adapter which is automatically connected to this virtual switch.

Neither of these 2 types of virtual network requires a physical network adapter – so if you are working with a lab or test environment then it’s perfectly fine to have no NICs whatsoever.

 

The third type of virtual network is the external network – this requires a physical NIC, which Hyper-V will take ownership of and unbind every protocol from except the “Microsoft Virtual Network Switch Protocol”.

This network type allows communication between any partition connected to it and physical machines on the network connected to the physical NIC.

The physical NIC has now effectively been converted into a switch, which is why there should be nothing other than the “Microsoft Virtual Network Switch Protocol” bound to it – a common mistake here is to think “ah, the host doesn’t have any IPv4 settings, I’ll manually re-enable this…” – do not do this.

Similar to the internal network, when you create an external network the parent is given a new virtual network adapter which is connected to this virtual switch.

The automatically-created virtual network adapters for external networks are given the original protocol configuration of the physical NIC, so in this respect the network adapter as seen by the parent partition OS has been “virtualized”.

If the external network is dedicated for child partitions (recommended configuration – the host should have its own management interface which is not associated with Hyper-V at all) then it is perfectly safe to disable the virtual network adapter associated with the external network (note, do not disable the adapter which is the external network).

 

Take a look at the following diagram, with some explanatory text below:

Hyper-V Virtual Networks

The 1 physical server here, Doc, is the Hyper-V host/parent partition which has 2 physical NICs present: NIC1 and NIC2.

NIC1 has IP settings bound on it and it is not used with any external network – this would be the management interface so we can communicate with the parent partition even if we are reconfiguring Hyper-V or have the hypervisor not running.

NIC2 has the icon of a switch because it has been taken over by Hyper-V and now (just like a regular switch) does not have any IP protocol bound to it – the parent partition has had a virtual NIC created which is connected to this network (this is the one safe to disable if the interface should be for the child partitions only).

Virtual machines Sneezy and Happy are able to communicate with the real world through the external network.

 

In addition to the external network, the blue switch represents an internal network – this creates a new virtual network adapter in the parent partition which is used to allow communication between Sleepy, Bashful, Doc and Dopey (as I decided to multi-home Dopey on an internal and private network in this example).

 

The red switch is for a private network – Doc does not get to connect to this switch so direct communication with Grumpy is not possible except from Dopey.

If Dopey doesn’t have any kind of routing or proxy service present, nothing other than Dopey can talk to Grumpy, and vice versa.

This means the parent partition OS sees a total of 4 network adapters – the 2 representing the physical NICs, 1 for the external network and 1 for the internal network.

 

Now for some screenshots from my home (W2K8R2) setup, where the host has 2 NICs: 1 for management of the host and 1 dedicated for the child partitions:

fig1 - Summary of NICs in parent

Note that I gave the network adapters in the host OS meaningful names BEFORE I created any virtual networks – this is personal taste but makes administration so much easier than dealing with “Local Area Connection #4” and trying to figure out what that is and if it’s safe to disable it.

You can see I disabled the virtual NIC which Hyper-V created for the parent when I made the external network, as it’s dedicated for child partition use.

fig2 - Hyper-V Virtual Network Manager

When I created the external network, I named it based on the NIC which is associated with the network (luckily the machine has 2 different brand NICs onboard to make this easier).

fig3 - Properties of NIC1

Here you can see the physical NIC owned by Hyper-V for the external network should have nothing other than the “Microsoft Virtual Network Switch Protocol” bound to it – this is “just a switch” now.

fig4 - Properties of NIC2

Here are the properties of the other physical NIC which is used for management of the host.

 

Do not toggle the binding for the “Microsoft Virtual Network Switch Protocol” on any interface manually – the Hyper-V Virtual Network Manager UI uses this flag to see if a physical NIC is already associated with an external network before attempting to create one – this has tripped up several people in the past, and what you get is something like the following error:

Error applying new virtual network changes
Setup switch failed
The switch could not bind to {interface name} because it is already bound to another switch.

If you get the option to create an external network on an interface but get this error, this is the only time you should remove that binding manually and retry.

 

This is all just about the creation of virtual networks – you still have to go into the settings of each virtual machine and give it a network adapter (legacy or synthetic) for each of the virtual networks it should connect to simultaneously, selected through a drop-down list.

Neither Hyper-V nor Virtual Server before it allows “host drive connection”, “USB device redirection” or “drag & drop of files”, even with Integration Services (or VM Additions) installed – this is a potential security hole, so you need to configure a common network between the parent and child partitions if you want to transfer files in & out.

 

Possible workarounds if you really don’t want to set up a network to transfer files between the parent and child partitions:

– create an ISO on the parent with the files to copy into the child, and mount it as a virtual DVD

– with the VM shut down, mount the VHD file from the child partition on Windows 7 /Server 2008 R2 directly through Disk Management (this is useful for getting MEMORY.DMP files out of VMs that are bugchecking during boot too)
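For the second option, the same can be done from the command line with diskpart if you prefer – a quick sketch, where the VHD path is illustrative:

diskpart
select vdisk file="D:\VMs\Dopey.vhd"
attach vdisk
rem ...copy the files out of the newly-mounted volume, then:
detach vdisk
exit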

 

A final note on Hyper-V networking – there is no virtual DHCP server, so you need to either set up your own, use an existing one (if using an external network) or assign IPv4 addresses manually.

I tend to use 3 different private (RFC 1918) address ranges to easily identify which type of network the machine is meant to connect with, and also to avoid potential disasters if I accidentally connect to the wrong one:
172.16.0.1 thru 172.31.255.254 for PRIVATE networks
192.168.0.1 thru 192.168.255.254 for INTERNAL networks
10.0.0.1 thru 10.255.255.254 for EXTERNAL networks
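Inside a child partition this is just a normal static IP assignment – for example, for a connection on one of my internal networks (the connection name and address are illustrative):
netsh interface ipv4 set address name="Local Area Connection" static 192.168.0.10 255.255.0.0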

 

One more note: just as with physical machines, if you multi-home a VM then check your protocol bindings, network adapter order and gateway/route settings to make sure you avoid performance or security issues.

User-mode dump creation (Vista and later)


For processes that are hung or consuming lots of CPU time, you can use Task Manager to create hang mode dumps – on the Processes tab you simply right-click on the process and from the context menu select “Create Dump File” and wait for the message to appear telling you where the dump was created.

Just like ADPlus in hang mode, this does not terminate the process – the threads are suspended whilst a copy of the process’ user-mode virtual address space is dumped to disk, then they are resumed.

Create Dump File on Win7

 

Is there a Dr (Watson) in the house?

Windows Error Reporting (WER) has replaced Dr Watson as the default user-mode post-mortem debugger, and it is configured through the registry (or group policy).

Here are example registry values to make WER retain up to 25 complete application crash dumps in C:\Dumps:

Path: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\Windows Error Reporting\LocalDumps

Name: DumpCount
Type: REG_DWORD
Data: 25

Name: DumpType
Type: REG_DWORD
Data: 2

Name: DumpFolder
Type: REG_EXPAND_SZ
Data: C:\Dumps
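If you prefer to set these from an elevated command prompt rather than through Registry Editor, the equivalent reg.exe commands would look something like this:

reg add "HKLM\Software\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpCount /t REG_DWORD /d 25 /f
reg add "HKLM\Software\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpType /t REG_DWORD /d 2 /f
reg add "HKLM\Software\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpFolder /t REG_EXPAND_SZ /d C:\Dumps /f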

Registry


“The Registry” is something most, if not all Windows users have heard of – but I would guess that only a fraction understand what it is or does, so I shall give a high-level overview here to enlighten those people…

The registry is essentially a hierarchical database – at the top level we see a collection of HKEY objects, for example on my W2K8R2 server the Registry Editor tool shows:
> HKEY_CLASSES_ROOT
> HKEY_CURRENT_USER
> HKEY_LOCAL_MACHINE
> HKEY_USERS
> HKEY_CURRENT_CONFIG

Clicking on the marker to the left of any of these items will expand it and show the keys underneath – much like browsing a folder structure through Explorer.

 

A common mistake when discussing the registry is to call the things we assign data to “keys” – those are values.

If a key is analogous to a FOLDER on disk, then a value is a FILE (and actually holds data).

The keys are just containers, forming part of the path to one or more values.

Keys have no type, they just “are”, whereas values can be of the following types:
String Value (REG_SZ)
Binary Value (REG_BINARY)
DWORD (32-bit) Value (REG_DWORD)
QWORD (64-bit) Value (REG_QWORD, 64-bit Windows only)
Multi-String Value (REG_MULTI_SZ)
Expandable String Value (REG_EXPAND_SZ)

NOTE: When querying the registry, the type of the value must match or we get no result, so when you see a summary of details for manually adding a value, make sure you get the type correct.

Each key has an unnamed REG_SZ value when it is created, which cannot be renamed or deleted – this is displayed as “(Default)” in the Registry Editor but is referenced with a null name.
(If you look at an exported .reg file of a key where the “(Default)” value has been given “xyz”, you will see under the key: @=”xyz”.)

 

If you have ever used Process Monitor to log registry I/O for troubleshooting, there are a few things to be aware of regarding the Result field…

“NAME NOT FOUND” is not always an error – code often probes for the existence of a value in order to make a decision about how it will behave next, but if there is no such value (resulting in the return result which looks like an error) then we assume the default behaviour, whatever it may be.

“BUFFER OVERFLOW” is also not an error, despite the prevalence of “buffer overflow exploits” by malware.  In this context we are querying the registry for the data held in a value which does not have a fixed length, so on the first query we say our buffer is zero characters long – the registry API reports “you don’t have enough space to hold this data” (buffer overflow) and how many characters you are short… the call can then be made again with a buffer exactly the right size to hold the data, thus avoiding any potential for actually overflowing the buffer.
Ref: MSDN ZwQueryValueKey API

 

There are some special cases to be aware of when browsing the registry online…

HKEY_CURRENT_USER is a pointer to the user profile hive under HKEY_USERS which is used by the processes running in that particular session – if you consider a Remote Desktop Server with 10 users logged on, each of them has their own concept of “current user” but would like to be independent of the others’ sessions whilst using a standard path.
For my user account, it is easier to refer to HKEY_CURRENT_USER than to HKEY_USERS\S-1-5-21-1721254763-462695806-1538882281-2548689.

[ NOTE: HKEY_CURRENT_USER is abbreviated to HKCU ]

 

Under HKEY_USERS you will also see an extra key per (non-BUILTIN) user that is (or has been) logged onto the server, that ends with _Classes – this is the part of the user profile that never roams as it is machine-specific.
This is mounted as HKCU\SOFTWARE\Classes, and also merged with HKEY_LOCAL_MACHINE\SOFTWARE\Classes to be presented as HKEY_CLASSES_ROOT – the per-user definitions overriding the system default ones where a collision occurs.

[ NOTE: HKEY_LOCAL_MACHINE is abbreviated to HKLM, whilst HKEY_CLASSES_ROOT is abbreviated to HKCR ]

For this reason, HKCR is really only useful to read the “effective setting” of a particular object, and a decision needs to be made when wanting to update a value as to whether it should be per-machine (HKLM\SOFTWARE\Classes) or per-user (HKCU\SOFTWARE\Classes) – the result is seen instantly under HKCR.

 

Under HKLM\SYSTEM there are a number of ControlSetxxx keys, and one CurrentControlSet key – the latter is just a reparse point to one of the other keys… but how to know which one?
If you look at the value Current under the key HKLM\SYSTEM\Select then you can see which is being used (you can also see from the other values which was the last control set that failed and which was the last known good).
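A quick way to check this from a command prompt:
reg query HKLM\SYSTEM\Select
The output lists the Current, Default, Failed and LastKnownGood control set numbers.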

Traditionally we had just ControlSet001 and ControlSet002 to toggle between – but from Vista onwards we can go beyond this whenever a system is restored.

As with HKCU, we only care about “effective” settings under HKLM\SYSTEM\CurrentControlSet, and any changes here will be reflected in the corresponding ControlSetxxx key only – so if you have a problem that occurs “every other boot” then here is where you want to take a look (identify which is the bad control set and rename the key when it is not in use).

 

HKEY_CURRENT_CONFIG is a pointer to HKLM\SYSTEM\CurrentControlSet\Hardware Profiles\Current, and is an indicator to the hardware profile in use.

 

HKLM\HARDWARE is a key that is dynamically built when the system starts up, an enumeration of the buses and devices that comprise the system – it would make little sense to have this stored as non-volatile data as this way we know the key should reflect the underlying hardware more reliably.

 

There is a SOFTWARE\Policies sub-key under both HKCU and HKLM – this is where group and local policy settings are applied in order to have an effect whilst they are active, per-user and per-machine respectively.

This is also why you will see a lot of queries for non-existent keys and values under these 2 locations when running Process Monitor during certain activities – the process is checking to see if there is a setting defined by a system administrator which will take preference over any locally-defined setting.
Given that there are thousands of potential settings for many component parts of Windows, there is a lot of this kind of checking going on all the time, most getting “not found” as a result, but as explained this is not an error.

 

Note that I said “online” above – if you mount a raw registry hive from the %systemroot%\SYSTEM32\CONFIG folder of a system that is not currently booted, you will not see the merged, dynamic or reparsed registry keys.

It is useful to be aware of this path in case you end up with an unbootable system due to registry corruption – there is a RegBack sub-folder which contains a backup copy of the hives, should it be needed in an emergency.

The files on disk that comprise the registry:
DEFAULT
SAM
SECURITY
SOFTWARE
SYSTEM

Note that there are no file extensions, they are memory-mapped at system startup and are not accessed other than through the APIs.
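If you do need to inspect a hive from an installation that is not currently booted (as mentioned above), reg.exe can load it under a temporary key – a quick sketch, where the drive letter and temporary key name are just examples:

reg load HKLM\OfflineSystem D:\Windows\System32\config\SYSTEM
reg query HKLM\OfflineSystem\Select
reg unload HKLM\OfflineSystem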

Windows Server 2008R2 Server Core Installation & Setup Notes


After setting up a W2K8R2 “Server Core” Hyper-V host recently, I thought it a good idea to jot down some notes as to how to navigate the command prompt (mostly) so it can be configured – a flashback to the days of MS-DOS in some ways 😉

NOTE: All of the following commands are on 1 line, annotated with a C-style comment before each.

// Rename the computer to SERVER1
netdom renamecomputer %computername% /newname:SERVER1

// Join domain DOM1 using user account DOMUSER1 (prompt for password)
netdom join %computername% /domain:DOM1 /userd:DOMUSER1 /passwordd:*

// Graceful shutdown & restart with no delay
shutdown /r /t 0

// Add DOM1\DOMUSER1 to the local Administrators group
net localgroup Administrators /add DOM1\DOMUSER1

// Allow remote admin connections through the firewall
netsh advfirewall firewall set rule group="Remote Administration" new enable=yes

// Enable Remote Desktop connections
cscript c:\windows\system32\scregedit.wsf /ar 0

// Enable WinRS (Windows Remote Shell) connections
winrm quickconfig

// NOTE: All ocsetup (Optional Component Setup) commands are case sensitive
// Install Hyper-V role
start /w ocsetup Microsoft-Hyper-V

// Install BitLocker feature
start /w ocsetup BitLocker

// Install the Server Core flavour of the .NET Framework (required for PowerShell)
start /w ocsetup NetFx2-ServerCore

// Install the PowerShell feature
start /w ocsetup MicrosoftWindowsPowerShell

// Enable TPM (required for BitLocker, requires confirmation from the BIOS)
manage-bde -tpm -t

// Encrypt drive C: with BitLocker using a randomly-generated password
manage-bde -on C: -recoverypassword

// View the current status of BitLocker encryption
manage-bde -status

 

Post-install, most of the management you want to do will actually be remote (either using a GUI, or PowerShell scripts), so as long as the firewall is allowing the right type of traffic and the security policies allow it then there should be no problems…

// Allow Device Manager access remotely (note this is read-only, device installation should be done using pnputil.exe locally)
> Run MMC.EXE
> Click File > Add/Remove Snap-in…
> Select Group Policy Object Editor, click Add, click Browse
> Select Another computer, enter the name of the Server Core server, click OK, click Finish
> Click OK
> Drill down to Computer Configuration > Administrative Templates > System > Device Installation
> Enable the policy “Allow remote access to the Plug and Play interface”, click OK

// Open a Windows Remote Shell (WinRS) command prompt on SERVER1
winrs -r:SERVER1 cmd

// Use the Computer Management UI remotely (this is not unique to Server Core and is fairly commonly done)
> Launch COMPMGMT.MSC
> Right-click “Computer Management (Local)”, click “Connect to another computer…”
> Enter the name of the Server Core server, click OK

 

Many of the MMC snap-ins that allow connections to remote machines do so either directly, as with Computer Management, or if the snap-in is loaded into a clean MMC.EXE process, as with the Group Policy Object Editor.

Bear in mind that the snap-in is loaded on the machine where MMC.EXE is running, so in the case of policy editing, for example, you need to ensure that the OS/SP level you are using matches or exceeds that of the remote machine being managed.
If the client machine (running the MMC) is using “version 1” snap-ins then it won’t be aware of any “version 2” features of which the remote machine is capable.

If you find yourself with many different MMC windows for managing a server remotely, consider creating your own .MSC file for that server with all the commonly-used snap-ins loaded and configured to point to the server (a kind of custom Server Manager to suit your needs).

Other management consoles are designed to allow connections to multiple servers, such as the Hyper-V Manager where you can simply right-click “Hyper-V Manager” and “Connect to Server…” to add a server to the list.

USER Account Control… but I’m an ADMIN!


User Account Control (UAC) has now been with us for almost 4 years, and still it is a mystery to a lot of people – what it does, why it does it and what value it adds… so I shall try to shed some light on this for those that want “complete control” of a system when logged on as an administrator.

I do not, under any circumstances, recommend disabling UAC.
If too many prompts are being thrown, look at what you are doing and why (if!) the application needs administrative access.

The most recognized UAC feature is the dimmed screen with the popup like this:

[Screenshot: UAC “Over The Shoulder” prompt]

This is known as the “Over The Shoulder” (OTS) prompt – the process being started has a flag that indicates it requires administrative privileges, which could come from any of the following:
– the built-in manifest for the executable
– a manifest created for the executable, for example by the Application Compatibility Toolkit (ACT)
– the shortcut used to launch the process has “run as administrator” checked

This is a user awareness feature – to inform the user that starting this process may potentially change the system if not used correctly, or so that they can determine whether it ought to be running (e.g. silently launched malware).

The above screenshot was taken when logged on as an administrator – yes, admins get to enjoy UAC too… and if you think about it, this makes more sense for admins than for a standard user without the privileges to make a mess of the entire system.

The one and only exception to UAC for interactive users is the built-in Administrator user account – even members of the Administrators group are subject to UAC.

If a non-admin user launches a process that requires/requests elevation, the OTS prompt will also ask for the credentials of an administrator to authorize the action.

 

NOTE: From here onwards I am referring to the UAC model from Windows 7/Server 2008 R2 onwards, as it has been revised a little since Windows Vista, so some features may not apply to earlier versions of Windows.

Q: Can the OTS prompt be enabled/disabled on a per-application basis?
A: Yes, using manifests – look into the Application Compatibility Toolkit and set the requested privilege level accordingly:
asInvoker = no elevation, use the user token of the parent process
highestAvailable = use the highest security token available to the logged-on user
requireAdministrator = elevation will always occur
Of course, if the application does require administrative privileges and you force it to launch non-elevated, it probably won’t work correctly

Q: Can OTS prompts be auto-accepted for specific applications?
A: No, UAC behaviour is consistent for all applications

Q: Can OTS prompts be auto-accepted for non-admins?
A: No, if a process requires elevation then an administrator must enter credentials to authorize the action (otherwise this would be a massive security hole) – they can, however, be auto-denied through a policy:
User Account Control: Behavior of the elevation prompt for standard users

Q: Can OTS prompts be auto-accepted for admins?
A: Yes, there is a policy to configure this:
User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode
I do not recommend using “Elevate without prompting” as this seriously diminishes the value of UAC
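As far as I am aware, these two prompt-behaviour policies are stored as the ConsentPromptBehaviorUser and ConsentPromptBehaviorAdmin values under the key below, so the effective configuration can be checked with reg.exe:

reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v ConsentPromptBehaviorAdmin
reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v ConsentPromptBehaviorUser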

 

For years we have been trying to convince people to log on as standard users, not administrators (and definitely not THE Administrator) but it has caused too many issues with legacy applications not working correctly, or failing to even start due to “access denied” problems because of what the application was trying to do.

UAC was introduced as a stop-gap to help end users work out which applications believe they need admin rights (as opposed to those that have just assumed them), and also to get developers to finally take user permissions into account when writing their applications.

If an application launched by a standard user tries to write into an area considered as protected by the system (i.e. BUILTIN\Users does not have write permission there) then before Vista this would have likely crashed, hung or thrown an error message.

However, with UAC the write I/O is actually virtualized to a location under the user’s profile (%userprofile%\AppData\Local\VirtualStore) which is what allows it to believe it succeeded – see my blog entry from 2008, Virtualization in Vista for an example of how this works.

In this respect, UAC acts as a global “shim” for many applications, so that they work without requiring (or even prompting for) elevation.

 

Some users want to log on as a user with administrative privileges for day-to-day use, and this is where UAC also plays a part, allowing them to be treated as a standard user unless they (or a process they run) request elevation.

This is achieved by admins getting a “split token”, which is really 2 security tokens associated with their logon session.
The first is their “full” token with all group memberships and privileges assigned, used when a process is elevated.
The second is their “standard user” token which has the Administrators group and administrative privileges stripped.

When a process is started by an administrator, unless elevation was triggered then the standard user token is used – so attempts to make changes to the system will fail (or get virtualized) for this process.
Any I/O done by this process will conveniently ignore the fact that the user is a member of Administrators except for any explicit DENY settings.

 

Consider the following scenario:
Alan is a member of the Administrators group, Bob is a member of the Users group
UAC is enabled with the default settings
Folder C:\FOO exists, with inheritance turned off and only Administrators having Full access
File C:\FOO\FOO.TXT was created by the Administrator user account

If Administrator logs on and starts a Command Prompt process (always elevated due to the UAC exception, remember?) then TYPE C:\FOO\FOO.TXT will result in success.

If Alan logs on and starts a Command Prompt process elevated (triggers an OTS prompt, as Administrators are still subject to UAC by default) then the same command will also succeed.

If Alan starts a Command Prompt without elevation then the command will fail with “access denied”.
This is the one that catches people out – they assume as Alan is a member of Administrators this should work.

If Bob logs on and starts a Command Prompt, the command will also fail.

If Bob wanted to launch an elevated Command Prompt then Alan would have to enter his own credentials at the OTS prompt, granting that specific process administrative privileges – from this process the command would succeed.

 

In the above scenario we can allow Alan to access the file from a non-elevated process by adding an Access Control Entry (ACE) to the Access Control List (ACL) on the file for either Alan’s specific user account or another security group to which Alan belongs – then his standard user token will have this when he logs on.

(This is why I mentioned explicitly disabling inheritance in the scenario, to avoid having permissions that have filtered down from the root or parent folder, such as BUILTIN\Users having read access.)
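For example, to grant read access to Alan’s account directly (assuming Alan is a local account), from an elevated Command Prompt:
icacls C:\FOO\FOO.TXT /grant Alan:R
After this, TYPE C:\FOO\FOO.TXT also succeeds from Alan’s non-elevated Command Prompt.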

 

Another quirk of UAC to be aware of – when a member of the Administrators group creates an object using their admin token, the owner is set to the Administrators group, but if it is created using their standard user token it would be owned by the user specifically.

This needs to be taken into account when enumerating permissions if CREATOR OWNER is mentioned in the ACL – owning an object does not grant you any permission other than to change the permissions, so taking ownership of an object is only step 1 of 2 if you are trying to get write access (or delete it) – you then have to grant yourself the necessary permissions.
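As an illustration of those 2 steps using the file from the earlier scenario (takeown changes the owner, icacls then grants the actual permissions):

takeown /f C:\FOO\FOO.TXT
icacls C:\FOO\FOO.TXT /grant Alan:F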

 

So be aware of using the standard user token when logged on as a member of Administrators when trying to resolve “access denied” issues – don’t remove your user from the Users group just because you are in the Administrators group or you may remove way too many required (read) permissions and be more restricted than a regular user!

Be aware also of the unique nature of the Administrator user – it’s disabled by default on the client versions of Windows for a reason, and being a member of Administrators is not quite the same as being the Administrator.
