Introduction:
In my initial article of this series, I mentioned that there were a few asterisks, footnotes, limitations, and caveats to understand with this solution. Luckily, this is again much less of a concern than it was with the System Usage & Authentication Monitoring series, as this collector doesn’t target nearly the volume of event types that one did.
In any case, this article explains more about how this works, the details of the events we capture, what we don’t/can’t capture for one reason or another, and what expectations to have with this solution overall. By the end of this, you should understand why this data is still beneficial to have but should not be the sole monitoring solution you rely on.
This will be very detailed and possibly quite technical.
In this section, we will cover…
- Time Tracking – Avoiding Duplicates
- Filtration
- Formatting
- Event Deep Dive (Further breakdown in this section)
- How Real-Time is the Data?
- Conclusion
Time Tracking – Avoiding Duplicates:
First, let’s talk about how this thing works, mainly what I mean when I say “Time Tracking.”
Generally speaking, and obviously glossing over even more technical information, we are using PowerShell to query the specific event IDs we want to find and upload. That raises a central question – “How do we know which events have and have not already been uploaded?” We have no way to mark events, nor to query Log Analytics to see whether an event is already there.
The answer is quite simple. If we know what time we last ran the query, then we only need to pull the new events that have happened since then. Easier said than done, and there are some unexpected issues with this solution I ran into.
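To make that concrete, here is a minimal sketch of the idea (not the actual collector code; the stored timestamp below is a hypothetical placeholder):

```powershell
# Minimal sketch: query only the process events generated since the last run.
# $lastRun would normally be read from wherever the collector stored it.
$lastRun = (Get-Date).AddHours(-1)   # hypothetical placeholder value

$newEvents = Get-WinEvent -FilterHashtable @{
    LogName   = 'Security'
    Id        = 4688, 4689           # process creation / termination
    StartTime = $lastRun
} -ErrorAction SilentlyContinue
```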
Breaking “Anything newer than X” logic:
Let’s say we start a computer up at midnight.
- Proactive Remediations soon kicks off our collector at 12:10:00 AM
- Our script notes it’s running at 12:10:00 AM
- Our collector pulls everything since it last ran up until right now as it’s running.
- It then marks its last run time as 12:10:00 AM.
- An hour later, the collector runs again. It checks the current time and sees 1:10:00 AM, and it checks when it last ran a capture, which was 12:10:00 AM.
- Thus, it then runs a capture from 12:10:00 AM until now and marks the new last run time as 1:10:00 AM.
Here’s the problem – it takes a few seconds to run the query itself, one or two more to parse through the data, a few seconds to run the smaller 2nd and 3rd queries, and a few more seconds to upload it all. What happens if an event is generated during those few seconds? One of two things. Either it was generated after the time was checked but before the query was run, in which case it still gets found and uploaded, or it was generated after the query ran, and it does not get captured and uploaded until the next run.
In the case of the former, it will be found again the next time the script runs (because it was generated after the time check) and uploaded again. This is how we get duplicates.
While this requires some specific timing, it most certainly can and will happen when you have tens of thousands of endpoints going.
The solution:
The solution is just as simple. Don’t capture everything newer than the previous run time. Instead, only capture from the previous run time to the start of the current run time. Thus, if an event generates after the run time is checked but before our query is run, it won’t fall in that range and thus won’t get uploaded. Instead, it falls into the range of the next run and gets uploaded then. Ta-da!
Now, the real deal is that it uses UTC and captures timestamps down to the thousandth of a millisecond to avoid any weirdness due to same-second events. If an event is generated in the very same thousandth of a millisecond that the script pulled the current time – well, we could get a duplicate, but that, I think, is rare enough to ignore.
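To illustrate, here is a rough sketch of that bounded window, assuming the last run time is kept in a local file (the file path is made up, and first-run/error handling is omitted – treat this as a sketch, not the collector’s actual code):

```powershell
# Capture "now" in UTC once, before anything else. This becomes both the upper
# bound of this run's query and the stored "last run" value for the next run.
$trackingFile  = 'C:\ProgramData\AppUsageCollector\LastRun.txt'   # hypothetical path
$currentRunUTC = (Get-Date).ToUniversalTime()
$lastRunUTC    = ([datetime]::Parse((Get-Content $trackingFile))).ToUniversalTime()

# Only pull events between the previous run and the start of this run. Anything
# generated after $currentRunUTC falls into the next run's window instead.
$events = Get-WinEvent -FilterHashtable @{
    LogName   = 'Security'
    Id        = 4688, 4689
    StartTime = $lastRunUTC
    EndTime   = $currentRunUTC
} -ErrorAction SilentlyContinue

# Persist the boundary in round-trip ("o") format, which keeps the fractional
# seconds so same-second events aren't missed or double-counted.
$currentRunUTC.ToString('o') | Set-Content $trackingFile
```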
Filtration:
That said, when we talk about events like creation and termination, there is a LOT of noise. Frankly, this makes the logon/logoff events of the System Usage & Authentication Monitoring series look small. You will find the system account creating processes, various local service accounts, the DWM and UMFD accounts, and built-in Windows processes like RuntimeBroker, backgroundTaskHost, dllhost, etc. A huge number of logs are generated for these events, and frankly, we don’t care about them nor want to pay to ingest them.
Luckily, the PowerShell cmdlet I use (Get-WinEvent) can either expose fields like the user directly or, where it doesn’t, let us pull them from the event’s extended properties. This allows us to easily filter out events that come from accounts we don’t like, programs we don’t care about, etc. I will get into the specifics of exactly what we filter and why later, but for now, just understand that it’s absolutely possible, already in place, and easy to customize and change.
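As a rough illustration of what that extraction can look like (the property names here are the ones these Security events expose in their XML data; treat this as a sketch rather than the collector’s exact code):

```powershell
# Pull a small batch of events and dig the user and program path out of each
# event's extended (XML) data so they can be used for filtering.
$events = Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4688, 4689 } -MaxEvents 50

foreach ($evt in $events) {
    $xml  = [xml]$evt.ToXml()
    $data = @{}
    foreach ($node in $xml.Event.EventData.Data) { $data[$node.Name] = $node.'#text' }

    $user        = $data['SubjectUserName']
    # 4688 stores the path as NewProcessName, 4689 as ProcessName.
    $programPath = if ($evt.Id -eq 4688) { $data['NewProcessName'] } else { $data['ProcessName'] }
    $program     = Split-Path -Path $programPath -Leaf
}
```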
On the note of being easily customizable, this is another use for the main dashboard of the Workbook. If you recall from the last article, the Dashboard has a large focus on finding outliers. This is particularly useful for identifying users or applications that generate large amounts of data you do not want to ingest, which you can then add to the filter so it is never uploaded at all.
Formatting:
Before diving into the events themselves, I want to touch on how the events are formatted when they get stored in Log Analytics. These are all the columns/fields I use and what gets stored in them. For instance, when an event mentions the user who caused it, you will find that data pulled out of the main message and stored on its own in the “User” field. Any of these fields can be used when deciding which events you do or don’t want to keep. A small sketch of assembling one of these records follows the list.
- ManagedDeviceName: This value is the Intune management device name, not to be confused with the actual device name. Think something like “Josh_Windows_1/1/2024_12:00 AM”.
- ManagedDeviceID: This value is the Intune Managed ID of the device.
- ScriptVersion: This value is the version of the collector that uploaded this data. It is primarily used for tracking update deployment and/or building queries which only work with data from certain versions of the collector.
- ComputerName: This value is the literal Windows device name.
- TimeOfLogUTC: This value is the time the event occurred on the machine, translated into UTC by the machine.
- EventID: This value will always be 4688 (A new process has been created) or 4689 (A process has exited).
- EventType: This value is a translation of the above event ID to either “Creation” or “Termination” for easier human viewing.
- Program: This value is the name of the executable that was created or terminated. For instance, Winword.exe, Visio.exe, GreenshotOCRCommand.exe, Acrobat.exe, etc. Other captured types include .scr, .com, .upd, and .tmp. Why the Windows event itself (which we can’t control) chooses to capture some of these, I can’t say.
- ProgramPath: This value is the path to the executable above. For instance, “C:\Program Files\Greenshot\Plugins\GreenshotOCRPlugin\GreenshotOCRCommand.exe”, “C:\Windows\System32\mobsync.exe”, “C:\Program Files (x86)\Teams Installer\Teams.exe”, “C:\Users\Bob\Desktop\fakeapp.exe”, etc.
- User: This value is the logged-on user who executed or terminated the above program.
- Message: This is the full Windows event “Message” from which many of the above values were pulled. An example of this will be shown in the event deep dive screenshots; the formatting really doesn’t want to paste here.
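For illustration, here is a hypothetical sketch of shaping one event into that layout before upload. $ManagedDeviceName, $ManagedDeviceID, and the version string are placeholders, and $evt, $user, $program, and $programPath follow the extraction sketch shown earlier:

```powershell
# Translate the numeric event ID to the human-friendly EventType value.
$eventType = if ($evt.Id -eq 4688) { 'Creation' } else { 'Termination' }

# Build one upload record using the column layout described above.
$record = [PSCustomObject]@{
    ManagedDeviceName = $ManagedDeviceName   # placeholder: Intune management device name
    ManagedDeviceID   = $ManagedDeviceID     # placeholder: Intune Managed ID
    ScriptVersion     = '1.0'                # placeholder collector version
    ComputerName      = $env:COMPUTERNAME
    TimeOfLogUTC      = $evt.TimeCreated.ToUniversalTime().ToString('o')
    EventID           = $evt.Id
    EventType         = $eventType
    Program           = $program
    ProgramPath       = $programPath
    User              = $user
    Message           = $evt.Message
}

# Records are typically batched and converted to JSON before being posted to
# the Log Analytics ingestion endpoint.
$body = @($record) | ConvertTo-Json
```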
Event Deep Dive:
Now we will dive into the specifics of these events, what they are, what we filter out, and what issues they have (if any). Luckily, this section is again much shorter given this collector only targets two events.
- Security Event ID 4688 – A new process has been created
- Security Event ID 4689 – A process has exited
4688 – A New Process has been Created:
Overview: Click here to learn more about this event ID.
This log comes from the Security log and is an indication of, as the name would suggest, the creation of a new process.

Filtering Event ID 4688 – A New Process has been Created:
This is how we filter these events. By default, we will not ingest events where…
- The user is the same as the Windows computer name
- The user is “-”
- The user is “LOCAL SERVICE”
- The user is like “DWM-*”
- The user is like “UMFD-*”
- The Program Path is like “*RuntimeBroker.exe*”
- The Program Path is like “*backgroundTaskHost.exe*”
- The Program Path is like “*SearchProtocolHost.exe*”
- The Program Path is like “*AgentExecutor.exe*”
- The Program Path is like “*conhost.exe*”
- The Program Path is like “*dllhost.exe*”
- The Program Path is like “*taskhostw.exe*”
This all works together to ignore events we don’t have an interest in. Primarily, those which are not created by the actual employees’ actions and/or those which are extreme in quantity and of little value.
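Expressed in code, the default exclusions above look roughly like this, with $user and $programPath being the values extracted earlier. This is a sketch you can adapt, not the exact production filter:

```powershell
# Default exclusions: skip machine/service/system accounts and a handful of
# extremely chatty built-in Windows executables.
# Note: depending on how the user value is normalized, the machine account may
# also appear with a trailing '$'.
$excludedUsers = @($env:COMPUTERNAME, '-', 'LOCAL SERVICE')
$excludedPaths = @('*RuntimeBroker.exe*', '*backgroundTaskHost.exe*', '*SearchProtocolHost.exe*',
                   '*AgentExecutor.exe*', '*conhost.exe*', '*dllhost.exe*', '*taskhostw.exe*')

$skip = ($user -in $excludedUsers) -or
        ($user -like 'DWM-*') -or
        ($user -like 'UMFD-*') -or
        [bool]($excludedPaths | Where-Object { $programPath -like $_ })

if (-not $skip) {
    # Keep this event; it will be formatted and uploaded.
}
```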
Additionally, every single Creation event comes with a wall-of-text explanation appended to it, and obviously we don’t want to pay to ingest that either, so it’s stripped off of the Message field.
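As a rough sketch, the trimming can be as simple as cutting the Message off at the phrase that begins the explanatory block. The marker text below is an assumption; use whatever phrase your environment/locale actually shows:

```powershell
# Trim the boilerplate explanation off the end of the 4688 Message field so we
# don't pay to ingest the same wall of text with every creation event.
$marker  = 'Token Elevation Type indicates'   # assumed marker phrase
$index   = $evt.Message.IndexOf($marker)
$message = if ($index -ge 0) { $evt.Message.Substring(0, $index).TrimEnd() } else { $evt.Message }
```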

Known issues with Event ID 4688 – A New Process has been Created:
Importantly, I mentioned earlier that this event captures some strange file types, yet not some of the things I wish it did. It captures EXEs, which are our primary concern, but it sadly doesn’t seem to capture things like .BAT or .PS1 files executing. This is unfortunately a limitation of the event itself. I have looked but have been unable to find a complete list of the executables/file types it does capture, or any way to alter that behavior.
4689 – A Process Has Exited:
Overview: Click here to learn more about this event ID.
This log comes from the Security log and is an indication of a process ending on the machine.

Filtering Event ID 4689 – A Process Has Exited:
This is how we filter these events. By default, we will not ingest events where (this is the same list as for Creation events)…
- The user is the same as the Windows computer name
- The user is “-”
- The user is “LOCAL SERVICE”
- The user is like “DWM-*”
- The user is like “UMFD-*”
- The Program Path is like “*RuntimeBroker.exe*”
- The Program Path is like “*backgroundTaskHost.exe*”
- The Program Path is like “*SearchProtocolHost.exe*”
- The Program Path is like “*AgentExecutor.exe*”
- The Program Path is like “*conhost.exe*”
- The Program Path is like “*dllhost.exe*”
- The Program Path is like “*taskhostw.exe*”
Again, this all works together to ignore events we don’t have an interest in. Primarily, those which are not created by the actual employees’ actions and/or those which are extreme in quantity and of little value.
Luckily, process terminations do not involve any tail-end bloat text/explanation.
Known issues with Event ID 4689 – A Process Has Exited:
This event has the same limitation I mentioned for Creation events: it captures some strange file types, yet not some of the things I wish it did. It captures EXEs, which are our primary concern, but it sadly doesn’t seem to capture things like .BAT or .PS1 files executing. This is unfortunately a limitation of the event itself. I have looked but have been unable to find a complete list of the executables/file types it does capture, or any way to alter that behavior.
How Real-Time is the Data?
Obviously, we ideally want our data to be live. Unfortunately, while we will deploy this via Proactive Remediations to run as often as possible, the fastest you can currently run a collector via Intune Proactive Remediations is hourly. So, depending on timing, the data could be ingested an hour after it occurred. In reality, though, you should run your collectors with the random delay function enabled, meaning they can be delayed by up to 110 minutes instead of up to 60. Again, that is an up-to, not an always. If an event happens and the collector happens to run 5 seconds later, and the random delay is another 5 seconds, your data will reach you in 10 seconds.
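For reference, the random delay itself is nothing fancy; something along these lines works (the 50-minute ceiling here is simply the value that lines up with the up-to-110-minute figure above, not necessarily what the collector ships with):

```powershell
# Sleep a random number of seconds before collecting so tens of thousands of
# endpoints don't all query the event log and upload at the same moment.
$maxDelaySeconds = 50 * 60   # ~50 minutes; hourly schedule + this delay = up to ~110 minutes
Start-Sleep -Seconds (Get-Random -Minimum 0 -Maximum $maxDelaySeconds)
```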
That said, if an event is generated and I shut down my machine, you’re not going to get that data until I turn it back on and the collector runs. If I don’t have internet, you won’t get it until I am back online, etc.
Conclusion:
Hopefully, it is now clear why I say this solution does not replace a real-time solution, nor should what it is able to collect be considered absolute and complete, given the limitations of the data in the events themselves as dictated by Microsoft.
That said, in my next article I will discuss the cost, which will further solidify why it is still worth collecting all the data that we can reach. Spoiler: the answer is that it’s cheap! Although, I’ll admit now, this is technically the most expensive collector of mine you can run, especially without any filtering. However, “the most expensive” doesn’t make it “expensive.”
The Next Steps:
See the index page for all new updates!
Log Analytics Index – Getting the Most Out of Azure (azuretothemax.net)
I will be putting the Application Usage guides on the Log Analytics Index page under the System Usage series.
Disclaimer:
The following is the disclaimer that applies to all scripts, functions, one-liners, setup examples, documentation, etc. This disclaimer supersedes any disclaimer included in any script, function, one-liner, article, post, etc.
You running this script/function or following the setup example(s) means you will not blame the author(s) if this breaks your stuff. This script/function/setup-example is provided AS IS without warranty of any kind. Author(s) disclaim all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall author(s) be held liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the script or documentation. Neither this script/function/example/documentation, nor any part of it other than those parts that are explicitly copied from others, may be republished without author(s) express written permission. Author(s) retain the right to alter this disclaimer at any time.
It is entirely up to you and/or your business to understand and evaluate the full direct and indirect consequences of using one of these examples or following this documentation.
The latest version of this disclaimer can be found at: https://azuretothemax.net/disclaimer/
