Microsoft Sentinel Live Attack Demonstration Home Lab

This is a walkthrough of how I used Microsoft Azure to create a Windows 10 virtual machine in the cloud, exposed it to the internet, and used Azure Log Analytics, Microsoft Defender for Cloud, and Microsoft Sentinel to collect and aggregate the attack data and display it on a map in Microsoft Sentinel. This project demonstrates a few different tools and resources. I use PowerShell to scan Event Viewer on the exposed VM for EventID 4625 (failed login attempts) and send that data to a log file. The PowerShell script also sends the IP address of each failed login to IPgeolocation.io via an API, so that information can later be used by Microsoft Sentinel to map where the logon attempts originated. I did this project to gain experience with SIEMs, cloud concepts and resources, APIs, and Microsoft Azure. I learned how to provision and configure resources in the cloud, how to read SIEM logs, and much more. This was a fun project, and I hope anyone reading this appreciates the work that went into it.

Utilities Used

  • Microsoft Sentinel (SIEM)
  • Log Analytics Workspaces and Workbooks
  • Microsoft Defender for Cloud
  • Virtual Machines
  • Remote Desktop
  • PowerShell
  • APIs
  • Event Viewer
  • Firewalls

Environments Used

  • Microsoft Azure
  • Windows 10 (21H2)

Program walk-through

The first thing I do is create a Microsoft Azure account; this will be the cloud environment I'll use to provision my resources. I take advantage of the $200 credit new accounts receive to do this project. The resources I'm using are not very resource heavy, so the credit can also be used for future projects. The website I'll be using for my IP geolocation data is IPgeolocation.io.

The next thing I do is start creating my virtual machine. Provisioning a VM can be a lengthy process, so it can run in the background while I move on to the next step.

At this point in the VM creation process, I need to make sure that I create a new Resource Group that all of my future resources will live under. A resource group in Azure is a logical grouping of tools, services, configurations, and more that exist under one banner so they can be created and deleted at the same time (they share the same lifespan). If I delete a resource group, everything inside it is removed, while resources outside of it still exist. It also makes resources easier to manage when they're all in one place. I named this resource group “HoneyPot_Lab” and the virtual machine “HoneyPot-VM”.
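
For reference, the same resource group could be created with the Az PowerShell module. This is a hedged sketch of the portal step; the region is my assumption.

```powershell
# Hedged sketch: create the lab's resource group with Az PowerShell.
# The region (eastus) is an assumption; any region works for this lab.
New-AzResourceGroup -Name "HoneyPot_Lab" -Location "eastus"
```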

I scroll down on that same page, and I now must choose the size of the VM I am going to provision. In the photo, I initially chose Standard_B1s, as circled in green. Later I decided to upgrade it to Standard_B2s, which gave 2 CPU cores instead of 1 and more RAM; the first one I chose was just too slow, with PowerShell crashing and the VM lagging a lot. After choosing the size of the VM, I created the admin account. I chose a unique admin name and a 30-character password made up of special characters, numbers, and a mix of lowercase and uppercase letters. Since I knew people would be trying to log into the exposed VM through brute forcing and dictionary attacks, a strong password was a necessity. The public inbound port rules determine which ports are open for connecting to the VM. At this step you can enable multiple access methods, such as SSH, but I chose to only allow RDP, which uses port 3389.
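
A hedged Az PowerShell sketch of the same VM follows; the values mirror my portal choices, while the region and the "Win10" image alias are assumptions.

```powershell
# Hedged sketch, not the portal steps I actually clicked through.
# The "Win10" image alias and the region are assumptions.
$cred = Get-Credential   # the long, complex admin credentials described above
New-AzVM -ResourceGroupName "HoneyPot_Lab" `
         -Name "HoneyPot-VM" `
         -Location "eastus" `
         -Image "Win10" `
         -Size "Standard_B2s" `
         -Credential $cred `
         -OpenPorts 3389
```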

The next step is to create a new Network Security Group (NSG). An NSG is a firewall that can create and enforce rules on inbound and outbound traffic to Azure resources. For this project, we don't want any restrictions on traffic; we want anyone and everyone to be able to communicate with the honeypot VM. There is a default inbound rule, so we'll delete that one and create a new inbound rule that allows EVERYTHING into the VM. In the Destination Port Ranges box, I entered an asterisk (*), which matches any port range, and I allowed any protocol. For Priority, I set it at 100 because the lower the number, the higher the priority; if there were a rogue rule somewhere, this rule would take precedence over it. With these rules set, all traffic is allowed into our VM. I would NEVER EVER do this in a real production environment.
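
An equivalent allow-everything rule could be added with the Az PowerShell module. This is a hedged sketch rather than my exact steps; the NSG name is an assumption based on Azure's default naming.

```powershell
# Hedged sketch: recreate the allow-all inbound rule with Az PowerShell.
# "HoneyPot-VM-nsg" is an assumed name; honeypot lab use ONLY, never production.
Get-AzNetworkSecurityGroup -Name "HoneyPot-VM-nsg" -ResourceGroupName "HoneyPot_Lab" |
    Add-AzNetworkSecurityRuleConfig -Name "DANGER_AllowAnyCustomAnyInbound" `
        -Access Allow -Protocol "*" -Direction Inbound -Priority 100 `
        -SourceAddressPrefix "*" -SourcePortRange "*" `
        -DestinationAddressPrefix "*" -DestinationPortRange "*" |
    Set-AzNetworkSecurityGroup
```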

I review, create, and deploy the VM as the last step.

While my VM is deploying, I can get started on setting up a Log Analytics Workspace. These can all be found in the Azure home dashboard, or you can search for them in the search bar. When I create the Log Analytics Workspace, I make sure to put it in my HoneyPot_Lab resource group so it gets deleted along with that resource group. I name the instance LAW-HoneyPot.
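
For reference, the equivalent Az PowerShell call would look something like this (a hedged sketch; the region is an assumption):

```powershell
# Hedged sketch: create the workspace in the lab's resource group.
New-AzOperationalInsightsWorkspace -ResourceGroupName "HoneyPot_Lab" `
                                   -Name "LAW-HoneyPot" `
                                   -Location "eastus"
```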

I now go into Microsoft Defender for Cloud. I do this because I have to enroll in some plans to be able to collect and aggregate data for Microsoft Sentinel to use later on. I also need to connect my HoneyPot-VM to Microsoft Defender so it can collect data; at this point, my VM has been created, so it can be connected to these services. These connections are called Data Connectors. In the third picture, I only turn on the plans for Foundational CSPM and Servers. I'm not running any SQL servers, so that plan doesn't need to be turned on.
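
The Servers plan toggle has an Az PowerShell equivalent along these lines (a hedged sketch, not the portal steps I actually clicked through):

```powershell
# Hedged sketch: enable the Defender for Servers plan on the subscription.
Set-AzSecurityPricing -Name "VirtualMachines" -PricingTier "Standard"
```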

Since my VM was created and deployed, I can go back into Log Analytics Workspaces and connect my VM to that service as well.

I can now create a Microsoft Sentinel resource and connect it to my VM.

Now that everything is set up in the Azure dashboard, I can go into my VM and set things up there. The first thing I need to do is get my VM's public IP address so I can Remote Desktop (RDP) into it. I go into the Virtual Machines tab in Azure and navigate to the HoneyPot VM. Highlighted is my VM's public IP.

From here I can log into my VM via Remote Desktop. I open Remote Desktop on my own PC, enter the public IP address and the credentials, and connect! Then I configure the honeypot a bit.
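
As a shortcut, Remote Desktop can also be launched from a prompt, pointed straight at the VM:

```powershell
# Open Remote Desktop aimed at the VM's public IP.
mstsc /v:74.235.173.155
```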

After I’m successfully logged into the VM via RDP, I navigate to Event Viewer. Event Viewer logs everything that goes on in a Windows system and gives each action an EventID, so logs can be browsed and filtered more easily. For this project, we are concerned specifically with EventID 4625, which is a failed logon attempt. These logs can be found in the Security log. In the pictures below, I run another instance of Remote Desktop and try to log into the VM with the wrong password. This creates a failed logon attempt, which is then recorded in Event Viewer.
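
The same events can also be pulled from PowerShell with the built-in Get-WinEvent cmdlet. This is just an illustrative sketch, not part of my exporter script:

```powershell
# Quick check: list the last 10 failed logons (EventID 4625) from the Security log.
# Run in an elevated PowerShell session on the VM.
$events = Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625 } -MaxEvents 10
foreach ($event in $events) {
    # The attacker's source IP lives in the event's XML payload ('IpAddress' field).
    $xml = [xml]$event.ToXml()
    $ip  = ($xml.Event.EventData.Data | Where-Object { $_.Name -eq 'IpAddress' }).'#text'
    "{0}  failed logon from {1}" -f $event.TimeCreated, $ip
}
```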

The next step is to ping the VM from my own computer. I do this to see if I can make a direct connection to the VM and whether it is currently discoverable. It is not, because the VM has Windows Defender Firewall activated, and the firewall is blocking ICMP Echo Requests, making the VM undiscoverable. I know this because I pinged the VM's public IP address, 74.235.173.155, and saw Request timed out a few times. The firewall is dropping the ICMP request packets.
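
The test itself is just a ping from my own machine:

```powershell
# From my own PC. For now, every echo request returns "Request timed out."
ping 74.235.173.155
```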

To make sure that everyone on the internet can discover my VM, I need to disable the Windows firewall. I do this by going into the Windows search bar and searching for wf.msc. Once inside Windows Defender Firewall, I begin disabling everything. I then open CMD on my own computer and try to ping the VM again. This time it receives replies from the VM, because the firewall is no longer blocking ICMP requests.
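
The same thing can be done in one line from an elevated PowerShell prompt on the VM (equivalent to clicking through wf.msc, which is what I actually did):

```powershell
# Disable all three Windows Firewall profiles. Honeypot lab ONLY.
Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False
```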

Now that my VM is exposed to the internet, I can set up my PowerShell script, the heart of this project. The script parses Event Viewer, looking specifically for EventID 4625, and sends the IP address from each failed logon attempt to IPgeolocation.io via an API. I did this because the IP address in Event Viewer does not carry any geographic information; it was easier to send the data to a service dedicated to pulling that information out than to build it from scratch. The script then receives the geographic data and saves it as a string in a log file named failed_rdp.log. I use this log file later in the project to map the attacks live in Microsoft Sentinel.
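
The full script was written before the start of this project and isn't reproduced verbatim here; below is a hedged, stripped-down sketch of its core loop. The API key is a placeholder, the log-line layout is reconstructed from the custom fields extracted later in this lab, and my real script also avoids re-writing events it has already processed, which this sketch omits for brevity.

```powershell
# Hedged sketch of the exporter's core loop, not the exact script I ran.
$API_KEY  = "<your-ipgeolocation.io-api-key>"   # placeholder
$LOG_FILE = "C:\ProgramData\failed_rdp.log"

while ($true) {
    # Pull recent failed logons (EventID 4625) from the Security log.
    $events = Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625 } `
                           -MaxEvents 50 -ErrorAction SilentlyContinue
    foreach ($event in $events) {
        $xml  = [xml]$event.ToXml()
        $data = $xml.Event.EventData.Data
        $ip   = ($data | Where-Object { $_.Name -eq 'IpAddress' }).'#text'
        $user = ($data | Where-Object { $_.Name -eq 'TargetUserName' }).'#text'
        if ($ip -and $ip -ne '-') {
            # Ask ipgeolocation.io where the attacker's address is located.
            $geo = Invoke-RestMethod -Uri "https://api.ipgeolocation.io/ipgeo?apiKey=$API_KEY&ip=$ip"
            # Flatten everything into one parseable line for Log Analytics.
            $line = "latitude:$($geo.latitude),longitude:$($geo.longitude)," +
                    "destinationhost:$env:COMPUTERNAME,username:$user,sourcehost:$ip," +
                    "state:$($geo.state_prov),country:$($geo.country_name)," +
                    "label:$($geo.country_name) - $ip,timestamp:$($event.TimeCreated)"
            $line | Out-File -FilePath $LOG_FILE -Append -Encoding utf8
        }
    }
    Start-Sleep -Milliseconds 300   # polling interval; discussed again at the end
}
```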

After I open PowerShell, I paste my script, which was written before the start of this project, and save it to the desktop as Log_Exporter. At this point, you should make a profile on IPgeolocation.io to get your API key, which you'll paste into the PowerShell script. You get 1,000 free calls, but they can go fairly quickly, so I recommend going back into the Windows Firewall settings and turning everything back ON until you're done setting up your Log Analytics custom fields later in the lab. After those are configured, you can turn the firewall off again.

I run my Log_Exporter script. To make it easier to read, I made the script output in pink and black. The API key you see was changed after I finished the project. In the first photo, you can see that my script is working just fine; the output is the first failed login I performed earlier. The second photo shows how the data is saved into the failed_rdp log file in string format. I included some sample data in this file because it will be needed later to train the AI in Log Analytics and Microsoft Sentinel. More data means more precision.

At this point, to test whether the PowerShell script is working, I fail another login attempt. As you can see, someone had already found my VM and started trying to brute force it. This person was in Tunisia. They found it so fast it was a bit annoying. I could have blocked their IP or re-enabled the firewall until I was completely finished with my setup, but this data was perfect for training the AI in Azure, so I let it go at the time. In hindsight, it was a bad move: the free IPgeolocation.io plan only allows 1,000 API calls, and this person in Tunisia hit that limit very quickly, nearly ruining my project. I had to pay $15 for an extra 150k API calls to save it.

Now that I know my PowerShell script is working as it should, I head over to Log Analytics to create a custom log that brings my failed_rdp log into the workspace. In Log Analytics I navigate to my VM and create a Legacy Custom Log. It asks for a sample log, which lives inside the VM. I can't download the log file from the VM to my own computer directly, so I open the log file inside the VM, copy the contents, open Notepad on my own computer, paste the contents in, and save the file to my desktop. From there I can import it into Log Analytics. This sample data will be used to train the field extraction.

Next, it asks for the collection path. The collection path is where the log lives on the VM, so Log Analytics knows the path it can take to reach the log file. The path to that file is C:\ProgramData\failed_rdp.log; if this path were wrong, Log Analytics would not be able to collect the log information. Next, we have to name our custom log. I named it FAILED_RDP_WITH_GEO, and the _CL (Custom Log) suffix is automatically appended. When querying the database later, this will be the name of the table. We then create the custom log.

The creation itself is instant, but it takes a while for the data to sync from the VM to Log Analytics. Meanwhile, I decided to query the Event Viewer data, which should have already synced; you can see in picture 1 that it is indeed showing all the logs. After a little while, I queried the newly created FAILED_RDP_WITH_GEO_CL custom log, and it showed data too, meaning the VM and Log Analytics are synced and sending/receiving data.
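
The sanity checks were simple queries along these lines, written in KQL, the query language Log Analytics uses (hedged reconstructions, not copies of my screenshots):

```kql
// Security events forwarded from the VM's Event Viewer
SecurityEvent
| take 10

// The new custom log table, once it has synced
FAILED_RDP_WITH_GEO_CL
| take 10
```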

Now I have to go in and extract the fields my log uses, which will allow me to use those fields in Microsoft Sentinel later. I right-click a failed RDP login log that has all the raw data from my PowerShell script, highlight the data I want, and name the field that data will go in. Once that extraction happens, the Log Analytics AI looks at all of my other sample data and the actual logs that were generated and sees if it can pull the correct data. This is where I have to correct any errors the AI makes by re-highlighting the correct data point it needs to look for. To do this, I scroll through the list (highlighted in blue in picture 4), right-click any entry that is wrong, and re-highlight the correct information I want it to pull. The custom fields to create at this stage are: latitude, longitude, destinationhost, username, sourcehost, state, country, label, timestamp.

I waited a little while to see if the fields would populate properly. They all do except sourcehost_CL, and I couldn't figure out why. I deleted and re-extracted that field multiple times, but no matter what I did it would not populate. I can't use an unpopulated field in my Sentinel live map, so in the end I deleted it and didn't use that data point at all.

The next step is to set up the geomap that will pinpoint and map out where the attacks (failed login attempts) are coming from. I do this by navigating to Microsoft Sentinel. In the first picture, we can see that the SIEM has been collecting and categorizing data properly. I did not set any alerts for this project, but it is certainly possible, maybe for a future video. We can see nearly 10k events, 6.9k of them security events coming from Event Viewer on the VM, with 2.3k failed RDP attempts. I hadn't even finished setting up this project, but the person from Tunisia was hard at work trying to brute force their way into my VM. Good luck ha ha ha!

Moving on: to create the map I want, I need to create a new workbook in Sentinel. After clicking into Workbooks, there are some default graphs and widgets in there. I delete those and then get started on creating a new workbook.

To create the map, I need to add a query. Remember, I'm querying the data and the fields from Log Analytics, pointing Microsoft Sentinel at the dataset I want it to use. In the query, I specifically exclude (!=) the data points containing “samplehost”, since those aren't real attacks and I don't want them plotted on the map.
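
The query I used was along these lines (a hedged KQL reconstruction; the summarized columns follow the custom fields extracted earlier, minus the sourcehost field I had to drop):

```kql
// Count events per location, skipping the sample data I pasted in earlier.
FAILED_RDP_WITH_GEO_CL
| where destinationhost_CL != "samplehost"
| summarize event_count = count()
    by latitude_CL, longitude_CL, country_CL, label_CL, destinationhost_CL
```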

After querying the data points I wanted, I have to choose how to express/visualize them. In this case, I want them visualized as a map! I choose the map setting and then configure the map to plot the attacks by latitude/longitude. I could do it by country, but some of the incoming attacks did not include one. I changed the metric settings so the bubble size scales with event count: the more events, the bigger the bubble. I apply the settings and my map is done!

After a few hours, right before I decided to stop the project, you can see that there were a total of 10,529 attacks, or failed login attempts: 20 from the USA (which was me testing the PowerShell script), 9 from Cambodia, and a whopping 10.5k from Tunisia. They were using automated brute-forcing software to try thousands of different username and password combinations. There were even more attempts, but my PowerShell script had to be stopped and started multiple times. This is why it's important to use strong passwords and uncommon usernames! The second picture shows the number of API calls I made that day; not all of them are shown because I had to upgrade the number of calls I could make.

After the project was finished, I deleted the resource group I created for it. Left alone, it would eat up the $200 credit I need for future projects. This is exactly why I put everything under one resource group: deleting it removes every resource I created in one step, so nothing is left behind.

Ran the lab again.

I decided to do the lab again. I wanted to give attackers more time to attack the exposed VM so more locations could be represented on the world map. My decision paid off: this time a more diverse group of threat actors attacked my VM, which resulted in a beautiful world map.

I noticed an issue while running this lab a second time. Periodically my PowerShell script would stop updating: it showed as still running, but no new entries would be generated for 5–15 minutes. That was strange, because I knew for a fact that many different countries were trying to brute-force their way into the VM, so there should have been a steady stream of new entries, especially since I had set the polling interval in the script to 300 milliseconds, down from 1 second in the earlier lab. I decided to investigate. The first step was to compare the number of security events with EventID 4625 against the number of FAILED_RDP_CUSTOM_LOG records generated. I did this by writing queries in a Log Analytics workbook.

After writing my first query, we can see a total of 9,282 EventID 4625 security events, meaning threat actors attempted to log into my VM 9,282 times.
The second query returned a total of 3,892 records, meaning my custom log captured only 3,892 of those login attempts. Quite a long way from 9,282.
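
The two queries were along these lines (hedged KQL reconstructions; the custom log table name is assumed from this run's naming):

```kql
// How many failed logons did the VM itself record?
SecurityEvent
| where EventID == 4625
| count

// How many of those made it into the custom log?
FAILED_RDP_CUSTOM_LOG_CL
| count
```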

Okay, there is a discrepancy here. Both of those record counts should match, or at least be very close to each other. Next, I looked at my API usage for the day: 3,541 requests, much closer to my FAILED_RDP_CUSTOM_LOG count. So now I know it isn't my VM that is failing to send the data to my custom log.

Lastly, I went into Microsoft Sentinel to see what the dashboard was showing. My PowerShell script was still running in the background, so the numbers aren't an exact match (these screenshots were taken a few minutes apart), but again it shows a number (4,100) that is much closer to the FAILED_RDP_CUSTOM_LOG query.

Conclusions

What I suspect is happening is that Windows Event Viewer is working just fine: it is properly logging ALL login attempts and showing the true number of attempts. The weak link is the PowerShell script. The reason it paused for extended amounts of time is that it was being overwhelmed. I set the polling interval to 300 milliseconds, but login attempts from several different attackers were arriving much faster than one per 300 milliseconds, so my PowerShell script became a bottleneck. If it could record each attempt as it happened, it would show the actual number of attempts and be in line with Event Viewer. Instead, it could only record one attempt per 300-millisecond cycle, so some login attempts were lost or backlogged (I doubt they were backlogged), which led to the discrepancy we are seeing. Perhaps in the future I could shorten the polling interval even further to let more login attempts through.
