Python: Virtual Environments

Python's virtual environment module (venv) limits dependency and version conflicts by:

  • Isolating the Python environment
  • Separating dependencies on a per-project basis

Usage

# Create a venv:
python3 -m venv .venv

# Activate the venv:
source .venv/bin/activate

# On Windows
.\.venv\Scripts\activate

# Deactivate
deactivate

Workflow

  1. Create a venv using the module (-m) argument or via your IDE (VSCode -> Ctrl + Shift + P -> Python: Create Environment)
  2. Activate it using your OS specific way
  3. Add .venv (venv name in the example above) to .gitignore
  4. Develop python code, install packages: pip install <package>
  5. Once done, freeze requirements: pip freeze > requirements.txt
  6. Recreate exact environment on another host: pip install -r requirements.txt
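
Putting the workflow together, a minimal end-to-end example could look like this (requests is just an example package):

# Create and activate the venv (see the Windows activation command above if applicable)
python3 -m venv .venv
source .venv/bin/activate

# Install what the project needs, then freeze the versions
pip install requests
pip freeze > requirements.txt

# On another host: recreate the exact environment
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt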

Mucho Importante

  1. Always activate the venv when working on the project
  2. .gitignore the venv directory; other users will build it locally
  3. Run pip freeze after installing new dependencies to keep requirements.txt up to date

Always look on the bright side of isolation ✅

Happy coding

PowerShell: Restore a DNS zone in Active Directory

Beware of the copy-paste trap! Always test public code in a safe, isolated environment before running it in production.

The fast version

Did someone just remove a very important AD-integrated DNS forward lookup zone for you?

Hang tight, and I'll show you how to get it back.

  1. Using Domain Admin rights, open an elevated PowerShell session with the DnsServer and ActiveDirectory modules imported
  2. Open Notepad and save the script below as Restore-ADDNSZone.ps1 at any location
  3. Run .\Restore-ADDNSZone.ps1 -ZoneName 'myzone.org'
  4. If the zone was just deleted and the DC has access to the deleted zone objects, your zone will be restored. Verify by looking in DNS Manager.

If you’re not in a hurry, I recommend that you read what the script does first and test it in a lab.

The output should look similar to this

DNS Zone restore the simple way

I wrote a simple script to demonstrate how a DNS zone restore can be achieved using the Restore-ADObject cmdlet:

  • Importing Required Modules: Loads ActiveDirectory and DnsServer modules.
  • Setting Parameters: Allows specifying a DNS zone name, defaulting to “ehmiizblog”.
  • Searching for Deleted Zone: Looks for the deleted DNS zone in known AD locations.
  • Retrieving Deleted Records: Fetches resource records for the deleted zone.
  • Restoring Zone & Records: Restores the DNS zone and its records to their original names.
  • Restarting DNS Service: Restarts the DNS service to apply changes.
  • Output Messages: Provides feedback on the restoration progress and completion.
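
The full script isn't reproduced here, but a minimal sketch of the same idea could look like the following. It assumes Domain Admin rights, the ActiveDirectory and DnsServer modules, and that the zone lived in the DomainDnsZones partition; treat it as illustrative rather than the exact script:

# Minimal sketch: restore a recently deleted AD-integrated zone and its records
param (
    [Parameter(Mandatory)]
    [string]$ZoneName
)

Import-Module ActiveDirectory, DnsServer

$DomainDN   = (Get-ADDomain).DistinguishedName
$SearchBase = "DC=DomainDnsZones,$DomainDN"

# Deleted objects keep their last known RDN, so we can find the zone container by name
$DeletedZone = Get-ADObject -IncludeDeletedObjects -SearchBase $SearchBase -Filter "msDS-LastKnownRDN -eq '$ZoneName'"

if (-not $DeletedZone) {
    Write-Error "No deleted zone named '$ZoneName' was found." -ErrorAction Stop
}

# Restore the zone object first, then every dnsNode record that belonged to it
Restore-ADObject -Identity $DeletedZone.ObjectGUID -Verbose

Get-ADObject -IncludeDeletedObjects -SearchBase $SearchBase -Filter "objectClass -eq 'dnsNode'" -Properties 'msDS-LastKnownParent' |
    Where-Object { $_.'msDS-LastKnownParent' -like "*$ZoneName*" } |
    ForEach-Object { Restore-ADObject -Identity $_.ObjectGUID -Verbose }

# Restart the DNS Server service so the restored zone loads
Restart-Service -Name DNS
Write-Output "Restore of '$ZoneName' attempted - verify in DNS Manager."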

Didn’t work, what now

If you have access to a backup of the DNS server, you can export a .dns file and rebuild the zone on the production server.

The steps below will vary largely on your situation, but it might give you an idea of the process:

Side note: the “Above explained” bullets add further explanation to the command directly above them.

  1. Connect to the backup DC
  2. Export the zone using dnscmd: dnscmd /ZoneExport zone.org zone.org_backup.dns
  3. Attach a disk or storage device to the DC, mount it, and move the newly created zone data file zone.org_backup.dns to it
  4. Attach the disk to the PDC
  5. Copy the file to system32\dns
  6. Create the new zone using dnscmd:
    • dnscmd SERVER /zoneadd zone.org /primary /file zone.org_backup.dns
    • Above explained: Adds a zone to the DNS server.
    • dnscmd SERVER /zonereload zone.org
    • Above explained: Reloads the zone information from its source file.
    • This creates a non-AD-integrated DNS zone with the resource records from the export
  7. Convert the zone from non-AD-integrated to AD-integrated:
    • dnscmd SERVER /zoneresettype zone.org /dsprimary
    • Above explained: Converts the zone to an Active Directory-integrated zone.


Happy restoring

Linux on GU605MI: Sound, Keyboard, Brightness & asusctl

Disclaimer: Please note that while these steps have been provided to assist you, I cannot guarantee that they will work flawlessly in every scenario. Always proceed with caution and make sure to back up your data before making any significant changes to your system.

Written on 2024-05-08 (Note: Information may become outdated soon, and this was just my approach)

If you’re a proud owner of the 2024 Asus Rog Zephyrus G16 (GU605MI) and running Fedora 40+, ensuring smooth functionality of essential features like sound, keyboard, screen brightness, and asusctl might require a bit (hehe) of tweaking.

Here’s a comprehensive guide, or really the steps I took, to get everything up and running.

Ensure Kernel Compatibility

First things first, ensure that your kernel version is at least 6.9.*. If you’re already on 6.9 or newer, skip this step.

Kernel 6.9 includes audio improvements for Intel’s new 14th-gen CPUs, so it’s mandatory for the GU605.

You might want to research on how to perform this in a safer way.

I trust in Fedora and the Copr build system, so I just executed the following:

sudo dnf copr enable @kernel-vanilla/mainline
sudo dnf update -y
# Wait for transactions to complete (may take 5+ minutes)
systemctl reboot

Follow the Fedora Workstation Guide

Refer to the Fedora Workstation guide provided by Asus: Fedora Guide. The steps I took myself were the following:

# Updates the system
sudo dnf update -y
sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

# Installs the nvidia driver
sudo dnf update -y
sudo dnf install kernel-devel
sudo dnf install akmod-nvidia xorg-x11-drv-nvidia-cuda

# Enable hibernate
sudo systemctl enable nvidia-hibernate.service nvidia-suspend.service nvidia-resume.service nvidia-powerd.service

# Install asusctl and supergfxctl, used to interact with the system
# Installs Rog Control gui (to interact with the command line interfaces graphically)
sudo dnf copr enable lukenukem/asus-linux
sudo dnf update

sudo dnf install asusctl supergfxctl
sudo dnf update --refresh
sudo systemctl enable supergfxd.service

sudo dnf install asusctl-rog-gui

Install Firmware, needed as of 2024-05-08

In the future the firmware might be added to the Linux kernel; if the sound works great after you’ve updated the system, skip this step.

The sound will not work without the correct firmware. We can clone the correct firmware and copy it over to our system using the following lines:

git clone https://gitlab.com/kernel-firmware/linux-firmware.git
cd linux-firmware
sudo dnf install rdfind
make install DESTDIR=installdir
sudo cp -r installdir/lib/firmware/cirrus /lib/firmware
systemctl reboot

Fix Screen Brightness

The screen’s brightness works out of the box while on the dGPU.

However, that comes with certain drawbacks, like flickering Electron applications and increased power consumption. The steps below get the screen brightness controls to work in “Hybrid” and “Integrated” mode (while the display is being run by the iGPU).

Open the grub configuration file:

sudo nano /etc/default/grub

Add the following string at the end of the line GRUB_CMDLINE_LINUX=:

quiet splash nvidia-drm.modeset=1 i915.enable_dpcd_backlight=1 nvidia.NVreg_EnableBacklightHandler=0 nvidia.NVreg_RegistryDwords=EnableBrightnessControl=0

After editing, the line should look like this:

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.driver.blacklist=nouveau modprobe.blacklist=nouveau rhgb quiet rd.driver.blacklist=nouveau modprobe.blacklist=nouveau acpi_backlight=native quiet splash nvidia-drm.modeset=1 i915.enable_dpcd_backlight=1 nvidia.NVreg_EnableBacklightHandler=0 nvidia.NVreg_RegistryDwords=EnableBrightnessControl=0"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true

Update the grub configuration and reboot:

sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
systemctl reboot

With these steps, I was able to get a somewhat functional GU605MI Fedora system. If you encounter any issues, refer to the respective documentation or seek further assistance from the Asus-Linux community.

Happy computing!

PowerShell Guide: Script as a Windows Service

Red or blue pill

If you are in the same rabbit hole as I was, setting up a Windows service from any form of looping script, there are two pills you can choose from:

  1. Red Pill: Create a program that abides by the law of the fearsome Service Control Manager.

  2. Blue Pill: Write a PowerShell script, 8 lines of XML, and download WinSW.exe

WinSW describes itself as following:

A wrapper executable that can run any executable as a Windows service, in a permissive license.

Naturally as someone who enjoys coding with hand grenades, I took the Blue Pill and here’s how that story went:

The Blue Pill

  1. Create a new working directory and save it to a variable
$DirParams = @{
    ItemType    = 'Directory'
    Name        = "PowerShell_Service"
    OutVariable = 'WorkingDirectory'
}
New-Item @DirParams
  2. Download the latest WinSW-x64.exe to the working directory
# Get the latest WinSW 64-bit executable browser download url
$ExecutableName = 'WinSW-x64.exe'
$LatestURL = Invoke-RestMethod 'https://api.github.com/repos/winsw/winsw/releases/latest'
$LatestDownloadURL = ($LatestURL.assets | Where-Object {$_.Name -eq $ExecutableName}).browser_download_url
$FinalPath = "$($WorkingDirectory.FullName)\$ExecutableName"

# Download it to the newly created working directory
Invoke-WebRequest -Uri $LatestDownloadURL -Outfile $FinalPath -Verbose
  3. Create the PowerShell script which the service runs

This loop checks for notepad every 5 sec and kills it if it finds it

while ($true) {
    $notepad = Get-Process notepad -ErrorAction SilentlyContinue
    if ($notepad) {
        $notepad.Kill()
    }
    Start-Sleep -Seconds 5
}
  4. Construct the .xml file

Just edit the id, name, description and startarguments

<service>
  <id>PowerShellService</id>
  <name>PowerShellService</name>
  <description>This service runs a custom PowerShell script.</description>
  <executable>C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe</executable>
  <startarguments>-NoLogo -file C:\Path\To\Script\Invoke-PowerShellServiceScript.ps1</startarguments>
  <log mode="roll"></log>
</service>

Save the .xml file; in this example I saved it as PowerShell_Service.xml

# if not already, step into the workingdirectory
cd $WorkingDirectory.FullName

# Install the service
.\WinSW-x64.exe install .\PowerShell_Service.xml

# Make sure powershell.exe's executionpolicy is Bypass
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope LocalMachine

# As an administrator
Get-Service PowerShellService | Start-Service
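
If you later need to remove the service again, WinSW also has an uninstall command; a quick sketch (run elevated from the working directory):

# Stop the service and let WinSW remove it again
Get-Service PowerShellService | Stop-Service
.\WinSW-x64.exe uninstall .\PowerShell_Service.xml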

Conclusion

Running a PowerShell script as a service on any Windows machine isn’t that complicated thanks to WinSW. It’s a great choice if you don’t want to get deeper into the process of developing Windows services (it’s kind of a fun rabbit hole though).

I recommend reading the WinSW docs.

Some things to consider:

  • The service will run PowerShell 5.1 as System
  • Meaning the execution policy must support that use case (Bypass at LocalMachine scope will do)
  • The script in this example is just a demo of a loop, but anything you can think of that loops will do here
  • Starting the Service requires elevated rights in this example
  • If you get the notorious The service did not respond to the start or control request in a timely fashion, you have my condolences (this is a very general error message with no clear single answer)

Good luck have fun, happy coding

/Emil

How to Restore a File from Git

Git is a powerful and popular version control system, sometimes a bit too powerful.

Depending on how your day went, you may want to restore a file from git to a previous state, either because you made an oopsie, want to undo some changes, or need to compare different versions.

Let’s go through four common scenarios on how to do just that!

Scenario 1: Saved Locally on the Local Git Repository

The simplest scenario is when you have saved your file locally on your local git repository, but have not staged or committed it yet.

In this case, you can use the git restore command to discard the changes in your working directory and restore the file to the last committed state.

For example, if you want to restore a file named index.html, you can run the following command:

git restore index.html

This will overwrite the index.html file in your working directory with the version from the HEAD commit, which is the latest commit on your current branch.

You can also use a dot (.) instead of the file name to restore all the files in your working directory.

git restore .

Scenario 2: Saved Locally and Staged Locally

The next scenario is when you have saved your file locally and staged it locally, but have not committed it yet.

In this case, you can use the git restore --staged command to unstage the file and remove it from the staging area.

For example, if you want to unstage a file named index.html, you can run the following command:

git restore --staged index.html

This will remove the index.html file from the staging area and leave it in your working directory with the changes intact. You can then use the git restore command as in the previous scenario to discard the changes in your working directory and restore the file to the last committed state. Alternatively, you can use this command:

git restore --source=HEAD --staged --worktree

To unstage and restore the file in one step.

For example, if you want to unstage and restore a file named index.html, you can run the following command:

git restore --source=HEAD --staged --worktree index.html

This will remove the index.html file from the staging area and overwrite it in your working directory with the version from the HEAD commit. You can also use a dot (.) instead of the file name to unstage and restore all the files in your staging area and working directory.

Scenario 3: Saved Locally, Staged Locally and Committed

The third scenario is when you have saved your file locally, staged it locally and committed it, but have not pushed it to the remote repository yet. In this case, you can use the git reset --hard command to reset your local branch to the previous commit and discard all the changes in your staging area and working directory. For example, if you want to reset your local branch to the previous commit, you can run the following command:

git reset --hard HEAD~1

This will reset your local branch to the commit before the HEAD commit, which is the latest commit on your current branch.

This will also discard all the changes in your staging area and working directory, including the file you want to restore.

You can then use the git checkout command to check out the file from the previous commit and restore it to your working directory.

For example, if you want to check out and restore a file named index.html from the previous commit, you can run the following command:

git checkout HEAD~1 index.html

This will check out the index.html file from the commit before the HEAD commit and overwrite it in your working directory with the version from that commit.

You can also use a dot (.) here as well, to check out and restore all the files from the previous commit.

Scenario 4: Saved Locally, Staged Locally, Committed and Pushed to Remote Repository

The fourth and final scenario is when you have saved your file locally, staged it locally, committed it and pushed it to the remote repository.

In this case, you can use the git revert command to create a new commit that reverses the changes in the previous commit and restores the file to the state before that commit.

For example, if you want to revert the previous commit and restore a file named index.html to the state before that commit, you can run the following command:

git revert HEAD

This will create a new commit that reverses the changes in the HEAD commit, which is the latest commit on your current branch.

This will also restore the index.html file in your working directory and staging area to the version from the commit before the HEAD commit.

You can then push the new commit to the remote repository to update it with the reverted changes.

You can also use the --no-commit option to revert the changes without creating a new commit, and then use the git restore or git checkout commands as in the previous scenarios to restore the file to the desired state.
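
For example, to revert the latest commit without creating the commit right away:

# Revert the latest commit but keep the resulting changes staged instead of committing
git revert --no-commit HEAD

# Review what the revert staged, then commit when you're satisfied
git status
git commit -m "Revert changes from the last commit"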

To sum it up

We’ve demonstrated how to restore a file from git in four different scenarios, depending on how far you have progressed in the git workflow.

We have used the git restore, git reset, git checkout and git revert commands to discard, unstage, check out and revert changes in your files and restore them to their previous states.

I hope this post has been helpful and maybe even saved some headache!

If you have any questions or feedback, please feel free to DM me on Twitter or LinkedIn.

Happy coding

PowerShell 7.4: Install-Module is evolving.

Where does Install-Module come from?

Install-Module has evolved.

Have you ever asked yourself which module the Install-Module cmdlet comes from? It’s kind of a meta question, check for yourself! Spoiler a bit further down for anyone reading on mobile.

Get-Command -Name Install-Module
    CommandType     Name                                               Version    Source
    -----------     ----                                               -------    ------
    Function        Install-Module                                     2.2.5      PowerShellGet

New sheriff in town 🤠

With the GA release of PowerShell 7.4, a rewrite of PowerShellGet is included (hint hint: renamed to PSResourceGet), and boy is it fast.

I installed PowerShell 7.4 on two different Ubuntu 20.04 WSL distros, and I installed a few modules to benchmark the old trusty Install-Module and the new sheriff in town: Install-PSResource.

The results speak for themselves. PSResourceGet is much faster than PowerShellGet V2.
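
If you want to reproduce a rough comparison yourself, something along these lines works (PSScriptAnalyzer is just an example module; exact timings will vary):

# Time the old cmdlet against the new one installing the same module
Measure-Command { Install-Module -Name PSScriptAnalyzer -Force }
Measure-Command { Install-PSResource -Name PSScriptAnalyzer -TrustRepository -Reinstall }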

Speaking of PowerShellGet V2, there’s still a future for this module, but instead of new APIs and features, V3 (currently in pre-release) has been converted to a compatibility layer on top of the new and faster PSResourceGet.

Install-Module -Name PowerShellGet -AllowPrerelease -Force

The parameters of the new PSResourceGet are not supported when calling the older cmdlets, and there’s no official documentation out for PowerShellGet V3 yet, so to me this seems aimed purely at pipeline scenarios where you have code in place that can just use the new functionality. It seems to have less to do with interactive use. Here’s some further reading on the subject.

Let’s take PSResourceGet for a spin

PSResourceGet seems to me like an awesome new module based on its speed, so I had better get used to its new syntax because this will be my new main driver for sure.

Get-Command -Module Microsoft.PowerShell.PSResourceGet | sort Name
CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Cmdlet          Find-PSResource                                    1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Get-InstalledPSResource                            1.0.1      Microsoft.PowerShell.PSResourceGet
Alias           Get-PSResource                                     1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Get-PSResourceRepository                           1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Get-PSScriptFileInfo                               1.0.1      Microsoft.PowerShell.PSResourceGet
Function        Import-PSGetRepository                             1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Install-PSResource                                 1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          New-PSScriptFileInfo                               1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Publish-PSResource                                 1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Register-PSResourceRepository                      1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Save-PSResource                                    1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Set-PSResourceRepository                           1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Test-PSScriptFileInfo                              1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Uninstall-PSResource                               1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Unregister-PSResourceRepository                    1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Update-PSModuleManifest                            1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Update-PSResource                                  1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Update-PSScriptFileInfo                            1.0.1      Microsoft.PowerShell.PSResourceGet

What’s installed?

It’s not only installing modules that’s faster, it’s also very fast at getting installed modules.

Getting installed modules can be very time-consuming on shared systems, especially where you have the Az modules installed, so this is a great performance win overall.
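
The old and new cmdlets for listing what’s installed, side by side:

# PowerShellGet v2
Get-InstalledModule

# PSResourceGet
Get-InstalledPSResource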

Find new stuff

Finding new modules and scripts is also a crucial part of PowerShell, especially for the community members. I would argue that with PSResourceGet going GA, PowerShell 7.4 is probably one of the most significant performance boosters of PowerShell (in its open-source life).

As you can see, finding modules is way faster, and here we’re even using two-way wildcards.
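
The kind of lookup behind that comparison looks roughly like this (the wildcard pattern is just an example):

# Two-way wildcard search, old and new
Find-Module -Name '*secretmanagement*'
Find-PSResource -Name '*secretmanagement*'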

What about publishing?

Let’s try to use the new Publish-PSResource. I have a minor bug fix to do on my project linuxinfo and will edit my publishing script so that a GitHub Action will publish it for me using Publish-PSResource.

I start by editing my very simple publishing script. Since I don’t know if the GitHub-hosted runner will have PSResourceGet installed yet, I need to validate that the cmdlet is present before calling it. If it’s not, I simply install it using PowerShellGet v2.

This should do it!

Hmm, seems like I messed something up. The GitHub-hosted runner can’t find Publish-PSResource, so it’s trying to install PSResourceGet using Install-Module. However, I misspelled the module name, as you can see if you look closely at line 7. It should be Microsoft.PowerShell.PSResourceGet; let’s fix that and re-run the workflow.
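
The corrected check boils down to something like this (a hedged sketch, not the exact workflow file; the module path and API key variable name are assumptions):

# Use Publish-PSResource if it's available on the runner; otherwise install PSResourceGet with PowerShellGet v2 first
if (-not (Get-Command -Name Publish-PSResource -ErrorAction SilentlyContinue)) {
    Install-Module -Name Microsoft.PowerShell.PSResourceGet -Force
}

# Publish the module to the PowerShell Gallery (path and secret name are placeholders)
Publish-PSResource -Path ./linuxinfo -ApiKey $env:PSGALLERY_API_KEY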

Looks way better now!

And there’s a new version of linuxinfo with a minor bug fix. The Publish-PSResource migration was very straightforward.

Conclusion

In this post, we learned about the origin of Install-Module, being PowerShellGet v2, and its successor Install-PSResource, from PSResourceGet. We took some cmdlets for a spin and realized that the new version is easily twice as fast, in some cases even three times faster.

We covered PowerShellGet V3 being a compatibility layer and some caveats with it.

We looked at migrating a simple publishing script from Publish-Module to Publish-PSResource.

I recommend poking around with the new PSResourceGet cmdlets and reading the official documentation, and for interactive use, not relying on any compatibility layer; save that for the edge cases.

Thanks for reading this far, hope you found it helpful. PM me on Twitter for any feedback.

Happy coding

/Emil

PowerShell Solution: Use Send-MgUserMail in Azure Automation

Send-MgUserMail

The following solution example covers how to set up and use the Send-MgUserMail cmdlet in Azure Automation to send an email with a subject, message body and an attached zip file.

Pre-Requirements

Authentication & Access

This solution will use a client secret and an encrypted Automation variable.

The alternative to using a client secret would be to use a certificate, and I would recommend doing so since it’s a more secure solution in general.

Using a client secret is fine if you have good control over who has access to your App Registration and your Automation account.

This step-by-step guide will set up the app registration and the secret, and finally add the secret to the Automation account’s shared resources as a variable.


NOTE

If you’re looking to be more fine-grained in your access delegation, and want to skip the whole secret management aspect, be sure to look into Managed Identities, specifically User-Assigned. Thanks Dennis!


  1. In the Azure Portal -> App registrations
  2. New Registration -> Name the app to something descriptive like Runbook name or similar
  3. Register
  4. API permissions -> Add permissions -> Microsoft Graph -> Application permission
  5. Search for Mail.Send, check it, Add permissions, Grant admin consent for ORG
  6. Navigate to Certificates & Secrets -> Client secrets -> new client secret
  7. Fill in description and Expires after your needs
  8. Navigate to your automation account in Azure -> Variables -> Add variable -> Copy-paste your secret into this variable, select Encrypted, Create

The authentication will be done in the Azure Automation runbook, and the final code will look similar to this:

# Connects to graph as your new app using encrypted secret

# Look in your App Registration -> Application (client) ID
$ClientId = "o2jvskg2-[notreal]-1246-820s-2621786s35e5" 

# Look in Azure -> Microsoft Entra ID -> Overview -> Tenant ID
$TenantId = "626226122-[notreal]-62ww-5053-56e32ss89sa5"

# Variable Name from step 8 (Authentication)
$ClientSecretCredential = (Get-AutomationVariable -Name 'From Step 8')

$Body = @{
    Grant_Type    = "client_credentials"
    Scope         = "https://graph.microsoft.com/.default"
    Client_Id     = $ClientId
    Client_Secret = $ClientSecretCredential
}

$RestMethodParams = @{
    Uri = "https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token"
    Method = "POST"
    Body = $Body
}

$Connection = Invoke-RestMethod @RestMethodParams
$Token = $Connection.access_token

Connect-MgGraph -AccessToken $Token

Note that Get-AutomationVariable is a cmdlet that is only available in the Azure Automation sandbox environment. It’s also the only way of getting the encrypted variable.

Get-AutomationVariable is an internal cmdlet from the module Orchestrator.AssetManagement.Cmdlets which is a part of Azure Automation, so running this outside of a runbook will fail.

Sending the mail

Now that we have authentication and access out of the way, we can start developing a function that we will use in the runbook to send an email. My example below requires an attachment. I’m using this to gather data, compress it, and attach the .zip file in the mail function.

Customize the function to your specific needs.

function Send-AutomatedEmail {
    param(
        [Parameter (Mandatory = $false)]
        [string]$From,
        [Parameter (Mandatory = $true)]
        [string]$Subject,
        [Parameter (Mandatory = $true)]
        $To,
        [Parameter (Mandatory = $true)]
        [string]$Body,
        [Parameter (Mandatory = $true)]
        [string]$AttachmentPath
    )

    if ([string]::IsNullOrEmpty( $From )) {
        $From = "noreply@contoso.com"
    }

    # I'm defining the parameters in a hashtable 
    $ParamTable = @{
        Subject = $Subject
        From    = $From
        To      = $To
        Type    = "html"
        Body    = $body
    }

    # ArrayList instead of adding to an array with += for increased performance
    $ToRecipients = [System.Collections.ArrayList]::new()
    
    $ParamTable.To | ForEach-Object {
        [void]$ToRecipients.Add(@{
                emailAddress = @{
                    address = $_
                }
            })
    }

    try {
        $MessageAttachment = [Convert]::ToBase64String([IO.File]::ReadAllBytes($AttachmentPath))
        $MessageAttachmentName = $AttachmentPath.Split("\") | Select-Object -Last 1
    }
    catch {
        Write-Error $Error[0] -ErrorAction Stop
    }

    $params = @{
        Message         = @{
            Subject      = $ParamTable.Subject
            Body         = @{
                ContentType = $ParamTable.Type
                Content     = $ParamTable.Body
            }
            ToRecipients = $ToRecipients
            Attachments  = @(
                @{
                    "@odata.type" = "#microsoft.graph.fileAttachment"
                    Name          = $MessageAttachmentName
                    ContentBytes  = $MessageAttachment
                }
            )

        }
        SaveToSentItems = "false"
    }

    try {
        Send-MgUserMail -UserId $ParamTable.From -BodyParameter $params -ErrorAction Stop
        Write-Output "Email sent to:"
        $ParamTable.To
    }
    catch {
        Write-Error $Error[0]
    }
}

Finally, we construct a new splatting table and send the email. A note: for this to run, authentication must have happened earlier in the runbook.

# Generate some data and compress it ($BigReport is assumed to already hold the report objects gathered earlier in the runbook)
$Date = Get-Date -Format yyyy-MM-dd
$CSVPath = "$env:temp\$($Date)-BigReport.csv"
$ZIPPath = "$env:temp\$($Date)-BigReport.zip"

$BigReport | Sort-Object | Export-Csv -Path $CSVPath -NoTypeInformation -Encoding UTF8

Compress-Archive -Path $CSVPath -DestinationPath $ZIPPath


# Build the email parameters
$SendMailSplat = @{
    Subject        = "Automated Email via MGGraph"
    Body           = "This is an automated email sent from Azure Automation using MGGraph."
    To             = "user1@mail.com", "user2@mail.com","user3@mail.com"
    AttachmentPath = $ZIPPath
}

# Send the email
Send-AutomatedEmail @SendMailSplat

And that’s all there is to it! Congrats on sending an email using the Microsoft Graph.

Key Takeaways

While building this solution, I noticed that there’s a lack of content and documentation on some things; one of those things is how to send an email to more than one recipient. If you’re migrating from Send-MailMessage, it isn’t so straightforward, since Send-MgUserMail is based on either JSON or MIME format.

Meaning, in a nutshell, we can’t just pass an array of email addresses and call it a day; instead we need to build an object that looks something along the lines of: Message -> ToRecipients -> emailAddress -> address : address@company.com

Alternative 1 (fast)

$ToRecipients = [System.Collections.ArrayList]::new()

$ParamTable.To | ForEach-Object {
    [void]$ToRecipients.Add(@{
            emailAddress = @{
                address = $_
            }
        })
}

Alternative 2 (slow)

$ToRecipients = @()
$ParamTable.To | ForEach-Object { $ToRecipients += @{
        emailAddress = @{
            address = $_ 
        }
    }
}

Use whatever fits your needs best.

Hope this was valuable to someone wanting to move away from Send-MailMessage to Send-MgUserMail!

Happy coding

/Emil

PowerShell: Super simple Hyper-V VM creation

Once again, meet Labmil.

In 2021, I wrote about my script to generate Hyper-V VMs. I still use this way of creating my labs, and I think its simple nature is valuable.

The only requirements for using it are PowerShell, git and an ISO file. Since it’s specifically a Hyper-V script, it naturally requires Windows.

# Clone labmil
git clone https://github.com/ehmiiz/labmil.git

# Set iso path to desired VM OS
$IsoPath = "C:\Temp\WS2022.iso"

# Set working directory to the cloned repo
Set-Location labmil

# Create the VM with desired name, IsoPath is only needed once
if ( -not (Test-Path $IsoPath) ) {
    Write-Error "Path not found!" -ErrorAction Stop
}
.\New-LabmilVM.ps1 -Name "DC01" -IsoPath $IsoPath -Verbose
.\New-LabmilVM.ps1 -Name "DC02" -Verbose

The above script can be used to get going, but I would recommend just writing down the git clone part, or remembering it (the repo name).

After this, interactive use is very simple.

The idea behind the script is to demonstrate the order in which you start using labmil.

  1. Install it using git
  2. Be aware of where your ISO is
  3. Call the New-LabmilVM function, give the VM a name and, on first-time setup, provide the ISO
  4. Create as many VMs as you want, using a different name for each

Features 2024 and forward

  • No breaking changes! I like the simple nature of the labmil script and want to support the way it works. It promotes learning but reduces repetitiveness.

  • Optional parameters:

    • Role: AD DS, AD CS: configures the OS, lightweight lab
    • NetworkSetup: Should automate internal switch and nic config

AD Labbing

The reason labmil exists is my interest in labbing with Windows Server, specifically AD DS.

I will be creating (even though I know several other tools exist) a tool to populate a simple AD domain with built-in ACLs, OUs, users, computers, group nesting, and security vulnerabilities, so I can automate setting up a new AD lab for myself but also for others.

Stay tuned!

Happy labbing!

PowerShell for Security: Continuous post of AD Weaknesses

Idea behind this post

As an Active Directory professional, I have gained insights into its insecure features and outdated legacy “ideas,” as well as the growing list of vulnerabilities in the ADDS, ADCS & ADFS suite.

In this post, I will share my knowledge and experience in defending Active Directory with other AD admins. Each vulnerability section will be divided into three parts: Problem, Solution, and Script.

Please note that this post is personal and subject to change. Its sole purpose is to help others. Always exercise caution when running code from an untrusted source - read it carefully and test it in a lab environment before implementing it in production.

1. Clear-Text Passwords In Sysvol (KB2962486)

Problem:

Group policies are (partly) stored in the domain-wide share named Sysvol. Sysvol is a share that every domain user has read access to. A feature of Group Policy Preferences (GPP) is the ability to store credentials in a policy, thus making use of the permissions of said account in an effective way.

The only problem is that the credentials are encrypted using an AES key that’s publicly available here.

Solution:

Patch your Domain Controllers so that admins cannot store credentials in sysvol: MS14-025: Vulnerability in Group Policy Preferences could allow elevation of privilege

Script:

This is a simple script that will match the cpassword row of the XML files, telling you which policy you need to fix:

# Get domain
$DomainName = Get-ADDomain | Select-Object -ExpandProperty DNSRoot

# Build path
$DomainSYSVOLShareScan = "\\$DomainName\SYSVOL\$DomainName\Policies\"

# Check path recursively for a match
Get-ChildItem $DomainSYSVOLShareScan -Filter *.xml -Recurse | ForEach-Object {
    if (Select-String -Path $_.FullName -Pattern "Cpassword") {
        $_.FullName
    }
}

2. Authenticated Users Can Join Up to 10 Computers to the Domain (KrbRelayUp)

Problem:

Active Directory creates an attribute by default in its schema named ms-DS-MachineAccountQuota. The value of this attribute determines how many computers a user in the Authenticated Users group can join to the domain.

However, this “trust by default” approach can pose a security risk: an attacker can leverage this attribute for privilege escalation attacks by adding new devices to the domain.

Solution:

Find and identify the value of ms-DS-MachineAccountQuota.

As Microsoft puts it:

Organizations should also consider setting the ms-DS-MachineAccountQuota attribute to 0 to make it more difficult for an attacker to leverage the attribute for attacks. Setting the attribute to 0 stops non-admin users from adding new devices to the domain, blocking the most effective method to carry out the attack’s first step and forcing attackers to choose more complex methods to acquire a suitable resource.

Script:

$DomainDN = (Get-ADDomain).DistinguishedName

Get-ADObject -Identity $DomainDN -Properties ms-DS-MachineAccountQuota

Set-ADDomain -Identity $DomainDN -Replace @{"ms-DS-MachineAccountQuota"="0"}

I recommend running the script line by line, and trying it out in a lab environment first.

The script:

  • Gets the DN of the domain
  • Gets the ms-DS-MachineAccountQuota attribute
  • Sets it to 0, making non-privileged users unable to domain join computers.

Talk this decision through with your security department; test, plan, execute.

3. AdminSDHolder ACL misconfigurations

Problem:

The AdminSDHolder is an object in AD that serves as a security descriptor template for protected accounts and groups in an AD domain.

It exists in every Active Directory domain and is located in the System Partition.

Main features of AdminSDHolder:

  • The AdminSDHolder object manages the ACLs of members of built-in privileged AD groups.

  • The Security Descriptor Propagation (SDPROP) process runs every hour on the domain controller holding the PDC emulator FSMO role. This process scans the domain for protected accounts, disables rights inheritance, and applies an ACL on the object that mirrors the ACL of the AdminSDHolder container.

  • The main function of SDPROP is to protect highly-privileged AD accounts, ensuring that they can’t be deleted or have rights modified, accidentally or intentionally, by users or processes with less privilege.

  • If a user is removed from a privileged group, the adminCount attribute remains set to 1 and inheritance remains disabled.

Below is a list of built-in protected objects.

Administrator
Administrators
Print Operators
Backup Operators
Replicator
krbtgt
Domain Controllers
Schema Admins
Enterprise Admins
Domain Admins
Server Operators
Account Operators
Read-only Domain Controllers
Key Admins
Enterprise Key Admins

Any other object that has direct access to any of these will also get a 1 in its adminCount attribute, set by the SDPROP process within a 60-minute interval.
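
To see which accounts currently carry that flag, a quick read-only check:

# List accounts that still have adminCount set to 1 - review whether each one should really be protected
Get-ADUser -Filter 'adminCount -eq 1' -Properties adminCount |
    Select-Object Name, DistinguishedName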

A common misconfiguration is to add service accounts or security groups, or even enable inheritance, to complete a task or set up a new system in AD, and then forget to configure it securely again.

Solution:

Review the AdminSDHolder ACL under the System container and remove anything that does not have a very good reason to be there (AAD Connect, Exchange and MSOL_ accounts are common, and should be secured with long randomized passwords).

Understanding which rights are insecure in Active Directory is a needed first step.

This diagram might help you do just that:

Misconfigured ACLs

Script:

# Gets the ACL of AdminSDHolder and displays it in a GridView
$AdminSDHolder = Get-ADObject -Filter { name -like "AdminSDHolder" }
(Get-Acl "AD:$($AdminSDHolder.DistinguishedName)").Access | Out-GridView
  • Review if any IdentityReference is not known
  • Review that IsInherited is set to false on all ACEs (entries)
  • Review group members of all the groups, think twice if the access makes sense

Happy hunting

PowerShell KeePass and saving time

Glad to be back from a 7-month dad leave. Let’s dive into some timesaving PowerShell!

The Problem

Password managers are very useful for anyone having more than one set of credentials, and most of us do.

They reduce the chance of credential leakage to unauthorized people and are vastly superior to both post-it notes and notepad files.

However, I found myself using the graphical user interface (GUI) of my password manager daily just to search for, copy and paste a secret. The problem with navigating a GUI every day is that it’s time-consuming, and there’s room for improvement, especially if you enjoy delving into some PowerShell and/or always have a terminal open.

Summary: Password manager GUIs are slow and tedious to work with. Let’s explore an alternative that is much faster!

My Solution

The solution I went with was to create a custom script, which installs and configures my PowerShell session to easily access my password manager after typing in its master password, together with a couple of functions to easily retrieve a password and copy it to the clipboard.

Setting something to your clipboard, especially a password, is a risk since other applications can also access the clipboard, so the clipboard needs to be cleared by setting a sleep timer and overwriting the secret.

As a start, I will need to create a couple of parameters so the input becomes dynamic and I can use the script regardless of which file path or database name I have on the computer.

param (
    [Parameter(Mandatory)]
    [string]$KeePassFilePath,
    [Parameter(Mandatory)]
    [string]$KeePassDataBaseName
)

The modules I will be using in my script are:

$Modules = "SecretManagement.KeePass", "Microsoft.PowerShell.SecretManagement"

Since I use KeePass, naturally this module comes in handy.

It’s an awesome module that I highly recommend for any KeePass & PowerShell user. I will use SecretManagement to enable the KeePass module and use its vault capabilities.

This will save me tons of time, and I trust the sources that the modules originate from to deliver secure and tested code, much more than I trust myself to think of all the security aspects of something that would replace the modules already offered. Another great benefit of having PowerShell as a gateway to your password manager is that you don’t need to install the vendor’s application at all; this is a big plus if you’re (like I am) a fan of minimalism.

The next part of the code installs the modules: I set a condition to check if both modules are already present. If not, I try to install them. Since the Install-Module cmdlet’s Name parameter accepts a string array (look for String[] in the help files), I won’t have to foreach-loop through the modules.

$ExistingModules = Get-Module -Name $Modules -ListAvailable | Select-Object -ExpandProperty Name -Unique

if ($ExistingModules.count -ne 2) {
    Install-Module $Modules -Repository PSGallery -Verbose
}

I then have a condition to check for the vault name; if it’s not already present, I register the new KeePass vault and set it as the default vault.

if ( -not (Get-SecretVault -Name $KeePassDataBaseName -ErrorAction SilentlyContinue)) {
    Register-SecretVault -Name $KeePassDataBaseName -Verbose -ModuleName 'SecretManagement.KeePass' -DefaultVault -VaultParameters @{
        Path = $KeePassFilePath
        UseMasterPassword = $true
    }
    Write-Verbose "$KeePassDataBaseName successfully installed." -Verbose
}
else {
    Write-Verbose "$KeePassDataBaseName was already configured." -Verbose
}

Function(s)

To speed things up even further, we want to create some smaller functions to wrap all the long cmdlets that we’d otherwise have to write, to get our secrets to the clipboard.

I say functions, because here’s where you can take the work we’ve done even further, to work with PSCredential objects or to start a process, wait, and send the password directly to it, thus creating a sort of custom single sign-on solution. However, sticking to the subject, my function will:

  1. Have a parameter that will be the secret that we’re looking for in our password manager
  2. Look for the secret, use Get-SecretInfo if the name is unknown
  3. Call the GetNetworkCredential() method and access the ‘Password’ property of the NetworkCredential object, essentially converting the SecureString to a String, and set the value to the clipboard
  4. Start a job with a ScriptBlock, which will replace the secret with a ‘Cleared!’ string.
function Find-FSecret {
    param (
        [parameter(mandatory)]
        [string]$Secret
    )
    $SecretLookup = Get-Secret -Name $Secret
    if ($SecretLookup) {
        Set-Clipboard -Value $SecretLookup.GetNetworkCredential().Password
        Write-Verbose "Secret found and set to clipboard. Will auto clear in 20 seconds." -Verbose
        $null = Start-Job -ScriptBlock {
            Start-Sleep -Seconds 20
            Set-Clipboard -Value 'Cleared!' -Verbose
        }
    }
}

I then add this function to my profile, which will load it in my user’s sessions, together with an alias declaration:

if ( -not (Get-Alias ffs -ErrorAction SilentlyContinue)) {
  New-Alias -Name 'ffs' -Value 'Find-FSecret'
}

Every time I get somewhat annoyed by yet another “SIGN IN” page, I simply tab over to PowerShell and vent out some frustration using my function:

ffs github
VERBOSE: Secret found and set to clipboard. Will auto clear in 20 seconds.

Discussion

In my example I’m using KeePass; however, this is very applicable to other password managers. In fact, the PowerShell Gallery has tons of SecretManagement modules that can be just as simple to use as in my examples.

Some examples:

  • BitWarden
  • LastPass
  • Keeper
  • CyberArk
  • Devolutions

Look for yourself:

Find-Module *SecretManagement*

Another thing worth mentioning: you want to make sure you’re not leaking your clipboard history. There are third-party applications and settings built into Windows that might do so.

There’s also the possibility for a PowerShell Transcript to catch the output of your console, so make sure you never actually paste the credentials outside of the actual logon screen. You wouldn’t want to screen-share, or share a server with someone who could look into your command-line history and find a password in clear text.

Speaking of which, you can regularly look for passwords in clear text super easily using PSSecretScanner. I would recommend looking into it after completing a project like this.

Happy coding,

Emil