PowerShell Guide: Script as a Windows Service

Red or blue pill

If you are in the same rabbit hole I was, trying to run some form of looping script as a Windows service, there are two pills you can choose from:

  1. Red Pill: Create a program that abides by the laws of the fearsome Service Control Manager.

  2. Blue Pill: Write a PowerShell script, 8 lines of XML, and download WinSW.exe

WinSW describes itself as follows:

A wrapper executable that can run any executable as a Windows service, in a permissive license.

Naturally as someone who enjoys coding with hand grenades, I took the Blue Pill and here’s how that story went:

The Blue Pill

  1. Create a new working directory and save it to a variable
$DirParams = @{
    ItemType    = 'Directory'
    Name        = "PowerShell_Service"
    OutVariable = 'WorkingDirectory'
}
New-Item @DirParams
  2. Download the latest WinSW-x64.exe to the working directory
# Get the latest WinSW 64-bit executable browser download url
$ExecutableName = 'WinSW-x64.exe'
$LatestURL = Invoke-RestMethod 'https://api.github.com/repos/winsw/winsw/releases/latest'
$LatestDownloadURL = ($LatestURL.assets | Where-Object {$_.Name -eq $ExecutableName}).browser_download_url
$FinalPath = "$($WorkingDirectory.FullName)\$ExecutableName"

# Download it to the newly created working directory
Invoke-WebRequest -Uri $LatestDownloadURL -Outfile $FinalPath -Verbose
  3. Create the PowerShell script which the service runs

This loop checks for Notepad every 5 seconds and kills it if it finds it:

while ($true) {
    $notepad = Get-Process notepad -ErrorAction SilentlyContinue
    if ($notepad) {
        $notepad.Kill()
    }
    Start-Sleep -Seconds 5
}
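
Save the loop as a script file; here's one way to do it, writing it to the working directory created earlier (the file name just has to match what the XML points to in the next step):

# Write the loop to a .ps1 file in the working directory
$ScriptPath = "$($WorkingDirectory.FullName)\Invoke-PowerShellServiceScript.ps1"
@'
while ($true) {
    $notepad = Get-Process notepad -ErrorAction SilentlyContinue
    if ($notepad) {
        $notepad.Kill()
    }
    Start-Sleep -Seconds 5
}
'@ | Set-Content -Path $ScriptPath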
  4. Construct the .xml file

Just edit the id, name, description and startarguments to fit your setup:

<service>
  <id>PowerShellService</id>
  <name>PowerShellService</name>
  <description>This service runs a custom PowerShell script.</description>
  <executable>C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe</executable>
  <startarguments>-NoLogo -file C:\Path\To\Script\Invoke-PowerShellServiceScript.ps1</startarguments>
  <log mode="roll"></log>
</service>

Save the .xml file. In this example I saved it as PowerShell_Service.xml.
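
If you're scripting the whole setup, the .xml can be written out the same way; a minimal sketch, assuming you've captured the XML above in a here-string variable called $ServiceXml:

# Write the service definition next to WinSW-x64.exe
$ServiceXml | Set-Content -Path "$($WorkingDirectory.FullName)\PowerShell_Service.xml"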

# If not already there, step into the working directory
Set-Location $WorkingDirectory.FullName

# Install the service
.\WinSW-x64.exe install .\PowerShell_Service.xml

# Make sure powershell.exe's executionpolicy is Bypass
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope LocalMachine

# As an administrator
Get-Service PowerShellService | Start-Service

Conclusion

Running a PowerShell script as a service on any Windows machine isn't that complicated thanks to WinSW. It's a great choice if you don't want to get deeper into the process of developing Windows services (it's kind of a fun rabbit hole though).

I recommend reading the WinSW docs.

Some things to consider:

  • The service will run Windows PowerShell 5.1 as SYSTEM
  • This means the execution policy must support that use case (Bypass at LocalMachine scope will do)
  • The script in this example is just a demo loop, but anything you can think of that loops will do here
  • Starting the service requires elevated rights in this example
  • If you get the notorious The service did not respond to the start or control request in a timely fashion, you have my condolences (it's a very general error message with no single clear cause)

Good luck have fun, happy coding

/Emil

How to Restore a File from Git

Git is a powerful and popular version control system, sometimes a bit too powerful.

Depending on how your day went, you may want to restore a file from git to a previous state, either because you made an oopsie, want to undo some changes, or need to compare different versions.

Let’s go through four common scenarios on how to do just that!

Scenario 1: Saved Locally on the Local Git Repository

The simplest scenario is when you have saved your file locally on your local git repository, but have not staged or committed it yet.

In this case, you can use the git restore command to discard the changes in your working directory and restore the file to the last committed state.

For example, if you want to restore a file named index.html, you can run the following command:

git restore index.html

This will overwrite the index.html file in your working directory with the version from the HEAD commit, which is the latest commit on your current branch.

You can also use a dot (.) instead of the file name to restore all the files in your working directory.

git restore .

Scenario 2: Saved Locally and Staged Locally

The next scenario is when you have saved your file locally and staged it locally, but have not committed it yet.

In this case, you can use the git restore --staged command to unstage the file, removing it from the staging area.

For example, if you want to unstage a file named index.html, you can run the following command:

git restore --staged index.html

This will remove the index.html file from the staging area and leave it in your working directory with the changes intact. You can then use the git restore command as in the previous scenario to discard the changes in your working directory and restore the file to the last committed state. Alternatively, you can use this command:

git restore --source=HEAD --staged --worktree

to unstage and restore the file in one step (plain git restore only touches the working tree, so the --staged and --worktree flags are needed to do both at once).

For example, if you want to unstage and restore a file named index.html, you can run the following command:

git restore --source=HEAD --staged --worktree index.html

This will remove the index.html file from the staging area and overwrite it in your working directory with the version from the HEAD commit. You can also use a dot (.) instead of the file name to unstage and restore all the files in your staging area and working directory.

Scenario 3: Saved Locally, Staged Locally and Committed

The third scenario is when you have saved your file locally, staged it locally and committed it, but have not pushed it to the remote repository yet.

In this case, you can use the git reset --hard command to reset your local branch to the previous commit and discard all the changes in your staging area and working directory.

For example, if you want to reset your local branch to the previous commit, you can run the following command:

git reset --hard HEAD~1

This will reset your local branch to the commit before the HEAD commit, which is the latest commit on your current branch.

This will also discard all the changes in your staging area and working directory, including the file you want to restore.

Alternatively, if you only want to restore a single file rather than discard the whole commit, you can use the git checkout command to check out the file from the previous commit and restore it to your working directory.

For example, if you want to check out and restore a file named index.html from the previous commit, you can run the following command:

git checkout HEAD~1 index.html

This will check out the index.html file from the commit before the HEAD commit and overwrite it in your working directory with the version from that commit.

You can also use a dot (.) here as well, to check out and restore all the files from the previous commit.

Scenario 4: Saved Locally, Staged Locally, Committed and Pushed to Remote Repository

The fourth and final scenario is when you have saved your file locally, staged it locally, committed it and pushed it to the remote repository.

In this case, you can use the git revert command to create a new commit that reverses the changes in the previous commit and restores the file to the state before that commit.

For example, if you want to revert the previous commit and restore a file named index.html to the state before that commit, you can run the following command:

git revert HEAD

This will create a new commit that reverses the changes in the HEAD commit, which is the latest commit on your current branch.

This will also restore the index.html file in your working directory and staging area to the version from the commit before the HEAD commit.

You can then push the new commit to the remote repository to update it with the reverted changes.

You can also use the --no-commit option to revert the changes without creating a new commit, and then use the git restore or git checkout commands as in the previous scenarios to restore the file to the desired state.
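
For example, a minimal sketch of that flow:

# Revert the latest commit, but leave the result staged instead of committing
git revert --no-commit HEAD

# Inspect or adjust the files, then commit when satisfied
git commit -m "Revert previous commit"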

To sum it up

We’ve demonstrated how to restore a file from git in four different scenarios, depending on how far you have progressed in the git workflow.

We have used the git restore, git reset, git checkout and git revert commands to discard, unstage, check out and revert changes in your files and restore them to their previous states.

I hope this post has been helpful and maybe even saved some headache!

If you have any questions or feedback, please feel free to DM me on Twitter or LinkedIn.

Happy coding

PowerShell 7.4: Install-Module is evolving.

Where does Install-Module come from?

Install-Module has evolved.

Have you ever asked yourself which module the Install-Module cmdlet comes from? It's kind of a meta question, check for yourself! Spoiler a bit further down for anyone reading on mobile.

Get-Command -Name Install-Module
    CommandType     Name                                               Version    Source
    -----------     ----                                               -------    ------
    Function        Install-Module                                     2.2.5      PowerShellGet

New sheriff in town 🤠

With the GA release of PowerShell 7.4 comes a rewrite of PowerShellGet (hint hint, renamed to PSResourceGet), and boy is it fast.

I installed PowerShell 7.4 on two different Ubuntu 20.04 WSL distros, and I installed a few modules to benchmark the old trusty Install-Module and the new sheriff in town: Install-PSResource.

The results speak for themselves: PSResourceGet is much faster than PowerShellGet v2.
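
If you want to reproduce the comparison yourself, a minimal benchmark could look like this (the module choice is arbitrary):

# Old: PowerShellGet v2
Measure-Command { Install-Module -Name PSReadLine -Force } | Select-Object TotalSeconds

# New: PSResourceGet
Measure-Command { Install-PSResource -Name PSReadLine -Reinstall -TrustRepository } | Select-Object TotalSeconds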

Speaking of PowerShellGet v2, there's still a future for this module, but instead of new APIs and features, v3 (currently in pre-release) has been converted to a compatibility layer over the new and faster PSResourceGet.

Install-Module -Name PowerShellGet -AllowPrerelease -Force

The parameters of the new PSResourceGet are not supported when calling the older cmdlets, and there's no official documentation out for PowerShellGet v3 yet, so to me this seems aimed purely at pipeline scenarios where you have code in place that can just use the new functionality; it has less to do with interactive use. Here's some further reading on the subject.

Let’s take PSResourceGet for a spin

PSResourceGet seems to me an awesome new module based on its speed, so I'd better get used to its new syntax, because this will be my new main driver for sure.

Get-Command -Module Microsoft.PowerShell.PSResourceGet | Sort-Object Name
CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Cmdlet          Find-PSResource                                    1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Get-InstalledPSResource                            1.0.1      Microsoft.PowerShell.PSResourceGet
Alias           Get-PSResource                                     1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Get-PSResourceRepository                           1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Get-PSScriptFileInfo                               1.0.1      Microsoft.PowerShell.PSResourceGet
Function        Import-PSGetRepository                             1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Install-PSResource                                 1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          New-PSScriptFileInfo                               1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Publish-PSResource                                 1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Register-PSResourceRepository                      1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Save-PSResource                                    1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Set-PSResourceRepository                           1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Test-PSScriptFileInfo                              1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Uninstall-PSResource                               1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Unregister-PSResourceRepository                    1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Update-PSModuleManifest                            1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Update-PSResource                                  1.0.1      Microsoft.PowerShell.PSResourceGet
Cmdlet          Update-PSScriptFileInfo                            1.0.1      Microsoft.PowerShell.PSResourceGet

What’s installed?

It's not only installing modules that's faster; it's also very fast at listing installed modules.

Getting installed modules can be very time-consuming on shared systems, especially where you have the Az modules installed, so this is a great performance win overall.

Find new stuff

Finding new modules and scripts is also a crucial part of PowerShell, especially for community members. I would argue that with PSResourceGet going GA, PowerShell 7.4 is probably one of the most significant performance boosts in PowerShell's open-source life.

As you can see, finding modules is way faster, and here we’re even using two-way wildcards.
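
For example, something like this, searching with a wildcard on both ends:

# Two-way wildcard search with the new cmdlet
Find-PSResource -Name '*info*' -Type Module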

What about publishing?

Let's try to use the new Publish-PSResource. I have a minor bug-fix to do on my project linuxinfo, and will edit my publishing script so that a GitHub Action publishes it for me using Publish-PSResource.

I start by editing my very simple publishing script. Since I don't know if the GitHub-hosted runner has PSResourceGet installed yet, I need to validate that the cmdlet is present before calling it. If it's not, I simply install it using PowerShellGet v2.
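
The script itself is tiny; roughly like this (paths and secret names are placeholders for my actual setup):

# Fall back to PowerShellGet v2 if the runner image lacks PSResourceGet
if (-not (Get-Command -Name Publish-PSResource -ErrorAction SilentlyContinue)) {
    Install-Module -Name Microsoft.PowerShell.PSResourceGet -Force
}

# Publish the module folder to the PowerShell Gallery
Publish-PSResource -Path ./linuxinfo -ApiKey $env:GALLERY_API_KEY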

This should do it!

Hmm, seems like I messed something up. The GitHub-hosted runner can't find Publish-PSResource, so it's trying to install PSResourceGet using Install-Module. However, I misspelled the module name, if you look closely at line 7. It should be Microsoft.PowerShell.PSResourceGet; let's fix that and re-run my workflow.

Looks way better now!

And there's a new version of linuxinfo with a minor bugfix. The Publish-PSResource migration was very straightforward.

Conclusion

In this post, we learned about the origin of Install-Module, being PowerShellGet v2, and its successor Install-PSResource, from the new PSResourceGet. We took some cmdlets for a spin and realized that the new version is easily twice as fast, in some cases even three times faster.

We covered PowerShellGet V3 being a compatibility layer and some caveats with it.

We looked at migrating a simple publishing script from Publish-Module to Publish-PSResource.

I recommend poking around with the new PSResourceGet cmdlets and reading its official documentation, and for interactive use, not relying on any compatibility layer; save that for the edge cases.

Thanks for reading this far, hope you found it helpful. PM me on Twitter with any feedback.

Happy coding

/Emil

PowerShell Solution: Use Send-MgUserMail in Azure Automation

Send-MgUserMail

The following solution example covers how to set up and use the Send-MgUserMail cmdlet in Azure Automation to send an email with a subject, message body and an attached zip file.

Pre-Requirements

Authentication & Access

This solution will use a client secret and an encrypted automation variable.

The alternative to using a client secret would be a certificate, and I would recommend doing so, since it's a more secure solution in general.

Using a client secret is fine if you have good control over who has access to your App Registration and your automation account.

This step-by-step guide will set up the app registration and the secret, and finally add the secret to the automation account's shared resources as a variable.


NOTE

If you’re looking to be more fine-grained in your access delegation, and want to skip the whole secret management aspect, be sure to look into Managed Identities, specifically User-Assigned. Thanks Dennis!


  1. In the Azure Portal -> App registrations
  2. New Registration -> Name the app to something descriptive like Runbook name or similar
  3. Register
  4. API permissions -> Add permissions -> Microsoft Graph -> Application permission
  5. Search for Mail.Send, check it, Add permissions, Grant admin consent for ORG
  6. Navigate to Certificates & Secrets -> Client secrets -> new client secret
  7. Fill in description and Expires after your needs
  8. Navigate to your automation account in Azure -> Variables -> Add variable -> Copy-paste your secret into this variable, select Encrypted, Create

The authentication will be done in the Azure Automation runbook; the code will look similar to this:

# Connects to graph as your new app using encrypted secret

# Look in your App Registration -> Application (client) ID
$ClientId = "o2jvskg2-[notreal]-1246-820s-2621786s35e5" 

# Look in Azure -> Microsoft Entra ID -> Overview -> Tenant ID
$TenantId = "626226122-[notreal]-62ww-5053-56e32ss89sa5"

# Variable Name from step 8 (Authentication)
$ClientSecretCredential = (Get-AutomationVariable -Name 'From Step 8')

$Body = @{
    Grant_Type    = "client_credentials"
    Scope         = "https://graph.microsoft.com/.default"
    Client_Id     = $ClientId
    Client_Secret = $ClientSecretCredential
}

$RestMethodParams = @{
    Uri = "https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token"
    Method = "POST"
    Body = $Body
}

$Connection = Invoke-RestMethod @RestMethodParams
$Token = $Connection.access_token

Connect-MgGraph -AccessToken $Token
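
# Note: in Microsoft.Graph SDK v2+, -AccessToken expects a SecureString instead:
# Connect-MgGraph -AccessToken (ConvertTo-SecureString $Token -AsPlainText -Force)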

Note that Get-AutomationVariable is an internal cmdlet from the Orchestrator.AssetManagement.Cmdlets module, which is part of Azure Automation and only available in the Azure Automation sandbox environment. It's also the only way of getting the encrypted variable, so running this outside of a runbook will fail.

Sending the mail

Now that we have authentication and access out of the way, we can start developing a function that we will use in the runbook to send an email. My example below requires an attachment; I'm using this for gathering data, compressing it and attaching the .zip file in the mail function.

Customize the function to your specific needs.

function Send-AutomatedEmail {
    param(
        [Parameter (Mandatory = $false)]
        [string]$From,
        [Parameter (Mandatory = $true)]
        [string]$Subject,
        [Parameter (Mandatory = $true)]
        $To,
        [Parameter (Mandatory = $true)]
        [string]$Body,
        [Parameter (Mandatory = $true)]
        [string]$AttachmentPath
    )

    if ([string]::IsNullOrEmpty( $From )) {
        $From = "noreply@contoso.com"
    }

    # I'm defining the parameters in a hashtable 
    $ParamTable = @{
        Subject = $Subject
        From    = $From
        To      = $To
        Type    = "html"
        Body    = $body
    }

    # ArrayList instead of adding to an array with += for increased performance
    $ToRecipients = [System.Collections.ArrayList]::new()
    
    $ParamTable.To | ForEach-Object {
        [void]$ToRecipients.Add(@{
                emailAddress = @{
                    address = $_
                }
            })
    }

    try {
        $MessageAttachment = [Convert]::ToBase64String([IO.File]::ReadAllBytes($AttachmentPath))
        $MessageAttachmentName = $AttachmentPath.Split("\") | Select-Object -Last 1
    }
    catch {
        Write-Error $Error[0] -ErrorAction Stop
    }

    $params = @{
        Message         = @{
            Subject      = $ParamTable.Subject
            Body         = @{
                ContentType = $ParamTable.Type
                Content     = $ParamTable.Body
            }
            ToRecipients = $ToRecipients
            Attachments  = @(
                @{
                    "@odata.type" = "#microsoft.graph.fileAttachment"
                    Name          = $MessageAttachmentName
                    ContentBytes  = $MessageAttachment
                }
            )

        }
        SaveToSentItems = "false"
    }

    try {
        Send-MgUserMail -UserId $ParamTable.From -BodyParameter $params -ErrorAction Stop
        Write-Output "Email sent to:"
        $ParamTable.To
    }
    catch {
        Write-Error $Error[0]
    }
}

Finally, we construct a new splatting table and send the email. Note that for this to run, authentication must have happened earlier in the runbook.

# Generate some data and compress it ($BigReport is assumed to hold your gathered report data)
$Date = Get-Date -Format yyyy-MM-dd
$CSVPath = "$env:temp\$($Date)-BigReport.csv"
$ZIPPath = "$env:temp\$($Date)-BigReport.zip"

$BigReport | Sort-Object | Export-Csv -Path $CSVPath -NoTypeInformation -Encoding UTF8

Compress-Archive -Path $CSVPath -DestinationPath $ZIPPath


# Build the email parameters
$SendMailSplat = @{
    Subject        = "Automated Email via MGGraph"
    Body           = "This is an automated email sent from Azure Automation using MGGraph."
    To             = "user1@mail.com", "user2@mail.com","user3@mail.com"
    AttachmentPath = $ZIPPath
}

# Send the email
Send-AutomatedEmail @SendMailSplat

And that’s all there is to it! Congrats on sending an email using the Microsoft Graph.

Key Takeaways

While building this solution, I noticed that there's a lack of content and documentation on some things; one of those things is how to send an email to more than one recipient. If you're migrating from Send-MailMessage, it isn't so straightforward, since Send-MgUserMail is based on either JSON or MIME format.

Meaning, in a nutshell, we can't just pass an array of email addresses and call it a day; instead we need to build an object that looks something along the lines of: Message -> ToRecipients -> emailAddress -> address : user@company.com

Alternative 1 (fast)

$ToRecipients = [System.Collections.ArrayList]::new()

$ParamTable.To | ForEach-Object {
    [void]$ToRecipients.Add(@{
            emailAddress = @{
                address = $_
            }
        })
}

Alternative 2 (slow)

$ToRecipients = @()
$ParamTable.To | ForEach-Object {
    $ToRecipients += @{
        emailAddress = @{
            address = $_
        }
    }
}

Use whatever fits your needs best.

Hope this was valuable to someone wanting to move away from Send-MailMessage to Send-MgUserMail!

Happy coding

/Emil

PowerShell: Super simple Hyper-V VM creation

Once again, meet Labmil.

In 2021, I wrote about my script for generating Hyper-V VMs. I still use this way of creating my labs, and I think its simple nature is valuable.

The only requirements for using it are PowerShell, Git and an ISO file. Since it's specifically a Hyper-V script, it naturally requires Windows.

# Clone labmil
git clone https://github.com/ehmiiz/labmil.git

# Set iso path to desired VM OS
$IsoPath = "C:\Temp\WS2022.iso"

# Set working directory to the cloned repo
Set-Location labmil

# Create the VM with desired name, IsoPath is only needed once
if ( -not (Test-Path $IsoPath) ) {
    Write-Error "Path not found!" -ErrorAction Stop
}
.\New-LabmilVM.ps1 -Name "DC01" -IsoPath $IsoPath -Verbose
.\New-LabmilVM.ps1 -Name "DC02" -Verbose

The above script can be used to get going, but I would recommend just writing down the git clone part, or remembering the repo name.

After this, interactive use is very simple.

The idea behind the script is to demonstrate in what order you start using labmil.

  1. Install it using git
  2. Be aware of where your ISO is
  3. Call the New-LabmilVM function, give the VM a name and, on first-time setup, provide the ISO
  4. Create as many VMs as you want, using a different name for each

Features 2024 and forward

  • No breaking changes! I like the simple nature of the labmil script and want to keep it working the way it does. It promotes learning but reduces repetitiveness.

  • Optional parameters:

    • Role: AD DS, AD CS: configures the OS, lightweight lab
    • NetworkSetup: Should automate internal switch and nic config

AD Labbing

The reason labmil exists is my interest in labbing with Windows Server, specifically AD DS.

I will be creating (even though I know several other tools exist) a tool to populate a simple AD domain with built-in ACLs, OUs, users, computers, group nesting and security vulnerabilities, so I can automate setting up a new AD lab for myself, but also for others.

Stay tuned!

Happy labbing!

PowerShell for Security: Continuous post of AD Weaknesses

Idea behind this post

As an Active Directory professional, I have gained insights into its insecure features and outdated legacy "ideas", as well as the growing list of vulnerabilities in the AD DS, AD CS & AD FS suite.

In this post, I will share my knowledge and experience in defending Active Directory with other AD admins. Each vulnerability section will be divided into three parts: Problem, Solution, and Script.

Please note that this post is personal and subject to change. Its sole purpose is to help others. Always exercise caution when running code from an untrusted source - read it carefully and test it in a lab environment before implementing it in production.

1. Clear-Text Passwords In Sysvol (KB2962486)

Problem:

Group policies are (partly) stored in the domain-wide share named Sysvol. Sysvol is a share that every domain user has read access to. A feature of Group Policy Preferences (GPP) is the ability to store credentials in a policy, thus making use of that account's permissions in an effective way.

The only problem is that the credentials are encrypted using an AES key that's publicly available here.

Solution:

Patch your Domain Controllers so that admins cannot store credentials in sysvol: MS14-025: Vulnerability in Group Policy Preferences could allow elevation of privilege

Script:

This is a simple script that will match the cpassword row of the xml file, telling you what policy you need to fix:

# Get domain
$DomainName = Get-ADDomain | Select-Object -ExpandProperty DNSRoot

# Build path
$DomainSYSVOLShareScan = "\\$DomainName\SYSVOL\$DomainName\Policies\"

# Check path recursively for a match
Get-ChildItem $DomainSYSVOLShareScan -Filter *.xml -Recurse | ForEach-Object {
    if (Select-String -Path $_.FullName -Pattern "Cpassword") {
        $_.FullName
    }
}

2. Authenticated Users Can Join Up to 10 Computers to the Domain (KrbRelayUp)

Problem:

Active Directory creates an attribute by default in its schema named ms-DS-MachineAccountQuota. The value of this attribute determines how many computers a user in the Authenticated Users group can join to the domain.

However, this "trust by default" approach can pose a security risk: an attacker can leverage this attribute for privilege escalation attacks by adding new devices to the domain.

Solution:

Find and identify the value of ms-DS-MachineAccountQuota.

As Microsoft puts it:

Organizations should also consider setting the ms-DS-MachineAccountQuota attribute to 0 to make it more difficult for an attacker to leverage the attribute for attacks. Setting the attribute to 0 stops non-admin users from adding new devices to the domain, blocking the most effective method to carry out the attack's first step and forcing attackers to choose more complex methods to acquire a suitable resource.

Script:

$DomainDN = (Get-ADDomain).DistinguishedName

Get-ADObject -Identity $DomainDN -Properties ms-DS-MachineAccountQuota

Set-ADDomain -Identity $DomainDN -Replace @{"ms-DS-MachineAccountQuota"="0"}

I recommend running the script line by line, and trying it out in a lab environment first.

The script:

  • Gets the DN of the domain
  • Gets the ms-DS-MachineAccountQuota attribute
  • Sets it to 0, making non-privileged users unable to domain-join computers.

Talk this decision through with your security department: test, plan, execute.

3. AdminSDHolder ACL misconfigurations

Problem:

The AdminSDHolder is an object in AD that serves as a security descriptor template for protected accounts and groups in an AD domain.

It exists in every Active Directory domain and is located in the System Partition.

Main features of AdminSDHolder:

  • The AdminSDHolder object manages the ACLs of members of built-in privileged AD groups.

  • The Security Descriptor Propagation (SDPROP) process runs every hour on the domain controller holding the PDC emulator FSMO role. This process scans the domain for protected accounts, disables rights inheritance, and applies an ACL on the object that mirrors the ACL of the AdminSDHolder container.

  • The main function of SDPROP is to protect highly privileged AD accounts, ensuring that they can't be deleted or have rights modified, accidentally or intentionally, by users or processes with less privilege.

  • If a user is removed from a privileged group, the adminCount attribute remains set to 1 and inheritance remains disabled.

Below is a list of built-in protected objects.

Administrator
Administrators
Print Operators
Backup Operators
Replicator
krbtgt
Domain Controllers
Schema Admins
Enterprise Admins
Domain Admins
Server Operators
Account Operators
Read-only Domain Controllers
Key Admins
Enterprise Key Admins

Any other object that has direct access to any of these will also get a 1 stamped in its adminCount attribute by the SDPROP process, within a 60-minute interval.
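
A quick, read-only way to see which objects currently carry the flag:

# List objects where SDPROP has stamped adminCount = 1
Get-ADObject -LDAPFilter '(adminCount=1)' -Properties adminCount |
    Select-Object Name, ObjectClass, DistinguishedName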

A common misconfiguration is to add service accounts or security groups, or even enable inheritance, to complete a task or set up a new system in AD, and then forget to configure it securely again.

Solution:

Review the AdminSDHolder ACL under the System container and remove anything that does not have a very good reason to be there (AAD Connect, Exchange and MSOL_ accounts are common, and should be secured with long randomized passwords).

Understanding which rights are insecure in Active Directory is a needed first step.

This diagram might help you do just that:

Misconfigured ACLs

Script:

# Gets the ACL of AdminSDHolder and displays it in a GridView
$AdminSDHolder = Get-ADObject -Filter { name -like "AdminSDHolder" }
(Get-Acl "AD:$AdminSDHolder").Access | Out-GridView
  • Review if any IdentityReference is not known
  • Review that IsInherited is set to false on all ACEs (entries)
  • Review group members of all the groups, think twice if the access makes sense

Happy hunting

PowerShell KeePass and saving time

Glad to be back from a 7-month dad leave. Let’s dive into some timesaving PowerShell!

The Problem

Password managers are very useful for anyone having more than one set of credentials, and most of us do.

They reduce the chance of credential leakage to unauthorized people and are vastly superior to both post-it notes and notepad files.

However, I found myself using the graphical user interface (GUI) of my password manager daily, simply to search for, copy and paste secrets. The problem with navigating a GUI every day is that it's time-consuming, and there's room for improvement, especially if you enjoy delving into some PowerShell and/or always have a terminal open.

Summary: password manager GUIs are slow and tedious to work with. Let's explore an alternative that is much faster!

My Solution

The solution I went with was to create a custom script which installs and configures my PowerShell session to easily access my password manager after typing in its master password, together with a couple of functions to easily retrieve my password and copy it to the clipboard.

Setting something to your clipboard, especially a password, is a risk, since other applications can also access the clipboard; the clipboard therefore needs to be cleared by setting a sleep timer and overwriting the secret.

To start, I create a couple of parameters to make the input dynamic, so that I can use the script regardless of the file path or database name on the computer.

param (
    [Parameter(Mandatory)]
    [string]$KeePassFilePath,
    [Parameter(Mandatory)]
    [string]$KeePassDataBaseName
)

The modules I will be using in my script are:

$Modules = "SecretManagement.KeePass", "Microsoft.PowerShell.SecretManagement"

Since I use KeePass, naturally this module comes in handy.

It's an awesome module that I highly recommend for any KeePass & PowerShell user. I will use SecretManagement to enable the KeePass module and use its vault capabilities.

This will save me tons of time, and I trust the sources the modules originate from to deliver secure and tested code, much more than I trust myself to think of every security aspect of something that would replace what the modules already offer. Another great benefit of having PowerShell as a gateway to your password manager is that you don't need to install the vendor's application at all, a big plus if you're (like I am) a fan of minimalism.

The next part of the code installs the modules. I set a condition to check if both modules are already present; if not, I try to install them. Since the Install-Module cmdlet's Name parameter accepts a string array (look for String[] in the help files), I won't have to foreach-loop through the modules.

$ExistingModules = Get-Module -Name $Modules -ListAvailable | Select-Object -ExpandProperty Name -Unique

if ($ExistingModules.count -ne 2) {
    Install-Module $Modules -Repository PSGallery -Verbose
}

I then have a condition that checks for the vault name; if it's not already present, I register the new KeePass vault and set it as the DefaultVault.

if ( -not (Get-SecretVault -Name $KeePassDataBaseName -ErrorAction SilentlyContinue)) {
    Register-SecretVault -Name $KeePassDataBaseName -Verbose -ModuleName 'SecretManagement.KeePass' -DefaultVault -VaultParameters @{
        Path = $KeePassFilePath
        UseMasterPassword = $true
    }
    Write-Verbose "$KeePassDataBaseName successfully installed." -Verbose
}
else {
    Write-Verbose "$KeePassDataBaseName was already configured." -Verbose
}

Function(s)

To speed things up even further, we want to create some smaller functions wrapping all the long cmdlets that we'd otherwise have to write to get our secrets to the clipboard.

I say functions, because here's where you can take this work even further: make it work with PSCredential objects, or start a process, wait, and send the password directly to it, essentially building a sort of custom single sign-on solution. However, sticking to the subject, my function will:

  1. Have a parameter that will be the secret that we’re looking for in our password manager
  2. Look for the secret, use Get-SecretInfo if the name is unknown
  3. Call the GetNetworkCredential method and access the 'Password' property of the NetworkCredential object, essentially converting the SecureString to a String, and set the value to the clipboard
  4. Start a job with a ScriptBlock, which will replace the secret with a ‘Cleared!’ string.
function Find-FSecret {
    param (
        [parameter(mandatory)]
        [string]$Secret
    )
    $SecretLookup = Get-Secret -Name $Secret
    if ($SecretLookup) {
        Set-Clipboard -Value $SecretLookup.GetNetworkCredential().Password
        Write-Verbose "Secret found and set to clipboard. Will auto clear in 20 seconds." -Verbose
        $null = Start-Job -ScriptBlock {
            Start-Sleep -Seconds 20
            Set-Clipboard -Value 'Cleared!' -Verbose
        }
    }
}

I then add this function to my profile, which loads it in my sessions, together with an alias declaration:

if ( -not (Get-Alias ffs -ErrorAction SilentlyContinue)) {
  New-Alias -Name 'ffs' -Value 'Find-FSecret'
}
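
If you haven't touched your profile before, this gets you to the right file ($PROFILE resolves the path for your current host):

# Create the profile file if missing, then open it for editing
if (-not (Test-Path -Path $PROFILE)) {
    New-Item -Path $PROFILE -ItemType File -Force
}
notepad $PROFILE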

Every time I get somewhat annoyed by yet another “SIGN IN” page, I simply tab over to PowerShell and vent out some frustration using my function:

ffs github
VERBOSE: Secret found and set to clipboard. Will auto clear in 20 seconds.

Discussion

In my example I'm using KeePass, however this is very applicable to other password managers; in fact, the PowerShell Gallery has tons of SecretManagement modules that can be just as simple to use as in my examples.

Some examples:

  • BitWarden
  • LastPass
  • Keeper
  • CyberArk
  • Devolutions

Look for yourself:

Find-Module *SecretManagement*

Another thing worth mentioning: you want to make sure you're not leaking your clipboard history. There are 3rd-party applications, and settings built into Windows, that might do so.

There's also the possibility of a PowerShell transcript catching the output of your console, so make sure you never actually paste credentials outside of the actual logon screen. You wouldn't want to screen-share, or share a server with someone who could look into your command-line history and find a password in clear text.

Speaking of which, you can regularly look for clear-text passwords super easily using PSSecretScanner. I would recommend looking into it after completing a project like this.

Happy coding,

Emil

Analyze your Linux system using PowerShell

Install-Module linuxinfo

I am pleased to share that I have been working on a fun hobby project: a PowerShell module designed to facilitate Linux system analysis for PowerShell users. With its standardized verb-noun commands and object-based output, this module leverages the benefits of PowerShell to streamline analysis and information gathering on a Linux system.

Install it from the PowerShellGallery:

Install-Module linuxinfo -Verbose

View its functions:

Get-Command -Module linuxinfo
CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Function        Get-BatteryInfo                                    0.0.1      linuxinfo
Function        Get-ComputerInfo                                   0.0.1      linuxinfo
Function        Get-DisplayInfo                                    0.0.1      linuxinfo
Function        Get-FileSystemHelp                                 0.0.1      linuxinfo
Function        Get-NetworkInfo                                    0.0.1      linuxinfo
Function        Get-OSInfo                                         0.0.1      linuxinfo
Function        Get-SystemUptime                                   0.0.1      linuxinfo
Function        Get-USBInfo                                        0.0.1      linuxinfo

Get computer information:

Get-ComputerInfo
BiosDate        : 06/17/2022
BiosVendor      : INSYDE Corp.
BiosVerson      : 03.09
CPU             : 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
CPUArchitecture : x86_64
CPUThreads      : 8
CPUCores        : 4
CPUSockets      : 1
DistName        : Fedora Linux
DistSupportURL  : https://fedoraproject.org/
DiskSizeGb      : {930, 16}
DiskFreeGb      : {848, 16}
DiskUsedGb      : 82
GPU             : Intel Corporation TigerLake-LP GT2 [Iris Xe Graphics] (rev 01)
DistVersion     : 37 (KDE Plasma)
KernelRelease   : 6.2.9-200.fc37.x86_64
OS              : GNU/Linux
RAM             : 31.9G

Getting the operating system information:

Get-OSInfo
DistName      : Fedora Linux
DistVersion   : 37 (KDE Plasma)
SupportURL    : https://fedoraproject.org/
OS            : GNU/Linux
KernelRelease : 6.2.9-200.fc37.x86_64
OSInstallDate : 2023-03-25

Beyond Hardware Info

There are more functions similar to the ones described above, where linuxinfo parses useful system information and displays the output as a PSCustomObject. However, let's take a look at a different kind of info:

Get-FileSystemHelp -All
Name                           Value
----                           -----
root                           root users home directory
etc                            system-global configuration files
mnt                            temporary mount points
dev                            device files for hardware access
bin                            essential user binaries
run                            stores runtime information
opt                            optional application software packages
media                          mount point for external / removable devices
lost+found                     stores corrupted filesystem files
usr                            user utilities and applications
tmp                            temporary files
var                            variable files
lib                            system libraries and kernel modules
boot                           boot loader files
proc                           procfs - process and kernel information
sys                            sysfs - devices and kernel information
srv                            services data directories
sbin                           essential system binaries
home                           users home directories

Get basic information about the Linux filesystem using PowerShell. This can be very handy if you're coming from a Windows background.

The function supports quick navigation using the -Go parameter, and displaying a richer help message with the -Full parameter.
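
For example, something along these lines (check Get-Help Get-FileSystemHelp for the exact parameter syntax):

# Richer help for a single directory (parameter shape assumed)
Get-FileSystemHelp -Name etc -Full

# Jump straight to the directory
Get-FileSystemHelp -Name etc -Go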

Testing & Disclaimer

Currently the module has been tested on Ubuntu and Fedora, so I'm fairly confident that it works well on Debian- and RHEL-based distributions.

However, I've done no testing on Arch Linux, so I'm not sure how the experience is there. The module is also in an early stage (version 0.0.1), with improvement plans and new functionality coming. Be sure to hop on the GitHub repo to learn more.

Reason

I understand that the use case for something like linuxinfo is a bit limited, since Linux already has great tools for similar tasks. However, this project is more of a personal learning journey and, most importantly, about having some fun with PowerShell and extending its usefulness and value.

I'd be happy if you'd like to try it, and star the GitHub repo if you feel it's worth it.

Happy coding

PSCustomObject conditional loop-trick!

PSCustomObject, valuable skill

PSCustomObject is a feature in PowerShell that allows you to create structured data in a simple way.

There's a ton to cover on the topic, but if you're unfamiliar with it: first of all, it's probably one of the most important things to spend time understanding in PowerShell; secondly, the PowerShell docs cover it very well.

In this blog post, I will cover a trick that I frequently use when generating structured data in the form of objects, which can later be piped to Export-CSV, or even better, Export-Excel.

Looping & Conditions

This trick applies when you want to create a boolean (true or false) value in your PSCustomObject variable. Here's an example of what I mean:

# Create an array object in a variable
$PSCustomObject = @()

# Get some data
$Process = Get-Process | Select-Object Name, Description -Unique

# Loop through data
foreach($p in $Process) {
    # Check if condition exists
    if ($p.Description) {
      # If it does, create the "true" version of the PSCustomObject
        $PSCustomObject += [PSCustomObject]@{
            Name = $p.Name
            ProcessHasDescription = $true
            Description = $p.Description
        }
    }
    else {
      # If it does not, create the "false" version
        $PSCustomObject += [PSCustomObject]@{
            Name = $p.Name
            ProcessHasDescription = $false
            Description = $null
        }
    }
}

# Show results
$PSCustomObject | Select-Object -First 10

Output:

Name             ProcessHasDescription Description
----             --------------------- -----------
audiodg                          False
Code                              True Visual Studio Code
CompPkgSrv                        True Component Package Support Server
concentr                          True Citrix Connection Center        
conhost                          False
conhost                           True Console Window Host
crashpad_handler                 False
csrss                            False
ctfmon                           False
dllhost                           True COM Surrogate

In this example, the Get-Process command is used to generate a list of system processes. The code then checks if a description is attached to each process. This technique can be applied to generate objects for all kinds of purposes. I’ve found it particularly useful for creating reports on Active Directory, computer hardware, access reports, or any other subject that requires a report with boolean values.

Some examples (the first one is sketched after this list):

  • User changed password the last 30 days?
  • Computer disk has less than 10 GB left?
  • User has Full Control rights on network share?
  • Server answers on ping request?
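
Here's a sketch of the first bullet (it assumes the ActiveDirectory module and read access to user objects):

# Did each user change their password in the last 30 days?
$Report = @()
$Cutoff = (Get-Date).AddDays(-30)
foreach ($User in Get-ADUser -Filter * -Properties PasswordLastSet) {
    if ($User.PasswordLastSet -gt $Cutoff) {
        $Report += [PSCustomObject]@{
            Name                  = $User.Name
            ChangedPasswordLast30 = $true
            PasswordLastSet       = $User.PasswordLastSet
        }
    }
    else {
        $Report += [PSCustomObject]@{
            Name                  = $User.Name
            ChangedPasswordLast30 = $false
            PasswordLastSet       = $null
        }
    }
}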

Steps

  1. Generate an array of data
  2. Loop it and construct a condition
  3. If condition is met, create a PSCustomObject “true” block
  4. Else, create a PSCustomObject “false” block
  5. Export data

Closing thoughts

In this post, I aim to keep things short and concise by letting the example do the talking. The code is commented for easy understanding. This technique can be incredibly useful for generating reports or structured data that can inform decision-making in larger processes. I hope you find it helpful and please let me know if you do.

Wishing you a great day and, as always:

Happy coding

How to Learn Git, Markdown and PowerShell by Contributing to the PowerShell-Docs Repository

Intro

The PowerShell-Docs repository is the home of the official PowerShell documentation. It contains reference and conceptual content for various versions and modules of PowerShell. Contributing to this repository is a great way to learn Git, Markdown and PowerShell, as well as to help improve the quality and accuracy of the documentation.

In this blog post, I will show you how you can contribute to the PowerShell-Docs repository by making quality contributions, and why it's beneficial for your learning and development.

What are quality contributions?

Quality contributions are enhancements or fixes that improve the readability, consistency, style or accuracy of the documentation. They can include things like fixing typos and grammar, tidying up formatting, or updating outdated metadata.

Quality contributions are different from content contributions, which involve adding new articles or topics, or making significant changes to existing ones. Content contributions require more discussion and approval from the PowerShell-Docs team before they can be merged.

How to make quality contributions?

Before we get into how to make quality contributions, I'd like to shamelessly plug my own module, PowerShell-Docs-CommunityModule. It will help you pick out work that has not been done yet.

Install & try it, using the following code:

Set-Location $env:USERPROFILE

# Make sure 'username' reflects your actual GitHub username
git clone https://github.com/username/PowerShell-Docs

Install-Module PowerShell-Docs-CommunityModule

Find-MissingAliasNotes -Verbose

To make quality contributions, you need to have a GitHub account and some basic knowledge of Git and Markdown. You also need to install some tools that will help you edit and preview your changes locally. Here are some steps you can follow:

  1. Fork the PowerShell-Docs repository on GitHub.
  2. Clone your forked repository to your local machine using Git.
  3. Install Git, Markdown tools, Docs Authoring Pack (a VS Code extension), and Posh-Git (a PowerShell module).
  4. Check out the PowerShell Docs Quality Contributions project on GitHub. This project tracks all the open issues and PRs related to quality improvements.
  5. Pick an issue that interests you or create a new one if you find something that needs fixing. You can use PowerShell-Docs-CommunityModule to help you here.
  6. Assign yourself to the issue and start working on it locally using VS Code or your preferred editor. Make sure you create a new branch before editing any files (see the Git sketch after this list); making a new branch ensures your edited files are clean in your upcoming pull request.
  7. Preview your changes, and make sure you've edited the ms.date at the top of the document to today's date (MM/dd/yyyy). This way other contributors know when the document was last edited; it's also required when doing an update, so the owners of the repository will ask you to do this if you miss it.
  8. Commit your changes using Git and push them to your forked repository on GitHub.
  9. Create a pull request (PR) from your forked repository to the original PowerShell-Docs repository on GitHub.
  10. Wait for feedback from reviewers or maintainers of the PowerShell-Docs team.
  11. Address any comments or suggestions they may have until your PR is approved and merged.
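
In Git terms, steps 2, 6, 8 and 9 look roughly like this (branch and commit messages are just examples):

# Clone your fork and create a working branch
git clone https://github.com/username/PowerShell-Docs
cd PowerShell-Docs
git checkout -b quality/fix-alias-notes

# ...edit files in VS Code...

# Commit and push the branch to your fork, then open the PR on GitHub
git add .
git commit -m "Add missing alias notes"
git push --set-upstream origin quality/fix-alias-notes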

Why make quality contributions?

Making quality contributions has many benefits for both you and the PowerShell community.

For you:

  • You can learn Git by contributing to a very friendly large code-base project. The owners are more than willing to help you with Git-related questions. You'll grow a ton in this area once you start doing some PRs.
  • You will write/edit files in Markdown (.md), a very popular markup language.
  • Because you will be proof-reading the markdown documents, you will learn more PowerShell topics straight from the source of the documentation.
  • You can improve your writing skills by following Microsoft's style guides and best practices for technical documentation.
  • You can get feedback from experts who work on PowerShell, Markdown and Git every day.
  • You can build your reputation as a contributor by having your name appear in commit history.

For the community:

  • You can help improve the clarity, consistency, style or accuracy of the documentation that many people rely on every day.
  • You can help reduce confusion, errors or frustration among users who read the documentation.
  • You can help keep the documentation up-to-date with changes in PowerShell features or functionality.
  • You will be listed in the Community Contributor Hall of Fame

Conclusion

In this post, I showed you how you can contribute to the PowerShell-Docs repository by making quality contributions, and why it's great for learning Git, Markdown and PowerShell, while using the PowerShell-Docs-CommunityModule to find out what to do first. I hope this blog post inspires you to join in making the PowerShell documentation better for everyone.

If you have any questions, comments, or suggestions, please feel free to reach out via DM on Twitter, email or Mastodon! I hope to see a PR from you, and if this post led to your first one, make sure to notify me about it! 😃

Happy contributing