Disclaimer: Please note that while these steps have been provided to assist you, I cannot guarantee that they will work flawlessly in every scenario. Always proceed with caution and make sure to back up your data before making any significant changes to your system.
Written on 2024-05-08 (Note: Information may become outdated soon, and this was just my approach)
If you’re a proud owner of the 2024 ASUS ROG Zephyrus G16 (GU605MI) and running Fedora 40+, ensuring smooth functionality of essential features like sound, keyboard, screen brightness, and asusctl might require a bit (hehe) of tweaking.
Here’s a comprehensive guide, or really the steps I took, to get everything up and running.
The screen’s brightness works out of the box while on the dGPU.
However, that comes with certain drawbacks, like flickering Electron applications and increased power consumption. The steps below get the screen brightness controls working in “Hybrid” and “Integrated” mode (while the display is driven by the iGPU).
Open the grub configuration file:
sudo nano /etc/default/grub
Add the following string at the end of the GRUB_CMDLINE_LINUX= line:
With these steps, I was able to get a somewhat functional GU605MI Fedora system. If you encounter any issues, refer to the respective documentation or seek further assistance from the Asus-Linux community.
Download the latest WinSW-x64.exe to the working directory
# Get the latest WinSW 64-bit executable browser download url
$ExecutableName = 'WinSW-x64.exe'
$LatestURL = Invoke-RestMethod 'https://api.github.com/repos/winsw/winsw/releases/latest'
$LatestDownloadURL = ($LatestURL.assets | Where-Object { $_.Name -eq $ExecutableName }).browser_download_url
$FinalPath = "$($WorkingDirectory.FullName)\$ExecutableName"

# Download it to the newly created working directory
Invoke-WebRequest -Uri $LatestDownloadURL -OutFile $FinalPath -Verbose
Create the PowerShell script which the service runs
This loop checks for notepad every 5 seconds and kills it if it finds it.
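The service script isn’t shown in full here, but a minimal sketch of such a loop could look like this (the file name matches the one referenced in the XML further down):

# Invoke-PowerShellServiceScript.ps1 - minimal demo loop (sketch, not the original script)
while ($true) {
    # Look for notepad; ignore the error if it's not running
    Get-Process -Name notepad -ErrorAction SilentlyContinue | Stop-Process -Force
    Start-Sleep -Seconds 5
}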
Just edit the id, name, description and startarguments
<service>
  <id>PowerShellService</id>
  <name>PowerShellService</name>
  <description>This service runs a custom PowerShell script.</description>
  <executable>C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe</executable>
  <startarguments>-NoLogo -file C:\Path\To\Script\Invoke-PowerShellServiceScript.ps1</startarguments>
  <log mode="roll"></log>
</service>
Save the .xml, in this example I saved it as PowerShell_Service.xml
# If not already, step into the working directory
cd $WorkingDirectory.FullName

# Install the service
.\WinSW-x64.exe install .\PowerShell_Service.xml

# Make sure powershell.exe's execution policy is Bypass
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope LocalMachine

# As an administrator
Get-Service PowerShellService | Start-Service
Running a PowerShell script as a service on any Windows machine isn’t that complicated thanks to WinSW. It’s a great choice if you don’t want to get deeper into the process of developing Windows services (it’s kind of a fun rabbit-hole though).
Meaning the execution policy must support that use case (Bypass at the LocalMachine scope will do).
The script in this example is just a demo of a loop, but anything you can think of that loops will do here
Starting the Service requires elevated rights in this example
If you get the notorious The service did not respond to the start or control request in a timely fashion, you have my condolences (this is a very general error message that has no single clear cause, it seems).
Git is a powerful and popular version control system, sometimes a bit too powerful.
Depending on how your day went, you may want to restore a file from git to a previous state, either because you made an oopsie, want to undo some changes, or need to compare different versions.
Let’s go through four common scenarios on how to do just that!
The simplest scenario is when you have saved your file locally in your git repository, but have not staged or committed it yet.
In this case, you can use the git restore command to discard the changes in your working directory and restore the file to the last committed state.
For example, if you want to restore a file named index.html, you can run the following command:
git restore index.html
This will overwrite the index.html file in your working directory with the version from the HEAD commit, which is the latest commit on your current branch.
You can also use a dot (.) instead of the file name to restore all the files in your working directory.
The next scenario is when you have saved your file locally and staged it locally, but have not committed it yet.
In this case, you can use the git restore --staged command to unstage the file and remove it from the staging area.
For example, if you want to unstage a file named index.html, you can run the following command:
git restore --staged index.html
This will remove the index.html file from the staging area and leave it in your working directory with the changes intact. You can then use the git restore command as in the previous scenario to discard the changes in your working directory and restore the file to the last committed state. Alternatively, you can use this command:
git restore --source=HEAD --staged --worktree
To unstage and restore the file in one step.
For example, if you want to unstage and restore a file named index.html, you can run the following command:
git restore --source=HEAD --staged --worktree index.html
This will remove the index.html file from the staging area and overwrite it in your working directory with the version from the HEAD commit. You can also use a dot (.) instead of the file name to unstage and restore all the files in your staging area and working directory.
The third scenario is when you have saved your file locally, staged it locally and committed it, but have not pushed it to the remote repository yet. In this case, you can use the git reset --hard command to reset your local branch to the previous commit and discard all the changes in your staging area and working directory. For example, if you want to reset your local branch to the previous commit, you can run the following command:
git reset --hard HEAD~1
This will reset your local branch to the commit before the HEAD commit, which is the latest commit on your current branch.
This will also discard all the changes in your staging area and working directory, including the file you want to restore.
You can then use the git checkout command to check out the file from the previous commit and restore it to your working directory.
For example, if you want to check out and restore a file named index.html from the previous commit, you can run the following command:
git checkout HEAD~1 index.html
This will check out the index.html file from the commit before the HEAD commit and overwrite it in your working directory with the version from that commit.
You can use a dot (.) here as well, to check out and restore all the files from the previous commit.
The fourth and final scenario is when you have saved your file locally, staged it locally, committed it and pushed it to the remote repository.
In this case, you can use the git revert command to create a new commit that reverses the changes in the previous commit and restores the file to the state before that commit.
For example, if you want to revert the previous commit and restore a file named index.html to the state before that commit, you can run the following command:
git revert HEAD
This will create a new commit that reverses the changes in the HEAD commit, which is the latest commit on your current branch.
This will also restore the index.html file in your working directory and staging area to the version from the commit before the HEAD commit.
You can then push the new commit to the remote repository to update it with the reverted changes.
You can also use the --no-commit option to revert the changes without creating a new commit, and then use the git restore or git checkout commands as in the previous scenarios to restore the file to the desired state.
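For example, to revert the latest commit without committing the reversal right away:
git revert --no-commit HEAD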
We’ve demonstrated how to restore a file from git in four different scenarios, depending on how far you have progressed in the git workflow.
We have used the git restore, git reset, git checkout and git revert commands to discard, unstage, check out and revert changes in your files and restore them to the previous states.
I hope this post has been helpful and maybe even saved some headache!
If you have any questions or feedback, please feel free to DM me on Twitter or LinkedIn.
Have you ever asked yourself, what module imports the Install-Module cmdlet?
It’s kind of a meta question, check for yourself! Spoiler a bit down for anyone reading on mobile.
Get-Command -Name Install-Module
CommandType Name Version Source
----------- ---- ------- ------
Function Install-Module 2.2.5 PowerShellGet
I installed PowerShell 7.4 on two different Ubuntu 20.04 WSL distros, and I installed a few modules to benchmark the old trusty Install-Module and the new sheriff in town: Install-PSResource.
The results speak for themselves. PSResourceGet is much faster than PowerShellGet V2.
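If you want to reproduce a rough comparison yourself, one simple (and unscientific) way is to time the two cmdlets against the same module; the module picked here is just an arbitrary example:

# Old way vs new way - pick any module you like
Measure-Command { Install-Module -Name PSScriptAnalyzer -Force }
Measure-Command { Install-PSResource -Name PSScriptAnalyzer -TrustRepository }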
Speaking of PowerShellGet V2, there’s still a future for this module, but instead of new APIs and features, V3 (currently in pre-release) has been converted to a compatibility layer over the new and faster PSResourceGet.
The parameters of the new PSResourceGet cmdlets are not supported when calling the older cmdlets, and there’s no official documentation out for PowerShellGet V3 yet, so to me this seems aimed purely at pipeline scenarios where you have code in place that can just use the new functionality. It seems to have less to do with interactive use. Here’s some further reading on the subject.
PSResourceGet seems to me like an awesome new module based on its speed, so I’d better get used to its new syntax, because it will be my new main driver for sure.
Get-Command -Module Microsoft.PowerShell.PSResourceGet | Sort-Object Name
It’s not only installing modules that’s faster, it’s also very fast at getting installed modules.
Getting installed modules can be very time-consuming on shared systems, especially where you have the Az modules installed, so this is a great performance win overall.
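You can time the old and the new cmdlet side by side on your own machine to see the difference:

Measure-Command { Get-InstalledModule }
Measure-Command { Get-InstalledPSResource }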
Finding new modules and scripts is also a crucial part of PowerShell, especially for the community members. I would argue that, with PSResourceGet going GA, PowerShell 7.4 is probably one of the most significant performance boosts in PowerShell’s open-source life.
As you can see, finding modules is way faster, and here we’re even using two-way wildcards.
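The new cmdlet accepts the same kind of two-way wildcard; a comparable lookup would be something along these lines:

Find-PSResource -Name '*linuxinfo*'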
Let’s try to use the new Publish-PSResource. I have a minor bug-fix to do on my project linuxinfo and will edit my publishing script so that a GitHub Actions workflow will publish it for me using Publish-PSResource.
I start by editing my very simple publishing script. Since I don’t know if the GitHub-hosted runner will have PSResourceGet installed yet, I need to validate that the cmdlet is present before calling it. If it’s not, I simply install it using PowerShellGet v2.
This should do it!
Hmm, seems like I messed something up. The GitHub-hosted runner can’t find Publish-PSResource, so it’s trying to install PSResourceGet using Install-Module. However, I misspelled the module name, if you look closely at line 7. It should be Microsoft.PowerShell.PSResourceGet, so let’s fix that and re-run my workflow.
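The workflow script itself isn’t reproduced here, but the corrected publish step would look roughly like this (the path and the API key secret name are placeholders of mine):

if (-not (Get-Command -Name Publish-PSResource -ErrorAction SilentlyContinue)) {
    # Bootstrap PSResourceGet using PowerShellGet v2 - note the corrected module name
    Install-Module -Name Microsoft.PowerShell.PSResourceGet -Force
}
Publish-PSResource -Path ./linuxinfo -Repository PSGallery -ApiKey $env:PSGALLERY_API_KEY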
Looks way better now!
And there’s a new version of linuxinfo with a minor bugfix. And the Publish-PSResource migration was very straightforward.
In this post, we learned about the origin of Install-Module, being PowerShellGet v2, and its successor Install-PSResource, from PSResourceGet. We took some cmdlets for a spin and realized that the new version is easily twice as fast, in some cases even 3 times faster.
We covered PowerShellGet V3 being a compatibility layer and some caveats with it.
We looked at migrating a simple publishing script from Publish-Module to Publish-PSResource.
I recommend poking around with the new PSResourceGet cmdlets and reading its official documentation, and, for interactive use, not relying on any compatibility layer; save that for the edge cases.
Thanks for reading this far, hope you found it helpful. PM me on Twitter for any feedback.
The following solution example covers how to set up and use the Send-MgUserMail cmdlet in Azure Automation to send an email with a subject, message body and an attached zip file.
This solution will use a Client Secret and an encrypted automation variable.
The alternative to using a Client Secret would be to use a certificate, and I would recommend doing so, since it’s a more secure solution in general.
Using a Client Secret is fine if you have good control over who has access to your App Registration and your automation account.
This step-by-step guide will set up the app registration and the secret, and finally add the secret to the automation account’s shared resources as a variable.
NOTE
If you’re looking to be more fine-grained in your access delegation, and want to skip the whole secret management aspect, be sure to look into Managed Identities, specifically User-Assigned. Thanks Dennis!
In the Azure Portal -> App registrations
New Registration -> Name the app to something descriptive like Runbook name or similar
Register
API permissions -> Add permissions -> Microsoft Graph -> Application permission
Search for Mail.Send, check it, Add permissions, Grant admin consent for ORG
Navigate to Certificates & Secrets -> Client secrets -> new client secret
Fill in description and Expires after your needs
Navigate to your automation account in Azure -> Variables -> Add variable -> Copy-paste your secret into this variable, select Encrypted, Create
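If you’d rather script this last step than click through the portal, the Az.Automation module can create the encrypted variable too (resource group, account and variable names below are placeholders):

# Placeholder names - adjust to your own automation account
$VariableSplat = @{
    ResourceGroupName     = 'rg-automation'
    AutomationAccountName = 'aa-prod'
    Name                  = 'GraphAppSecret'
    Value                 = '<client secret>'
    Encrypted             = $true
}
New-AzAutomationVariable @VariableSplat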
The authentication will be done in the Azure Automation runbook, and finally the code will look similar to this:
# Connects to graph as your new app using encrypted secret
# Look in your App Registration -> Application (client) ID
$ClientId = "o2jvskg2-[notreal]-1246-820s-2621786s35e5"

# Look in Azure -> Microsoft Entra ID -> Overview -> Tenant ID
$TenantId = "626226122-[notreal]-62ww-5053-56e32ss89sa5"

# Variable Name from step 8 (Authentication)
$ClientSecretCredential = (Get-AutomationVariable -Name 'From Step 8')

$Body = @{
    Grant_Type    = "client_credentials"
    Scope         = "https://graph.microsoft.com/.default"
    Client_Id     = $ClientId
    Client_Secret = $ClientSecretCredential
}

$RestMethodParams = @{
    Uri    = "https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token"
    Method = "POST"
    Body   = $Body
}

$Connection = Invoke-RestMethod @RestMethodParams
$Token = $Connection.access_token

Connect-MgGraph -AccessToken $Token
Note that Get-AutomationVariable is a cmdlet which is only available in the Azure Automation sandbox environment. It’s also the only way of getting the encrypted variable.
Get-AutomationVariable is an internal cmdlet from the module Orchestrator.AssetManagement.Cmdlets which is a part of Azure Automation, so running this outside of a runbook will fail.
Now that we have authentication and access out of the way, we can start developing a function that we will use in the runbook to send an email.
My example below requires an attachment. I’m using this for gathering data, compressing it and attaching the .zip file in the mail function.
Customize the function to your specific needs.
function Send-AutomatedEmail {
    param(
        [Parameter(Mandatory = $false)]
        [string]$From,
        [Parameter(Mandatory = $true)]
        [string]$Subject,
        [Parameter(Mandatory = $true)]
        $To,
        [Parameter(Mandatory = $true)]
        [string]$Body,
        [Parameter(Mandatory = $true)]
        [string]$AttachmentPath
    )

    if ([string]::IsNullOrEmpty($From)) {
        $From = "noreply@contoso.com"
    }

    # I'm defining the parameters in a hashtable
    $ParamTable = @{
        Subject = $Subject
        From    = $From
        To      = $To
        Type    = "html"
        Body    = $Body
    }

    # ArrayList instead of adding to an array with += for increased performance
    $ToRecipients = [System.Collections.ArrayList]::new()
    $ParamTable.To | ForEach-Object {
        [void]$ToRecipients.Add(@{
            emailAddress = @{
                address = $_
            }
        })
    }

    try {
        $MessageAttachment = [Convert]::ToBase64String([IO.File]::ReadAllBytes($AttachmentPath))
        $MessageAttachmentName = $AttachmentPath.Split("\") | Select-Object -Last 1
    }
    catch {
        Write-Error $Error[0] -ErrorAction Stop
    }

    $params = @{
        Message         = @{
            Subject      = $ParamTable.Subject
            Body         = @{
                ContentType = $ParamTable.Type
                Content     = $ParamTable.Body
            }
            ToRecipients = $ToRecipients
            Attachments  = @(
                @{
                    "@odata.type" = "#microsoft.graph.fileAttachment"
                    Name          = $MessageAttachmentName
                    ContentBytes  = $MessageAttachment
                }
            )
        }
        SaveToSentItems = "false"
    }

    try {
        Send-MgUserMail -UserId $ParamTable.From -BodyParameter $params -ErrorAction Stop
        Write-Output "Email sent to:" $ParamTable.To
    }
    catch {
        Write-Error $Error[0]
    }
}
Finally, we construct a new splatting table and send the email. Note that for this to run, authentication must have happened earlier in the runbook.
# Generate some data and compress it
$Date = Get-Date -Format yyyy-MM-dd
$CSVPath = "$env:temp\$($Date)-BigReport.csv"
$ZIPPath = "$env:temp\$($Date)-BigReport.zip"
$BigReport | Sort-Object | Export-Csv -Path $CSVPath -NoTypeInformation -Encoding UTF8
Compress-Archive -Path $CSVPath -DestinationPath $ZIPPath

# Build the email parameters
$SendMailSplat = @{
    Subject        = "Automated Email via MGGraph"
    Body           = "This is an automated email sent from Azure Automation using MGGraph."
    To             = "user1@mail.com", "user2@mail.com", "user3@mail.com"
    AttachmentPath = $ZIPPath
}

# Send the email
Send-AutomatedEmail @SendMailSplat
And that’s all there is to it! Congrats on sending an email using the Microsoft Graph.
While building this solution, I noticed that there’s a lack of content and documentation on some things; one of those things is how to send an email to more than one recipient. If you’re migrating from Send-MailMessage, it isn’t so straightforward, since Send-MgUserMail is based on either JSON or MIME format.
Meaning, in a nutshell, we can’t just pass an array of email addresses and call it a day; instead we need to build an object that looks something along the lines of:
Message -> ToRecipients -> emailAddress -> address : address@company.com
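In PowerShell terms, the multiple-recipient part of that structure ends up as an array of nested hashtables, roughly like this (the addresses are placeholders):

# Hypothetical illustration of the ToRecipients shape Send-MgUserMail expects
$ToRecipients = @(
    @{ emailAddress = @{ address = 'user1@mail.com' } }
    @{ emailAddress = @{ address = 'user2@mail.com' } }
)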
The only requirements for using it are PowerShell, git and an ISO file. Since it’s specifically a Hyper-V script, it naturally requires Windows.
# Clone labmil
git clone https://github.com/ehmiiz/labmil.git

# Set iso path to desired VM OS
$IsoPath = "C:\Temp\WS2022.iso"

# Set working directory to the cloned repo
Set-Location labmil

# Create the VM with desired name, IsoPath is only needed once
if (-not (Test-Path $IsoPath)) {
    Write-Error "Path not found!" -ErrorAction Stop
}
.\New-LabmilVM.ps1 -Name "DC01" -IsoPath $IsoPath -Verbose
.\New-LabmilVM.ps1 -Name "DC02" -Verbose
The above script can be used to get going, but I would recommend just writing down the git clone part, or remembering it (the repo name).
After this, interactive use is very simple.
The idea behind the script is to demonstrate in what order you start using labmil.
Install it using git
Be aware of where your ISO is
Call the New-LabmilVM function, give the VM a name and, for the first-time setup, provide the ISO
Create as many VMs as you want, using a different name for each
No breaking changes! I like the simple nature of the labmil script and want to support the way it’s working. It promotes learning but reduces repetitiveness.
Optional parameters:
Role: AD DS, AD CS: configures the OS, lightweight lab
NetworkSetup: Should automate internal switch and nic config
The reason labmil exists is because of my interest in labbing with Windows Server but specifically AD DS.
I will be creating (even though I know several other tools exist) a tool to populate a simple AD domain, with built-in ACLs, OUs, Users, Computers, Group nesting, and security vulnerabilities, so I can automate setting up a new AD lab for myself but also for others.
As an Active Directory professional, I have gained insights into its insecure features and outdated legacy “ideas,” as well as the growing list of vulnerabilities in the ADDS, ADCS & ADFS suite.
In this post, I will share my knowledge and experience in defending Active Directory with other AD admins. Each vulnerability section will be divided into three parts: Problem, Solution, and Script.
Please note that this post is personal and subject to change. Its sole purpose is to help others. Always exercise caution when running code from an untrusted source - read it carefully and test it in a lab environment before implementing it in production.
Group policies are (partly) stored in the domain-wide share named Sysvol.
Sysvol is a share that every domain user has read access to. A feature of group policy preferences (GPP), is the ability to store credentials in a policy, thus making use of the permissions of said account in an effective way.
The only problem is that the credentials are encrypted using an AES key that’s publicly available here.
This is a simple script that will match the cpassword row of the xml file, telling you what policy you need to fix:
# Get domain
$DomainName = Get-ADDomain | Select-Object -ExpandProperty DNSRoot

# Build path
$DomainSYSVOLShareScan = "\\$DomainName\SYSVOL\$DomainName\Policies\"

# Check path recursively for match
Get-ChildItem $DomainSYSVOLShareScan -Filter *.xml -Recurse | ForEach-Object {
    if (Select-String -Path $_.FullName -Pattern "Cpassword") {
        $_.FullName
    }
}
Active Directory creates an attribute by default in its schema named ms-DS-MachineAccountQuota. The value of this attribute determines how many computers a user in the Authenticated Users group can join to the domain.
However, this “trust by default” approach can pose a security risk, an attacker can leverage this attribute for privilege escalation attacks by adding new devices to the domain.
Organizations should also consider setting the ms-DS-MachineAccountQuota attribute to 0 to make it more difficult for an attacker to leverage the attribute for attacks. Setting the attribute to 0 stops non-admin users from adding new devices to the domain, blocking the most effective method to carry out the attack’s first step and forcing attackers to choose more complex methods to acquire a suitable resource.
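A short script for this section could check the current value and set it to 0 (a sketch, assuming the ActiveDirectory module and sufficient rights):

# Check the current value (the default is 10)
Get-ADObject -Identity (Get-ADDomain).DistinguishedName -Properties 'ms-DS-MachineAccountQuota'

# Set it to 0 so only delegated admins can join computers to the domain
Set-ADDomain -Identity (Get-ADDomain) -Replace @{'ms-DS-MachineAccountQuota' = 0}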
The AdminSDHolder is an object in AD that serves as a security descriptor template for protected accounts and groups in an AD domain.
It exists in every Active Directory domain and is located in the System Partition.
Main features of AdminSDHolder:
The AdminSDHolder object manages the ACLs of members of built-in privileged AD groups.
The Security Descriptor Propagation (SDPROP) process runs every hour on the domain controller holding the PDC emulator FSMO role. This process scans the domain for protected accounts, disables rights inheritance, and applies an ACL on the object that mirrors the ACL of the AdminSDHolder container.
The main function of SDPROP is to protect highly-privileged AD accounts, ensuring that they can’t be deleted or have rights modified, accidentally or intentionally, by users or processes with less privilege.
If a user is removed from a privileged group, the adminCount attribute remains set to 1 and inheritance disabled.
Any other object that has direct access to any of these will also have its adminCount attribute set to 1 by the SDProp process, within a 60-minute interval.
A common misconfiguration is to add Service Accounts, Security Groups, or even enable inheritance, in order to complete a task or set up a new system in AD, and then forget to configure it securely again.
Review the AdminSDHolder ACL under the System container and remove anything that does not have a very good reason to be there (AAD Connect, Exchange and MSOL_ accounts are common, and should be secured with long, randomized passwords).
Understanding which rights are insecure in Active Directory is a necessary first step.
# Gets the ACL of AdminSDHolder and displays it in a GridView
$AdminSDHolder = Get-ADObject -Filter { name -like "AdminSDHolder" }
$AdminSDHolderACL = (Get-Acl "AD:$AdminSDHolder").Access
$AdminSDHolderACL | Out-GridView
Review if any IdentityReference is not known
Review that IsInherited is set to false on all ACEs (entries)
Review group members of all the groups, think twice if the access makes sense
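As a starting point for that review, a quick way to list the accounts SDProp has flagged (assuming the ActiveDirectory module) is:

# Accounts still flagged by SDProp - check whether each one should really be protected
Get-ADUser -Filter 'adminCount -eq 1' -Properties adminCount | Select-Object Name, SamAccountName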
Password managers are very useful for anyone having more than one set of credentials, and most of us do.
They reduce the chance of credential leakage to unauthorized people and are vastly superior to both post-it notes and notepad files.
However, I found myself using the graphical user interface (GUI) of my password manager daily to simply search, copy and paste secrets. The problem with navigating a GUI every day is that it’s time-consuming, and there’s room for improvement, especially if you enjoy delving into some PowerShell and/or always have a terminal open.
Summary: Password managers GUIs are slow and tedious to work with. Let’s explore an alternative that is much faster!
The solution I went with was to create a custom script, which installs and configures my PowerShell session to easily access my password manager after typing in its master password, together with a couple of functions to easily retrieve my password and copy it to the clipboard.
Setting something to your clipboard, especially a password, is a risk, since other applications can also access the clipboard, so the clipboard needs to be cleared by setting a sleep timer and overwriting the secret.
To start, I need to create a couple of parameters so the input becomes dynamic, letting me use the script regardless of what file path or database name I have on the computer.
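A sketch of what that param block could look like (the names and defaults are placeholders for my setup):

# Hypothetical script parameters - adjust to your own database path and vault name
param (
    [Parameter(Mandatory)]
    [string]$DatabasePath,

    [Parameter()]
    [string]$VaultName = 'KeePassDB'
)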
The KeePass SecretManagement extension is an awesome module that I highly recommend for any KeePass & PowerShell user.
I will use SecretManagement to enable the KeePass module and use its vault capabilities.
This will save me tons of time, and I trust the sources that the modules originate from to deliver secure and tested code, much more than I trust myself to think of all the security aspects of something that would replace the modules already offered. Another great benefit of having PowerShell as a gateway to your password manager is that you don’t need to install the vendor’s application at all; this is a big plus if you’re (like I am) a fan of minimalism.
The next part of the code installs the modules.
I set a condition to check if both modules are already present. If not, I try to install them.
Since the Install-Module cmdlet’s Name parameter accepts a string array (look for String[] in the help files), I won’t have to foreach-loop through the modules.
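Put together, the install step could look something like this (the KeePass extension module name is an assumption on my part):

# Sketch of the install step
$Modules = 'Microsoft.PowerShell.SecretManagement', 'SecretManagement.KeePass'
$Found = (Get-Module -ListAvailable -Name $Modules).Name | Sort-Object -Unique
if ($Found.Count -lt $Modules.Count) {
    # -Name accepts a string array, so one call installs both modules
    Install-Module -Name $Modules -Scope CurrentUser
}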
To speed things up even further, we want to create some smaller functions to wrap all the long cmdlets that we’d otherwise have to write, to get our secrets to the clipboard.
I say functions, because here’s where you can extend the work we’ve done even further, to work with PSCredential objects, or start a process, wait, and send the password directly to it, thus creating a sort of custom single sign-on solution. However, sticking to the subject, my function will:
Have a parameter that will be the secret that we’re looking for in our password manager
Look for the secret, use Get-SecretInfo if the name is unknown
Call the GetNetworkCredential() method and access the ‘Password’ property of the NetworkCredential object, essentially converting the SecureString to a string, and set the value to the clipboard
Start a job with a ScriptBlock, which will replace the secret with a ‘Cleared!’ string.
function Find-FSecret {
    param (
        [Parameter(Mandatory)]
        [string]$Secret
    )
    $SecretLookup = Get-Secret -Name $Secret
    if ($SecretLookup) {
        Set-Clipboard -Value $SecretLookup.GetNetworkCredential().Password
        Write-Verbose "Secret found and set to clipboard. Will auto clear in 20 seconds." -Verbose
        $null = Start-Job -ScriptBlock {
            Start-Sleep -Seconds 20
            Set-Clipboard -Value 'Cleared!' -Verbose
        }
    }
}
I then add this function to my profile, which will load it in my user’s sessions, together with an alias declaration.
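Roughly, that profile addition could look like this (the script path and alias name are placeholders of mine):

# In $PROFILE - dot-source the function and give it a short alias
. "$HOME\Scripts\Find-FSecret.ps1"
Set-Alias -Name ffs -Value Find-FSecret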
In my example, I’m using KeePass; however, this is very applicable to other password managers. In fact, the PowerShell Gallery has tons of SecretManagement modules that can be just as simple to use as in my examples.
Some examples:
BitWarden
LastPass
Keeper
CyberArk
Devolutions
Look yourself:
Find-Module *SecretManagement*
Another thing worth mentioning: you want to make sure you’re not leaking your clipboard history. There are 3rd-party applications and settings built into Windows that might do so.
There’s also the possibility for a PowerShell Transcript to catch the output of your console, so make sure you never actually paste the credentials outside of the actual logon screen. You wouldn’t want to screen-share, or share a server with someone who could look into your command-line history and find a password in clear text.
Speaking of which, you can regularly look for passwords in clear text super easily using PSSecretScanner. I would recommend looking into it after completing a project like this.
I am pleased to share that I have been working on a fun hobby project! A PowerShell module designed to facilitate Linux system analysis for PowerShell users. With its standardized noun-verb commands and object-based output, this module leverages the benefits of PowerShell to streamline analysis and information gathering on a Linux system.
Install it from the PowerShellGallery:
Install-Module linuxinfo -Verbose
View its functions:
Get-Command -Module linuxinfo
CommandType Name Version Source
----------- ---- ------- ------
Function Get-BatteryInfo 0.0.1 linuxinfo
Function Get-ComputerInfo 0.0.1 linuxinfo
Function Get-DisplayInfo 0.0.1 linuxinfo
Function Get-FileSystemHelp 0.0.1 linuxinfo
Function Get-NetworkInfo 0.0.1 linuxinfo
Function Get-OSInfo 0.0.1 linuxinfo
Function Get-SystemUptime 0.0.1 linuxinfo
Function Get-USBInfo 0.0.1 linuxinfo
There are more functions similar to the ones listed above, where linuxinfo parses useful system information and displays the output as a PSCustomObject. However, let’s take a look at a different kind of info:
Get-FileSystemHelp -All
Name Value
---- -----
root root users home directory
etc system-global configuration files
mnt temporary mount points
dev device files for hardware access
bin essential user binaries
run stores runtime information
opt optional application software packages
media mount point for external / removable devices
lost+found stores corrupted filesystem files
usr user utilities and applications
tmp temporary files
var variable files
lib system libraries and kernel modules
boot boot loader files
proc procfs - process and kernel information
sys sysfs - devices and kernel information
srv services data directories
sbin essential system binaries
home users home directories
Get basic information about the linux filesystem using PowerShell. Can be very handy if you’re coming from a Windows background.
The function supports quick navigation using the -Go parameter, and displaying a richer help message with the -Full parameter.
Currently the module has been tested on Ubuntu and Fedora, so I’m fairly confident that it works well on Debian & RHEL distributions.
However, I’ve done no testing on Arch Linux, so I’m not sure what the experience is like there. It’s also in an early stage (version 0.0.1), with improvement plans and new functionality. Be sure to hop on the GitHub repo to learn more.
I understand that the use-case for something like linuxinfo is a bit limited since Linux already has great tools for doing similar tasks. However this project is more of a personal journey into learning:
PSCustomObject is a feature in PowerShell that allows you to create structured data in a simple way.
There’s a ton to cover on the topic, but if you’re unfamiliar with it: first of all, it’s probably one of the most important things to spend time understanding in PowerShell; secondly, the PowerShell docs cover it very well.
In this blog post, I will cover a trick that I frequently use when generating structured data in the form of objects that can later be piped to Export-Csv or, even better, Export-Excel.
This trick applies when you want to create a boolean (true or false) value in your PSCustomObject variable.
Here’s an example of what I mean:
# Create an array object in a variable
$PSCustomObject = @()

# Get some data
$Process = Get-Process | Select-Object Name, Description -Unique

# Loop through data
foreach ($p in $Process) {
    # Check if condition exists
    if ($p.Description) {
        # If it does, create the "true" version of the PSCustomObject
        $PSCustomObject += [PSCustomObject]@{
            Name                  = $p.Name
            ProcessHasDescription = $true
            Description           = $p.Description
        }
    }
    else {
        # If it does not, create the "false" version
        $PSCustomObject += [PSCustomObject]@{
            Name                  = $p.Name
            ProcessHasDescription = $false
            Description           = $null
        }
    }
}

# Show results
$PSCustomObject | Select-Object -First 10
Output:
Name ProcessHasDescription Description
---- --------------------- -----------
audiodg False
Code True Visual Studio Code
CompPkgSrv True Component Package Support Server
concentr True Citrix Connection Center
conhost False
conhost True Console Window Host
crashpad_handler False
csrss False
ctfmon False
dllhost True COM Surrogate
In this example, the Get-Process command is used to generate a list of system processes. The code then checks if a description is attached to each process. This technique can be applied to generate objects for all kinds of purposes. I’ve found it particularly useful for creating reports on Active Directory, computer hardware, access reports, or any other subject that requires a report with boolean values.
In this post, I aim to keep things short and concise by letting the example do the talking. The code is commented for easy understanding. This technique can be incredibly useful for generating reports or structured data that can inform decision-making in larger processes. I hope you find it helpful and please let me know if you do.