10 suggestions to improve your next PowerShell script


Most of the time PowerShell is my favourite choice to automate processes and tasks. To improve the maintainability of my scripts I usually try to follow some standards combined with a clean scripting style. In this post I want to show you 10 suggestions to improve your next PowerShell script. I've ordered the suggestions to follow the structure of an actual PowerShell script, from the very first line to the last.

1. Script prerequisites

Your PowerShell script might need specific modules or elevated user rights to run. The #Requires statement ensures that these prerequisites are met before the actual script gets executed, so you don't need to implement your own checks to verify prerequisites.

Simply place the #Requires statement on the very first line of your script. Find out more about the #Requires statement.

Modules

Another benefit of specifying the modules within the requires statement is that scripts hosted on the PowerShell Gallery automatically install the modules mentioned in the #Requires list.

To make sure a specific module is installed use:

#Requires -Modules Microsoft.Graph.Intune

To ensure a module with a specific version is available:

#Requires -Modules @{ ModuleName = 'Microsoft.Graph.Authentication'; ModuleVersion = '0.7.0' }

Script running as administrator

If your script requires elevation, simply add:

#Requires -RunAsAdministrator

2. Get access to common script parameters

You might have stumbled over parameters like -Verbose, -WhatIf, -Force or -Debug. These become available when you declare the

[CmdletBinding()]

attribute at the beginning of your script. It gives you access to those common parameters. Learn more about the CmdletBindingAttribute.
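A minimal sketch of such a script header (the parameter and greeting are just illustrative); calling the script with -Verbose then surfaces the Write-Verbose message:

```powershell
[CmdletBinding()]
param (
    [string]$Name = "World"
)

# Only shown when the script is invoked with -Verbose
Write-Verbose "About to greet '$Name'"
Write-Output "Hello $Name"
```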

3. Use appropriate output streams

PowerShell offers multiple output streams to present information and output data. Using the designated streams allows proper error handling and comes with built-in functionality to show or hide certain information, like debug or verbose output. Additionally, it makes the script easier to understand and maintain, because not everyone associates the same meaning with a specific output color you choose:

  • Write-Host -ForegroundColor Red "Error occurred" -> Write-Error
  • Write-Host -ForegroundColor Green "Some verbose output" -> Write-Verbose
  • Write-Host -ForegroundColor Yellow "Some debug output" -> Write-Debug

By default the Verbose and Debug streams are not shown. They are displayed either by modifying $VerbosePreference / $DebugPreference or by passing the -Verbose / -Debug switch to a script or function.
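To see the streams in action, here is a small sketch with a hypothetical Invoke-Demo function; only the warning and error appear unless the matching switch is passed:

```powershell
function Invoke-Demo {
    [CmdletBinding()]
    param ()

    Write-Verbose "Connecting to the service..."   # hidden unless -Verbose is passed
    Write-Debug   "Raw response payload..."        # hidden unless -Debug is passed
    Write-Warning "The service responded slowly"   # always shown
    Write-Error   "The request failed"             # non-terminating error
}

# Shows the verbose message in addition to the warning and error
Invoke-Demo -Verbose
```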

4. Avoid overlong lines with splatting

Instead of writing very long lines of code that probably exceed your PowerShell editor's screen width, you can use splatting to pass your arguments as a hashtable. This also improves the readability of your script and simplifies changing parameters, because you only modify the hashtable instead of the call itself. Another benefit of splatting is that you reference the original parameter names of the cmdlet, so it is also clear what each parameter is used for. This is not always the case if you declare your own variables and pass them to the cmdlet parameters.

Instead of:

# Usually these variables are placed within the first line of the script

$mailSender = "[email protected]"
$mailRecipients = @("[email protected]")
$smtpServer = "smtp.office365.com"
$mailPort = 587
$mailSubject = "Question regarding your blog"
$mailTemplate = "<h1>Hello</h1>"
$mailCredentials = Get-Credential

<# In between comes a lot of other script stuff #>

# And a hundred lines later you will access the variables
Send-MailMessage -From $mailSender -To $mailRecipients -SmtpServer $smtpServer -Port $mailPort -UseSsl -Subject $mailSubject -Body $mailTemplate -Credential $mailCredentials -BodyAsHtml

You could use:

$mailConfiguration = @{
    From = "[email protected]"
    To = @("[email protected]") 
    SmtpServer = "smtp.office365.com"
    Port = 587 
    UseSsl = $true
    Subject = "Question regarding your blog"
    Body = "<h1>Hello</h1>"
    Credential = Get-Credential
    BodyAsHtml = $true
}
Send-MailMessage @mailConfiguration

Note that the keys of the hashtable need to match the parameter names of your cmdlet.

Read more about splatting

5. Parameter declaration

Declare parameters where you expect a value as mandatory, and others as optional with a default value where applicable. If you want to perform additional validation, the parameter section with a ValidateScript attribute is the right place, because the parameters are evaluated before the actual script execution.

Here’s a little example which combines a mandatory parameter and a ValidateScript to test a path:

[CmdletBinding()]
param (
    [Parameter(Mandatory)]
    [string]
    [ValidateScript(
        {
            $path = Test-Path $_
            if (-not $path){
                throw "Path '$_' does not exist!"
            }
            return $path
        }
    )]
    $FilePath
)

Read more about ValidateScript

6. Advanced Data Structures

Advanced data structures allow you to group and structure information in your scripts. As a little reference, let's have a closer look at custom objects and hash tables.

Custom Objects

A custom object holds properties and values, both of which can be altered after the object is created. I mainly use them to output data for reports, like in my Conditional Access Documentation PowerShell script.

$myObject = [PSCustomObject]@{
    Property = "Value"
}

You can store an array of custom objects by initializing an empty array and adding the objects:

$myObjectArray = @()

$myObjectArray += [PSCustomObject]@{
    Property = "Value1"
}

$myObjectArray += [PSCustomObject]@{
    Property = "Value2"
}
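One caveat worth knowing: += copies the whole array on every addition, so for larger collections a generic list is usually faster. A small sketch:

```powershell
# A generic list grows in place instead of recreating the array each time
$list = [System.Collections.Generic.List[object]]::new()

$list.Add([PSCustomObject]@{ Property = "Value1" })
$list.Add([PSCustomObject]@{ Property = "Value2" })
```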

Hash tables

Hash tables store information in a key-value format. They are very efficient and useful if you want to perform mappings, like a company code to a full company name. Hash tables are more efficient for retrieving or finding data than an array of PSCustomObjects.

Other typical use cases for hash tables are request parameters for a web request, or splatting as mentioned in suggestion #4.

$myhashtable = @{
    Key1 = "Value1"
    Key2 = "Value2"
}

Iterating over a hashtable

Iterating over a hashtable (processing each entry, which consists of a single key and value) works a little differently than iterating over an array, which may be unfamiliar if you haven't worked with object-oriented programming languages.

In your foreach loop you need to call the GetEnumerator() method, which enumerates all key-value pairs:

foreach ($entry in $myhashtable.GetEnumerator()){
    Write-Output "Accessing the key $($entry.Key)"
    Write-Output "Accessing the value $($entry.Value)"
    Write-Output "`n"
}

Accessing a specific hashtable value by key

$myhashtable["Key1"]

7. Comparing Objects

A common use case is comparing two arrays of objects. PowerShell offers a built-in cmdlet for this:

Compare-Object -ReferenceObject $myFirstObjectArray -DifferenceObject $mySecondObjectArray -Property ObjectId

Make sure to specify the -Property parameter so the objects are compared by that property instead of by their string representation.
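A small sketch with two hypothetical arrays; the SideIndicator column in the result tells you on which side an object was found:

```powershell
$myFirstObjectArray = @(
    [PSCustomObject]@{ ObjectId = 1; Name = "Alice" }
    [PSCustomObject]@{ ObjectId = 2; Name = "Bob" }
)
$mySecondObjectArray = @(
    [PSCustomObject]@{ ObjectId = 2; Name = "Bob" }
    [PSCustomObject]@{ ObjectId = 3; Name = "Carol" }
)

# SideIndicator '<=' marks objects only in the reference array,
# '=>' marks objects only in the difference array
Compare-Object -ReferenceObject $myFirstObjectArray -DifferenceObject $mySecondObjectArray -Property ObjectId
```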

Read more about Compare-Object

8. Logging

Instead of writing complex logging functions, you can easily redirect all PowerShell output streams to a file with the Start-Transcript cmdlet. This captures all PowerShell output.

Simply call Start-Transcript at the beginning of your script:

Start-Transcript -Path $(Join-Path $env:TEMP "ExampleScript.log")

And don’t forget to stop it at the end:

Stop-Transcript
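To make sure the transcript is always closed, even when the script fails, one option is wrapping the script body in a try/finally construct (a sketch):

```powershell
Start-Transcript -Path (Join-Path $env:TEMP "ExampleScript.log")
try {
    # ... the actual script work goes here ...
    Write-Output "Doing the work"
}
finally {
    # Runs even if the script throws, so the transcript file is always closed
    Stop-Transcript
}
```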

9. Error Handling

This might be a rather controversial point, but don't be afraid of errors in your script that you can't control. If your script requires a connection to a service like Azure Active Directory or an external API, just forward the error to the caller and make it a terminating error.

By default, PowerShell does not stop script execution when a non-terminating error occurs. This behaviour is controlled by the $ErrorActionPreference variable, which has a default value of Continue.

For example, if your script contains a line of code that is crucial, like retrieving all users from Azure Active Directory, you can tell PowerShell to stop the script execution by adding the -ErrorAction parameter.

So instead of investing a lot of time in connection checks, simply halt the script if the user hasn't established a connection:

$allUsers = Get-MsolUser -All -ErrorAction Stop

Because the built-in error message is quite clear:

Get-MsolUser : You must call the Connect-MsolService cmdlet before calling any other cmdlets.
At line:1 char:1
+ Get-MsolUser -All
+ ~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OperationStopped: (:) [Get-MsolUser], MicrosoftOnlineException
    + FullyQualifiedErrorId : Microsoft.Online.Administration.Automation.MicrosoftOnlineException,Microsoft.Online.Adm
   inistration.Automation.GetUser

If you want to create your own terminating error, you can halt the script by calling throw followed by your custom message.
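For example (where $connection is a placeholder for whatever your script uses to track the session state):

```powershell
# Halt the script with a terminating error if no connection was established
if (-not $connection) {
    throw "Not connected - run Connect-MsolService first."
}
```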

Dependent actions

Assuming you have a couple of actions that depend on the previous action's success, these are ideally gathered in a try {} catch {} construct. This often applies to loops where you perform bulk operations, like creating something and then modifying its attributes.
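A sketch of that pattern, using hypothetical cmdlets: if creating the group fails, the dependent member addition is skipped for that iteration and the loop continues with the next item.

```powershell
foreach ($team in $teams) {
    try {
        # New-DemoGroup / Add-DemoGroupMember are placeholder cmdlets;
        # -ErrorAction Stop turns failures into terminating errors the catch block can handle
        $group = New-DemoGroup -Name $team.Name -ErrorAction Stop
        Add-DemoGroupMember -GroupId $group.Id -Members $team.Members -ErrorAction Stop
    }
    catch {
        Write-Error "Processing team '$($team.Name)' failed: $_"
    }
}
```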

10. Use the right workflow for your scripts

For the last suggestion I gathered some non-scripting essentials which are as important as writing good PowerShell scripts.

Documentation

Everyone knows documentation is important so let’s keep this one short:

  • Add inline script documentation OR documentation in a readme.md file for your script (personally I prefer readme files with markdown)
  • Illustrate complex scripts and interfaces with a diagram (your colleagues will thank you)
  • Document requirements for your scripts, like an Azure AD app registration

And here's a personal (maybe unpopular) opinion:

Comments like this don't provide any value, so don't write them:

# Connect to Azure AD
Connect-AzureAD

Take care of secrets and credential assets

Instead of juggling with secure strings and other fancy methods to keep your credentials 'secure', better use an appropriate service like Azure Key Vault. This also gives you more flexibility to rotate credentials, because they are not hard-coded in a script.
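A minimal sketch of retrieving a secret at runtime, assuming the Az.Accounts and Az.KeyVault modules and an existing vault (the vault and secret names are placeholders):

```powershell
# Sign in and read the secret from Azure Key Vault instead of hard-coding it
Connect-AzAccount
$apiKey = Get-AzKeyVaultSecret -VaultName "my-vault" -Name "MyApiKey" -AsPlainText
```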

If you are new to working with PowerShell and git, also make sure to never ever check in secrets or tokens into version control. Once pushed, they remain in the commit history of your repository.

Use Visual Studio Code

Visual Studio Code is a really powerful editor for PowerShell scripts. It offers a lot of autocomplete features to help you with declaring functions, parameters and script blocks.

  • Support for git version control
  • Auto format options for scripts
  • Search and replace features
  • Comes with PSScriptAnalyzer, which shows handy recommendations to improve the readability and functionality of your scripts

Get VS Code here

Use Version Control

This one is not directly about scripting but about how and where you store your scripts. Storing PowerShell scripts as loose files on a normal file system just feels wrong. By using a version control system like git you get a full history, multiple people can work on the same scripts, and if something goes wrong you can roll back your changes.

Instead of:

scripts/
├── intunescript/
│   ├── retireIntuneDevice1.0.ps1
│   ├── retireIntuneDevice1.1.ps1
│   ├── retireIntuneDevice_final.ps1
│   ├── retireIntuneDevice_working.ps1
│   └── retireIntuneDevice_final.ps1.old

Get familiar with:

git add "retireIntuneDevice.ps1"
git commit -m "Added support for a CSV list which specifies devices to be retired"
git push

By the way, Azure DevOps comes with five free user licenses to get started with git repositories and CI/CD pipelines.

Final words

Of course not every point mentioned applies to all scripts, and the list is not exhaustive, but hopefully you've picked up a few ideas to rock your next PowerShell script(s). The listed suggestions are not tied to a specific PowerShell version.

Keep on PowerShelling and remember: Practice makes perfect.
