My take on the Verified Effective self-assessment.

In the latest PowerShell.org TechLetter, Don released a slimmed-down version of a retired scenario for the Verified Effective exam, which is offered at the PowerShell Summits. I decided to type up a solution for it and share my thought process. If you haven’t worked through it yourself yet, I’d recommend doing that first.

The exam takes the form of a PowerShell transcript, which shows you the commands that someone entered at the console, and the output from those commands. Your job is to write a PowerShell function that produces exactly the same output if those same commands are run (with some minor exceptions, such as your computer’s name or serial number). Here’s the transcript for the sample self-assessment.

<#
**********************
Windows PowerShell transcript start
Start time: 20150602172842
Username  : COMPANY\Administrator 
Machine	  : WIN81 (Microsoft Windows NT 6.3.9600.0) 
**********************
Transcript started, output file is C:\example1.txt
PS C:\> help Get-CorpSysInfo
PS C:\> Get-CorpSysInfo -ComputerName win81

BIOSSerial                    ComputerName                                      SPVersion OSVersion                    
----------                    ------------                                      --------- ---------                    
VMware-56 4d 09 95 89 20 e... win81                                                     0 6.3.9600                     


PS C:\> Get-CorpSysInfo -ComputerName win81 -Protocol dcom

BIOSSerial                    ComputerName                                      SPVersion OSVersion                    
----------                    ------------                                      --------- ---------                    
VMware-56 4d 09 95 89 20 e... win81                                                     0 6.3.9600                     


PS C:\> Get-CorpSysInfo -ComputerName win81 -Protocol cim
Get-CorpSysInfo : Cannot validate argument on parameter 'Protocol'. The argument "cim" does not belong to the set 
"Dcom,Wsman" specified by the ValidateSet attribute. Supply an argument that is in the set and then try the command 
again.
At line:1 char:47
+ Get-CorpSysInfo -ComputerName win81 -Protocol cim
+                                               ~~~
    + CategoryInfo          : InvalidData: (:) [Get-CorpSysInfo], ParameterBindingValidationException
    + FullyQualifiedErrorId : ParameterArgumentValidationError,Get-CorpSysInfo
 
PS C:\> Get-CorpSysInfo -ComputerName win81 -Verbose
VERBOSE: Attempting connection to win81 over Wsman
VERBOSE: Operation '' complete.
VERBOSE: Perform operation 'Enumerate CimInstances' with following parameters, 
''namespaceName' = root\cimv2,'className' = Win32_OperatingSystem'.
VERBOSE: Operation 'Enumerate CimInstances' complete.
VERBOSE: Perform operation 'Enumerate CimInstances' with following parameters, 
''namespaceName' = root\cimv2,'className' = Win32_BIOS'.
VERBOSE: Operation 'Enumerate CimInstances' complete.

BIOSSerial                    ComputerName                                      SPVersion OSVersion                    
----------                    ------------                                      --------- ---------                    
VMware-56 4d 09 95 89 20 e... win81                                                     0 6.3.9600                     


PS C:\> Get-CorpSysInfo -ComputerName win81,NOTONLINE -Verbose
VERBOSE: Attempting connection to win81 over Wsman
VERBOSE: Operation '' complete.
VERBOSE: Perform operation 'Enumerate CimInstances' with following parameters, 
''namespaceName' = root\cimv2,'className' = Win32_OperatingSystem'.
VERBOSE: Operation 'Enumerate CimInstances' complete.
VERBOSE: Perform operation 'Enumerate CimInstances' with following parameters, 
''namespaceName' = root\cimv2,'className' = Win32_BIOS'.
VERBOSE: Operation 'Enumerate CimInstances' complete.

BIOSSerial                    ComputerName                                      SPVersion OSVersion                    
----------                    ------------                                      --------- ---------                    
VMware-56 4d 09 95 89 20 e... win81                                                     0 6.3.9600                     
VERBOSE: Attempting connection to NOTONLINE over Wsman
New-CimSession : WinRM cannot process the request. The following error with errorcode 0x80090311 occurred while using 
Kerberos authentication: There are currently no logon servers available to service the logon request.  
 Possible causes are:
  -The user name or password specified are invalid.
  -Kerberos is used when no authentication method and no user name are specified.
  -Kerberos accepts domain user names, but not local user names.
  -The Service Principal Name (SPN) for the remote computer name and port does not exist.
  -The client and remote computers are in different domains and there is no trust between the two domains.
 After checking for the above issues, try the following:
  -Check the Event Viewer for events related to authentication.
  -Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting or 
use HTTPS transport.
 Note that computers in the TrustedHosts list might not be authenticated.
   -For more information about WinRM configuration, run the following command: winrm help config.
At C:\Program Files\WindowsPowerShell\Modules\CorpTools\CorpTools.psm1:25 char:28
+                 $session = New-CimSession -ComputerName $computer -SessionOption ...
+                            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [New-CimSession], CimException
    + FullyQualifiedErrorId : HRESULT 0x8007051f,Microsoft.Management.Infrastructure.CimCmdlets.NewCimSessionCommand
    + PSComputerName        : NOTONLINE
 
VERBOSE: Operation '' complete.
WARNING: Failed establishing Wsman session to NOTONLINE


PS C:\> 'win81','localhost' | Get-CorpSysInfo -Verbose
VERBOSE: Attempting connection to win81 over Wsman
VERBOSE: Operation '' complete.
VERBOSE: Perform operation 'Enumerate CimInstances' with following parameters, 
''namespaceName' = root\cimv2,'className' = Win32_OperatingSystem'.
VERBOSE: Operation 'Enumerate CimInstances' complete.
VERBOSE: Perform operation 'Enumerate CimInstances' with following parameters, 
''namespaceName' = root\cimv2,'className' = Win32_BIOS'.
VERBOSE: Operation 'Enumerate CimInstances' complete.

BIOSSerial                    ComputerName                                      SPVersion OSVersion                    
----------                    ------------                                      --------- ---------                    
VMware-56 4d 09 95 89 20 e... win81                                                     0 6.3.9600                     
VERBOSE: Attempting connection to localhost over Wsman
VERBOSE: Operation '' complete.
VERBOSE: Perform operation 'Enumerate CimInstances' with following parameters, 
''namespaceName' = root\cimv2,'className' = Win32_OperatingSystem'.
VERBOSE: Operation 'Enumerate CimInstances' complete.
VERBOSE: Perform operation 'Enumerate CimInstances' with following parameters, 
''namespaceName' = root\cimv2,'className' = Win32_BIOS'.
VERBOSE: Operation 'Enumerate CimInstances' complete.
VMware-56 4d 09 95 89 20 e... localhost                                                 0 6.3.9600                     


PS C:\> Get-CorpSysInfo -ComputerName win81,NOTONLINE -Verbose -Protocol dcom
VERBOSE: Attempting connection to win81 over dcom
VERBOSE: Operation '' complete.
VERBOSE: Perform operation 'Enumerate CimInstances' with following parameters, 
''namespaceName' = root\cimv2,'className' = Win32_OperatingSystem'.
VERBOSE: Operation 'Enumerate CimInstances' complete.
VERBOSE: Perform operation 'Enumerate CimInstances' with following parameters, 
''namespaceName' = root\cimv2,'className' = Win32_BIOS'.
VERBOSE: Operation 'Enumerate CimInstances' complete.

BIOSSerial                    ComputerName                                      SPVersion OSVersion                    
----------                    ------------                                      --------- ---------                    
VMware-56 4d 09 95 89 20 e... win81                                                     0 6.3.9600                     
VERBOSE: Attempting connection to NOTONLINE over dcom
New-CimSession : The RPC server is unavailable. 
At C:\Program Files\WindowsPowerShell\Modules\CorpTools\CorpTools.psm1:25 char:28
+                 $session = New-CimSession -ComputerName $computer -SessionOption ...
+                            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [New-CimSession], CimException
    + FullyQualifiedErrorId : HRESULT 0x800706ba,Microsoft.Management.Infrastructure.CimCmdlets.NewCimSessionCommand
    + PSComputerName        : NOTONLINE
 
VERBOSE: Operation '' complete.
WARNING: Failed establishing dcom session to NOTONLINE


PS C:\> Stop-Transcript
**********************
Windows PowerShell transcript end
End time: 20150602173025
**********************
#>

Now, some of that output is pretty detailed. One of the errors even has a partial line of code that you can copy and paste, showing that there are $session and $computer variables in play, and the New-CimSession command is being used. You can see that the information is coming from the Win32_OperatingSystem and Win32_BIOS classes. There’s a fair bit of verbose / warning output to duplicate (though as you’ll see, quite a bit of it is automatic.) We can see exactly what the ValidateSet options on the Protocol parameter should be, and that it should default to Wsman. As long as you’re familiar with how these features work already (which is basically what the exam is testing), that information should jump right out of the transcript.

I can tell you that if you’re running the latest versions of the PowerShell v5 preview, some of the output is going to look slightly different. In the current v5 version, PowerShell tries to auto-size the table output by briefly delaying the first output from displaying (for like a quarter of a second or something.) This means that where you see output interspersed with Verbose lines in the transcript, you might see a bunch of Verbose followed by a bunch of Output instead. That’s fine, don’t worry about it. (In the real exam, I believe they tell you exactly what version of PowerShell to be running, just in case. If you need to, create a VM without all your more recent goodies, in preparation for the test.)

So, for starters, here’s a gist link with how I would write this function if I didn’t have to try to exactly match this transcript: https://gist.github.com/dlwyatt/d4f7504c21afdd341473#file-attempt1-ps1

It’s pretty straightforward code for a function that supports multiple values to a parameter either on the command line or via the pipeline. You declare the parameter as an array (in this case, of strings), and then have a foreach loop in the function’s Process block. I put the call to New-CimSessionOption in the Begin block, since it only needs to happen once anyway. Inside the Process block’s foreach loop, there’s a Try block that contains the steps required to collect the data from a remote computer, and output a PowerShell custom object with the properties shown in the transcript. Any errors are output as non-terminating errors in the Catch block, and the session is cleaned up in the Finally block, if it was successfully opened. (Note that $session is set to $null just before the Try block; this is a good habit to have, when using that sort of cleanup code in a Finally block.)
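The shape described above looks roughly like this. (This is a sketch rather than the actual gist; the property names and class names come from the transcript, while the variable names and exact cmdlet options are my own guesses.)

```powershell
function Get-CorpSysInfo {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory, ValueFromPipeline)]
        [string[]] $ComputerName,

        [ValidateSet('Dcom', 'Wsman')]
        [string] $Protocol = 'Wsman'
    )

    begin {
        # Only needs to happen once, so it lives in the Begin block.
        $option = New-CimSessionOption -Protocol $Protocol
    }

    process {
        foreach ($computer in $ComputerName) {
            # Reset before the Try block so the Finally block can tell
            # whether a session was actually opened for this computer.
            $session = $null

            try {
                $session = New-CimSession -ComputerName $computer -SessionOption $option -ErrorAction Stop

                $os   = Get-CimInstance -CimSession $session -ClassName Win32_OperatingSystem
                $bios = Get-CimInstance -CimSession $session -ClassName Win32_BIOS

                # Output a custom object with the properties shown in the transcript.
                [pscustomobject] @{
                    BIOSSerial   = $bios.SerialNumber
                    ComputerName = $computer
                    SPVersion    = $os.ServicePackMajorVersion
                    OSVersion    = $os.Version
                }
            }
            catch {
                # Surface failures as non-terminating errors.
                Write-Error -ErrorRecord $_
            }
            finally {
                if ($null -ne $session) { Remove-CimSession -Session $session }
            }
        }
    }
}
```

Defined this way, both `Get-CorpSysInfo -ComputerName win81,NOTONLINE` and `'win81','localhost' | Get-CorpSysInfo` end up in the same foreach loop inside the Process block.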

In the real world, I’d be perfectly happy with that code. I’d just need to add a comment-based help block, and I’d be done. It already has a pretty decent amount of Verbose output from the cmdlets themselves.

However, in this case, the job is to match the transcript exactly, and after running those commands, I can see that it misses the mark in a few ways. This is where the attention to detail (and knowledge of some of PowerShell’s behavior, such as with error handling) kicks in, and I suspect that quite a few people might wind up failing here. Here’s what I noticed when running those commands with my original code:

  • There’s no Verbose output showing “Attempting connection to win81 over Wsman”, but the rest of the automatic Verbose output looks pretty good.
  • There’s no Warning output on failed connections.
  • The error output looks quite a bit different. Instead of showing that the error came from New-CimSession, it shows Get-CorpSysInfo instead. This is due to how I did my error handling, with the try/catch and Write-Error construct.
  • There’s a sneaky call to “help Get-CorpSysInfo” at the beginning of the transcript, with completely blank output. I have a suspicion that the original version of this exam (which I have not seen) actually included help output, and the test-taker would have to write a comment-based help block. However, in the interest of reproducing the sample transcript, I’ll try to find a way to make the blank output happen.

Adding calls to Write-Verbose and Write-Warning is easy enough. I don’t like having to change my error-handling habits to match a transcript, but oh well, that’s the task at hand. Here’s attempt #2: https://gist.github.com/dlwyatt/d4f7504c21afdd341473#file-attempt2-ps1

Now it’s just about done. By allowing the errors from New-CimSession to be displayed normally (rather than using -ErrorAction Stop, and calling Write-Error in the Catch block), the error output matches the transcript. The new calls to Write-Verbose and Write-Warning also match. The only thing left, really, is that blank “help” output.

I tried to get clever on that. I thought, maybe if I put this into a script module, and in the script module, created an en-US\Get-CorpSysInfo.help.txt file that was blank, maybe that would fool the help system. No such luck; it still just displays the default help output. I couldn’t think of a decent way of doing this… so I resorted to cheese. Wait for it:

function help { }

Yeah, that’s dumb. But it worked!

If I were taking the actual test, at this point, I’d put my code into C:\Program Files\WindowsPowerShell\Modules\CorpTools\CorpTools.psm1 (a path which you can see in the various bits of Error output).

Posted in PowerShell, Professional | 2 Comments

Building a hosted TeamCity server

Build.powershell.org went live on Friday!  This is something I’ve been wanting to do for months, but we had to wait for enough funding to pay for the necessary cloud compute time.  (Merci beaucoup, Chef!)

I thought it might be interesting to share the story of how this came to be.  The Pester project was using another community TeamCity instance (teamcity.codebetter.com) to run its builds.  However, there were a couple of things nagging at me in that environment.  One, they only had Windows Server 2012 build agents with PowerShell 3.0 installed.  We could technically test with “PowerShell.exe -Version 2.0” to get a little bit of increased test coverage, but we had no way of automatically testing on PowerShell v4 and v5. Also, their build agents (which are permanent, and used by all of the projects’ builds) are running the build code as LocalSystem, which set off my security alarms in a big way.  Pester’s builds automatically publish the module to Chocolatey, Nuget.org and PowerShellGet under the right conditions, and this means that our API keys for those services could easily be stolen by anyone who decided to run some malicious code in their build.

Another popular community CI service, Appveyor, already addresses the security portion of those concerns.  They use each agent only once, then delete it and reimage the thing.  Even if someone runs malicious code in their build, it doesn’t get a chance to steal anyone else’s secrets.  However, they still don’t have a full suite of agents for testing PowerShell modules; their build agents have PowerShell v4 only.  (If I could snap my fingers and create more agent capabilities in AppVeyor, I’d just be using their service for my open-source projects instead of building something new.  It’s that good.)

So, with these concerns in mind, I set out to get us a CI environment tailored for PowerShell.  I went with TeamCity, since I’m most familiar with that product, and they offer free licenses to open-source projects.

To cover my security concerns, I took the same approach as Appveyor.  TeamCity already has some cloud integration for build agents, though it’s only really working with AWS at the moment.  (I first attempted to use their newer Azure plugin, but it doesn’t work very well yet.)  With the AWS plugin, you can define a cloud profile which includes any number of images or instances, tell it the maximum number of concurrent agents you want, and TeamCity will automatically take care of creating / deleting the cloud instances as needed.  They even already had an option for terminating the instance after the first build completes, which is exactly what I wanted, for security reasons.  As an added bonus, this keeps our costs down (which is important since we’re running this service on donations.)  During idle periods where no builds are running, we only have to pay for the TeamCity server itself, not any build agents.

After that, it was just a matter of creating a bunch of custom AMIs.  For starters, I’ve set up four:  Windows 2008 R2 with PowerShell v2, Windows 2012 with PowerShell v3, Windows 2012 R2 with PowerShell v4, and Windows 2012 R2 with PowerShell v5 (currently the April 2015 preview).  All of the build agent images have Pester 3.3.9 (latest release) installed, and the PowerShell v5 image has the Nuget bootstrapping for OneGet already done.  If people request other OS / PS combinations, or additional software on the build agents, I can easily make that happen, so it’s my hope that this will become the most PowerShell-friendly option out there for CI.

Posted in Uncategorized | Leave a comment

Interesting “gotcha” with dynamic parameters and pipeline input

I was working on a tricky problem with Pester’s mocking functionality today, and came across something that I didn’t previously know about how dynamic parameters work in PowerShell. Dynamic parameters are generated before (and only before) pipeline input is processed.

This means that if your dynamicparam block depends on the values of a parameter that happens to accept pipeline input, you won’t get your dynamic parameters when a user chooses to use that pipeline input functionality.

This is true of both compiled cmdlets and advanced functions. For example:

# This works fine:
Get-ChildItem -Path 'Cert:\' -CodeSigningCert -Recurse

# And this generates an error that the CodeSigningCert
# parameter cannot be found:
'Cert:\CurrentUser' | Get-ChildItem -CodeSigningCert -Recurse
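You can reproduce the same behavior with a minimal advanced function. (Everything here is illustrative; the parameter names just mimic the Get-ChildItem example above.)

```powershell
function Test-DynamicParam {
    [CmdletBinding()]
    param (
        [Parameter(ValueFromPipeline)]
        [string] $Path
    )

    dynamicparam {
        # When input arrives via the pipeline, $Path has not been bound yet,
        # so this condition is false and no dynamic parameter is created.
        if ($PSBoundParameters['Path'] -like 'Cert:*') {
            $attributes = New-Object 'System.Collections.ObjectModel.Collection[System.Attribute]'
            $attributes.Add((New-Object System.Management.Automation.ParameterAttribute))

            $parameter = New-Object System.Management.Automation.RuntimeDefinedParameter -ArgumentList 'CodeSigningCert', ([switch]), $attributes

            $dictionary = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary
            $dictionary.Add('CodeSigningCert', $parameter)
            $dictionary
        }
    }

    process {
        $PSBoundParameters.ContainsKey('CodeSigningCert')
    }
}

# Bound on the command line: the dynamic parameter exists, so this outputs True.
Test-DynamicParam -Path 'Cert:\CurrentUser' -CodeSigningCert

# Same value via the pipeline: binding fails, because the dynamicparam block
# ran before $Path was available.
try { 'Cert:\CurrentUser' | Test-DynamicParam -CodeSigningCert }
catch { "Binding failed: $($_.Exception.Message)" }
```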
Posted in Uncategorized | Leave a comment

Script Block properties in DSC resources

Here’s a little-known feature of Desired State Configuration. First, notice that the Script resource’s properties are all strings, not ScriptBlocks, in its schema:

Get-DscResource Script -Syntax

<#
Script [String] #ResourceName
{
    GetScript = [string]
    SetScript = [string]
    TestScript = [string]
    [Credential = [PSCredential]]
    [DependsOn = [string[]]]
}
#>

However, even though they’re strings in the MOF file, PowerShell will allow you to assign ScriptBlock objects to those properties in your configuration function. Also, if you do this, it will automatically handle $using: scoped variables, serializing them to XML and injecting them into your MOF file. (This is very similar to how PowerShell Remoting works when you use the $using: scope modifier.)

configuration ScriptTest {
    node localhost {
       $MyHashtable = @{
           Key1 = 'This was created'
           Key2 = 'on the machine where the MOF was compiled'
       }
       Script test {
           GetScript = {$using:MyHashtable}
           TestScript = {$true}
           SetScript = { }
       }
    }
}

ScriptTest -OutputPath $env:temp\ScriptTest

<#
localhost.mof snippet, line breaks added by me for readability

 GetScript = "$MyHashtable = [System.Management.Automation.PSSerializer]::Deserialize('<Objs Version=\"1.1.0.1\" xmlns=\"http://schemas.microsoft.com/powershell/2004/04\">
\n  <Obj RefId=\"0\">
\n    <TN RefId=\"0\">
\n      <T>System.Collections.Hashtable</T>
\n      <T>System.Object</T>
\n    </TN>
\n    <DCT>
\n      <En>
\n        <S N=\"Key\">Key1</S>
\n        <S N=\"Value\">This was created</S>
\n      </En>
\n      <En>
\n        <S N=\"Key\">Key2</S>
\n        <S N=\"Value\">on the machine where the MOF was compiled</S>
\n      </En>
\n    </DCT>
\n  </Obj>
\n</Objs>')
\n$MyHashtable";
#>

The cool thing is, this script block serialization feature may exist because of the Script resource, but it’s actually usable by any DSC resource that wants to accept executable code as a parameter. Just define your resource’s property as a String in your schema and *-TargetResource methods, and assign a ScriptBlock object to that property in your configuration script. When DSC compiles the configuration to a MOF, it will take care of the serialization and conversion to string for you. (In your resource’s methods, you’ll need to convert the string back into a ScriptBlock object by using the [scriptblock]::Create() method.)
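On the resource side, that conversion back is a one-liner. A sketch of what a *-TargetResource method might do with such a property (the property name here is invented):

```powershell
function Set-TargetResource {
    [CmdletBinding()]
    param (
        # Declared as String in the resource's schema; the compiled MOF
        # delivers the script block's code (including any serialized
        # $using: preamble, as shown above) as plain text.
        [Parameter(Mandatory)]
        [string] $OnChangeAction
    )

    # Convert the string back into an invocable script block, then run it.
    $scriptBlock = [scriptblock]::Create($OnChangeAction)
    & $scriptBlock
}
```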

Posted in PowerShell | Tagged | Leave a comment

Manage Local Group Policy Objects from PowerShell and Desired State Configuration

Ever since DSC was first released, people have been asking how they can use it to manage user-specific settings. For the most part, the answer has been: don’t do that. DSC resources execute as LocalSystem, and are intended to manage system-wide settings.

However, that’s not the whole story. The local Group Policy objects on a computer can be treated as a system-wide setting, and can also be used to enforce user-specific policies. There’s a problem, though: those settings tend to be stored in the Administrative Templates section of the registry, which is saved in a registry.pol file on disk. There are no command-line utilities or APIs in Windows for reading or writing these registry.pol files, and the GroupPolicy PowerShell module cmdlet Set-GPRegistryValue only works for domain GPOs, not local. I’d solved the problem of managing Registry.pol files from a script ages ago, first as a VBScript, and later in C#. The .NET code wasn’t terribly user-friendly compared to a PowerShell cmdlet, but it worked, and I used it extensively in my previous job.

Recently, when this question of managing user-specific settings and/or local GPOs came up again, I decided to write some PowerShell wrappers around that C# class, and DSC resources as well.

The result is the new PolicyFileEditor module, which can be found on GitHub or via PowerShellGet. The module exposes four commands:

Get-PolicyFileEntry [-Path] <string> [-Key] <string> [-ValueName] <string> [<CommonParameters>]
Get-PolicyFileEntry [-Path] <string> -All [<CommonParameters>]

Set-PolicyFileEntry [-Path] <string> [-Key] <string> [-ValueName] <string> [-Data] <Object> [-Type <RegistryValueKind>] [-NoGptIniUpdate] [-WhatIf] [-Confirm] [<CommonParameters>]

Remove-PolicyFileEntry [-Path] <string> [-Key] <string> [-ValueName] <string> [-NoGptIniUpdate] [-WhatIf] [-Confirm] [<CommonParameters>]

Update-GptIniVersion [-Path] <string> [-PolicyType] <string[]> [-WhatIf] [-Confirm] [<CommonParameters>]
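Typical usage looks like this. (This assumes the module is installed and that you're running elevated; the path is the standard location of the local machine GPO's registry.pol, and the value mirrors the RDP example later in this post.)

```powershell
# The local machine GPO's registry.pol (user policies live under
# GroupPolicy\User instead).
$machinePol = "$env:SystemRoot\System32\GroupPolicy\Machine\registry.pol"

$entryParams = @{
    Path      = $machinePol
    Key       = 'SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
    ValueName = 'fSingleSessionPerUser'
}

# Write (or update) the policy value; -Type takes a RegistryValueKind name.
Set-PolicyFileEntry @entryParams -Data 0 -Type DWord

# Read it back.
Get-PolicyFileEntry @entryParams
```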

There are also two DSC resources (cAdministrativeTemplateSetting and cAccountAdministrativeTemplateSetting) which are wrappers around the *-PolicyFileEntry commands for the various local GPOs.

We immediately put this to good use in one of our dev/test cloud environments, where we kept accidentally kicking each other out of RDP sessions:

cAdministrativeTemplateSetting AllowMultipleRdpSessions
{
    Ensure       = 'Present'
    PolicyType   = 'Machine'
    KeyValueName = 'SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fSingleSessionPerUser'
    Type         = 'DWord'
    Data         = '0'
}
Posted in PowerShell | Tagged , | 16 Comments

Odd behavior with regex-based operators and case sensitivity

I happened to be browsing the PowerShell reddit page this evening, and came across a very interesting post: RegEx -NotMatch Isn’t Ignoring Case. I spent some time playing around with these operators, and came up with several scenarios in which the results don’t match the case-sensitivity behavior you’d expect from -imatch or -cmatch. (-notmatch, -replace and -split have similar oddities in some circumstances.)

I posted a bug report here, where you can find repro code if you’re interested. The PowerShell team may take action and change this behavior at some point, but for the time being, just avoid doing two things:

  • Don’t pass an actual [regex] object to these operators as your pattern; just pass in a string and let PowerShell construct the Regex for you behind the scenes. If you pass in a [regex], that object’s case-sensitivity behavior may kick in (even if it doesn’t match the expected behavior of the operator you’re using).
  • Don’t pass in a pattern with an embedded regex option to control case sensitivity. This will override the operator’s behavior. For example: ('TEST' -imatch 'test') will be true, and ('TEST' -imatch '(?-i)test') will be false.

If you don’t do either of those things, then the -match, -notmatch, -replace, and -split operators (and their case-sensitive versions) should always behave the way you expect them to.
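The two cases look like this at the console. (The last expression is the version-dependent oddity from the bug report, so I won't pin down its output here.)

```powershell
'TEST' -imatch 'test'        # True: -imatch ignores case
'TEST' -imatch '(?-i)test'   # False: the inline option overrides the operator

# A pre-constructed [regex] carries its own options along with it. This one
# was built without IgnoreCase, so its case-sensitive behavior can leak
# through even under -imatch, depending on your PowerShell version:
$pattern = [regex] 'test'
'TEST' -imatch $pattern
```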

Posted in PowerShell | Tagged , , | Leave a comment

Long PowerShell Commands (Backticks, gasp!)

Hrm… I know that the prevailing opinion in the community is that when you have a long command, avoid using the backtick character for line continuations in PowerShell. I know why authors avoid it, and I know the “gotcha” around having a space after your backtick causing weird stuff to happen. But dang it, this code:

    VisitCacheFiles -Source                     $SourceDirectory `
                    -Destination                $AgentDirectory `
                    -Exclude                    *.properties `
                    -OnMissingFile              ${function:TestAgentDirectory_OnMissingFile} `
                    -OnMissingDestinationFolder ${function:TestAgentDirectory_OnMissingDestinationFolder}

Is so much nicer to read than this:

    $splat = @{
        Source                     = $SourceDirectory
        Destination                = $AgentDirectory
        Exclude                    = '*.properties'
        OnMissingFile              = ${function:TestAgentDirectory_OnMissingFile}
        OnMissingDestinationFolder = ${function:TestAgentDirectory_OnMissingDestinationFolder}
    }

    VisitCacheFiles @splat

Why? The first thing you see is the function name, instead of having some big hash table literal hitting your brain, forcing you to look down then back up again to see what’s going on. In fact, it’s easy to completely overlook the call to the actual function amidst all the other noise, when using splatting. Also, the vertical alignment of the parameters after the function name makes it very easy to see, at a glance, that they’re all associated with that command. (Or rather, that they _should_ be; no syntactic guarantees there.) I’m on the fence about whether to vertically align the arguments to those parameters as well, as I have in this post. It does seem to enhance the readability a bit more than when the parameters and arguments are a big blob of code.

So, I’m debating whether to start learning to love the backtick in my scripts. Before, I avoided it, but now that I’m using IseSteroids v2.0, I’ve noticed that it has two nice features related to line-ending backticks. First, it highlights them for you, making them easier to spot. The highlighting goes away if you have spaces after the backtick. Second, one of IseSteroids’ automated refactoring rules will detect backticks that are the last non-whitespace character on a line, and remove that offending whitespace. However, not everyone uses the same editor / addons, and when other people are reading or modifying my scripts, that could get annoying.

On a side note, Microsoft developers also appear to prefer the first style. You’ll see it all over the place if you look at things like the PSDesiredStateConfiguration and PowerShellGet modules. In fact, you’ll see relatively few uses of splatting at all in Microsoft’s modules, except where they’re either passing on an existing hashtable (e.g., @PSBoundParameters), or using splatting to dynamically create sets of parameters (e.g., if ($Credential) { $splatParams['Credential'] = $Credential }).

What do you think? Is improved code flow / readability worth putting up with the possible gotchas around the tiny backtick? Does your choice of script editor affect that decision?

Posted in PowerShell | Tagged , | 2 Comments

Handy Microsoft Virtual Academy courses for PowerShell and C# development

There are some great free resources available at the Microsoft Virtual Academy. For those who are trying to learn to write code in PowerShell or C#, here are a couple of examples:

Advanced Tools & Scripting with PowerShell 3.0 Jump Start
Programming in C# Jump Start

Posted in Uncategorized | Leave a comment

Career Updates

A few months ago, Don Jones made a post on his blog titled Don’t Get Stuck in Your Job. I didn’t want to say anything publicly at the time, but the story in that post came from me. Back in July, I interviewed to replace Steven Murawski when he moved from Stack Exchange to Chef, and was turned down.

That was a pretty big blow, but there was nothing to do but move forward. I reached out to some of my friends and contacts for advice, and since then, I’ve been very busy: full-time job, consulting in the evenings and weekends for extra money, open source projects, PowerShell.org and MVP-related activities, etc. I’ve also been trying to beef up my skill set, both by learning more about IT technologies that are outside of my usual comfort zone, and by becoming a better developer.

As luck and industry trends would have it, now’s a very good time to be an IT Pro who also has development skills, and all that extra time I’ve spent over the past year or two is paying off in a big way. Next week, I’ll be starting a new job: Application Operations Engineer for DevOpsGuys. My main focus will be on Windows automation and configuration management with PowerShell and Desired State Configuration. It’s a smaller company than I’m used to, though, and I suspect there will be ample opportunity to gain experience with the Linux ecosystem, as well as to get involved with other forms of software development. (And – gasp! – I’ll be able to do these things on the job, instead of having to lose so much sleep to get everything done!)

It’s an exciting change that moves me right to the front of the technology curve, and I’m really looking forward to it!

Posted in Professional | 2 Comments

Automatic variables in Desired State Configuration

When you read through the DSC documentation at http://go.microsoft.com/fwlink/?LinkId=311940, you learn that there are three automatic variables available to your DSC configuration code: $ConfigurationData, $AllNodes, and $Node.

There are actually six automatic DSC variables. The other three, which are undocumented (and arguably less useful), are $MyTypeName, $SelectedNodes, and $NodeName.

$MyTypeName will be the name of the configuration (or composite configuration) that is active at any given time. This can perhaps be useful if you’ve placed some common code into a library or function that is used from multiple DSC configurations, and want to use that configuration name in a log file or error message.

$SelectedNodes and $NodeName, like $Node, are available inside a node{} block. $NodeName is the same as $Node.NodeName, and is a minor convenience. As you probably know, the body of the node{} block is essentially like a body of a ForEach loop, where $Node / $NodeName are associated with the node in the current iteration of the loop. The $SelectedNodes variable, on the other hand, is the entire array that was passed to the Node command. For example, in (node $AllNodes.Where({$_.SomeFilterCondition}).NodeName { }), inside the body, the $SelectedNodes variable will contain the same hash tables as $AllNodes.Where({$_.SomeFilterCondition}) . I can’t come up with a situation where having access to this information would be useful, off the top of my head, but if you have some sort of logic where it’s helpful to know things about the other nodes in the “loop”, you can use $SelectedNodes to get at that information.

Here’s some example code that you can run to display the values in these variables. I also threw in a demonstration of the (NodeName = ‘*’) case, which allows you to define a global property for all nodes (which can optionally be overridden on the individual nodes.) This is all built-in DSC functionality, and doesn’t require any external modules.
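Something along these lines (the node names and the Role property here are invented for illustration, not taken from any real environment):

```powershell
configuration AutomaticVariableDemo {
    node $AllNodes.NodeName {
        # $MyTypeName: name of the active (possibly composite) configuration
        Write-Verbose "MyTypeName:    $MyTypeName" -Verbose

        # $NodeName is shorthand for $Node.NodeName; Role comes from the
        # (NodeName = '*') entry unless a node overrides it
        Write-Verbose "NodeName:      $NodeName" -Verbose
        Write-Verbose "Role:          $($Node.Role)" -Verbose

        # $SelectedNodes: the whole array that was passed to this node{} block
        Write-Verbose "SelectedNodes: $($SelectedNodes.NodeName -join ', ')" -Verbose

        # A do-nothing resource, so the configuration compiles
        Script Placeholder {
            GetScript  = { @{} }
            TestScript = { $true }
            SetScript  = { }
        }
    }
}

$configData = @{
    AllNodes = @(
        @{ NodeName = '*'; Role = 'DefaultRole' }  # shared by every node...
        @{ NodeName = 'server1' }
        @{ NodeName = 'server2'; Role = 'Web' }    # ...unless overridden here
    )
}

AutomaticVariableDemo -ConfigurationData $configData -OutputPath $env:temp\AutoVarDemo
```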

Posted in PowerShell | Tagged | 4 Comments