SharePoint Automation – Gary Lapointe, Founding Partner, Aptillon, Inc.

10 Jul 2014

“_ULS_EXPAND_FORCED_LOGGING_MESSAGES_” Environment Variable Explained

Have you ever been looking through the SharePoint ULS logs trying to troubleshoot one issue or another and come across entries such as this:

[Forced due to logging gap, Original Level: <TraceLevel>] <Some message> {0}

I was recently working with a client who noticed a message like this in the ULS logs and was baffled by the fact that the {0} was not being replaced with the relevant data referred to in the message text. I’d noticed this in the past but never bothered to figure out what was going on with it, so I decided to crack open Reflector and poke around a bit.

What I found was a method, SendTraceData(), on the Microsoft.SharePoint.Diagnostics.ULS class, and it is within this method that the [Forced due to logging gap…] text is appended to the string that is to be written to the ULS logs (this happens when the time between log writes exceeds the default 50ms). Within the routine that adds this text there is a test to see if the ExpandForcedLoggingMessages property is false; if it is, whatever data was provided is set to null, thereby clearing the relevant information that would be used in any subsequent string.Format() call to add the data to the message. Here’s a screenshot showing that code so you can see what I mean:

[Screenshot: private static void SendTraceData() in Microsoft.SharePoint.Diagnostics.ULS]

So the trick to figuring out why the {0} is not being replaced with the appropriate value is to understand how that ExpandForcedLoggingMessages property is set. I did some more digging with the Reflector analyze tool and found that in the CompleteInit() method of the same Microsoft.SharePoint.Diagnostics.ULS class there is code which sets the property based on the value of the _ULS_EXPAND_FORCED_LOGGING_MESSAGES_ environment variable:

[Screenshot: private static unsafe void CompleteInit() in Microsoft.SharePoint.Diagnostics.ULS]

This variable is not set by default (and I can’t find one lick of documentation about it), but if we create the variable, set it to a value of true, and reboot the server, we will find that the values are suddenly being populated (or expanded) on that server. For this to take effect farm-wide you need to do this on each server in the farm.

To set the variable you can either do it manually via the computer properties or you can use this simple PowerShell command:

[System.Environment]::SetEnvironmentVariable("_ULS_EXPAND_FORCED_LOGGING_MESSAGES_", "true", "Machine")
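If you want to confirm the value took, you can read it back with the companion call (a quick sanity check – remember that processes started before the reboot won’t see the new value):

# Read the machine-scoped value back to verify it was set
[System.Environment]::GetEnvironmentVariable("_ULS_EXPAND_FORCED_LOGGING_MESSAGES_", "Machine")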


Don’t forget to reboot the machine after you set the variable.

Because I didn’t want to wait for a particular event to occur to validate that this actually worked, I decided to create a simple test script which calls the SendTraceData() method. Because the containing class is internal I had to use some reflection to make this work; you shouldn’t need to run this code – I include it here only for completeness, to show how I validated that the change worked as expected:

# Binding flags needed to locate and invoke the internal static method
$bindings = @("InvokeMethod", "NonPublic", "Instance", "Static", "Public", "GetField", "GetProperty")
$asm = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
$type = $asm.GetType("Microsoft.SharePoint.Diagnostics.ULS")
# Grab a ULS category to log against
$ulscatType = $asm.GetType("Microsoft.SharePoint.Diagnostics.ULSCat")
$ulscat = $ulscatType.GetProperty("msoulscatUnknown").GetValue($null, $null)
# Invoke SendTraceData() with a format string and the data to expand into it
$method = $type.GetMethod("SendTraceData", $bindings)
$method.Invoke($null, @($true, [System.UInt32]0, $ulscat, 20, "This is test data: '{0}'.", @("hello world")))


If we look at the ULS logs using ULSViewer we can see our test event got created and the data was preserved:

[Screenshot: ULSViewer showing the test event with the data expanded]
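If you’d rather verify from the console, the Get-SPLogEvent cmdlet can find the entry too (a quick sketch – the message filter is just whatever text you passed into the test call):

# Pull the last few minutes of trace events and look for our test message
Get-SPLogEvent -StartTime (Get-Date).AddMinutes(-5) |
    Where-Object { $_.Message -like "*This is test data*" } |
    Select-Object Timestamp, Level, Message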

And now you know how to get that {0} populated with data. It may or may not prove useful, but I personally would rather have more data available when troubleshooting an issue, and I hate the idea of having possibly key information nullified.

Oh, and while you’re looking at the CompleteInit() method you may have noticed that there’s also a _ULS_MAX_LOGGING_GAP_IN_MILLISECONDS_ environment variable and a _ULS_EXPAND_FORCED_LOGGING_CACHED_MESSAGES_ environment variable. The latter is similar to the one I’ve been talking about and relates to messages that look like this:

[Forced due to logging gap, cached @ <Date>, Original Level: <TraceLevel>] <Some message> {0}

So if you’re going to enable the one you should probably enable the other as well.

The _ULS_MAX_LOGGING_GAP_IN_MILLISECONDS_ environment variable controls the maximum logging gap, in milliseconds; by default the value is 50 milliseconds. I would recommend not changing this value, but if you want events written to the ULS logs more frequently you can lower it, or, likewise, if you want events written less frequently you can bump it up (note that you won’t lose events – it just affects how often SharePoint writes to the file).
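For completeness, setting the other two variables follows the exact same pattern (the gap value shown below is just the default of 50 – as I said, I’d leave that one alone unless you have a good reason):

# Expand the cached forced-logging messages as well
[System.Environment]::SetEnvironmentVariable("_ULS_EXPAND_FORCED_LOGGING_CACHED_MESSAGES_", "true", "Machine")

# Only if you really need to change the logging gap (50ms is the default)
[System.Environment]::SetEnvironmentVariable("_ULS_MAX_LOGGING_GAP_IN_MILLISECONDS_", "50", "Machine")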

I hope this information helps you with your troubleshooting adventures!

-Gary

8 Jul 2014

Get-SPControlElement

I haven’t really done much blogging lately, so I figured I’d try to get back into it by sharing some scripts from my collection. Over the years I’ve created tons and tons of little scripts here and there to help me solve one problem or another – whether it be some upgrade-related task, a content or build migration, or just some random snippet to help me figure out what the heck is going on with my code or with SharePoint in general – and I’ve got a ton of them scattered all over the place. So at various points this summer I’m going to *try* to grab an arbitrary one here or there, clean it up some, and post it so others can benefit (and also to force me to clean up some of my crap).

The script I want to share today I created a couple of years ago to help me troubleshoot an issue I was having with a custom delegate control registration. The problem ended up being resolved easily enough, as it was simply an issue of scope – the control was registered at one scope but expected at another. Of course, at the time this didn’t really make sense, because the scope that should have been expected was what I was providing; it turns out there are what I would characterize as some bad design decisions within the SharePoint product, so what I expected wasn’t what SharePoint wanted. Anyways, I digress. To help me isolate the issue I threw together a quick little script to dump out the control registrations so that I could validate that my control was in fact being registered, and being registered at the scope I expected. I could also use the script to see which registration was on “top” – that is, which control would actually win and get its code called and rendered on the page – and, finally, which other controls I was boxing out for the win (or loss). The script below is a much cleaned-up version of my quick and dirty hack, but it does the same thing, just with comments and a little more flexibility thanks to the use of the SPWebPipeBind type. Note that to do the job you have to use an internal method of the Microsoft.SharePoint.SPElementProvider class, but as it’s just a simple query I’m personally not concerned about the use of reflection to make it work.

So with that, here’s my little Get-SPControlElement function:

<#
.SYNOPSIS
   Retrieves the control elements registered for a given delegate control ID.
.DESCRIPTION
   Retrieves the control elements registered for a given delegate control ID.
.EXAMPLE
   Get-SPControlElement -Web "http://demo" -ControlId "QuickLaunchDataSource" -TopOnly
.EXAMPLE
   Get-SPControlElement -Web "http://demo" -ControlId "QuickLaunchDataSource" -Scope "Web"
#>
function Get-SPControlElement {
    [CmdletBinding()]
    Param (
        # Specify the URL of an existing Site or an existing SPWeb object.
        [Parameter(Mandatory = $true,
                   ValueFromPipelineByPropertyName = $true,
                   ValueFromPipeline = $true,
                   Position = 0)]
        [Microsoft.SharePoint.PowerShell.SPWebPipeBind]
        $Web,

        # The ID of the delegate control to return instances of.
        [Parameter(Mandatory = $true,
                   ValueFromPipelineByPropertyName = $true,
                   ValueFromPipeline = $false,
                   Position = 1)]
        [string]
        $ControlId,

        # If specified, query for the top element only. If not specified, show all controls registered for the control ID.
        [Parameter(Mandatory = $false,
                   Position = 2)]
        [Switch]
        $TopOnly,

        # The scope to search for. Valid values are "Farm", "WebApplication", "Site", and "Web". To show controls registered at all scopes omit the parameter or pass in a $null value.
        [Parameter(Mandatory = $false,
                   Position = 3)]
        [string]
        $Scope
    )

    Begin {
        $bindings = @("InvokeMethod", "NonPublic", "Instance", "CreateInstance", "Public")
        $asm = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
        $type = $asm.GetType("Microsoft.SharePoint.SPElementProvider")
        $constructor = $type.GetConstructor($bindings, $null, @(), $null)
        $provider = $constructor.Invoke(@())
    }
    Process {
        $spWeb = $Web.Read()
        if ($TopOnly) {
            $method = $type.GetMethod("QueryForTopControlElement", $bindings)
        } else {
            $method = $type.GetMethod("QueryForControlElements", $bindings)
        }
        $method.Invoke($provider, @([Microsoft.SharePoint.SPWeb]$spWeb, $Scope, $ControlId))
        $spWeb.Dispose()
    }
    End {
    }
}


And here’s a simple example demonstrating how to call the function (note that I put the code in a file named Get-SPControlElement.ps1 and then I dot source the file to load the function into memory – you could scope the function globally to avoid the need to dot source if you want):

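It looks something like this (using the example values from the help shown earlier – substitute your own URL and control ID):

# Dot source the script to load the function, then query for the winning registration
. .\Get-SPControlElement.ps1
Get-SPControlElement -Web "http://demo" -ControlId "QuickLaunchDataSource" -TopOnly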

This should work on both SharePoint 2010 and 2013 and I believe with some minor modifications you could make it work with 2007.

-Gary

9 Jan 2014

Announcing My Custom SharePoint Online Cmdlets

For quite a while now I’ve been pointing out the lack of cmdlets that are available for working with Office 365 and SharePoint 2013 (SharePoint Online) and I’ve mentioned several times that someone should really do something about that and that I’d love to be that person if only I had the time. Well, as it turns out, over Christmas break I managed to find some free time so I went ahead and got started on my own custom cmdlets for SharePoint Online which I’m now officially releasing with this post.

If you refer back to my earlier post where I detailed the SharePoint Online cmdlets provided by Microsoft, you’ll note that there are currently only 30 cmdlets – with my new offering I just about double that number by adding an additional 27 cmdlets. I’m only going to briefly outline my design goals and a couple of simple examples in this article, so I encourage you to visit the downloads page of my site to download the installer and/or source code for the project; for details about each cmdlet you should go to the command index page (click the SharePoint Online tab to see the cmdlets).

In creating the cmdlets I had two core goals that I wanted to achieve: first, I wanted to add some basic retrieval functionality for common objects such as SPWeb and SPList objects so that I could do simple discovery-type operations just as I would with the SharePoint on-premises cmdlets; second, I wanted to make working with the cmdlets, and the objects returned by them, much more like working with the on-premises API so that you don’t have to think about calling the Load and ExecuteQuery methods every time you try to do something. To accomplish these goals I had to not only create the actual cmdlets themselves (which was the easy part) but also create custom wrapper objects for several of the objects in the Microsoft.SharePoint.Client namespace. For example, when you call my Get-SPOWeb cmdlet what you get back is an SPOWeb object, not a Microsoft.SharePoint.Client.Web object. By taking this approach I can avoid the common uninitialized property errors that you’d see when you output the native object to the PowerShell console. These errors make working with the native objects nearly impossible, as you have to know which properties are initialized and which aren’t – by creating a wrapper object I can hide this issue and make working with the objects very simple. I can also do things like create an Update method which handles the calling of the ExecuteQuery method as well as the refresh of the object, again taking a lot of the nuances of working with the .NET CSOM API out of the picture. For those of you not familiar with working with the .NET CSOM API (and for those that are), the point to take away from all my technical babbling is that using my cmdlets will make working with SharePoint Online very similar to working with SharePoint on-premises.

Let’s take a look at how we can use my cmdlets. First off you need to download and install them – I’ve created the cmdlets so that they work just like the Microsoft-provided SharePoint Online cmdlets: a PowerShell V3 module manifest references a binary assembly (dll) that contains all the cmdlets. All the installer does is put the required files (the dependent Microsoft.SharePoint.Client dlls, my assembly containing the cmdlets themselves, and the PowerShell help and format file) into an appropriate directory and then add the path to that directory to the PSModulePath environment variable. This makes the cmdlets available without having to manually load the module, just like the Microsoft SharePoint Online cmdlets. Note that Microsoft provides a shortcut for the SharePoint Online Management Shell, which is simply a PowerShell console with the module explicitly loaded, but use of the management shell is completely unnecessary so I don’t bother creating an equivalent shortcut. To see that the module is installed correctly simply open any PowerShell console and run the Get-Module cmdlet passing in the -ListAvailable parameter. You should see the module listed as shown below:

Get-Module -ListAvailable

With the cmdlets installed you can now connect to a Site Collection using the Connect-SPOSite cmdlet. Note that the majority of my cmdlets are scoped to a single Site Collection and you must establish a connection to that Site Collection before you can do anything else; if you need to work on a different Site Collection then you’ll need to call Connect-SPOSite again. In this example I’m connecting to a site by providing my username and the Site Collection to connect to:

Connect-SPOSite -Credential "gary@contoso.onmicrosoft.com" -Url "https://contoso.sharepoint.com"


I recommend you actually create a variable to store your credential information ($cred = Get-Credential) so that you can reuse it each time you connect to a different Site Collection. Oh, and as a bonus: though the default behavior is to connect to SharePoint Online, if you pass in the –NetworkCredentials switch parameter then you can use these cmdlets against SharePoint on-premises – you’re welcome. To kill a connection use the Disconnect-SPOSite cmdlet. I particularly recommend you do this when creating scripts – there’s not much point if you’re just doing something one-off in the console, but if you’re writing a script you want to make sure you kill your context so someone can’t come behind you and hijack your credentials (this is absolutely the same pattern that the Microsoft SharePoint Online cmdlets follow, so if you’re familiar with them this should not be new to you).
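In other words, something along these lines (a minimal sketch):

# Capture the credential once and reuse it across connections
$cred = Get-Credential "gary@contoso.onmicrosoft.com"
Connect-SPOSite -Credential $cred -Url "https://contoso.sharepoint.com"
# ... do some work ...
Disconnect-SPOSite

# Reuse the same credential against another Site Collection
Connect-SPOSite -Credential $cred -Url "https://contoso.sharepoint.com/sites/team"
Disconnect-SPOSite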

Once you’ve established a connection you’re ready to start using the other cmdlets. If you’d like to retrieve the root Site (SPWeb) or a child Site you can use the Get-SPOWeb cmdlet as shown:

$rootWeb = Get-SPOWeb -Identity "/" -Detail

$childWeb = Get-SPOWeb -Identity "/test" -Detail


Note the –Detail parameter, which is useful when you want to see as much information as possible. If you’re just looking for a list of Sites then I would omit the parameter so that you’re bringing back less information, which should help it run a little faster (to see all the Sites omit the –Identity parameter).

When you get back an object I encourage you to use the Get-Member cmdlet (aliased as gm) to see the list of properties and methods available. All of my cmdlets return my own SPO* wrapper objects, but you can always get back to the original Microsoft.SharePoint.Client object via a property on my object. The fact is I couldn’t wrap every single sub-object or every single property and method, so there will be times when you’ll have to use the native object, but hopefully what I did provide will cover 80% of the common use cases (I hope). The following screenshot shows the members for the SPOWeb object:
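For example, once connected you can pipe any of the returned objects straight to Get-Member (shown here with the root Site):

# List the properties and methods my SPOWeb wrapper exposes
Get-SPOWeb -Identity "/" -Detail | Get-Member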

[Screenshot: Get-Member output showing the members of the SPOWeb object]

Notice the highlighted Web property which you can use to get to the original Microsoft.SharePoint.Client.Web object. If you need to update any properties on the object you can either use the Set-SPOWeb cmdlet or update the property directly on the object and then call the Update() method – no need to call ExecuteQuery and then refresh the object, it’s all handled for you.
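As a quick illustrative sketch of that second approach (the Title property here is just an example of a settable property on the wrapper):

# Update a property directly and push the change back to SharePoint Online
$web = Get-SPOWeb -Identity "/test" -Detail
$web.Title = "My Renamed Site"
$web.Update()  # ExecuteQuery and the object refresh are handled for you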

I don’t want to spend time going through every cmdlet and object that I’ve created, as I detail each of the cmdlets on my command index page and the object members are easily discovered using the Get-Member cmdlet. So, to wrap up, I want to simply point out that this is definitely a work in progress and will hopefully grow over time. Also, I don’t claim to be a .NET CSOM expert, so there may be some areas that can be improved upon from a performance point of view – if you download the code and see something that can be improved please share your feedback, and if you see anything else that can be improved, again, please share (note that I’m absolutely horrible when it comes to responding to comments on my blog but I do eventually get to them all, just not necessarily very quickly).

Have fun and happy PowerShelling!

-Gary

7 Aug 2013

SharePoint 2013 Version 16.0.0.1810???

So today I was doing some SharePoint 2013 app development against my Office 365 SharePoint 2013 tenant, and I needed to view the HTTP traffic from the site in order to troubleshoot some issues I was having. I stumbled across something I found very interesting when I looked at the header details in Fiddler:

[Screenshot: Fiddler header details showing a 16.0.0.1810 version number]

Yup, that’s right – my tenant is on SharePoint 16 – not 15 which is the current public release of SharePoint. To confirm that this wasn’t just something with the headers I navigated to http://mytenantsite.sharepoint.com/_vti_pvt/service.cnf and there it is again:

[Screenshot: service.cnf showing the same 16.x version]

(In case you’re wondering, the service.cnf file hard-codes the version information for the current web application; if you have an on-prem version you can see it in the _vti_pvt folder of your local web application’s site – you can also look at _vti_pvt/buildversion.cnf to get the actual build number.)

Another confirmation test, one I hadn’t thought of but which James Love (@jimmywim) pointed out to me, is that the page layouts within the site are also referencing 16.0.0.0 assemblies. I downloaded the DefaultLayout.aspx file to confirm and this is what I saw:

<%@ Page language="C#"   Inherits="Microsoft.SharePoint.Publishing.PublishingLayoutPage, Microsoft.SharePoint.Publishing,Version=16.0.0.0,Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

So you may be asking yourself – what does all this mean? Has Microsoft upgraded my SharePoint Online Farm to the next version of SharePoint? Well, I honestly don’t know but I think all the evidence is pretty clear that they must have. In terms of what is different, I simply couldn’t tell you as I’ve not noticed anything different – perhaps you have? If so post a comment as I’d love to know!

I personally don’t like speculating about what this means and what Microsoft is doing, so for me what is important about this information is the fact that you might have clients or customers running on bits that are not just different due to patch levels but are in fact an entirely different version. As I’m working on SharePoint Apps that will be sold in the App Store, this type of information is critical when it comes to troubleshooting problems. So until Microsoft releases some public information about what’s up with all of this, the best we can do is keep the information handy as we work on various projects and, hopefully, keep the discussions going so that as folks discover version-related differences we communicate that information to each other for our global benefit.

-Gary

18 Jul 2013

My First Pluralsight Course

Throughout the years I've done numerous presentations related to using PowerShell with SharePoint and through them all I've often found myself wanting more time so that I could share more details. Recently some friends of mine pointed me in the direction of Pluralsight and the opportunities they had for new authors – I saw this as a fantastic way to take many of the presentations I've done in the past and polish them up, add some details, and create a real course out of them. So with that I'm pleased to announce that my first Pluralsight course is now available: Using Windows PowerShell with SharePoint 2010 and SharePoint 2013. The folks at Pluralsight were amazing to work with and going through the authoring process has given me a whole new level of respect for the existing authors and for Pluralsight as a company; they seem to be doing everything right and every Pluralsight employee I've interacted with exemplifies professionalism and dedication to the goal of producing a great product, and I am humbled by the acceptance of my course into their library.

So if you're a developer or an administrator and you're working with SharePoint then I strongly recommend you go through some or all of my course to help you better understand how to use PowerShell, and more specifically, how to use it with SharePoint. You can see the full course outline by following the previous link but I've gone ahead and included the course and module descriptions here for your direct reference:

Using Windows PowerShell with SharePoint 2010 and SharePoint 2013

When it comes to administering and automating SharePoint 2010, 2013, and Office 365, there is no better tool than Windows PowerShell. After going through this course you'll have the skills and knowledge necessary to be productive with PowerShell. In the first two modules you'll get a jump start into PowerShell where you'll learn everything from basic syntax to creating functions and scripts, all within the context of SharePoint. Next you'll discover what's new when it comes to using PowerShell V3 with SharePoint 2013. Administering SharePoint with PowerShell does not mean that you're limited to what you can do directly on the server and in this course you'll learn everything you need to know to manage your Farm remotely, whether you are using Office 365 or an on-premises installation. And finally, sometimes the out of the box cmdlets just aren't enough so we'll teach you how to create your own custom cmdlets that you can deploy to your SharePoint Farm. After completing this course you'll be on your way to becoming a SharePoint superstar as you'll have all the core knowledge you need to start administering and automating SharePoint using Windows PowerShell.

  1. Introduction to PowerShell for SharePoint

    This module focuses on the basics of Windows PowerShell, all with an emphasis on SharePoint. At the conclusion of this module, you should have enough basic knowledge to start working with SharePoint via the SharePoint Management Shell.

  2. Scripting with PowerShell & SharePoint

    This module builds on the foundations presented in the first module and gets beyond what is typically done in the console. During this module you'll start by learning about conditional logic and looping and then move onto creating functions and scripts.

  3. PowerShell V3 + SharePoint 2013

    In this module you'll learn about many of the new features offered by Windows PowerShell V3 with SharePoint 2013.

  4. PowerShell & Office 365

    In this module we'll shift from SharePoint on-premises to SharePoint Online. You'll learn how to connect to your SharePoint Online tenant and what you can and can't do with the available SharePoint Online cmdlets.

  5. PowerShell Remoting

    In this module, we'll switch gears back to on-premises SharePoint installations as we take a look at how you can use PowerShell from your client machine to remotely connect to and manage your on-premises SharePoint Farm.

  6. Creating Custom Cmdlets for SharePoint

    In this module you'll learn how to extend the out of the box SharePoint cmdlets by creating your own custom cmdlets and PipeBind objects.

I already have some ideas for another course to essentially round out the PowerShell + SharePoint side of things – specifically I'm planning on creating a course that assumes you know PowerShell and now you need to learn how to better apply that knowledge to solve specific problems – so my plan for the next course will be to provide more solution focused education (at least, that's the plan, I'll have to see how this first course does before I commit to anything).

I hope that you find my course useful and please provide feedback (positive or negative) as I'm anxious to know what works and what doesn't so that I can continue to improve and bring better and better stuff to the community.

4 Jun 2013

Parallel SharePoint Tasks with PowerShell

Today I was working on a deployment for a client which entailed activating a custom SharePoint Feature on about 1000 Site Collections. This Feature does a fair number of things and on average takes about 10-15 minutes to complete in their test environment (which is pretty slow compared to their production environment, which I’ve not yet deployed to, but I expect close to a 5 minute run time per Site Collection once I go to production with it). You can do the math and quickly see that it would take somewhere around 10 days to complete if I did one Site Collection at a time. This is just unacceptable, as I personally don’t want to be monitoring a Feature activation script for that long. What’s worse is that when I look at CPU and memory utilization on the servers I can see that they have plenty of resources, so it’s not like the operation is actually taxing the system – they’re just slow operations. So the solution, for me, is pretty obvious: I need to activate these Features in parallel.

There are two ways I can achieve this using PowerShell, and they depend on which version of PowerShell you’re using. In my case I’m running SharePoint 2010, which means I’m using PowerShell V2; because of this my only option is to use the Start-Job cmdlet with some control logic to dictate how many jobs I’m willing to run at once. If I were using SharePoint 2013 I could use the new workflow capabilities of PowerShell V3, making the whole process a lot easier to understand. I’ll show both approaches, but I want to start with what you would do for SharePoint 2010 with PowerShell V2.

Using Start-Job for Parallel Operations

The trick with using the Start-Job cmdlet is knowing when to stop creating new jobs and wait for existing jobs to complete. The key is to use the Get-Job cmdlet and filter on the JobStateInfo property’s State property, and then, if you have reached your job-count threshold, call the Wait-Job cmdlet to block the script until a job completes. The following script is a simple example of what I created for my client and can be used as a template for your own scripts:

$jobThreshold = 10

foreach ($site in (Get-SPSite -Limit All)) {
    # Get all running jobs
    $running = @(Get-Job | where { $_.JobStateInfo.State -eq "Running" })

    # Loop as long as our running job count is >= threshold
    while ($running.Count -ge $jobThreshold) {
        # Block until we get at least one job complete
        $running | Wait-Job -Any | Out-Null
        # Refresh the running job list
        $running = @(Get-Job | where { $_.JobStateInfo.State -eq "Running" })
    }

    Start-Job -InputObject $site.Url {
        $url = $input | %{$_}
        Write-Host "BEGIN: $(Get-Date) Processing $url..."

        # We're in a new process so load the snap-in
        Add-PSSnapin Microsoft.SharePoint.PowerShell

        # Enable the custom feature
        Enable-SPFeature -Url $url -Identity MyCustomFeature

        Write-Host "END: $(Get-Date) Processing $url."
    }
    # Dump the results of any completed jobs
    Get-Job | where { $_.JobStateInfo.State -eq "Completed" } | Receive-Job

    # Remove completed jobs so we don't see their results again
    Get-Job | where { $_.JobStateInfo.State -eq "Completed" } | Remove-Job
}

If you run this script and open up task manager you'll see that it's created a powershell.exe process for each job. You might be able to get away with more processes running at once but I'd recommend starting smaller before you bump it up too high and risk crippling your system.

Using PowerShell V3 Workflow

With PowerShell V3 we now have the ability to create a workflow within which I can specify tasks that should be run in parallel. I actually detailed how to do this in an earlier post so I won't spend much time on it here. I do want to show the code again for the sake of comparison as well as to point out one core difference (I recommend you read the Workflow section of my aforementioned post for more details). First though, here's a slightly modified version of the code so you can compare it to the V2 equivalent:

workflow Enable-SPFeatureInParallel {
    param(
        [string[]]$urls,
        [string]$feature
    )
 
    foreach -parallel($url in $urls) {
        InlineScript {
            # Write-Host doesn't work within a workflow
            Write-Output "BEGIN: $(Get-Date) Processing $($using:url)..."
 
            # We're in a new process so load the snap-in
            Add-PSSnapin Microsoft.SharePoint.PowerShell
 
            # Enable the custom feature
            Enable-SPFeature -Identity $using:feature -Url $using:url
            
            Write-Output "END: $(Get-Date) Processing $($using:url)."
        }
    }
}
Enable-SPFeatureInParallel (Get-SPSite -Limit All).Url "MyCustomFeature"

The first thing you should ask yourself when you look at this is: how many items will be processed simultaneously? With the V2 version we could set the limit to whatever arbitrary value made sense for our situation. With this approach, however, we’re limited to only 5 processes. You can see this if you run the code and open up task manager where, like the Start-Job approach, you’ll see a powershell.exe for each process (note that it’s not the workflow itself that creates the powershell.exe processes – it’s the call to the InlineScript activity – but this helps to point out that you’ll never see more than five created).

Summary

So, though we're limited by the number of processes and there are some downsides in terms of how we output information (like the fact that we can't use Write-Host and any output generated by one run could be intermixed with output from another run) I think the V3 approach is much cleaner and easier to use. That said, you could make the Start-Job approach generic so that you pass in a script to run along with an array of input values so that this could be easily used without having to look at the details of what's happening.

27 Feb 2013

Provisioning Search on SharePoint 2013 Foundation Using PowerShell

There was recently a Twitter conversation between @cacallahan, @toddklindt, and @brianlala discussing provisioning Search on SharePoint Foundation and whether it was possible or not, and somewhere during the conversation it was suggested that I might know how to do this (sorry guys for not responding immediately). Unfortunately I hadn’t actually done any work with SharePoint 2013 Foundation yet, so I hadn’t tried it and didn’t know the answer (I knew there were issues and suspected a workaround was possible, but I didn’t have a server built to test anything). Well, last night and today I managed to find some free time, so I figured I’d take a look at the problem to see if my guess about a workaround was correct.

Before I get to the results of my discovery let’s first look at what the blocking issue is. It’s actually quite simple – the product team, for various reasons, decided that on SharePoint 2013 Foundation you can have only one Search Service Application and that you shouldn’t be able to modify the topology of that Service Application; this means that when you provision Search using the Farm Configuration Wizard it will create a default topology for you in which all roles are on a single server. To enforce these rules they chose to make the PowerShell cmdlets refuse to provision the service application or run any method or cmdlet that would otherwise allow you to modify an existing topology (so you can’t change the topology created by the wizard). I totally get the reasoning for the restriction – if you need enterprise-topology-type structures then pony up the money and get off the free stuff. That said, I think they took the lazy way out by simply blocking the cmdlets when they could have easily put in other restrictions that would have achieved their goals while still allowing users to use PowerShell to provision the environment.

If you’re curious as to what happens when you try to provision the service using PowerShell on SharePoint Foundation 2013 here’s a screenshot which shows the error that is thrown:

[Screenshot: the error thrown when provisioning Search via PowerShell on SharePoint Foundation 2013]

This error is thrown by a simple piece of code in the InternalValidate() method of the cmdlet which checks to make sure you are on Standard or Enterprise before allowing the cmdlet to execute (and any other cmdlets or methods that would otherwise affect the topology likewise perform this check).

To solve the problem I decided to start from the perspective of code run via the browser and drill down to see what I could find. Using Reflector I located the class and associated methods that are called by the Farm Configuration Wizard; this quickly led me to the public Microsoft.Office.Server.Search.Administration.SearchService.CreateApplication() static methods. I did a quick test calling one of these methods and was happy to find that the Search Service Application created perfectly – though with one minor problem: the topology was empty. At first glance I figured this wouldn’t be an issue – I could simply clone the topology and add my components – but this is where I learned that they applied the SKU check to the methods and cmdlets that allow you to manipulate the topology. (On a side note, using these methods on Standard or Enterprise is potentially a great alternative to the New-SPEnterpriseSearchServiceApplication cmdlet: it lets you specify the names of databases that you can’t specify when using the cmdlet; because it creates an initially empty topology there’s less cleanup and manipulation of the cloned topology, assuming you don’t want to use what’s created; and it provisions slightly faster because it does less.) So at this point I figured I’d hit the real road block – I could create the service application but it was useless because I couldn’t manipulate it.

This left me with only one option – use reflection to call the internal method that the Farm Configuration Wizard calls to provision the service application. Now, before I get to the code that demonstrates how to do this I need to share a word of caution: using reflection to call internal methods is totally not supported. So what does this mean? Will Microsoft no longer support your Farm? My understanding (and folks in the know, please correct me if I’m wrong) is that Microsoft will continue to support you and that you will simply have to remove unsupported code before they will help you troubleshoot issues. In this case it’s a one-time operation, so there’s nothing really to remove; I figure the worst-case scenario is that they’ll tell you to recreate the service application using the Farm Configuration Wizard and then they’ll help you with your issue. But let’s take the question of supportability out of the equation for a second and look at it from a completely practical standpoint: if you were to look at the code that the Farm Configuration Wizard calls you’d see that, outside of some error checking, data validation, and variable initialization, there are effectively just two lines of code that do the provisioning of the service. I believe the probability of getting it wrong is pretty low, and the fact is search will either work or it won’t – if it doesn’t work then try again or just use the dang wizard. So, with all that said, if you decide to use any of this code you need to weigh the risks yourself and make an informed decision with those risks in mind. Alright, enough of that crap – you want to see the code so let’s get to the code.

To keep the PowerShell itself nice and simple I decided to derive this example from a script that Todd Klindt provides on his blog (the script I use is considerably more complex, as it handles the changing of service options like the index folder and the service and crawl accounts, to name a few, and I don’t want the point of this post to be lost in all those details). Just to make sure the full chain of credit is provided I should note that Todd’s script is actually a derivative of what Spence Harbar provides on his blog, but I wanted to reference Todd’s post specifically as it’s a bit shorter and more focused on the topic. Okay – background info, check; disclaimer, check; attribution, check – looks like it’s time for some code, so here you go:

#Start the service instances
Start-SPEnterpriseSearchServiceInstance $env:computername
Start-SPEnterpriseSearchQueryAndSiteSettingsServiceInstance $env:computername

#Provide a unique name for the service application
$serviceAppName = "Search Service Application"

#Get the application pools to use (make sure you change the value for your environment)
$svcPool = Get-SPServiceApplicationPool "SharePoint Services App Pool"
$adminPool = Get-SPServiceApplicationPool "SharePoint Services App Pool"

#Get the service from the service instance so we can call a method on it
$searchServiceInstance = Get-SPEnterpriseSearchServiceInstanceLocal
$searchService = $searchServiceInstance.Service

#Use reflection to provision the default topology just as the wizard would
$bindings = @("InvokeMethod", "NonPublic", "Instance")
$types = @([string], [Type], [Microsoft.SharePoint.Administration.SPIisWebServiceApplicationPool], [Microsoft.SharePoint.Administration.SPIisWebServiceApplicationPool])
$values = @($serviceAppName, [Microsoft.Office.Server.Search.Administration.SearchServiceApplication], [Microsoft.SharePoint.Administration.SPIisWebServiceApplicationPool]$svcPool, [Microsoft.SharePoint.Administration.SPIisWebServiceApplicationPool]$adminPool)
$methodInfo = $searchService.GetType().GetMethod("CreateApplicationWithDefaultTopology", $bindings, $null, $types, $null)
$searchServiceApp = $methodInfo.Invoke($searchService, $values)

#Create the search service application proxy (we get to use the cmdlet for this!)
$searchProxy = New-SPEnterpriseSearchServiceApplicationProxy -Name "$serviceAppName Proxy" -SearchApplication $searchServiceApp

#Provision the search service application
$searchServiceApp.Provision()


Basically there are two things that need to be done. First, we use reflection to get the MethodInfo object for the CreateApplicationWithDefaultTopology() method of the Microsoft.Office.Server.Search.Administration.SearchService class, and we use this object to invoke the actual method, passing in the parameter types and values (and yes, the cast of the SPIisWebServiceApplicationPool objects is necessary, otherwise you’ll get an error about trying to convert PSObjects to SPIisWebServiceApplicationPool types). Second, after the service application is created, we create the service application proxy and then call the Provision() method on the search service application that we previously created (if you miss this step you’ll get errors about things like the admin component not being started and whatnot).

Once completed you’ll get a fully functional, PowerShell provisioned search service application. If you navigate to the search administration page you should see something that looks just like this (just like if you used the wizard):

[Screenshot: the Search Administration page for the newly provisioned service application]

So there you have it – it is indeed possible to provision the service using PowerShell – I’ll let you determine whether you should or not :)

Happy PowerShelling!

-Gary

24 Feb 2013

SharePoint Evolution Conference 2013

Today I finally got around to booking my travel for my favorite SharePoint conference – the SharePoint Evolution Conference in London, England. For the third year in a row I have the honor of being able to present, and once again I’ll be covering the PowerShell side of things at the conference. Normally I tend to do more of a deep dive when I present, but this time I’ll be stepping back a bit and providing more of an introductory session, to perhaps help those who have yet to realize the power of PowerShell get up to speed. This time I’ll only be doing the one session, so I’ll get plenty of time to sit in on what I expect to be some great sessions (I originally was going to present on the SharePoint Education features, but as those features are going away it was decided to scrub the session rather than present on something people shouldn’t be using).

Here’s the abstract for my session if you’re planning on attending:

The Power that is PowerShell – learn all about using PowerShell in SharePoint.
Audience: Developer / IT Pro

With the release of SharePoint 2013 Microsoft has nearly doubled the number of PowerShell cmdlets that are available for managing the product. This increase coupled with the fact that many things can now only be managed using PowerShell (such as search topology) means that administrators and developers alike must embrace PowerShell in order to be successful with SharePoint. In this session you will learn the basics of how to use PowerShell, from simple syntax to creating scripts for common or complex tasks; additionally, we’ll look at some of the new features available with PowerShell V3, the required version for SharePoint 2013.

Steve Smith and his crew at Combined Knowledge know how to put on an incredible show with fantastic speakers, sessions, and after-hours events. If you’re only given a budget for one conference each year you may not initially think about flying to London (assuming you’re not local) for that conference but I highly suggest you consider it. I’ve only been out of the States twice before in my life and both times were for this conference and it was totally worth it. From a purely educational standpoint the sessions are top notch with speakers who truly know their stuff so you can be assured that you’re getting quality information that stems from real world experience and not just playing around in a lab environment.

If you do make it to the conference please stop by my session and say hi!

[Image: SharePoint Evolution Conference 2013 speaker banner]

18 Feb 2013

Fix: The trust relationship between this workstation and the primary domain failed

This short post is really just for my own memory, as I keep bumping into this with my virtual machines, but I figured others might also find it useful. Typically when I do SharePoint development I do everything on an all-up server, but with SharePoint 2013 I’ve moved my Domain Controller to a separate server (where I will also install the Office Web Apps). However, if I leave any of my machines off for a while the computer password will expire, which means that things start to break and you’ll see errors like "The trust relationship between this workstation and the primary domain failed." The common fix is to remove the server from the domain and then join it back in, but that takes some time, so what I prefer to do is simply run the following command, which will reset the password:

netdom.exe resetpwd /s:domain_controller_name /ud:domain\administrator /pd:*

Be sure to replace the domain_controller_name placeholder with the name of your domain controller server. If you’re not already logged in to the server then you’ll have to log in using a local administrator account. After running the command it will prompt for a password for the specified account (it’s an odd prompt as it doesn’t show any characters being typed – not even masked). After providing the password reboot the server and you should be good to go.

If you want to prevent the server from changing its password in the first place (or prevent the DC from accepting the password change) you can follow the steps in this support article to disable the setting and avoid the issue altogether: http://support.microsoft.com/kb/154501 (more often than not I forget to do this in new development environments, which is what prompted me to post the reset password fix).

UPDATE 4/12/2013: Seems you can do all this using PowerShell: "Test-ComputerSecureChannel -Repair". Thanks Alexey for pointing this out - learn something new every day!

-Gary

10 Feb 2013

Announcing My SharePoint 2013 Custom Cmdlets

I’ve been putting this off for quite a while but I’ve finally pushed out a SharePoint 2013 build of my custom cmdlets. The reason it took so long is that I had to make a fair number of changes to my existing stuff so that it would be easier to maintain both builds going forward. Specifically, I needed to change the namespace of all the classes (which had 2010 in them) and I wanted to use a different name for the WSPs so the version wasn’t included there either. So now I have just one WSP name for the SharePoint 2010 and SharePoint 2013 cmdlets, for both Server and Foundation: Lapointe.SharePoint.PowerShell.wsp. If you previously had my old 2010 cmdlets deployed you’ll need to fully retract them before installing this new build (technically you could rename the new WSP file to the old name, but I’d rather you embrace the change and just suck it up and do the dang retraction – it only takes a minute, so don’t be lazy!).
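If you need a reminder, the retract/redeploy dance looks something like this (a sketch only – substitute the name of whatever old WSP you actually deployed, and note that Uninstall-SPSolution may need web application parameters depending on how the solution was deployed):

# Retract and delete the old solution (use your old WSP's actual name)
Uninstall-SPSolution -Identity "<OldWspName>.wsp" -Confirm:$false
Remove-SPSolution -Identity "<OldWspName>.wsp" -Confirm:$false

# Add and deploy the new build
Add-SPSolution -LiteralPath "C:\Path\To\Lapointe.SharePoint.PowerShell.wsp"
Install-SPSolution -Identity "Lapointe.SharePoint.PowerShell.wsp" -GACDeployment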

I’ve updated my downloads page to point to the correct WSP for each environment and I’ve deleted the old WSPs so if you were foolishly linking directly to my WSPs (please don’t do that) then your links are now broken. I’ve also posted the source code which has been upgraded for Visual Studio 2012 and contains a separate project for SharePoint 2010 and SharePoint 2013 (in addition to my custom MAML generator).

Another change I’ve made to help me manage these custom cmdlets better is that I got rid of my old command index page and created a new app for displaying the cmdlet details (the old page is still there; it just redirects to the new page). This new page is built dynamically from the PowerShell help file that I generate from the actual cmdlet classes. For me this is pretty cool because all the PowerShell help documentation and online documentation for each cmdlet is now generated automatically: I don’t have to do anything other than provide the actual help details in the cmdlet classes themselves, and I don’t do anything special to keep them in sync (I just copy the help files up to my site).

At present both the SharePoint 2010 and SharePoint 2013 cmdlets are exactly the same (except for a few in-code changes to make things work with 2013). I have, however, added a few new cmdlets beyond what was previously available and I’ll be adding some more in the coming weeks (I’m hoping to start converting some of my more frequently used utility scripts and functions to cmdlets so I don’t have to keep hunting around for them). There is, however, one breaking change (well, two to be exact): the first is that I had to rename my Repair-SPSite cmdlet to Repair-SPMigratedSite because SharePoint 2013 introduces a cmdlet of the same name; the second is that I removed the gl-applytheme STSADM command, as the functionality it provided was specific to SharePoint 2007 and is no longer available (but I’m not really supporting the STSADM stuff anyway – I contemplated removing those commands entirely but decided to leave them in, for now).

I haven’t had time to do a ton of testing of all the cmdlets on SharePoint 2013 – there are just too many of them and I don’t make any money on these things, so it’s not a high priority. So, as always, your feedback is appreciated and I’ll do my best to fix any bugs that are discovered, but I can’t promise when I’ll get to them.

Happy PowerShelling!

-Gary