The information in this post is specific to SharePoint 2010 Beta 2 and may need adjusting for the RTM version.
In an effort to continue with my previous posts, where I demonstrated how to build a basic farm and its site structure for SharePoint 2010 using XML configuration files and PowerShell, I'd now like to share how to create a search service application. An automated install of the service applications is, without a doubt, the most difficult PowerShell task you'll undertake when scripting your SharePoint 2010 install, and the search application is the hardest of them all, which is why I've chosen to explain it first: I expect it to be one of the most needed and one of the least understood. Note that I'm not planning on explaining what the various components are in any depth; there are plenty of other resources that will explain, for example, what the admin component is.
To start off let’s look at the XML file that will drive our setup. Like my previous examples I have a fairly simplistic XML structure that drives all my configurations. This structure allows me to create as many service application instances as needed, each with their own configurations:
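A sketch of what such a file might look like follows; the element names match the structure described below, but the specific attribute names (Name, Account, DatabaseName, and so on) are illustrative assumptions you should adapt to your environment:

```xml
<Services>
  <EnterpriseSearchService ContactEmail="admin@example.com" Account="DOMAIN\sp_search">
    <EnterpriseSearchServiceApplications>
      <EnterpriseSearchServiceApplication Name="Search Service Application" DatabaseName="SearchApp">
        <ApplicationPool Name="SharePoint Search App Pool" Account="DOMAIN\sp_search" />
        <CrawlServers>
          <Server Name="SPSERVER01" />
        </CrawlServers>
        <QueryServers>
          <Server Name="SPSERVER01" />
          <Server Name="SPSERVER02" />
        </QueryServers>
        <AdminComponent>
          <Server Name="SPSERVER01" />
          <ApplicationPool Name="SharePoint Search Admin App Pool" Account="DOMAIN\sp_search" />
        </AdminComponent>
        <Proxy Name="Search Service Application Proxy">
          <ProxyGroup Name="Default" />
        </Proxy>
      </EnterpriseSearchServiceApplication>
    </EnterpriseSearchServiceApplications>
  </EnterpriseSearchService>
</Services>
```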
Examining the structure above you can see that I chose to put the <EnterpriseSearchService /> element under a <Services /> element – this allows me to keep all my service configurations in one file rather than a separate file for each service (note that there can be only one <EnterpriseSearchService /> element). Under the <EnterpriseSearchService /> element I have a container element for the applications – there should be only one <EnterpriseSearchServiceApplications /> element, but you can have as many <EnterpriseSearchServiceApplication /> elements under it as you like. The application element is where the meat of the configuration lives. Within this element you define the application pool to use, the crawl and query servers, the server for the administrative component, and finally the proxy definition and its proxy group memberships. The <CrawlServers /> and <QueryServers /> elements can have as many <Server /> child elements as needed, but the <AdminComponent /> element can have only one <Server /> child element. And finally, the <Proxy /> element can have as many <ProxyGroup /> child elements as desired.
Okay, so that's the easy part – hopefully you can begin to see the power and flexibility of this simple XML file. Now for the scripts – first we need to look at a couple of helper functions: one to get or create our application pools and another for the proxy group memberships. Let's take a look at the application pool function, which I called Get-ApplicationPool:
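A sketch of the function might look like this (the Name and Account attribute names on the XML element are assumptions carried over from the sample configuration):

```powershell
function Get-ApplicationPool([System.Xml.XmlElement]$appPoolConfig) {
    # Try to find an existing application pool first.
    $pool = Get-SPServiceApplicationPool -Identity $appPoolConfig.Name -ErrorAction SilentlyContinue
    if ($pool -eq $null) {
        # Get the managed account, creating it (prompting for credentials) if necessary.
        $account = Get-SPManagedAccount -Identity $appPoolConfig.Account -ErrorAction SilentlyContinue
        if ($account -eq $null) {
            $cred = Get-Credential $appPoolConfig.Account
            $account = New-SPManagedAccount -Credential $cred
        }
        $pool = New-SPServiceApplicationPool -Name $appPoolConfig.Name -Account $account
    }
    return $pool
}
```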
In this function I attempt to get the application pool if it already exists; if it doesn't, I attempt to get the managed account that will be associated with the application pool. If the managed account doesn't exist either, I prompt for credentials and create it. I then use the managed account to create the application pool, which is returned to the caller.
The next function, which I’ve named Set-ProxyGroupMembership associates my service application proxy with one or more proxy groups:
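Here's a sketch of how that function could be written (matching group names against the proxy group's FriendlyName is my assumption; note the farm's default group reports its name differently than custom groups):

```powershell
function Set-ProxyGroupMembership {
    param([System.Xml.XmlElement]$proxyConfig)
    begin {
        # The proxy group names we want, per the <ProxyGroup /> elements.
        $desired = $proxyConfig.ProxyGroup | ForEach-Object { $_.Name }
    }
    process {
        $proxy = $_   # the proxy object arrives via the pipeline
        foreach ($group in (Get-SPServiceApplicationProxyGroup)) {
            $isMember = $group.Proxies -contains $proxy
            $shouldBeMember = $desired -contains $group.FriendlyName
            if ($isMember -and -not $shouldBeMember) {
                # Clear assignments (such as the default group) we don't want.
                $group | Remove-SPServiceApplicationProxyGroupMember -Member $proxy
            }
            elseif ($shouldBeMember -and -not $isMember) {
                $group | Add-SPServiceApplicationProxyGroupMember -Member $proxy
            }
        }
    }
}
```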
This function is probably a bit more complicated than it needs to be, but I'm going to use it with every service application script, so I'll explain it briefly here and just reference this post in my future posts. I wanted to be able to pass the proxy object into the function via the pipeline rather than as a parameter (it just flowed better that way and allowed me to pass more than one proxy without having to write a loop within the function). The first thing the function does is clear out any existing proxy group assignments that may have been set automatically but are not what I want per the XML file – some service applications will automatically add the proxy to the default proxy group, which may not be what you want. Once I've cleared the undesired assignments, I add any missing ones.
Now that we have our two helper functions out of the way we can start looking at the core function. I’ll talk about it in chunks and then at the end of this post provide the complete function.
The first thing I do is load the XML file to a variable, $svcConfig, which I use throughout the function:
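Assuming the path to the XML file is passed into the function as $settingsFile, that looks something like:

```powershell
# Load the configuration file (the [xml] cast yields a System.Xml.XmlDocument)
# and grab the <EnterpriseSearchService /> element for use throughout.
[xml]$config = Get-Content $settingsFile
$svcConfig = $config.Services.EnterpriseSearchService
```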
The first line loads the file into a System.Xml.XmlDocument-typed variable, and then I grab the <EnterpriseSearchService /> element and assign it to the $svcConfig variable. Next I need to get the search service itself and store it in a variable which I'll also use throughout the function. I pass in the -Local switch to get the service instance on the current machine. If I'm unable to find a service instance then something is wrong and I throw an error:
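That piece is short – a sketch:

```powershell
# Get the search service instance on the local machine; bail out if it's missing.
$searchSvc = Get-SPEnterpriseSearchServiceInstance -Local
if ($searchSvc -eq $null) {
    throw "Unable to retrieve search service instance on $env:computername."
}
```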
Next I need to get the managed account that will be used for the search service. I first try to retrieve the account in case it already exists, and if it doesn't exist then I create it after asking the user for the password:
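Something along these lines (the Account attribute on the service element is my assumption):

```powershell
# Reuse the managed account if it exists; otherwise prompt and create it.
$account = Get-SPManagedAccount -Identity $svcConfig.Account -ErrorAction SilentlyContinue
if ($account -eq $null) {
    $cred = Get-Credential $svcConfig.Account
    $account = New-SPManagedAccount -Credential $cred
}
```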
Now that we have a managed account and service instance we can set the core properties of the search service. I end up doing this on every machine even though it only needs to be done once – it's easier to set it every time than to figure out whether it's already been set, and doing so has no negative repercussions:
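A minimal sketch of that step, assuming the timeout and performance-level values are stored as attributes on the <EnterpriseSearchService /> element (those attribute names are hypothetical):

```powershell
# Apply farm-wide search service settings from the XML configuration.
Set-SPEnterpriseSearchService `
    -ContactEmail $svcConfig.ContactEmail `
    -ConnectionTimeout $svcConfig.ConnectionTimeout `
    -AcknowledgementTimeout $svcConfig.AcknowledgementTimeout `
    -PerformanceLevel $svcConfig.PerformanceLevel
```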
The core service settings are in place, now it’s time to create all the service applications. In the example XML we have just one but we could have more so I use the ForEach-Object cmdlet to loop through all the definitions:
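The loop itself is just:

```powershell
# One pass per <EnterpriseSearchServiceApplication /> element in the XML.
$svcConfig.EnterpriseSearchServiceApplications.EnterpriseSearchServiceApplication | ForEach-Object {
    # ...the per-application provisioning steps described below go here...
}
```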
The first thing we need to do to create our app is to create the application pool for the service application itself and the administration component:
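Using the Get-ApplicationPool helper from earlier, that might look like:

```powershell
# Store the current XML element for easier reference within nested loops.
$appConfig = $_
# One pool for the service application, one for the admin component.
$pool = Get-ApplicationPool $appConfig.ApplicationPool
$adminPool = Get-ApplicationPool $appConfig.AdminComponent.ApplicationPool
```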
Before creating the application pools I store the current XML element in the $appConfig variable for easier reference and to avoid conflicts with sub-loops. I then call the helper function shown earlier to create the two application pools, which I'll use later. Next I check whether the service application has already been created by calling Get-SPEnterpriseSearchServiceApplication, and if it does not exist I create a new one. This helps when you have to run the script again due to errors that may occur later in the script (I've often seen update conflict errors occur randomly; running the script again is usually all that's necessary):
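A sketch of that check-then-create pattern (the Name and DatabaseName attributes come from my sample configuration):

```powershell
# Reuse the service application if a previous run already created it.
$searchApp = Get-SPEnterpriseSearchServiceApplication -Identity $appConfig.Name -ErrorAction SilentlyContinue
if ($searchApp -eq $null) {
    $searchApp = New-SPEnterpriseSearchServiceApplication -Name $appConfig.Name `
        -ApplicationPool $pool -DatabaseName $appConfig.DatabaseName
}
```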
Now that the service application exists we can go ahead and create the proxy and set the proxy group memberships:
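Sketched out, using the Set-ProxyGroupMembership helper from earlier:

```powershell
# Reuse the proxy if it exists; otherwise create it against our service app.
$proxy = Get-SPEnterpriseSearchServiceApplicationProxy -Identity $appConfig.Proxy.Name -ErrorAction SilentlyContinue
if ($proxy -eq $null) {
    $proxy = New-SPEnterpriseSearchServiceApplicationProxy -Name $appConfig.Proxy.Name -SearchApplication $searchApp
}
# Make sure the proxy is online before assigning group memberships.
if ($proxy.Status -ne "Online") {
    $proxy.Status = "Online"
    $proxy.Update()
}
$proxy | Set-ProxyGroupMembership $appConfig.Proxy
```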
Like with the service application, I first try to get the proxy in case it has already been created, and if I don't find it then I create it. Once I have a reference to the proxy object I check whether it's online; if not, I set it online and call Update() to commit the change. And finally I call the Set-ProxyGroupMembership function that I previously defined.
The intent of the script is to allow it to be run on multiple servers to support a multi-server scripted deployment. That’s where this next bit comes in:
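The role detection can be sketched like so (the variable names are mine):

```powershell
# Determine which search roles this machine plays, per the <Server /> elements.
$computer = $env:computername
$isCrawlServer = ($appConfig.CrawlServers.Server | Where-Object { $_.Name -eq $computer }) -ne $null
$isQueryServer = ($appConfig.QueryServers.Server | Where-Object { $_.Name -eq $computer }) -ne $null
$isAdminServer = ($appConfig.AdminComponent.Server.Name -eq $computer)
```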
For the crawl servers, query servers, and admin component I get the name of the current computer ($env:computername) and then check whether a <Server /> element has been declared with a matching name for that component. The variables declared here are used throughout the rest of the script.
Before I can create the crawl or query components I need to start the search service instance that we previously acquired:
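Using the role flags computed from the <Server /> elements, a sketch:

```powershell
# Only start the instance on machines that host a crawl or query component.
if ($searchSvc.Status -ne "Online" -and ($isCrawlServer -or $isQueryServer)) {
    $searchSvc | Start-SPEnterpriseSearchServiceInstance
}
```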
If the service isn't already online, and we're on an appropriate server, I start it by passing the service instance to the Start-SPEnterpriseSearchServiceInstance cmdlet. Next I need to set the administration component:
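This step is a one-liner guarded by the admin-server check:

```powershell
# The admin component must be assigned before crawl/query components can be set.
if ($isAdminServer) {
    Set-SPEnterpriseSearchAdministrationComponent -SearchApplication $searchApp `
        -SearchServiceInstance $searchSvc
}
```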
The trick with this bit is that you have to set the administration component before you can set the query or crawl components, so the first time you run this script it must be on the server that is to run the administration component. Short of having the user run the script multiple times on the same server, and adding appropriate code to handle that, I've not come up with any way around this – frankly, it sucks, big time – so be careful with this one!
Okay, we’re about halfway through, still with me? 🙂
Now it’s time to create the crawl topology:
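A sketch of that logic – find (or create) a topology we can still modify, then add a crawl component for this server, passing in the crawl store:

```powershell
# Find a usable crawl topology: one with components, or one still inactive.
$crawlTopology = Get-SPEnterpriseSearchCrawlTopology -SearchApplication $searchApp |
    Where-Object { $_.CrawlComponents.Count -gt 0 -or $_.State -eq "Inactive" } |
    Select-Object -First 1
if ($crawlTopology -eq $null) {
    $crawlTopology = New-SPEnterpriseSearchCrawlTopology -SearchApplication $searchApp
}
if ($isCrawlServer) {
    # The crawl component needs the crawl store ID, so retrieve the store first.
    $crawlStore = $searchApp.CrawlStores | Select-Object -First 1
    $existing = $crawlTopology.CrawlComponents | Where-Object { $_.ServerName -eq $env:computername }
    if ($existing -eq $null) {
        New-SPEnterpriseSearchCrawlComponent -CrawlTopology $crawlTopology `
            -CrawlDatabase $crawlStore.Id.ToString() -SearchServiceInstance $searchSvc | Out-Null
    }
}
```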
First I get all existing crawl topologies for the service application (Get-SPEnterpriseSearchCrawlTopology) and filter on whether the crawl topology has components and whether it is active. I do this because when the search application is created it automatically creates a crawl topology for us, but that topology is not configured correctly (it has no crawl components), and once a topology has been made active it can no longer be changed to add crawl components. The new topology I create will be inactive, and I take advantage of that fact when I run the script on the next server. Once I have the crawl topology I can add the crawl components using the New-SPEnterpriseSearchCrawlComponent cmdlet (note that you have to pass in the crawl store ID, so I have to retrieve that ID first).
After we create the crawl topology and components we do essentially the same thing for the query topology and components:
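The query version mirrors the crawl version, with the crawl store swapped for an index partition:

```powershell
# Find a usable query topology: one with components, or one still inactive.
$queryTopology = Get-SPEnterpriseSearchQueryTopology -SearchApplication $searchApp |
    Where-Object { $_.QueryComponents.Count -gt 0 -or $_.State -eq "Inactive" } |
    Select-Object -First 1
if ($queryTopology -eq $null) {
    $queryTopology = New-SPEnterpriseSearchQueryTopology -SearchApplication $searchApp -Partitions 1
}
if ($isQueryServer) {
    # Query components are created against an index partition rather than a crawl store.
    $partition = Get-SPEnterpriseSearchIndexPartition -QueryTopology $queryTopology | Select-Object -First 1
    $existing = $queryTopology.QueryComponents | Where-Object { $_.ServerName -eq $env:computername }
    if ($existing -eq $null) {
        New-SPEnterpriseSearchQueryComponent -QueryTopology $queryTopology `
            -IndexPartition $partition -SearchServiceInstance $searchSvc | Out-Null
    }
}
```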
Great! We have our admin component created, our crawl topology and components created, and our query topology and components created. Now we just need to make things active. There’s nothing more to do with the admin component so we’ll first start the "Search Query and Site Settings Service" and then continue with the crawl topology:
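Starting that service instance is straightforward:

```powershell
# Start the Search Query and Site Settings Service on this box if it isn't online.
Get-SPServiceInstance -Server $env:computername |
    Where-Object { $_.TypeName -eq "Search Query and Site Settings Service" -and $_.Status -ne "Online" } |
    Start-SPServiceInstance | Out-Null
```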
So starting the query and site settings service was easy; now let's move on to the hard stuff:
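A sketch of the activation logic (here I poll the topology's State property until it reads "Active" – the exact status value you wait on may differ):

```powershell
# Don't activate until every designated crawl server has its component in place.
$allCrawlServersDone = $true
foreach ($server in $appConfig.CrawlServers.Server) {
    $component = $crawlTopology.CrawlComponents | Where-Object { $_.ServerName -eq $server.Name }
    if ($component -eq $null) { $allCrawlServersDone = $false }
}
if ($allCrawlServersDone -and $crawlTopology.State -ne "Active") {
    $crawlTopology | Set-SPEnterpriseSearchCrawlTopology -Active
    # The cmdlet returns immediately, so poll until activation completes.
    while ($crawlTopology.State -ne "Active") {
        Start-Sleep -Seconds 2
        $crawlTopology = Get-SPEnterpriseSearchCrawlTopology -Identity $crawlTopology -SearchApplication $searchApp
    }
}
```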
The first thing I do is set a variable indicating whether all designated crawl servers have been configured – we don't want to set the crawl topology active until every server has been configured, because once we make it active we can't change it (this is critical if you are planning a phased server roll-out: you will need to rebuild your topology if you later need to add additional crawl or query components). I then set the topology active using the Set-SPEnterpriseSearchCrawlTopology cmdlet. The problem is that it's not quite that simple – this cmdlet runs asynchronously, meaning it returns immediately rather than waiting until the topology is active. That matters because we can't proceed to the query piece until the crawl topology is active, so I then check the status in a loop, and if it's not "Ready" I sleep for 2 seconds and try again.
Only one more thing – now that the crawl topology is active we do, once again, the same thing for the query topology:
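Sketched with the query-specific cmdlets:

```powershell
# Don't activate until every designated query server has its component in place.
$allQueryServersDone = $true
foreach ($server in $appConfig.QueryServers.Server) {
    $component = $queryTopology.QueryComponents | Where-Object { $_.ServerName -eq $server.Name }
    if ($component -eq $null) { $allQueryServersDone = $false }
}
if ($allQueryServersDone -and $queryTopology.State -ne "Active") {
    $queryTopology | Set-SPEnterpriseSearchQueryTopology -Active
    # Again asynchronous, so poll until activation completes.
    while ($queryTopology.State -ne "Active") {
        Start-Sleep -Seconds 2
        $queryTopology = Get-SPEnterpriseSearchQueryTopology -Identity $queryTopology -SearchApplication $searchApp
    }
}
```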
This code is identical to that of the crawl topology but uses the query specific cmdlets.
And, finally, after about 236 lines of code, we’re done! Makes me miss the days of MOSS 2007 where I could start search with one line of STSADM (maybe I need to create a Start-OSearch cmdlet :)). So, putting it all together, here’s the complete function:
This script took me an incredible amount of time to figure out and I really hope others are able to benefit from it. If you find areas of improvement or anything that requires correction please, please, please post a comment so that I and others can benefit from your experiences with it.
Also, this script is a derivative of a slightly more complex one that I use for all my stuff. Though that more complex script has gone through many rounds of testing, this one has not – mainly, I've not had a chance to test it in a multi-server environment and have only had time to do a single-server deploy (though the changes related to the servers were very small and, if they were to fail, would likely have failed on the single server as well). Above all, remember that the product is still in beta, so expect that things may change between now and RTM, or may simply not work the same from one environment to the next.
Good luck and happy scripting!