There was recently a Twitter conversation between @cacallahan, @toddklindt, and @brianlala about provisioning Search on SharePoint Foundation and whether it is even possible, and somewhere during the conversation it was suggested that I might know how to do this (sorry guys for not responding immediately). Unfortunately I hadn't actually done any work with SharePoint 2013 Foundation yet, so I didn't know the answer; I knew there were issues and suspected a workaround was possible, but I didn't have a server built to test anything. Well, last night and today I managed to find some free time, so I figured I'd take a look at the problem to see if my guess about a workaround was correct.
Before I get to the results of my discovery let's first look at what the blocking issue is. It's actually quite simple: the product team, for various reasons, has decided that for SharePoint 2013 Foundation you can only have one Search Service Application and you shouldn't be able to modify the topology of that Service Application; this means that when you provision Search using the Farm Configuration Wizard it will create a default topology for you in which all roles are on a single server. To enforce these rules they chose to make the PowerShell cmdlets refuse to provision the service application or run any method or cmdlet that would otherwise allow you to modify an existing topology (so you can't change the topology created by the wizard). I totally get the reasoning for the restriction – if you need enterprise topology type structures then pony up the money and get off the free stuff. That said, I think they took the lazy way out by simply blocking the cmdlets when they could have easily put in other restrictions that would have achieved their goals while still allowing users to use PowerShell to provision the environment.
If you’re curious as to what happens when you try to provision the service using PowerShell on SharePoint Foundation 2013 here’s a screenshot which shows the error that is thrown:
This error is thrown by a simple piece of code in the InternalValidate() method of the cmdlet which checks to make sure you are on Standard or Enterprise before allowing the cmdlet to execute (and any other cmdlets or methods that would otherwise affect the topology likewise perform this check).
To solve the problem I decided to start from the perspective of code run via the browser and drill down to see what I could find. Using Reflector I located the class and associated methods that are called by the Farm Configuration Wizard; this quickly led me to the public Microsoft.Office.Server.Search.Administration.SearchService.CreateApplication() static methods. I did a quick test calling one of these methods and was happy to find that the Search Service Application created perfectly, though there was one minor problem: the topology was empty. At first glance I figured this wouldn't be an issue – I could simply clone the topology and add my components – but this is where I learned that they applied the SKU check to the methods and cmdlets that would allow you to manipulate the topology. (On a side note, using these methods on Standard or Enterprise is potentially a great alternative to the New-SPEnterpriseSearchServiceApplication cmdlet: it lets you specify the names of databases that you can't specify when using the cmdlet, and because it creates an initially empty topology there's less cleanup and manipulation of the cloned topology (assuming you don't want to use what's created) and it provisions slightly faster because it does less.) So at this point I figured I'd hit the real roadblock – I could create the service application but it was useless because I couldn't manipulate it.
This left me with only one option – to use reflection to call the internal method that the Farm Configuration Wizard calls to provision the service application. Now, before I get to the code that demonstrates how to do this I need to share a word of caution – using reflection to call internal methods is totally not supported. So what does this mean? Will Microsoft no longer support your Farm? Well, my understanding (and folks in the know please correct me if I'm wrong) is that Microsoft will continue to support you and that you will simply have to remove unsupported code before they will help you troubleshoot issues. In this case it's a one-time operation so there's nothing really to remove; I figure the worst case scenario is that they'll tell you to recreate the service application using the Farm Configuration Wizard and then they'll help you with your issue. But let's take the question of supportability out of the equation for a second and look at it from a completely practical standpoint – if you were to look at the code that the Farm Configuration Wizard calls you'd see that, outside of some error checking, data validation, and variable initialization, there are effectively just two lines of code that do the provisioning of the service, so I believe the probability of getting it wrong is pretty low. And the fact is search will either work or it won't, so if it doesn't work then try again or just use the dang wizard. So, with all that said, if you decide to use any of this code you need to weigh the risks yourself and make an informed decision with those risks in mind. Alright, enough of that crap – you want to see the code so let's get to the code.
To keep the PowerShell itself nice and simple I decided to derive this example from a script that Todd Klindt provides on his blog (the script I use is considerably more complex as it handles the changing of service options like the index folder and the service and crawl accounts, to name a few, and I don't want the point of this post to be lost in all those details). Just to make sure the full chain of credit is provided I should note that Todd's script is actually a derivative of what Spence Harbar provides on his blog, but I wanted to reference Todd's post specifically as it's a bit shorter and more focused on the topic. Okay, background info – check; disclaimer – check; attribution – check – looks like it's time for some code so here you go:
#Provide a unique name for the service application
$serviceAppName = "Search Service Application"
#Get the application pools to use (make sure you change the value for your environment)
$svcPool = Get-SPServiceApplicationPool "SharePoint Services App Pool"
$adminPool = Get-SPServiceApplicationPool "SharePoint Services App Pool"
#Get the service from the service instance so we can call a method on it
$searchServiceInstance = Get-SPEnterpriseSearchServiceInstance -Local
$searchService = $searchServiceInstance.Service
#Use reflection to provision the default topology just as the wizard would
$bindings = @("InvokeMethod", "NonPublic", "Instance")
$types = @([string], [Type], [Microsoft.SharePoint.Administration.SPIisWebServiceApplicationPool], [Microsoft.SharePoint.Administration.SPIisWebServiceApplicationPool])
$values = @($serviceAppName, [Microsoft.Office.Server.Search.Administration.SearchServiceApplication], [Microsoft.SharePoint.Administration.SPIisWebServiceApplicationPool]$svcPool, [Microsoft.SharePoint.Administration.SPIisWebServiceApplicationPool]$adminPool)
$methodInfo = $searchService.GetType().GetMethod("CreateApplicationWithDefaultTopology", $bindings, $null, $types, $null)
$searchServiceApp = $methodInfo.Invoke($searchService, $values)
#Create the search service application proxy (we get to use the cmdlet for this!)
$searchProxy = New-SPEnterpriseSearchServiceApplicationProxy -Name "$serviceAppName Proxy" -SearchApplication $searchServiceApp
#Provision the search service application
$searchServiceApp.Provision()
Basically there are two things that need to be done. First we need to use reflection to get the MethodInfo object for the CreateApplicationWithDefaultTopology() method of the Microsoft.Office.Server.Search.Administration.SearchService class; we then use this object to invoke the actual method, passing in the parameter types and values (and yes, the cast of the SPIisWebServiceApplicationPool objects is necessary, otherwise you'll get an error about trying to convert PSObjects to SPIisWebServiceApplicationPool types). The next thing we need to do, after the service application is created, is to create the service application proxy and then call the Provision() method on the search service application that we previously created (if you miss this step you'll get errors about things like the admin component not being started and whatnot).
Once completed you'll get a fully functional, PowerShell-provisioned search service application. If you navigate to the search administration page you should see something like this (just as if you had used the wizard):
So there you have it – it is indeed possible to provision the service using PowerShell. I'll let you determine whether you should or not.
It's been a while since my last real SharePoint 2010 scripting post but we're getting close to RTM so I figured I need to buckle down and play some catch up and get some long overdue posts published. So, continuing my series of posts on scripting the various services and service applications within SharePoint 2010 I decided that I would share something that I know a lot of people have been struggling with recently - scripting the SharePoint Foundation Search Service.
This one threw me for a bit of a loop because all the other services and service applications can be configured almost exclusively using PowerShell cmdlets - this one though has to be configured almost exclusively using the object model. We basically have four cmdlets available to help with the configuration and unfortunately they're not much help at all:
- Get-SPSearchService - Returns back an object representing the actual service
- Get-SPSearchServiceInstance - Returns an object representing a service configuration for the service
- Set-SPSearchService - Updates a few select properties associated with the service
- Set-SPSearchServiceInstance - Updates the ProxyType for the service
The main failing with these cmdlets is that you can't set the service's process identity, the database name, server, or failover server, and you can't trigger the provisioning of the service instances, which is required for the service to be considered fully "started". All of these things I can do through Central Admin but there's no way to do them using any provided cmdlets - so how do we solve the problem? By getting our hands dirty and writing a boatload of code against the object model.
So let's get started. As before we'll use an XML file to drive the setup process:
<SvcAccount Name="sp2010\spsearch" />
<CrawlAccount Name="sp2010\spcrawl" />
<Server Name="sp2010svr" ProxyType="Default" />
As you can see the configuration file is pretty simple. We define the two accounts that we'll use, one for the process identity of the service and the other for the crawl account. There are a few simple attributes for the database and some miscellaneous configurations, and a list of all the servers on which the service should be started.
Okay, let's start digging into the actual script. The first thing I do is load the XML file to a variable, $svcConfig, which I use throughout the function:
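The load itself is only a couple of lines; something along these lines does it (the file name here is just a placeholder):

[xml]$configFile = Get-Content "FoundationSearchConfig.xml"
$svcConfig = $configFile.FoundationSearchService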
The first line loads the file into a System.Xml.XmlDocument typed variable and then I grab the <FoundationSearchService /> element and set that to the $svcConfig variable. Next I need to determine whether the script should continue on this server by checking the <Servers /> element to see if there's a match for the current machine:
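A rough sketch of that check, assuming the <Server /> elements sit under a <Servers /> wrapper as described:

$targetServers = $svcConfig.Servers.Server | ForEach-Object { $_.Name }
if ($targetServers -notcontains $env:ComputerName) { return }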
So at this point we know that we're on a target machine so the first thing we want to do is use the Start-SPServiceInstance to start the Foundation Search Service:
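Sketched out, that step looks roughly like this; the first display name below is my assumption, the second comes from the note that follows:

$searchInstance = Get-SPServiceInstance -Server $env:ComputerName | Where-Object {
    $_.TypeName -eq "SharePoint Foundation Search" -or $_.TypeName -eq "SharePoint Foundation Help Search"
}
if ($searchInstance -ne $null -and $searchInstance.Status -ne "Online") {
    Start-SPServiceInstance $searchInstance | Out-Null
}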
The trick with this is that if we're not using SharePoint Foundation then once the service is initially started it renames itself to "SharePoint Foundation Help Search", so I had to put in a check for one name or the other to allow this script to be run multiple times and from multiple machines. Now that the service is started let's set a few variables that we'll use throughout the rest of the script:
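Those variables are just the two Get cmdlets from the list above (whether the original script uses the -Local switch here is my assumption):

$searchSvc = Get-SPSearchService
$searchSvcInstance = Get-SPSearchServiceInstance -Local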
We'll use the $searchSvc and $searchSvcInstance variables extensively. Note that we'll also need to re-run those two lines at least a couple of times later on to avoid update conflicts caused by timer jobs modifying those objects.
The next step will be to set the process identity for the service. We'll go ahead and also get the crawl account information while we're at it to avoid prompting for passwords in more than one location:
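The credential gathering is the easy half and looks something like this (the element names follow the example XML); the process identity change itself is then made against $searchSvc.ProcessIdentity as described next:

$svcCred = Get-Credential $svcConfig.SvcAccount.Name
$crawlCred = Get-Credential $svcConfig.CrawlAccount.Name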
This is where things start to get interesting. I use the Get-Credential cmdlet to return the credentials of the user to use for the service, but once I have that there's no parameter on any cmdlet that will let me set the credential, so I have to do it using the object model. I use the $searchSvc variable from earlier and edit the object returned by the ProcessIdentity property (after confirming that the value needs to be changed).
Once we have the process set we can go ahead and set the other simple properties on the service - fortunately the cmdlet Set-SPSearchService can actually help us out with this one:
Alright, that was the easy stuff - now we have to deal with the database. The first step is to see if there's already a database defined for the service and if it matches what we want. This is important as we want to be able to run the script more than once so we don't want to just blindly delete and recreate the database. The first bit of code builds a connection string using the SqlConnectionStringBuilder object (note that in PowerShell you have to use the PSBase property to access the properties on this object) and then compares that to what is currently set. If a match is not found then the existing database is deleted and the search service updated:
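Roughly, the comparison looks like this; the database attribute names on the XML are my guesses and the connection-string handling is simplified:

$connBuilder = New-Object System.Data.SqlClient.SqlConnectionStringBuilder
$connBuilder.psbase.DataSource = $svcConfig.DatabaseServer
$connBuilder.psbase.InitialCatalog = $svcConfig.DatabaseName
$connBuilder.psbase.IntegratedSecurity = $true
$searchDb = $searchSvcInstance.SearchDatabase
if ($searchDb -ne $null -and $searchDb.DatabaseConnectionString -ne $connBuilder.psbase.ConnectionString) {
    #The existing database doesn't match what we want so remove it and clear the association
    $searchDb.Delete()
    $searchDb = $null
    $searchSvcInstance.SearchDatabase = $null
    $searchSvcInstance.Update()
}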
At this point if the $searchDb variable is null then we want to go ahead and create a new search database:
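In sketch form it's something like the following; the namespace and the exact Create() overload are assumptions on my part, so treat this as illustrative:

if ($searchDb -eq $null) {
    try {
        #Create the database object from the connection string builder and then provision it on the SQL instance
        $searchDb = [Microsoft.SharePoint.Search.Administration.SPSearchDatabase]::Create($connBuilder)
        $searchDb.Provision()
        #Associate the new database with the service instance
        $searchSvcInstance.SearchDatabase = $searchDb
        $searchSvcInstance.Update()
    } catch {
        if ($searchDb -ne $null -and $searchSvcInstance.SearchDatabase -eq $null) {
            $searchDb.Unprovision()  #clean up the orphaned database (assumption: Unprovision removes it from SQL)
        }
        throw
    }
}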
I first create a new SPSearchDatabase object by calling the static Create() method and passing in the SqlConnectionStringBuilder object that was previously created. I then call the Provision() method to actually create the database on the SQL server instance. Once it's created we can associate the database with the service by setting the SearchDatabase property on the $searchSvcInstance variable. If an error occurs then I attempt to delete the database from SQL Server if it's not yet associated with the service.
Now that we have our database provisioned we can go ahead and set the failover server:
Most of the logic here is just in determining whether or not to set the failover server. Basically you just call the AddFailoverServiceInstance() method of the SearchDatabase property (SPSearchDatabase) and then update the service instance.
We're almost there - we've set all the properties we can now we need to complete the provisioning process:
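A sketch of that logic; the status bookkeeping mirrors the description that follows:

if ($searchSvcInstance.Status -ne "Online" -and $searchSvcInstance.Server.Address -eq $env:ComputerName) {
    $previousStatus = $searchSvcInstance.Status
    try {
        $searchSvcInstance.Provision()
    } catch {
        #Provisioning failed so put the status back the way we found it
        $searchSvcInstance.Status = $previousStatus
        $searchSvcInstance.Update()
        throw
    }
}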
If the service instance is not currently marked as Online (again, accounting for multiple runs) and the service instance we're working with is for the current machine then we call the Provision() method on the service instance. If an error occurs provisioning the service then I try to set the status back to its previous value.
Only two steps left. First we need to create a timer job to trigger the search service instance to be provisioned on the other servers in the farm:
And finally, we need to set the ProxyType for the service instances so I loop through the <Server /> elements and call the Set-SPSearchServiceInstance cmdlet, providing the ProxyType attribute as defined in the XML:
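That loop is short; here's a sketch (matching instances to servers by address is my assumption):

foreach ($server in $svcConfig.Servers.Server) {
    $instance = Get-SPSearchServiceInstance | Where-Object { $_.Server.Address -eq $server.Name }
    if ($instance -ne $null) {
        Set-SPSearchServiceInstance -Identity $instance -ProxyType $server.ProxyType
    }
}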
Phew - we're done! Let's put it all together now - here's the complete script:
One thing you should note is that I'm not setting the schedule for the service. This is because the timer job class that I'd need to use to set the schedule is marked internal thus making it impossible for me to set the schedule without using reflection.
As you can see we're in a bit of a conundrum with SharePoint 2010 - scripting your installations is considered a best practice and you should strive to do so whenever possible, but the level of complexity involved with scripting such simple things has made it prohibitively difficult for the average administrator.
I recognized this issue the very first day I started working with SharePoint 2010 and to solve the problem I've been working on a product for ShareSquared called SharePoint Composer which will allow administrators, architects, and developers to visually design their SharePoint configurations and then build out the entire Farm using the model they create in the design tool. This tool will allow you to enforce your corporate standards by clearly documenting every configuration and building the farm based on those configurations in a single-click, automated way - all without having to know any PowerShell at all! Keep a watch here for more information about SharePoint Composer.
Note - I've not had a chance to test this in a multi-server farm so if anyone can give me some feedback about their experiences with it I'd greatly appreciate it.
The information in this post is specific to SharePoint 2010 Beta 2 and may need adjusting for the RTM version.
In an effort to continue with my previous posts, where I demonstrated how to build a basic farm and its site structure using XML configuration files and PowerShell for SharePoint 2010, I would like to now share how to create a search service application. An automated install of the service applications is, without a doubt, the most difficult PowerShell task you'll undertake when scripting your SharePoint 2010 install, and the search application is the most difficult of them all, which is why I've chosen to explain it first; I expect it to be one of the most needed and one of the least understood. Note that I'm not planning on giving any depth to what the various components are; there's plenty of other resources that will explain what the admin component is, for example.
To start off let's look at the XML file that will drive our setup. Like my previous examples I have a fairly simplistic XML structure that drives all my configurations. This structure allows me to create as many service application instances as needed, each with their own configurations:
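The file isn't reproduced in full here, but based on the description below it's shaped roughly like this (the attribute names and values are illustrative placeholders, not the exact schema):

<Services>
  <EnterpriseSearchService Account="sp2010\spsearch" ContactEmail="someone@example.com" PerformanceLevel="PartlyReduced">
    <EnterpriseSearchServiceApplications>
      <EnterpriseSearchServiceApplication Name="Search Service Application">
        <ApplicationPool Name="SharePoint Search App Pool" Account="sp2010\spsearch" />
        <CrawlServers>
          <Server Name="sp2010svr" />
        </CrawlServers>
        <QueryServers>
          <Server Name="sp2010svr" />
        </QueryServers>
        <AdminComponent>
          <Server Name="sp2010svr" />
          <ApplicationPool Name="SharePoint Search Admin App Pool" Account="sp2010\spsearch" />
        </AdminComponent>
        <Proxy Name="Search Service Application Proxy">
          <ProxyGroup Name="Default" />
        </Proxy>
      </EnterpriseSearchServiceApplication>
    </EnterpriseSearchServiceApplications>
  </EnterpriseSearchService>
</Services>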
Examining the structure above you can see that I chose to put the <EnterpriseSearchService /> element under a <Services /> element - this will allow me to have all my service configurations in one file rather than a separate file for each service (note that there can be only one <EnterpriseSearchService /> element). Under the <EnterpriseSearchService /> element I have a container element for the applications - there should be only one <EnterpriseSearchServiceApplications /> element but you can have as many <EnterpriseSearchServiceApplication /> elements under it as you need. The application element is where all the meat of the configuration is. Within this element you define the application pool to use, the crawl and query servers, the server for the administrative component, and finally the proxy definition and its proxy group memberships. The <CrawlServers /> and <QueryServers /> elements can have as many <Server /> child elements as needed, but the <AdminComponent /> element can have only one <Server /> child element. And finally the <Proxy /> element can have as many <ProxyGroup /> child elements as desired.
Okay, so that's the easy part - hopefully you can begin to see the power and flexibility of this simple XML file. Now for the scripts - first we need to look at a couple of helper functions, one to get/create our application pools and another for the proxy group memberships. Let's take a look at the application pool function, which I called Get-ApplicationPool:
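A sketch of what that function plausibly looks like (the attribute names on the XML element passed in are assumptions):

function Get-ApplicationPool([System.Xml.XmlElement]$appPoolConfig) {
    #Return the pool if it already exists
    $pool = Get-SPServiceApplicationPool $appPoolConfig.Name -ErrorAction SilentlyContinue
    if ($pool -eq $null) {
        #Get the managed account the pool will run as, creating it if necessary
        $account = Get-SPManagedAccount $appPoolConfig.Account -ErrorAction SilentlyContinue
        if ($account -eq $null) {
            $cred = Get-Credential $appPoolConfig.Account
            $account = New-SPManagedAccount -Credential $cred
        }
        $pool = New-SPServiceApplicationPool -Name $appPoolConfig.Name -Account $account
    }
    return $pool
}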
In this function I'm attempting to get the application pool if it already exists and if it doesn't then I proceed to attempt to get the managed account that will be associated with the application pool. If the managed account doesn't exist then I prompt for credentials and then create the managed account which I then use to create the application pool which gets returned to the calling function.
The next function, which I've named Set-ProxyGroupMembership associates my service application proxy with one or more proxy groups:
This function is probably a bit more complicated than it needs to be but I'm going to use it with every service application script so I'll explain it briefly here and just reference this post in my future posts. For this function I wanted to be able to pass the proxy object that I created into the function using the pipeline rather than a parameter (it just flowed better that way and allowed me to pass more than one proxy if I desired without having to write a loop within the function). The first thing I'm doing in this function is clearing out any existing proxy group assignments that may have been set automatically but are not what I want per the XML file. Once I've cleared undesired assignments then I add any missing assignments. Some service applications will automatically add the proxy to the default proxy group which may not be what you want.
Now that we have our two helper functions out of the way we can start looking at the core function. I'll talk about it in chunks and then at the end of this post provide the complete function.
The first thing I do is load the XML file to a variable, $svcConfig, which I use throughout the function:
The first line loads the file into a System.Xml.XmlDocument typed variable and then I grab the <EnterpriseSearchService /> element and set that to the $svcConfig variable. Next I need to get the search service instance and set that to a variable which I'll use throughout the function as well. I pass the -Local switch in to get the service instance on the current machine. If I'm unable to find a service instance then something is wrong and I throw an error:
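The instance lookup is simple enough to sketch:

$searchSvcInstance = Get-SPEnterpriseSearchServiceInstance -Local
if ($searchSvcInstance -eq $null) {
    throw "Unable to locate the search service instance on $env:ComputerName"
}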
Next I need to get the managed account that will be used for the search service. I first try to retrieve the account in case it already exists, and if it doesn't exist then I create it after asking the user for the password:
Now that we have a managed account and service instance we can set the core properties for the search service. I end up doing this on every machine but it only needs to be done once - just easier to set it every time rather than try and figure out if it's been set yet and doing so has no negative repercussions:
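Condensed down, it amounts to a single Set-SPEnterpriseSearchService call; the parameters I've picked and the XML attribute names are my own choices (the cmdlet accepts several more):

$searchSvcCred = Get-Credential $svcConfig.Account
Set-SPEnterpriseSearchService -ServiceAccount $searchSvcCred.UserName `
    -ServicePassword $searchSvcCred.Password `
    -ContactEmail $svcConfig.ContactEmail `
    -PerformanceLevel $svcConfig.PerformanceLevel `
    -ProxyType Default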
The core service settings are in place, now it's time to create all the service applications. In the example XML we have just one but we could have more so I use the ForEach-Object cmdlet to loop through all the definitions:
The first thing we need to do to create our app is to create the application pool for the service application itself and the administration component:
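A sketch of that bit, reusing the helper from earlier; this runs inside the ForEach-Object loop from the previous step and the element names follow the example XML above:

$appConfig = $_
$appPool = Get-ApplicationPool $appConfig.ApplicationPool
$adminAppPool = Get-ApplicationPool $appConfig.AdminComponent.ApplicationPool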
Before creating the application pools I store the current XML element in the $appConfig variable for easier reference and to avoid conflicts with sub-loops. I then call the helper function I showed earlier to create the two application pools which I'll use later. Next I check to see if the service application has already been created (the first line below) by calling Get-SPEnterpriseSearchServiceApplication, and if it does not exist then I create a new one. This helps when you have to run the script again due to possible errors that may occur later in the script (I've often seen update conflict errors occur randomly; running the script again is usually all that's necessary):
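Something like this (I've left out the database-related parameters for brevity):

$searchApp = Get-SPEnterpriseSearchServiceApplication $appConfig.Name -ErrorAction SilentlyContinue
if ($searchApp -eq $null) {
    $searchApp = New-SPEnterpriseSearchServiceApplication -Name $appConfig.Name -ApplicationPool $appPool
}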
Now that the service application exists we can go ahead and create the proxy and set the proxy group memberships:
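A sketch of the proxy handling; the proxy element names are assumptions and the last line calls the helper function defined earlier:

$proxy = Get-SPEnterpriseSearchServiceApplicationProxy $appConfig.Proxy.Name -ErrorAction SilentlyContinue
if ($proxy -eq $null) {
    $proxy = New-SPEnterpriseSearchServiceApplicationProxy -Name $appConfig.Proxy.Name -SearchApplication $searchApp
}
if ($proxy.Status -ne "Online") {
    $proxy.Status = "Online"
    $proxy.Update()
}
$proxy | Set-ProxyGroupMembership $appConfig.Proxy.ProxyGroup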
As with the service application I first try to get the proxy in case it has already been created, and if I don't find it then I create it. Once I have a reference to the proxy object I check to see if it's online and if not then I set it online and call Update() to commit the change. And finally I call the Set-ProxyGroupMembership function that I previously defined.
The intent of the script is to allow it to be run on multiple servers to support a multi-server scripted deployment. That's where this next bit comes in:
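In sketch form, the role detection is just a handful of comparisons against the current computer name:

$isCrawlServer = ($appConfig.CrawlServers.Server | Where-Object { $_.Name -eq $env:ComputerName }) -ne $null
$isQueryServer = ($appConfig.QueryServers.Server | Where-Object { $_.Name -eq $env:ComputerName }) -ne $null
$isAdminServer = $appConfig.AdminComponent.Server.Name -eq $env:ComputerName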
For the crawl servers, query servers, and admin component I get the name of the current computer ($env:computername) and then check to see if a <Server /> element has been declared with a matching name for the specific component. The variables declared here are then used throughout the rest of the script.
Before I can create the crawl or query component I need to start the search service instance that we previously acquired:
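That amounts to something like this:

if (($isCrawlServer -or $isQueryServer -or $isAdminServer) -and $searchSvcInstance.Status -ne "Online") {
    Start-SPEnterpriseSearchServiceInstance $searchSvcInstance
}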
If the service isn't already online and if we're on an appropriate server then I start the service by passing the service instance to the Start-SPEnterpriseSearchServiceInstance cmdlet. Next I need to set the administration component:
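A sketch using the standard cmdlet:

if ($isAdminServer) {
    Set-SPEnterpriseSearchAdministrationComponent -SearchApplication $searchApp -SearchServiceInstance $searchSvcInstance
}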
The trick with this bit is that you have to set the administration component before you can set the query or crawl components, so the first time you run this script it must be on the server that is to run the administration component - short of having the user run the script multiple times on the same server and adding appropriate code to handle that, I've not come up with any way around this - frankly, it sucks, big time - so be careful with this one!
Okay, we're about halfway through, still with me?
Now it's time to create the crawl topology:
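A simplified sketch of that logic follows; the reuse-or-create filtering here is looser than what the original script does, as described below:

if ($isCrawlServer) {
    #Reuse an existing inactive topology if there is one, otherwise create a new one
    $crawlTopology = $searchApp | Get-SPEnterpriseSearchCrawlTopology |
        Where-Object { $_.State -eq "Inactive" } | Select-Object -First 1
    if ($crawlTopology -eq $null) {
        $crawlTopology = $searchApp | New-SPEnterpriseSearchCrawlTopology
    }
    #The crawl component needs to know which crawl store (database) to use
    $crawlDatabase = ([array]($searchApp | Get-SPEnterpriseSearchCrawlDatabase))[0]
    $existing = $crawlTopology.CrawlComponents | Where-Object { $_.ServerName -eq $env:ComputerName }
    if ($existing -eq $null) {
        New-SPEnterpriseSearchCrawlComponent -CrawlTopology $crawlTopology -CrawlDatabase $crawlDatabase -SearchServiceInstance $searchSvcInstance | Out-Null
    }
}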
First I get all existing crawl topologies for the service application (Get-SPEnterpriseSearchCrawlTopology) and filter on whether the crawl topology has components and whether it is active. I do this because when the search application is created it automatically creates a crawl topology for us, but that topology is not configured correctly (there are no crawl components), and once a topology has been made active it won't let us change it in order to add crawl components. When I create our new topology it will be inactive, so I will use this fact when I run the script on the next server. Once I have the crawl topology I can then add the crawl components using the New-SPEnterpriseSearchCrawlComponent cmdlet (note that you have to tell it which crawl store to use, so I have to retrieve that as well).
After we create the crawl topology and components we do essentially the exact same thing for the query topology and components:
Great! We have our admin component created, our crawl topology and components created, and our query topology and components created. Now we just need to make things active. There's nothing more to do with the admin component so we'll first start the "Search Query and Site Settings Service" and then continue with the crawl topology:
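Starting that service is a quick sketch:

$qssInstance = Get-SPEnterpriseSearchQueryAndSiteSettingsServiceInstance -Local
if ($qssInstance.Status -ne "Online") {
    Start-SPServiceInstance $qssInstance | Out-Null
}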
So starting the query and site settings service was easy; now let's move on to the hard stuff:
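Here's a rough equivalent of the crawl activation piece; note that this sketch polls the topology's State for "Active" rather than the "Ready" check described below, and the XML paths are assumptions:

if ($isCrawlServer) {
    #Only activate once every designated crawl server has its component in place
    $allCrawlServersConfigured = $true
    foreach ($server in $appConfig.CrawlServers.Server) {
        if (($crawlTopology.CrawlComponents | Where-Object { $_.ServerName -eq $server.Name }) -eq $null) {
            $allCrawlServersConfigured = $false
        }
    }
    if ($allCrawlServersConfigured -and $crawlTopology.State -ne "Active") {
        $crawlTopology | Set-SPEnterpriseSearchCrawlTopology -Active
        #The activation is asynchronous, so poll until the topology reports itself active
        do {
            Start-Sleep -Seconds 2
            $crawlTopology = $searchApp | Get-SPEnterpriseSearchCrawlTopology -Identity $crawlTopology
        } while ($crawlTopology.State -ne "Active")
    }
}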
The first thing I do is set a variable to indicate whether all the designated crawl servers have been configured - we don't want to set the crawl topology active until all the servers have been configured, because once we make it active we can't change it (this is critical if you are planning a phased server roll-out - you will need to rebuild your topology if you need to add additional crawl or query components). Then I set the topology as active using the Set-SPEnterpriseSearchCrawlTopology cmdlet. The problem is it's not quite that simple - this cmdlet runs asynchronously, meaning that it returns immediately and does not wait until the topology is actually active. That matters because we can't proceed to the query piece until the crawl topology is active, so all the loop at the end does is check the status and, if it's not "Ready", sleep for 2 seconds and try again.
Only one more thing - now that the crawl topology is active we do, once again, the same thing for the query topology:
This code is identical to that of the crawl topology but uses the query specific cmdlets.
And, finally, after about 236 lines of code, we're done! Makes me miss the days of MOSS 2007 where I could start search with one line of STSADM (maybe I need to create a Start-OSearch cmdlet). So, putting it all together, here's the complete function:
This script took me an incredible amount of time to figure out and I really hope others are able to benefit from it. If you find areas of improvement or anything that requires correction please, please, please post a comment so that I and others can benefit from your experiences with it.
Also, this script is a derivative of a slightly more complex one that I use for all my stuff and though that more complex script has gone through many rounds of testing this one has not - mainly I've not had a chance to test in a multi-server environment and have only had time to do a single server deploy (though the changes related to the servers were very small and, if they were to fail, would likely have failed on the single server). Mainly try to remember that the product is still in beta so you should expect that things may either change between now and RTM or things may just not work from one environment to the next.
Good luck and happy scripting!
This is one that I've been wanting to address for a while and I finally decided to sit down and just do it. If you've had your environment in place long enough to have to change the passwords you know that you can change most of the passwords using the out of the box STSADM commands - many refer to this support article from Microsoft on how to do this: http://support.microsoft.com/kb/934838.
Because there are so many accounts to change and so many places to visit, this is definitely one of those things you want to have scripted (just be careful where you store your script). If you look at the article though you'll notice that it doesn't address updating the user profile import account and it mentions that you have to manually change the default content access account. I already have a command to change the user profile import account but I didn't have anything for changing the default content access account, and having scripts with manual steps just kind of defeats the purpose in my opinion. So, I created a new command which I called gl-updatedefaultcontentaccessaccount.
Setting the default content access account through code is really easy - you just call the SetDefaultGatheringAccount method on an instance of the Content class, which you construct from a SearchContext obtained by calling the static GetContext method of the SearchContext class:
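The command itself is written in C#, but the same object model call is easy to sketch from PowerShell with the MOSS assemblies loaded; the SetDefaultGatheringAccount overload shown here (account name plus SecureString password) is an assumption on my part:

[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Server")
[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Server.Search")
#Get the search context for the default SSP and wrap it in a Content object
$serverContext = [Microsoft.Office.Server.ServerContext]::Default
$searchContext = [Microsoft.Office.Server.Search.Administration.SearchContext]::GetContext($serverContext)
$content = New-Object Microsoft.Office.Server.Search.Administration.Content($searchContext)
#Set the default content access (gathering) account
$password = ConvertTo-SecureString 'pa$$w0rd' -AsPlainText -Force
$content.SetDefaultGatheringAccount("domain\sspcontent", $password)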
The help for the command is shown below:
C:\>stsadm -help gl-updatedefaultcontentaccessaccount

stsadm -o gl-updatedefaultcontentaccessaccount

Sets the account to use as the default account when crawling content. This account must have read access to the content being crawled. To avoid crawling unpublished versions of documents, ensure that this account is not an administrator on the target server.

Parameters:
    [-ssp <SSP name>]
    -username <DOMAIN\name>
    -password <password>
The following table summarizes the command and its various parameters:
|Command Name|Availability|Build Date|
|gl-updatedefaultcontentaccessaccount|MOSS 2007|Released: 8/15/2008|

|Parameter Name|Short Form|Required|Description|Example Usage|
|ssp| |No|The SSP that the account is associated with. If omitted then the default SSP is used.|-ssp SSP1|
|username|u|Yes|The username of the account to use. The account must have read access to the content being crawled. To avoid crawling unpublished versions of documents, ensure that the account is not an administrator on the target server.|-username "domain\sspcontent"|
|password|pwd|Yes|The password associated with the specified username.|-password "pa$$w0rd"|
The following is an example of how to set the default content access account:
stsadm -o gl-updatedefaultcontentaccessaccount -username "domain\sspcontent" -password "pa$$w0rd"
I'll follow up this post with a sample password change script that I use which includes this command.
Our previous environment had just one web application and no existing search scopes beyond the default ones. With our upgrade we wanted to (finally) take advantage of search scopes to help filter the result sets and make searches more relevant. In order to make the creation of scopes scriptable I needed three new commands: gl-createsearchscope, gl-updatesearchscope, and gl-addsearchrule. I thought about creating commands to support editing and deleting but as I don't currently have the need for that I decided against it (with the exception of the gl-updatesearchscope command which I needed to be able to assign my shared search scope to groups on the various web applications). For some reason I was expecting this to be more difficult than it was but after digging into it I found it to be rather easy. The commands I created are detailed below.
The code to work with search scopes is really straightforward. You obtain a Microsoft.Office.Server.Search.Administration.Scopes object, which is effectively your scope manager. From this you use the AllScopes property (which is a ScopeCollection object) and call the Create method, passing in appropriate parameters. Once you've got your scope created you can add it to relevant groups by getting the ScopeDisplayGroup object via the GetDisplayGroup() method of the Scopes object. Note that the scope can be owned by a site collection or the SSP. If a null value is passed into the Create method for the owningSiteUrl parameter then the scope will be owned by the SSP (it will be a shared scope available to all site collections belonging to the SSP, which is determined by the passed-in url parameter used to load the appropriate SPSite object). The core code is shown below:
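For reference, here's roughly what that looks like when driven from PowerShell; the exact Create() and GetDisplayGroup() overloads are assumptions, so check the Scopes documentation before relying on them:

[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Server.Search")
$site = New-Object Microsoft.SharePoint.SPSite("http://sspadmin/ssp/admin")
$searchContext = [Microsoft.Office.Server.Search.Administration.SearchContext]::GetContext($site)
$scopes = New-Object Microsoft.Office.Server.Search.Administration.Scopes($searchContext)
#Passing $null for the owning site URL makes this a shared (SSP-owned) scope
$scope = $scopes.AllScopes.Create("Search Scope 1", "A really helpful search scope.", $null, $true, $null, [Microsoft.Office.Server.Search.Administration.ScopeCompilationType]::AlwaysCompile)
#Add the scope to a display group on the site collection
$group = $scopes.GetDisplayGroup((New-Object System.Uri($site.Url)), "search dropdown")
$group.Add($scope)
$group.Update()
$site.Dispose()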
The syntax of the command can be seen below:
C:\>stsadm -help gl-createsearchscope

stsadm -o gl-createsearchscope

Sets the search scope for a given site collection.

Parameters:
    -url <site collection url>
    -name <scope name>
    [-description <scope description>]
    [-groups <display groups (comma separate multiple groups)>]
    [-searchpage <specific search results page to send users to for results when they search in this scope>]
    [-sspisowner]
Here's an example of how to create a shared search scope (owned by the SSP):
stsadm -o gl-createsearchscope -url "http://sspadmin/ssp/admin" -name "Search Scope 1" -description "A really helpful search scope." -groups "search dropdown, advanced search" -sspisowner
Note that the group assignments will not show up on other web applications - you must use the gl-updatesearchscope command to associate the scope with groups on each web application of interest.
This code is almost identical to that of the gl-createsearchscope command - the main difference is that I'm updating individual properties rather than calling the Create method and I have to clear out existing groups before adding the newly assigned groups:
The syntax of the command can be seen below:
C:\>stsadm -help gl-updatesearchscope

stsadm -o gl-updatesearchscope

Updates the specified search scope for a given site collection.

Parameters:
    -url <site collection url>
    -name <scope name>
    [-description <scope description>]
    [-groups <display groups (comma separate multiple groups)>]
    [-searchpage <specific search results page to send users to for results when they search in this scope>]
Here's an example of how to update a web application to assign the shared scope created above to appropriate groups:
stsadm -o gl-updatesearchscope -url "http://intranet" -name "Search Scope 1" -groups "search dropdown, advanced search"
Once you have a search scope created you can now add rules to it. This command is slightly more complex due to the different types of rules that can be created. In general there are four types: AllContent, ContentSource, PropertyQuery, and WebAddress. The ContentSource is typically only used with shared scopes (you can create a ContentSource rule on a scope that is not shared using this tool but you cannot do it via the browser - I'm honestly not sure if the rule will work correctly though). To manage the rules of a scope you simply grab the Rules property of the Scope object and call the appropriate method (there's one for each type of rule except for ContentSource which is effectively just a PropertyQuery rule that uses the ContentSource managed property):
The syntax of the command can be seen below:
C:\>stsadm -help gl-addsearchrule

stsadm -o gl-addsearchrule

Adds a search scope rule to the specified scope for a given site collection.

Parameters:
    -url <site collection url>
    -scope <scope name>
    -behavior <include | require | exclude>
    -type <webaddress | propertyquery | contentsource | allcontent>
    [-webtype <folder | hostname | domain>]
    [-webvalue <value associated with the specified web type>]
    [-property <managed property name>]
    [-propertyvalue <value associated with the specified property or content source>]
Here's an example of how to add a rule to the scope created above which will prevent content from the HR site collection from being returned in the results:
stsadm -o gl-addsearchrule -url "http://intranet" -scope "Search Scope 1" -behavior exclude -type webaddress -webtype folder -webvalue "http://intranet/hr"
This was a pretty simple command to create and only took me a few minutes. Basically I needed to be able to set the search center for a site collection. You can do this via the browser by going to http://[portal]/[site collection]/_layouts/enhancedSearch.aspx. The command I created is called gl-setsearchcenter.
The code for this is beyond simple - it was just a matter of setting a string based property via the AllProperties collection:
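Sketched from PowerShell it amounts to a few lines; the property key name here is my assumption about what the enhancedSearch.aspx page writes:

[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
$site = New-Object Microsoft.SharePoint.SPSite("http://intranet")
$web = $site.RootWeb
#Point the site collection at its search center page
$web.AllProperties["SRCH_ENH_FTR_URL"] = "http://intranet/SearchCenter/Pages"
$web.Update()
$site.Dispose()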