Friday, March 27, 2015

Get all Office 365 Video Channels, Groups and Delve Boards with REST


Office 365 has introduced 3 new portals recently: Videos, Groups and Delve. Behind the scenes, the architecture of Videos and Groups is such that each Video channel is a site collection and so is each Group. For Delve boards, each board is saved as a Tag and when you add a document to a board, the document is tagged with the name of the board.

If you are working on a solution for Office 365 and want to integrate Videos, Groups or Delve, here is how you can get a list of all of them using the SharePoint Search REST API (/_api/search/query):

1) Get all Office 365 Video Channels with the REST API:
/_api/search/query?querytext='contentclass:sts_site WebTemplate:POINTPUBLISHINGTOPIC'&SelectProperties='WebTemplate,Title,Path'&rowlimit=50

2) Get all Office 365 Groups with the REST API:
/_api/search/query?querytext='contentclass:sts_site WebTemplate:Group'&SelectProperties='WebTemplate,Title,Path'&rowlimit=50

3) Get all Delve Boards with the REST API:
/_api/search/query?querytext='(Path:"TAG://PUBLIC/?NAME=*")'&Properties='IncludeExternalContent:true'&selectproperties='Path,Title'&rowlimit=50
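All three calls differ only in their querytext, so it can be convenient to assemble the request URL from its parts. Below is a minimal sketch; the helper name and the example tenant URL are illustrative, not from the SharePoint API itself.

```javascript
// Minimal sketch: build the full URL for a SharePoint search REST query
// from a querytext and options. buildSearchUrl and the tenant URL below
// are illustrative names, not part of any SharePoint library.
function buildSearchUrl(siteUrl, querytext, selectProperties, rowLimit) {
  return siteUrl + "/_api/search/query" +
    "?querytext='" + encodeURIComponent(querytext) + "'" +
    "&SelectProperties='" + selectProperties.join(",") + "'" +
    "&rowlimit=" + rowLimit;
}

// Example: all Office 365 Video channels
var url = buildSearchUrl(
  "https://tenant.sharepoint.com",
  "contentclass:sts_site WebTemplate:POINTPUBLISHINGTOPIC",
  ["WebTemplate", "Title", "Path"],
  50
);
// Issue a GET against this URL with the header
// Accept: application/json;odata=verbose to receive JSON results.
```

The same helper covers the Groups and Delve queries by swapping in the querytext values shown above.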

SharePoint 2010 Public Site Navigation not working on latest version of Safari for the Mac

The fix is an update to the compat.browser file.

Wednesday, October 24, 2012

onclick attributes do not fire on custom search XSLT on a Publishing page

I had a requirement from a client this week to create a search solution for a custom list (of Contractors). Basically the client wanted to use SharePoint Search and a custom XSLT to render the list items. The two challenges I faced (without resorting to full-on development):
  1. Make the results sort by the Company name. Note that OOB, the Core Search Results web part does not support this. It only supports sorting by date or relevance (which is the default).
  2. Open the results in a modal popup window, rather than redirecting the whole page to the list item.

To solve point 1, I found a blog entry that allowed me to change the sort order to company.

The second problem was more frustrating to solve. I thought that within the XSLT I would add an onclick attribute, calling a function to load the given URL into a SharePoint modal window. But every time I tried, the onclick event just wouldn't fire. I went back to a simple onclick="alert('hello')" and it still didn't work. Within the body of the page I added straight links, switched to HTML mode and added the onclick events there, but every time I did this, SharePoint would strip them out. Another blog entry then gave me a clue: SharePoint was blocking basic onclick events from firing, or removing them altogether. So to solve the problem, I removed the onclick attributes and instead embedded a jQuery function within the XSLT to iterate through all links and add the click handler at runtime. This time it worked. FYI, below is the XSLT block:

<script type="text/javascript" src="/_layouts/Inflow/cqwp.js"></script>
<script type="text/javascript">
<xsl:text disable-output-escaping="yes">
$(document).ready(function () {
  if ($("#results-table").length) {
    $("#results-table a").click(function () {
      // Load the clicked result into a SharePoint modal dialog
      SP.UI.ModalDialog.showModalDialog({
        url: $(this).attr("href"),
        title: $(this).attr("title")
      });
      return false;
    });
  }
});
</xsl:text>
</script>

<div class="srch-results" accesskey="W">
<table id="results-table" border="1">
<tr><th>Company</th><th>Active / Inactive</th><th>Full Name</th></tr>
<xsl:apply-templates select="All_Results/Result">
  <!-- The xsl:sort needs to operate upon a single field - it doesn't work if the sort has to evaluate child nodes -->
  <xsl:sort select="company" />
</xsl:apply-templates>
</table>
</div>
<xsl:call-template name="DisplayMoreResultsAnchor" />
</xsl:template>

<!-- This template is called for each result -->
<xsl:template match="Result">
<xsl:variable name="id" select="id"/>
<xsl:variable name="currentId" select="concat($IdPrefix,$id)"/>
<xsl:variable name="url" select="url"/>
<tr>
<td>
<a id="{concat($currentId,'_Title')}">
<xsl:attribute name="href">
<xsl:value-of select="$url"/>
</xsl:attribute>
<xsl:attribute name="title">
<xsl:value-of select="company"/>
</xsl:attribute>
<xsl:value-of select="company"/>
</a>
</td>
<td><xsl:value-of select="activeinactive" /></td>
<td><xsl:value-of select="fullname" /></td>
</tr>
</xsl:template>
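The essence of the workaround, stripped of the SharePoint specifics, is to attach the click handler at runtime rather than via an inline onclick attribute. Here is a plain-JavaScript sketch; the function names are illustrative, and openModal stands in for SP.UI.ModalDialog.showModalDialog:

```javascript
// Sketch of the workaround: SharePoint strips inline onclick attributes,
// so attach the handlers after the page loads instead.
// attachModalHandlers is an illustrative name; openModal stands in for
// SP.UI.ModalDialog.showModalDialog.
function attachModalHandlers(links, openModal) {
  links.forEach(function (link) {
    link.onclick = function () {
      openModal(link.href, link.title);
      return false; // suppress the default navigation to the list item
    };
  });
}

// Example with plain objects standing in for anchor elements:
var opened = [];
var links = [{ href: "/Lists/Contractors/DispForm.aspx?ID=1", title: "Acme" }];
attachModalHandlers(links, function (url, title) {
  opened.push({ url: url, title: title });
});
var navigate = links[0].onclick(); // simulate a click
```

Because the handler returns false, the browser never follows the link, and the item opens in the modal instead.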

Monday, September 24, 2012

How to fix the “Add Host to Workflow Farm” problem when installing Windows Azure Workflow in SharePoint 2013 Preview

I had this issue today when installing Windows Azure Workflow in my SharePoint 2013 environment.

In my case, the error message was as follows:
System.Management.Automation.CmdletInvocationException: Could not successfully create management Service Bus entity 'WF_Management/WFTOPIC' with multiple retries within timespan of 00:02:05.5769235. ---> System.TimeoutException: Could not successfully create management Service Bus entity 'WF_Management/WFTOPIC' with multiple retries within timespan of 00:02:05.5769235. ---> System.UnauthorizedAccessException: The remote server returned an error: (401) Unauthorized. Authorization failed for specified action: Manage..TrackingId:3e0f0351-14d0-4620-b80e-c506156b6f7a,TimeStamp:9/24/2012 1:15:07 AM ---> System.Net.WebException: The remote server returned an error: (401) Unauthorized.

Other posts talk about ensuring your RunAs account is specified with a FQDN, which I had been doing. Instructions from CriticalPathTraining also say that this account should have dbcreator and securityadmin rights at the SQL level, that TCP/IP should be enabled through SQL Server Configuration Manager, and that the RunAs account should be a member of the local Administrators group (which it was).

To fix this, I first went to each of the Workflow databases and explicitly granted the sp_content account db_owner, and also made it a member of the respective application roles that had been created for each database (e.g. Store.Operators and Store.Administrators in the SbManagementDB database). I then logged into the SharePoint server as the sp_content account and re-ran the Workflow configuration to join the existing farm. This time it worked!

Sunday, July 24, 2011

SharePoint Search Exception: The server did not provide a meaningful reply

Recently I had to configure Search some time after the initial build, which was done using AutoSPInstaller. To ensure consistency of approach, I created slightly modified versions of AutoSPInstallerMain.ps1 and of the AutoSPInstallerInput XML file. The only real differences in the XML file were that I was instructing AutoSPInstaller to provision Search, and not to provision Central Admin (as this was already provisioned and I have experienced the script failing when CA had already been created). The main difference in AutoSPInstallerMain.ps1 was that I commented out most of the entries in the Setup-Services function (i.e. to only run StartSearchQueryAndSiteSettingsService and CreateEnterpriseSearchServiceApp).

Shortly after the installation/configuration of Search, users were experiencing a correlation exception when performing searches. The ULS log returned the error 'Internal server error exception: System.ServiceModel.CommunicationException: The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error.'. Subsequent messages also included 'A runtime exception was detected. Details follow. Message: Thread was being aborted'.

To fix this issue I had to do the following:
1) On each SharePoint Server, modify the permissions to the C:\Windows\Temp directory to ensure that the WSS_WPG group had both read and write access (by default after the Search installation, this group only had read access)
2) Restart the 'SharePoint Server Search 14' service on each server.

Once this was done, search began to work. Hope this helps someone.

Saturday, May 07, 2011

Upgrading a Custom Search Results XSLT from 2007 to 2010

Whilst upgrading a public web site that I had previously created in SharePoint 2007, I noticed during testing a strange number appearing at the end of my upgraded search results page. After the last result I would see a number like 68050, which changed as I performed more searches.
I hadn't touched my custom search results XSLT at all during the upgrade, so I was wondering why this number had started appearing. To troubleshoot, I first edited the search results page and changed the XSLT so that I could see the raw search results XML. The XSLT looked like:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
<xsl:template match="/">
<xmp><xsl:copy-of select="*"/></xmp>
</xsl:template>
</xsl:stylesheet>

This revealed 2 new elements right before the closing </All_Results> tag: TotalResults and NumberOfResults.

As SharePoint 2010 now uses Federated Search Locations with a custom XSLT allocated to each location, I decided to look at what the OOB 'Local Search Results' location was doing. It turns out that it now includes a couple of extra XSLT templates matching those elements:

<xsl:template match="TotalResults" />
<xsl:template match="NumberOfResults" />

So I added these two extra XSLT templates into my custom XSLT and voila! No more strange number at the end of my search results.

Wednesday, April 20, 2011

What to do when Content Deployment fails

I had to help out some guys from Accenture today with the setup of content deployment for a global SharePoint 2010 Internet site. Basically the first content deployment from the Authoring to the Production environment didn't send across all of the published pages. As a result, settings like 'Welcome Page' were lost, and further content deployments did not seem to fix what was previously broken. As SharePoint's content deployment is incremental by default in the UI, I created a full content deployment job through PowerShell, but then we started seeing the error 'Unable to import folder _catalogs/masterpage/Forms/Page Layout. There is already an object with the Id in the database from another site collection'.

I used my friend Google and found an article that helped me solve the problem.

So it seems that information about what has been deployed before is held not in the Authoring environment but in the destination environment, and deleting the site collection and re-creating it with an empty template doesn't help. In my case I deleted the web application entirely and re-created it through script. Alternatively, you can detach and re-attach the content database through Central Admin.

Tuesday, March 29, 2011

Fixing the Health Analyzer SPTraceV4 issue

Within a SharePoint farm configuration (multiple servers), the SharePoint Health Analyzer will first complain that the SPTraceV4 account should not be running as a local service. If you manually fix this by setting a domain account for the trace service (on each server) and restarting the service, sometimes you will still get the message 'Built-in accounts are used as application pool or service identities' within the Health Analyzer. To fix this, do the following:

1) Create the Trace Account (e.g. DOMAIN\svc-SP_TRC) as a managed account.
2) Start up the SharePoint 2010 Management Shell in administrator mode
3) Type the following Powershell script:

# Assign the managed account to the tracing service (SPTraceV4)
$servicename = "SPTraceV4"
$managedaccountname = "DOMAIN\svc-SP_TRC"
$farm = Get-SPFarm
$tracesvc = $farm.Services | Where {$_.Name -eq $servicename}
$newaccount = Get-SPManagedAccount $managedaccountname
$tracesvc.ProcessIdentity.CurrentIdentityType = "SpecificUser"
$tracesvc.ProcessIdentity.ManagedAccount = $newaccount
# Persist the change and push the new identity out to the farm
$tracesvc.ProcessIdentity.Update()
$tracesvc.ProcessIdentity.Deploy()

4) Go to each server in the farm and restart the tracing service, then do an IISRESET /noforce on each SharePoint server.
5) Open up Central Administration, view the problem report and re-analyze the built-in accounts Health Analyzer message.
6) Close the report message and refresh the page. The error should now be gone.
7) If you have moved your logs to another (e.g. non-System) drive, on the directory where logs are to be written to, ensure that the nominated domain Trace account has full read/write permissions to it. You may have to restart the trace service and do an IISRESET again to notice that activity is now being written to the LOGS directory. Note that this approach also fixes the issue where you see lots of log file entries, all 0KB in length.