
Office 365 ADFS Client Access Policy Builder.


Earlier this year, Microsoft released an update to ADFS (RU1) which introduced several new features, Client Access Policies being one of them. Client Access Policies allow you to restrict access to Office 365 services based on the location (IP address) of your clients.

To find out more, have a look at the following TechNet article:

Unfortunately, configuring these policies is rather difficult and cumbersome due to the regular expressions they use. At least, that was the case until recently, when I came across an interesting tool that helps build (complex) Client Access Policies.
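To give you an idea of the complexity involved, here is a minimal sketch of what applying such a policy by hand could look like. The claim types are the ones introduced by the update rollup; the relying party name and the IP regex placeholder are illustrative and would need to be adapted to your environment. Note that -IssuanceAuthorizationRules replaces the entire existing rule set, which is why the default permit rule is included at the end:

Add-PSSnapin Microsoft.Adfs.PowerShell

# Deny requests coming through a proxy (i.e. from outside) unless they
# originate from a known public IP range (regex below is a placeholder).
$rules = @'
exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-proxy"])
 && NOT exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-forwarded-client-ip", Value =~ "<your-public-IP-regex>"])
 => issue(Type = "http://schemas.microsoft.com/authorization/claims/deny", Value = "true");

=> issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "true");
'@

Set-ADFSRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform" -IssuanceAuthorizationRules $rules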

The policy builder tool allows you to graphically design and configure such policies in an easy and straightforward manner.


To find out more about the tool and to download it, visit the following page:

Until later!



Create a function to connect to and disconnect from Exchange Online


Office 365 allows you to connect remotely to Exchange Online using PowerShell. However, if you had to type in the commands to connect every time, you would lose quite some time:

$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/PowerShell -Authentication Basic -Credential (Get-Credential) -AllowRedirection

Import-PSSession $session

Equally, to disconnect, you'd have to type in the following command each time:

Get-PSSession  | ?{$_.ComputerName -like "*.outlook.com"} | Remove-PSSession

However, it is relatively easy to wrap both commands in functions which you can afterwards add to your PowerShell profile:

Function Connect-ExchangeOnline{
   $session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell -Authentication Basic -AllowRedirection -Credential (Get-Credential)
   Import-PSSession $session
}

Function Disconnect-ExchangeOnline{
   Get-PSSession  | ?{$_.ComputerName -like "*.outlook.com"} | Remove-PSSession
}

To add these functions to your PowerShell profile, simply copy-paste them into your profile. To find out where your profile file is located, type the following:

PS C:\> $profile
C:\Users\Michael\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
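If the profile file doesn't exist yet, you can create it first; a quick sketch:

# Create the profile file if it doesn't exist yet, then open it for editing
if (!(Test-Path $profile)) {
   New-Item -ItemType File -Path $profile -Force
}
notepad $profile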

To start using the functions, all you have to do is call them from the PowerShell prompt:

PS C:\> Connect-ExchangeOnline

cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
Credential

WARNING: Your connection has been redirected to the following URI:
https://pod51014psh.outlook.com/PowerShell-LiveID?PSVersion=3.0

PS C:\> Disconnect-ExchangeOnline

Note: there is no output for the Disconnect-ExchangeOnline function.


Disappearing (online) archives after moving your mailbox to Office 365…


Office 365 offers great ways to enhance the functionality of your on-premises deployment. By running the Hybrid Configuration Wizard (which Steve Goodman explains in this article) you can configure both environments to act as one, allowing you to make use of features such as Online Archives (EOA).

With Exchange Online Archives, your primary mailbox stays on your on-premises Exchange server, whereas the archive (as the name might have given away) is hosted in Office 365. If you're interested in finding out more about Online Archives, I suggest you take a look at Bharat Suneja's session at TechEd this year: "Archiving in the cloud with Exchange Online Archiving".

The problem

To me, one of the most interesting things about a hybrid deployment is the flexibility it offers. You can put a few mailboxes in Office 365, try them out and move more to the service if you like it.

If you are looking to take that approach, this information might be interesting for you!

Imagine the following: you are trying out Office 365 and decide to start with Online Archives. You provision the archives and life is great! After a while, you decide to move some mailboxes to Office 365 as well. However, after your users have been moved, they start complaining that their archive is empty.

It seems that, although this scenario is supported, there are some issues with the provisioning process when you move a user to Office 365 who already had an Online Archive: it gets "wiped". At least, that's how it looks.

At first, I thought the data would reappear after a while, so I made sure I waited long enough. Unfortunately, even after a few days, the archive was still empty.

I decided to run some tests to make sure this wasn't an isolated case; perhaps something had simply gone wrong during the move. To my surprise, the tests confirmed what was going on: although the archives contained items prior to the move, they were empty afterwards.

To explain what happens, let me describe the process I used to reproduce this issue.

This first screenshot shows the details of the on-premises mailbox that has a cloud-based archive (EOA) enabled. The archive contains 4 (test) items:

image

Afterwards, I moved the mailbox through the Exchange Management Console using the "New Remote Move Request" wizard.

Because only a mailbox exists on-premises, you don't get the option to move an archive (which is normal):
image

The move completed successfully and, after having waited long enough (DirSync etc.), I verified the mailbox's settings:

image

The interesting part here is that the archive, although it kept the same GUID, appears to have been moved to the same database as the mailbox. Before, the archive resided in database "EURPRD04DG032-db055", whereas now it's in "EURPRD04DG030-db041".
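If you want to verify this yourself, the relevant properties can be retrieved through a remote PowerShell session to Exchange Online; a quick sketch (the mailbox identity is the test mailbox used below, swap in your own):

# Compare the archive GUID and database before and after the move
Get-Mailbox "Testmivh5" | Format-List ArchiveGuid,ArchiveDatabase,ArchiveStatus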

To make sure this wasn't causing the problem, I decided to run another test. When executing the move request, I explicitly specified to which database the archive should be moved. I made sure that the target database of the Online Archive was set to the database it already resided in before moving the mailbox:

New-MoveRequest "Testmivh5" -RemoteHostName "hostname.company.com" -TargetDeliveryDomain "tenant.mail.onmicrosoft.com" -ArchiveTargetDatabase "EURPRD04DG032-db055" -RemoteCredential (Get-Credential)

Note: this cmdlet was executed from a remote PowerShell session to Exchange Online.

After the move completed (successfully, by the way), I again waited long enough for DirSync/replication/provisioning to occur. I deliberately didn't force DirSync, to rule that out as a cause as well. But alas, none of that helped: the archive was empty once again.

A quick look at the object's attributes revealed that, although a target database parameter was provided, the archive was still moved to the same database as the user:

image

Then I wondered whether the 'old' archive had perhaps been disabled and a new one created. Although this would be strange, since the GUID of the archive remained the same, I thought it was worth a try. Again: no joy! No disconnected mailboxes were to be found.

After all this testing, I had reason enough to call Microsoft Support. After a few calls back and forth, they recently came back to me confirming that this is a known issue and that they're currently working on it.

To this day, I'm still not sure what the cause of the problem is, and I haven't received any feedback yet either. Of course, I will keep you posted as soon as I find out more!

Temporary workaround

It might sound obvious, but the workaround is simple: either create both the archive and the mailbox in the cloud, or create both on-premises first and move them together to the cloud. Both approaches work just fine!

Conclusion

Although the last thing you'd want to experience is data loss, I'm well aware that only a few customers worldwide would try this scenario. Nevertheless, it's an issue that should be addressed quickly.

In our case, we lost only a single archive worth a few hundred megabytes of email. I can imagine that losing the wrong kind of emails might be a real problem for some companies. I haven't asked, but I'm pretty confident that, even though the emails seem lost, Microsoft can somehow recover the data so that you don't really "lose" anything. I honestly cannot imagine otherwise.

Does this mean that I discourage using features like EOA? Absolutely not. I still have my hosted archive and I am pretty happy with it. Apart from some inconveniences, which I will write about another time, it provides me with everything I need. Furthermore, it allows us to give everyone a relatively large archive without having to bear the cost of additional storage.

Until later!


My first impressions of Office 365


Today, Microsoft announced an entire battery of new products from their Office range, including the new Lync, Exchange and SharePoint Server. Along with these new products, they also announced an upgrade to Office 365, which will now be based on the newer versions of the aforementioned products.

Along with the improvements the newer versions bring, Office 365 itself seems to have matured as well. Not only does the new UI look clean, but Microsoft also seems to have done a pretty good job listening to feedback from its customers.

One of the things that immediately caught my attention, for example, is the ability to turn downloads on or off for your end users. Before, you couldn't do this, which left your end users free to download Office from the portal. An 850 MB download which, if you're running low on bandwidth, was not really something you liked:

image

Of course, new features in Exchange, Lync and SharePoint will also contribute to a much richer experience. I look forward to playing a bit with Office 365 in the upcoming weeks.

Make sure to check back as I will be regularly posting updates on my findings!

Until later!

Screenshot of the new admin portal:

image


Update: Disappearing (online) archives after moving your mailbox to Office 365


Update

After a few weeks of mailing back and forth with Microsoft support, I was today (finally) able to confirm that the issue I described below is now solved.

It seems that Microsoft rolled out a hotfix/code change for their Exchange Online service. Although, at first, I thought the issue was related to a bug in the EMC not correctly passing all parameters when initiating a remote mailbox move, it turned out there was more to it than that. Basically, what happened is that when MRS moved a mailbox from the on-premises environment to Office 365, it wouldn't keep the link to the already-existing archive. This caused a new (empty) archive to be created and could possibly cause data loss.

I'm happy to see the time and effort Microsoft has put into solving this issue. It proves that Microsoft is concerned about the quality of their product/service. In fact, it would surprise me if they weren't. A bug that could cause data loss is not really something you'd want to carry around for long!

Thanks to everyone involved and kudos to Philippe Phan Cao Bach (Sr. Escalation Engineer), who worked with me on this case.

Original Post

The original post, "Disappearing (online) archives after moving your mailbox to Office 365…", is reproduced in full earlier on this page.


Error: You cannot synchronize the ADFS configuration database after adding a secondary federation server


Introduction

There are multiple ways to set up a highly available ADFS server farm. One possibility is to install multiple federation servers using the default Windows Internal Database.
In that case, the first federation server is designated as the 'primary' federation server. Every subsequent federation server added to the farm becomes a 'secondary' federation server.

These secondary federation servers periodically poll the primary federation server for configuration changes and replicate those changes locally. By default, this happens every 5 minutes.
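You can check (and, if needed, change) this polling interval with the ADFS PowerShell snap-in on the secondary server; a quick sketch, assuming ADFS 2.0 (PollDuration is expressed in seconds, so 300 = 5 minutes):

Add-PSSnapin Microsoft.Adfs.PowerShell

# Show the synchronization role and the current polling interval
Get-ADFSSyncProperties

# Adjust how often the secondary polls the primary (in seconds)
Set-ADFSSyncProperties -PollDuration 300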

This scenario is especially useful if you do not have a SQL server available or if you cannot make your SQL server highly available but still want to increase resiliency for your federation server farm.

Note: when using the Windows Internal Database instead of SQL Server, you are limited to a maximum of 5 federation servers in a farm.

If you want more information, read my previous article on the implications of the database choice in ADFS:

The issue

When installing a secondary federation server, you might see the following error in the AD FS 2.0 Application Event Log when the server tries to contact the primary federation server to replicate the configuration database:

EventID: 344
Source: AD FS 2.0

There was an error doing synchronization. Synchronization of data from the primary federation server to a secondary federation server did not occur.

Additional data

Exception details:
System.IO.InvalidDataException: ADMIN0023: Incorrect value for property LastPublishedPolicyCheckTime: 12/31/1899 11:00:00 PM.
   at Microsoft.IdentityServer.PolicyModel.PropertyTypes.DateTimeProperty.Validate(Object context)
   at Microsoft.IdentityServer.PolicyModel.PropertyTypes.PropertySet.ValidateProperties(Object context)
   at Microsoft.IdentityServer.PolicyModel.Client.ClientObject.GetData()
   at Microsoft.IdentityServer.PolicyModel.Client.ClientObject.OnReadFromStore()
   at Microsoft.IdentityServer.PolicyModel.Client.SearchResult..ctor(SearchResultData data, PropertyFactoryBase factory)
   at Microsoft.IdentityServer.Service.Synchronization.SyncAdministrationManager.DoSyncForItems(List`1 itemsToSync)
   at Microsoft.IdentityServer.Service.Synchronization.SyncAdministrationManager.Sync(Boolean syncAll)
   at Microsoft.IdentityServer.Service.Synchronization.SyncAdministrationManager.Sync()
   at Microsoft.IdentityServer.Service.Policy.PolicyServer.Service.SqlPolicyStoreService.DoSyncDirect()
   at Microsoft.IdentityServer.Service.Synchronization.SyncBackgroundTask.Run(Object context)

User Action
Make sure the primary federation server is available or the service account identity of this machine matches the service account identity of the primary federation server.

image

The solution

In this specific case, the customer had decided to geographically spread the different AD FS servers to increase the (site) resiliency of their federation server farm. However, this particular secondary federation server was located in a different time zone than the primary federation server. It seems that AD FS cannot handle the time zone difference by itself (unlike e.g. Active Directory, which normalizes timestamps to UTC).

After changing the time zone on the secondary AD FS server to match the time zone of the primary AD FS server, replication started working.
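For reference, a quick sketch of checking and setting the time zone from an elevated prompt on Windows Server 2008 R2 and later (the time zone name is an example; run tzutil /l for the full list):

# Show the current time zone
tzutil /g

# Set it to match the primary federation server (example time zone name)
tzutil /s "W. Europe Standard Time"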


You get an error: “The connector cycle has stopped. Object with DN failed…”


As part of setting up a hybrid configuration between Exchange on-premises and Exchange Online (or when configuring Exchange Online Archiving), you also need to set up DirSync.

In these scenarios, DirSync fulfills an important role as it also configures the write-back of some attributes in your local Active Directory. This write-back is required for Hybrid/EOA to work. For a list of attributes that are synced to/from Office 365, have a look at the following article: http://support.microsoft.com/kb/2256198

As part of the best practices when installing DirSync, you should always run the Office 365 Deployment Readiness Tool, which scans your local Active Directory for incompatible objects. The tool creates a report listing the incompatible objects, allowing you to fix them before configuring DirSync.

However, objects can sometimes still contain incompatible attribute values, which might cause issues for DirSync. In such cases, you'll likely be presented with the following error in the Application event log. Please note that this example mentions an issue with the "TargetWebService" Management Agent; it could very well be that you'll encounter the issue in the SourceAD Management Agent instead.

The TargetWebService Connector export cycle has stopped.  Object with DN CN=<guid> failed validation for the following attributes: proxyAddresses. Please refer to documentation for information on object attribute validation.

This error contains two important pieces of information:

  1. The distinguished name (CN=<guid>)
  2. The attribute that is causing issues

However, matching the GUID to a user account isn't very easy. The best way to go about it is to open the MIIS management interface and work from there. Usually, the client can be found in the following directory:

C:\Program Files\Microsoft Online Directory Sync\SYNCBUS\Synchronization Service\UIShell

image

After opening the client, navigate to Management Agents, right-click the management agent mentioned in the error message and select Search Connector Space:

image

In the “Search Connector Space” window, select DN or Anchor from the drop-down list under Scope and specify the Distinguished Name from the error message. Afterwards, click Search:

image

The search should return a single object. Double-click it to view additional information. Search for the attribute that was mentioned in the event log entry to review its value(s):

image

In this particular case, one of the proxy addresses contained an illegal character, which caused the Management Agent to fail. Once you have determined what the issue is, correct the value in AD and restart synchronization. It should now complete successfully.
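If you'd rather hunt for problematic values proactively, you can scan Active Directory for suspicious proxy addresses with PowerShell. A minimal sketch, assuming the RSAT Active Directory module is available; the character class is just an example of characters that commonly fail validation:

# Flag proxy addresses containing whitespace or other characters that commonly fail validation
Import-Module ActiveDirectory
Get-ADObject -LDAPFilter "(proxyAddresses=*)" -Properties proxyAddresses | ForEach-Object {
    foreach ($address in $_.proxyAddresses) {
        if ($address -match '[\s,\\]') {
            [PSCustomObject]@{ Object = $_.DistinguishedName; Address = $address }
        }
    }
}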


How to check what the version of your tenant is in the cloud (Office 365)


I sometimes get asked how one can verify which version of Exchange they're running in the cloud. Although it should be pretty obvious based on the GUI (Exchange 2010 vs. Exchange 2013) and the fact that the latter isn't generally available yet, it could come in handy once it is. According to some sources, the release might come sooner rather than later.

Additionally, when you’re planning on going hybrid with the new version of Exchange in Office 365, you’ll have to make sure your tenant isn’t in the process of being upgraded and is running version 15.

To check the version, open up a PowerShell session to your Office 365 tenant and run the following command:

Get-OrganizationConfig | ft AdminDisplayVersion,IsUpgradingOrganization

With the command for connecting to Office 365 via PowerShell, that would look something like this:

$session = New-PSSession -ConnectionUri https://ps.outlook.com/powershell -AllowRedirection -Authentication Basic -Credential (Get-Credential) -ConfigurationName Microsoft.Exchange

Import-PSSession $session

Get-OrganizationConfig | ft AdminDisplayVersion,IsUpgradingOrganization

Running these commands would then lead up to a result similar to the following:

Get-OrganizationConfig | ft AdminDisplayVersion,IsUpgradingOrganization -Autosize

AdminDisplayVersion IsUpgradingOrganization
------------------- -----------------------
0.10 (14.16.190.8)                    False


Exchange Online Archiving (EOA): a view from the trenches – part 1


What is Exchange Online Archiving?

I’ve been meaning to write this article for quite a while now, so I’m glad it’s finally “ready”. First, let me start by introducing what Exchange Online Archiving (EOA in short) actually is.
This feature, first available since the introduction of Exchange hybrid deployments, allows you to provision a cloud-based archive for an on-premises mailbox. While having an Exchange archive isn't something new (at least not since Exchange 2010), the fact that the archive doesn't have to be hosted within your own organization is pretty interesting.

Archives can be useful in many ways. One of the primary reasons archives are used is to keep historical data for a longer period of time without cluttering a user's primary mailbox. This could, for instance, be the case when you have to meet compliance requirements which state that corporate data should be kept for e.g. 5 years. Although Exchange doesn't have a problem handling very large mailboxes, including a high item count per folder, it's usually the human component that cannot handle the overload of information that comes with large amounts of data; at least, that's my experience. Keeping email inherently means that you'll have to increase disk space to support the sometimes huge amounts of data involved. Although disk space has become quite cheap, and Exchange 2013 is a great candidate to be used in combination with those cheap disks, there's still a significant overhead involved in keeping that additional piece of infrastructure up and running.

This is where Exchange Online Archives come in handy. First of all, there is no feature difference between an on-premises archive and a cloud-based (Office 365) archive. From a user's point of view, they act and look the same. In fact, you are only offloading the task of storing archives to Office 365. The Exchange Online Plan 2 subscription automatically includes the right to provision unlimited-sized archives for your users. Although I don't expect many people to run into the issue of filling up the initial 100 GB you get provisioned to start with any time soon, it's very hard to match that offer at only $8 per user per month… If you are only interested in EOA, there are specific EOA licenses as well, which cost only a fraction of the full Exchange Online license. Of course, such a license will only allow you to use EOA and nothing more.

How does it work?

As briefly touched upon earlier, being able to use Exchange Online Archives is a by-product of having a hybrid Exchange deployment. A hybrid deployment, as the name stipulates, is the process of 'pairing' your on-premises Exchange organization with Office 365, essentially creating one large "virtual Exchange organization". As a result, having a (fully functional) hybrid deployment is the first requirement to meet… Technically speaking, it would be possible to set up a sort of minimalistic hybrid deployment in which you leave out functionality that you do not strictly need to make Online Archives work (like e.g. cross-premises mail flow). Nonetheless, I strongly encourage you to still set up the full monty. It might save you some time afterwards if you decide to deploy cloud-based mailboxes anyway.

A very important part of the setup is set aside for DirSync. As you might remember, if you tick the "Hybrid Deployment" checkbox during DirSync setup, you allow it to write back some attributes into your on-premises organization. One of these attributes is msExchArchiveStatus. This attribute is a flag telling the on-premises organization whether an online archive has been provisioned or not. As we will see later in this section, this attribute is particularly important during the creation of an archive.

One of the questions I get asked regularly is whether you are required to deploy ADFS when setting up a hybrid deployment. The short answer is no. On the other hand, there are many good reasons why you would want to deploy ADFS, or rather: there are many good reasons why you would want some sort of single/same sign-on. One reason I can think of is to simplify using online archives from an end user's perspective; that way, users won't need to manage another set of credentials. Of course, this isn't only valid for online archives; it's the same for each cloud-based workload in Office 365. ADFS is one way of providing SSO, Password Sync is another. Both are valid options, neither is required, and they won't be discussed here.

From a functional point of view, Online Archives have the exact same requirements as on-premises archives: you need at least Office 2007 SP3 Professional edition or later. Since the archives run from Office 365, you also need to make sure you are up to speed with the latest required updates. For more information on which updates are needed, have a look at the following web page: http://office.microsoft.com/en-us/office365-suite-help/software-requirements-for-office-365-for-business-HA102817357.aspx

Now that we have the prerequisites covered, let's have a look at how the provisioning process works from a high-level perspective:

image

As you can derive from the image above, two DirSync operations are needed. The first one is used to "tell" Office 365 to create an archive for user "X". The second DirSync operation syncs back the msExchArchiveStatus attribute, which will now have a value of 1 instead of 0, telling the on-premises organization that the archive has been created. A good way to verify whether this process has completed is to run the Get-Mailbox | fl *arch* command:

image

Here you can see that the archive was created successfully (ArchiveStatus = Active). However, part of the information is missing. This is because the on-premises organization cannot provide the information from Office 365 (which is essentially another Exchange organization). To fetch the missing information, you'll have to open a remote PowerShell session to Exchange Online and run the Get-MailUser | fl *arch* command:

image
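Putting the two checks together, a quick sketch (the identity is a placeholder; the second command requires a remote PowerShell session to Exchange Online):

# On-premises: confirm the archive has been provisioned
Get-Mailbox michael | Format-List ArchiveStatus,ArchiveGuid,ArchiveName,ArchiveDomain

# Exchange Online: fetch the cloud-side archive details
Get-MailUser michael | Format-List ArchiveGuid,ArchiveName,ArchiveQuota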

Conclusion

This is it for part one of this article.
In the following part, I will talk about some of the gotchas, do’s and don’ts. Stay tuned!


Updated DirSync can now be deployed on a Domain Controller.


Microsoft recently released a new version of Windows Azure Active Directory Sync, better known as DirSync. As the Version Release History page of the tool shows, this new build allows you to deploy DirSync on a Domain Controller.

Along with this new ability, this version (6553.0002) also includes some fixes:

  • Fix to address Sync Engine memory leak
  • Fix to address "staging-error" during full import from Azure Active Directory
  • Fix to handle Read-Only Domain Controllers in Password Sync

The latest version can be downloaded from the following page:

Have fun!


Publishing multiple services to the internet on a single IP address using a KEMP Load Balancer and content switching rules


A few days ago, someone suggested I write this article as it seems many people are struggling with this 'problem'. In fact, the solution I'm going to explain below is the answer to a problem typically found in "home labs", where the internet connection doesn't always have multiple IP addresses. That doesn't mean it's only valid for home-use or testing scenarios, though. Given that IPv4 addresses are almost depleted, it's a good thing not to waste these valuable resources if it isn't necessary.

Basically, what I’m going to explain is how you can use a KEMP Load Master to publish multiple services/workloads to the internet using only a single (external) IP address. In the example below, I will be publishing Exchange, Office Web Apps and Lync onto the internet.

The following image depicts what the network in my example looks like. It also displays the different domain names and IP addresses that I'm using. Note that, although I perfectly could, I'm not connecting the Load Master directly to the internet. Instead, I mapped an external IP address from my router/firewall to the Load Master:

image

How it works

The principle behind all this is simple: whenever a request 'hits' the Load Master, it reads the host header that was used to connect and uses it to determine where to send the request. Given that most of the applications we are publishing use SSL, we have to decrypt content at the Load Master. This means we will be configuring the Load Master in Layer 7 mode. Because we need to decrypt traffic, there's also a 'problem' we need to work around. The workloads we are publishing to the internet all use different host names, and because we only use a single Virtual Service, we can assign only a single certificate to it. Therefore, you have to make sure that the certificate you configure in the Load Master either includes all published host names as Subject (Alternative) Names, or is a wildcard certificate that automatically covers all hosts for a given domain. The latter option is not valid if you have multiple different domain names involved.

How the Load Master handles this 'problem' is not new, far from it. The same principle is used in every reverse proxy and was also how our beloved, but sadly discontinued, TMG used to handle such scenarios. You do not necessarily need to enable the Load Master's ESP capabilities.

Step 1: Creating Content Rules

First, we will start by creating the content rules the Load Master will use to determine where to send requests. In this example, we will create rules for the following host names:

  • outlook.exchangelab.be (Exchange)
  • meet.exchangelab.be (Lync)
  • dialin.exchangelab.be (Lync)
  • owa.exchangelab.be (Office Web Apps)
  1. Log in to the Load Master and navigate to Rules & Checking and click > Content Rules:
  2. Click Create New…
  3. On the Create Rule page, enter the details as follows:

Repeat steps 2-3 for each domain name. Change the value for the field Match String so that it matches the domain names you are using. The final result should look like the following:

content rules
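In text form, the four rules boil down to something like this, each matching on the Host header (exact field names may differ slightly between LoadMaster firmware versions; adapt the match strings to your own domain):

Rule Name   Match String
Outlook     ^outlook.exchangelab.be*
Meet        ^meet.exchangelab.be*
Dialin      ^dialin.exchangelab.be*
OWA         ^owa.exchangelab.be*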

Step 2: creating a new Virtual Service

This step is fairly easy. We will create a new virtual service which uses the internal IP address that is mapped to the external IP address. If you have already created a virtual service, you can skip this step.

  1. In the Load Master, click Virtual Services and then click > Add New:
  2. Specify the internal IP address which you have previously mapped to an external IP address
  3. Specify port TCP 443
  4. Click Add this Virtual Service 

    image

Step 3: Configuring the Virtual Service

So how does the Load Master differentiate between the different host headers? Content rules. Content rules allow you to use regular expressions which the Load Master uses to examine incoming requests. If a match is found through one of the expressions, the Load Master forwards the traffic to the real server that has been configured with that content rule.

First, we need to enable proper SSL handling by the Load Master:

  1. Under SSL Properties, click the checkbox next to Enabled.
  2. When presented with a warning about a temporary self-signed certificate, click OK.
  3. Select the box next to Reencrypt. This will ensure that traffic leaving the Load Master is encrypted again before being sent to the real servers. Although some services might support SSL offloading (thus not re-encrypting traffic), that's beyond the scope of this article and will not be discussed.
  4. Select HTTPS under Rewrite Rules.

Before moving to the next step, we will also need to configure the (wildcard) certificate to be used with this Virtual Service:

  1. Next to Certificates, click Add New
  2. Click Import Certificate and follow the steps to import the wildcard certificate into the Load Master. These steps include selecting a certificate file, specifying a password for the certificate file (if applicable) and setting an identifying name for the certificate (e.g. wildcard). 

    image

  3. Click Save
  4. Click “OK” in the confirmation prompt.
  5. Under Operations, click the dropdown menu VS to Add and select the virtual service.
  6. Now click Add VS 

    image

You've now successfully configured the certificate for the main Virtual Service. This ensures the Load Master can decrypt and analyze traffic sent to it. Let's move on to the next step, in which we will define the "Sub Virtual Services".

Step 4: Adding Sub Virtual Services

While still on the properties page of the (main) Virtual Service, we will now add new "Sub Virtual Services". Having a Sub Virtual Service per workload allows us to define different real servers per SubVS, as well as a different health check. This is the key functionality that allows multiple different workloads to live under a single "main" Virtual Service.

  1. Under Real Servers click Add SubVS…
  2. Click OK in the confirmation window.
  3. A new SubVS will now have appeared. Click Modify and configure the following parameters:
  • Nickname (makes it easier to differentiate from other SubVSs)
  • Persistence options (if necessary)
  • Real Server(s)

Repeat the steps above for each of the workloads you want to publish.

Note: a word of warning is needed here. Typically, you would add your ‘real servers’ using the same TCP port as the main Virtual Service, being TCP 443, in this case. However, if you are also using the Load Master as a reverse proxy for Lync, you will need to make sure your Lync servers are added using port 4443 instead.

Once you have configured the Sub Virtual Services, you still need to assign one of the content rules to each of them. Before you're able to do so, you first have to enable Content Switching.

Step 5: enabling and configuring content rules

In the properties of the main Virtual Service, under Advanced Properties, click Enable next to Content Switching. You will notice that this option only becomes available after adding your first SubVS.

image

Once Content Switching is enabled, we need to assign the appropriate rules to each SubVS.

  1. Under SubVSs, click None in the Rules column for the SubVS you are configuring. For example, if you want to configure the content rule for the Exchange SubVS:
  2. On the Rule Management page, select the appropriate Content Matching rule (created earlier) from the selection box and then click Add: 

    image

  3. Repeat these steps for each Sub Virtual Service you created earlier

Testing

You can now test the configuration by pointing your browser at one of your published services. If all is well, you should be able to reach Exchange, Lync and Office Web Apps, all using the same external IP address.

As you can see, there's a fair amount of work involved, but all in all it's relatively straightforward to configure. In this example we published Exchange, Lync and Office Web Apps, but you could just as easily add other services too. Especially with the many load balancing options you have with Exchange 2013, you could for instance use multiple additional Sub Virtual Services for Exchange alone. To get you started, here's what the content rules for that would look like:

content rules exchange
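As an illustration, such workload-specific rules would match on the URL path of each Exchange service rather than on the host name. The names and patterns below are examples to get you started, not a definitive list:

Rule Name        Match String
OWA              /^\/owa.*
ECP              /^\/ecp.*
EWS              /^\/EWS.*
OAB              /^\/OAB.*
ActiveSync       /^\/Microsoft-Server-ActiveSync.*
Autodiscover     /^\/Autodiscover.*
OutlookAnywhere  /^\/rpc.*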

Note: if you define multiple workload-specific Sub Virtual Services for e.g. Exchange, you don't need to configure a Sub Virtual Service that uses the content rule for the Exchange domain name ("^outlook.domain.com*"). If you do, you'd find that, depending on the order of the rules, your workload-specific virtual services would remain unused.

I hope you enjoyed this article!

Until later,

Michael


Exchange Online Archive (EOA): a view from the trenches – part 2


A bit later than expected, here's finally the successor to the first article about Exchange Online Archiving, which I wrote a while ago.

Exchange Online Archives and Outlook

How does Outlook connect to the online archive? Essentially, it's the same process as with an on-premises archive. The client receives the archive information during the initial Autodiscover process. If you take a look at the response, you will see something similar in the output:

image

Based on the SMTP address, the Outlook client will now make a second Autodiscover call to retrieve the connection settings for the archive, after which it will try connecting to it. What happens then is exactly the same as how Outlook connects to a regular mailbox in Office 365. Because Exchange Online is configured to use basic authentication for Outlook, the user will be prompted to enter their credentials. It's particularly important to point this out to your users, as the credential window will have no reference to what it's used for. If you have deployed SSO, users will have to use their UPN (and not domain\username!) in the user field.

Experiences

So far we have covered what Exchange Online Archiving is all about, what the prerequisites are to make it work and how things come together in e.g. Outlook. Now, it’s time to stir things up a little and talk about how things are actually perceived in real life.

First, let me start by pointing out that this feature actually works great, IF you are willing to accept some of the particularities inherent to the solution. What do I mean by particularities?

Latency

Unlike on-premises archives, your archives are now stored 'in the cloud', which means that the only way to access them is over the internet. Depending on where you are connecting from, this can be an advantage or a disadvantage. I've noticed that connectivity to the archive, and therefore the user experience, is highly dependent on the internet access you have. Rule of thumb: the more bandwidth and the lower the latency, the better it gets. This shouldn't be a surprise, but it can easily be forgotten. I have found on-premises archives to be more responsive in terms of initial connectivity and retrieval of content, which brings me to the second point: speed.

Speed

As you are connecting over the internet, the speed at which content is fetched is highly dependent on the speed of your internet connection (you see a similarity here?). The bigger the message or attachment you want to download, the longer it will take. Truth be told, you'll have the same experience when accessing your on-premises archive from a remote location, so it's not something exclusive to Office 365.

Outlook

To be honest, Outlook does a relatively good job of working with the archive, at least when you deal with it the way it was designed. If you let Exchange move expired items to your archive using the Managed Folder Assistant, your life will be great! However, if you dare to manually drag & drop messages from your primary mailbox into the archive, you'll be in for a surprise. Outlook treats such an operation as a "foreground" action, which means you have to wait for the action to complete before you can do anything else in Outlook. The problem is that if you choose to manually move a 4 MB message to the archive, it can take as long as 20 to 30 seconds (depending on your internet connection) before the action completes. To make things worse, during this operation Outlook freezes, and if you try clicking something it will (temporarily) go into a "Not Responding…" state until the operation completes. According to Microsoft support, this is by design. So, as a measure of precaution, advise your users NOT to drag & drop messages; just let Exchange take care of it, something it does marvelously by the way.
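For completeness, here's a minimal sketch of how you'd let Exchange handle archiving through a retention tag instead of manual drag & drop (the names, the age limit and the mailbox identity are placeholders):

# Create a tag that moves items older than 365 days to the archive,
# add it to a policy and assign the policy to a mailbox
New-RetentionPolicyTag "Archive-365days" -Type All -AgeLimitForRetention 365 -RetentionAction MoveToArchive
New-RetentionPolicy "Default Archiving Policy" -RetentionPolicyTagLinks "Archive-365days"
Set-Mailbox michael -RetentionPolicy "Default Archiving Policy"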

I have found that proper end-user education is also key here. If users are well informed about how the archive works and have had some training on how to use retention tags, they'll be on their way in no time!

Provisioning

As part of the problem I described above, the initial provisioning process can be troublesome. When you first enable an archive, chances are that a lot of items will be moved to it. Although this process is handled by the MFA, if your mailbox is open while the MFA processes it, Outlook might become unresponsive, or extremely slow at the least. This is because the changes happen server-side and Outlook needs to sync them to the client's OST file (when running in cached mode, at least). Instead, it's better to provision the archive on-premises, let the MFA do its work and then move the archive to Office 365. The latter approach works like a charm and doesn't burden the user with an unresponsive Outlook client. If you are going to provision archives on-premises first, you might find it useful to estimate the size of an archive before diving in head first.
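A rough way to gauge what you're getting into is to look at the mailbox statistics first, and to trigger the Managed Folder Assistant manually once the on-premises archive is in place; a quick sketch (the identity is a placeholder):

# Get an idea of how much data the MFA might move into the archive
Get-MailboxStatistics michael | Select-Object DisplayName,ItemCount,TotalItemSize

# Kick off the Managed Folder Assistant for that mailbox (Exchange 2010 SP1 and later syntax)
Start-ManagedFolderAssistant -Identity michael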

Search

This is a short one. Search is great. Because Outlook and Exchange can do cross-premises searches, you can search both your primary mailbox and archive mailbox at once. I didn't have many issues here. So: thumbs up!

Other Tips & Tricks

General (best) practices

Other than the particularities above, you shouldn't do anything differently compared to 'regular' on-premises archives. Try not to overwhelm your users with a ginormous amount of retention tags. Instead, offer them a few tags they can use and, if necessary, adapt based on user feedback.

Autodiscover

Given that both Outlook and Exchange depend on Autodiscover to make the online archive work, you should make sure that Autodiscover is working for your Exchange deployment AND that your Exchange servers are able to query Office 365's Autodiscover service successfully as well.

This is especially important if you are using Outlook Web App (OWA) to access your online archive. In that case, it's not Outlook but Exchange that performs the Autodiscover lookup and connects to the archive. If your internet connection isn't working properly, or you have some sort of forward-authenticating proxy server in between, things might not work (or only work intermittently).

Implement it gradually

As described above, it's a bad idea to give everyone a new cloud-based archive at once. Not only will it put a heavy load on your internet connection, it will also affect your users. Instead, try to implement the solution gradually and request feedback from your users. Start with on-premises archives and move them to the cloud in batches, for instance.

DirSync is utterly important!

As described in the prerequisites section, DirSync is very important to online archives, so make sure you closely monitor how it's doing. If you have issues with DirSync, you will also have issues with creating archives. Issues with DirSync won't interfere with archives that have already been provisioned, though.

Conclusion

Is Exchange Online Archiving going to solve all your issues? Probably not. Is it a good solution? Yes, absolutely! I have been using Exchange Online Archiving for quite a while and I'm quite happy with it. I rarely encounter any issues, but I have also learnt to live with some of the particularities I mentioned earlier. Also, I treat my archive as a real archive: the stuff that's in there is usually things I don't need all that often, so the little latency overhead I experience while browsing or searching my archive is something I'm not bothered by. However, if I had to work with items from my archive day in, day out, I'd probably have a lot more trouble adjusting to the fact that it's less snappy than an on-premises archive.

So remember: set your (or your customer's) expectations straight and you'll enjoy the ride. If not, there might be some bumpy roads ahead!


Free/Busy in a hybrid environment fail and Test-Federationtrust returns error “Failed to validate delegation token”


Following an issue with Free/Busy in Exchange Online earlier this week, I was troubleshooting the exchange of Free/Busy information in some of my hybrid deployments, as Free/Busy lookups were still not working.
After having checked some (obvious) things, like the Organization Relationships and whether or not Autodiscover was working properly, I discovered an issue when running the Test-FederationTrust cmdlet.

In fact, the cmdlet completed almost entirely successfully, except for the very last step in the process:

Id         : TokenValidation
Type       : Error
Message    : Failed to validate delegation token.

This also explained why I was seeing 401 Unauthorized messages when running the Test-OrganizationRelationship command.

I then checked the same in some of my other deployments and found that they all had the same issue. At least there was some common ground to start working from.
I turned to co-MVP Steve Goodman and asked him to run the same command in one of his labs in order to have a point of reference. At the same time, he suggested I run a command which might help:

Get-FederationTrust | Set-FederationTrust -RefreshMetadata

After running the command, I re-ran Test-FederationTrust, which now completed successfully.
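Tip: if you want to scope the test to a specific user, Test-FederationTrust also accepts a -UserIdentity parameter (the address below is a placeholder):

Test-FederationTrust -UserIdentity michael@yourdomain.com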

Conclusion

Although the Free/Busy issues in Office 365 should be solved by now, some customers might still experience problems exchanging Free/Busy information. In this case, the problem manifests itself by e.g. online users not being able to request on-premises users' availability information.


You get an error “the connection to the server could not be completed” when trying to start a hybrid mailbox move in Exchange 2013.


As part of running through the "New Migration Batch" wizard, the remote endpoint (the on-premises Exchange server) is tested for availability. After this step, the following error is displayed:

image

By itself, this error message does not reveal much about what might be causing the connection issue. In the background, the wizard actually leverages the Test-MigrationServerAvailability cmdlet. If you run this cmdlet yourself, you get a lot more information:
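For reference, a minimal sketch of running the cmdlet yourself from a remote PowerShell session to Exchange Online (the e-mail address and credentials are placeholders):

Test-MigrationServerAvailability -ExchangeRemoteMove -Autodiscover -EmailAddress user@yourdomain.com -Credentials (Get-Credential)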

image

In this particular case, you'll see that the issue is caused by a 501 response from the on-premises server. The question is, of course: why? We had recently moved a number of mailboxes and did not encounter the issue then. The only thing that had changed in the meantime was that we had reconfigured the load balancers in front of Exchange to use Layer 7 instead of Layer 4. So that is why I shifted my attention to the load balancers.

While reproducing the error, I took a look at the "System Message File" log on the KEMP load balancer. This log can be found under Logging Options > System Log Files. Although I didn't expect to see much here, the following message drew my attention:

kernel: L7: badrequest-client_read [157.56.251.92:61541->192.168.2.130:443] (-501): <s:Envelope ? , 0 [hlen 1270, nhdrs 8]

A quick lookup revealed that the 157.56.251.92 address was indeed coming from Microsoft. So now I knew for sure that something was wrong here. A quick search on the internet brought me to the following article, which suggested changing the 100-Continue Handling in the Layer 7 configuration of the Load Master: http://blog.masteringmsuc.com/2013/10/kemp-load-balancer-and-lync-unified.html

After changing the value from its default (RFC Conformant), I could successfully complete the wizard and start a hybrid mailbox move. So the "workaround" was found. But I was left wondering: why does the Load Master *think* that the request coming from Microsoft is not RFC compliant?

The first thing I did was ask Microsoft if they could clarify a bit what was happening. I soon got a reply that, from Microsoft's point of view, they were respecting the RFC documentation regarding the 100 (Continue) status. No surprise there.

After reading the RFC specifications, I decided to take some network traces to find out what was happening and maybe understand how the 501 response was triggered. The first trace I took was from the Load Master itself. In that trace, I could actually see the following:

image

Effectively, Office 365 was making a call to the Exchange Web Services using the 100-continue status. As described in the RFC documentation, the Exchange on-premises server should respond appropriately to the 100-continue status. Instead, we can see that in the entire SSL conversation exactly 5 seconds go by, after which Office 365 makes another call to the EWS virtual directory without having received a response to the 100-continue status. At that point, the KEMP Load Master generated the "501 Invalid Request".

I turned back to the (by the way, excellent) support guys from KEMP and explained my findings. Furthermore, when I tested without Layer 7, or even without a Load Master in between, there was no delay and everything worked as expected. So I knew for sure that the Exchange 2013 on-premises server was actually replying correctly to the 100-continue status. As a matter of fact, without the KEMP LM in between, the entire 'conversation' between Office 365 and Exchange 2013 on-premises followed the RFC rules perfectly.

So, changing the 100-continue setting from "RFC Conformant" to "Ignore Continue-100" made sense, as KEMP would now simply ignore the 100-continue "rules". But I was still interested in finding out why the LM thought the conversation was not RFC conformant in the first place. And this is where it gets interesting. There is this particular statement in the RFC documentation:

“Because of the presence of older implementations, the protocol allows ambiguous situations in which a client may send “Expect: 100- continue” without receiving either a 417 (Expectation Failed) status or a 100 (Continue) status. Therefore, when a client sends this header field to an origin server (possibly via a proxy) from which it has never seen a 100 (Continue) status, the client SHOULD NOT wait for an indefinite period before sending the request body.”

In fact, that is exactly what was happening here. Office 365 (the client) sent an initial 100-continue status and waited for a response to that request. In fact, it waits for exactly 5 seconds and then sends the payload, regardless of whether it has received a response. In my opinion, this falls within the boundaries of the scenario described above. However, talking to the KEMP guys, there seems to be a slightly different interpretation of the RFC, which caused this mismatch and therefore the KEMP issuing the 501.

In the end, there is still something we haven't worked out entirely: why the LM doesn't send the Continue-100 status back to Office 365, even though it receives it almost instantaneously from the Exchange 2013 server.

All in all, the issue was resolved rather quickly, and we know that changing the L7 configuration settings in the Load Master solves it (this workaround was also confirmed as the final solution by KEMP support, by the way). Again, changing the 100-continue handling setting to "Ignore" doesn't render the configuration (or the communication between Office 365 and Exchange on-premises) non-RFC compliant. So there's no harm in changing it.

I hope you found this useful!

-Michael


The limitations of calendar federation in a hybrid deployment


Recently, Loryan Strant (Office 365 MVP) and I joined forces to create an article for the Microsoft MVP blog about some of the limitations of calendar federation in a hybrid Exchange deployment. In the article, we discuss how running a hybrid deployment might affect calendar sharing with other organizations and what your options are to work around this limitation.

To read the full article, please click here.

Enjoy!

Michael



Why MEC is the place to be for Exchange admins/consultants/enthusiasts!


In less than a month, the 2014 edition of the Microsoft Exchange Conference will kick off in Austin, Texas. For those who haven't decided whether they'll be going yet, here are some reasons why you should.

The Value of Conferences

Being someone who frequently attends conferences, I *think* I'm in a position to say that conferences provide great value. Typically, you get up to date with the latest (and greatest) technology in IT.

Often, the cost of attending a conference is estimated to be higher than that of a traditional 5-day course. However, I find this not to be true – at least not all the time. It is true that – depending on where you fly in from – travel and expenses might add to the cost. However, I think it is a good thing to be ‘away’ from your daily work environment: it typically leaves you less tempted to be preoccupied with work and lets you soak in the knowledge shared throughout the conference. The experience is quite different from a training course. Conferences might not provide the exact same information as a training, but you’ll definitely learn more (different) things. Especially if your skills in a particular product are already well developed, conferences are the place to widen your knowledge.

On top of that, classroom trainings don’t offer the same networking opportunities. At MEC, for instance, there will be a bunch of Exchange MVPs and Masters you can talk to. All of them are very knowledgeable, and I’m sure they won’t mind a good discussion on Exchange! This could be your opportunity to ask some really difficult questions or just hear their opinion on a specific issue. Sometimes the insights of a third person can make all the difference!

It is also the place where all the industry experts meet. Like I mentioned earlier, there will be Masters and MVPs, but a lot of people from Microsoft’s Exchange Product Group will be there as well. Who better to ask your questions?

Great Content

Without any doubt, the Exchange Conference will be the place in 2014 to learn about what’s happening with Exchange. Service Pack 1 – or Cumulative Update 4, if you will – has just been released and, as you might’ve read, there are many new things to discover.

At the same time, it’s been almost 1.5 years since Exchange 2013 was released, and quite a few sessions focus on deployment and migration. If you’re looking to migrate shortly, or if you’re a consultant migrating other companies, I’m sure you’ll get a lot of value from these sessions, as they provide first-hand information. When MEC 2012 was held – shortly before the launch of Exchange 2013 – this wasn’t really possible, as there weren’t many deployments out there yet.

Sure, one might argue that the install base for Exchange 2013 is still low. However, if you look back, deployments of Exchange 2010 only really kicked off once it was past the SP1 era. I expect nothing different for Exchange 2013.

As a reference: here’s a list of sessions I definitely look forward to:

And of course the “Experts unplugged” sessions:

I realize that’s way too many sessions already, and I will probably have to choose which ones I’ll actually be able to attend…
But the fact that my list is this long only proves that there’s so much valuable information at MEC…

Great speakers

I’ve had a look at who is speaking at MEC and I can only conclude that there is a TON of great speakers, all of whom I am sure will make it worth the while. While Microsoft speakers will most likely give you an overview of how things are supposed to work, many of the MVPs have sessions scheduled that might give you a slightly less biased view of things. The combination of both makes for a good mix to get you started on the new stuff and broaden your knowledge of what was already there.

Location

Austin, Texas. I haven’t been there myself, but based on what Exchange Master Andrew Higginbotham blogged a few days ago, it looks promising!

Microsoft has big shoes to fill. MEC 2012 was a huge success and people are expecting the same – if not better – things from MEC 2014. Additionally, for those who were lucky enough to attend the Lync Conference in Vegas earlier this month: that is quite a bar for MEC to compete with. Knowing the community and the people behind MEC, I’m pretty confident this edition will be EPIC.

See you there!

Michael


This was MEC 2014 (in a nutshell)


As things wind down after a week full of excitement and – yes, in some cases – emotion, MEC 2014 is coming to an end. Lots of attendees have already left Austin, and those who stayed behind are sharing a few last drinks before making their way back home as well. As good as MEC 2012 in Orlando was, MEC 2014 was E-P-I-C. Although some might say the conference got off to a slow start – despite the great Dell Venue 8 Pro tablet giveaway – you cannot ignore the success of the rest of the week.

With over 100 unique sessions, MEC was packed with tons and tons of quality information. To see that amount of content delivered by the industry’s top speakers is truly a unique experience. After all, at how many conferences is the PM or lead developer the one presenting the content on a specific topic? Microsoft also did a fairly good job of keeping a balance between the different types of sessions, with a mix of Microsoft employees presenting sessions that reflected their view on things (“How things should work / How it’s designed to be”) and MVPs and Masters presenting a more practical approach (“How it really works”).

I also liked the format of the “unplugged” sessions, where you could interact with members of the Product Team on a variety of topics. I believe these sessions are not only very interesting (tons of great information), but also an excellent way for Microsoft to connect with the audience and receive immediate feedback on what is going on “out there”. For example, I’m sure that the need for better guidance or maybe a GUI for Managed Availability is a message that was well conveyed, and Microsoft should use this feedback to prioritize some of its development efforts. Whether that will happen, only time will tell…

This edition wasn’t only a success because of the content, but also because of the interactions. It was good to see some old friends and make many new ones. To me, conferences like this aren’t only about learning but also about connecting with other people and networking. There were tons of great talks – some of which have given me food for thought and ideas for blog posts.

Although none of them might seem earth-shattering, MEC had a few announcements and key messages, some of which I’m very happy to see:

  • Multi-Factor Authentication and SSO are coming to Outlook before the end of the year. On-premises deployments can expect support next calendar year.
  • Exchange Sizing Guidance has been updated to reflect some of the new features in Exchange 2013 SP1:
    • The recommended page file size is now 32778 MB if your Exchange server has more than 32 GB of memory. It should still be a fixed size and not managed by the OS (see the sketch after this list).
    • CAS CPU requirements have increased by 50% to accommodate MAPI/HTTP. They are still lower than Exchange 2010’s.
  • If you didn’t know it before, you will now: NFS is not supported for hosting Exchange data.
  • The recommended Exchange deployment uses 4 database copies: 3 regular and 1 lagged, with the FSW preferably in a 3rd datacenter.
  • Increased emphasis on using a lagged copy.
  • An OWA app for Android is coming.
  • OWA in Office 365 will get a few new features, including Clutter, People View and Groups. No word on if and when these will be made available to on-premises customers.
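For reference, pinning the page file to a fixed size can be scripted. Below is a minimal sketch, assuming a server with more than 32 GB of RAM and an existing page file entry; run it elevated, and a reboot is required afterwards:

# Disable the OS-managed page file first.
Get-CimInstance Win32_ComputerSystem | Set-CimInstance -Property @{ AutomaticManagedPagefile = $false }

# Then pin the page file to a fixed 32778 MB (initial size = maximum size).
Get-CimInstance Win32_PageFileSetting | Set-CimInstance -Property @{ InitialSize = 32778; MaximumSize = 32778 }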

By now, it’s clear that Microsoft’s development cycle is based on a cloud-first model, which – depending on your take on things – makes a lot of sense. This topic was also discussed during the live recording of The UC Architects; I recommend you have a listen (as soon as it’s available) to hear how The UC Architects, Microsoft and the audience feel about it. Great stuff!

It’s also interesting to see some trends developing. “Enterprise Social” is probably one of the biggest trends at the moment. With Office Graph recently announced, I am curious to see how Exchange will evolve to embrace the so-called “Social Enterprise”. Features like Clutter, People View and Groups are already good examples of this.

Of course, MEC wasn’t all about work. There was also time for fun. Lots of it. The format of the attendee party was a little atypical for a conference. Usually all attendees gather at one fairly large location. This time, however, the crowd was scattered across several bars in Rainey Street, which Microsoft had rented out. Although I was a little skeptical at first, it actually worked really well and I had tons of fun.

Then there was the UC Architects party which ENow graciously offered to host for us. The Speakeasy rooftop was really amazing and the turnout even more so. The party was a real success and I’m pretty confident there will be more in the future!

I’m sure that in the course of the next few weeks, more information will become available through the various blogs and websites as MVPs, Masters and other enthusiasts have digested the vast amount of information distributed at MEC.

I look forward to returning home, getting some rest and starting all over again!

Au revoir, Microsoft Exchange Conference. I hope to see you soon!


Windows Server 2012 R2 update enables ADFS to use alternative login ID, possibly removing the need to have an internet-routable UPN


Recently, Microsoft released an update to Windows Server 2012 R2 which – next to a bunch of bug fixes – also includes new features for some of the Operating System’s components. Amongst these new features there’s one that I found particularly interesting: the update to the AD FS 3.0 component which enables customers to use a different attribute to identify federated users in Windows Azure AD. The feature itself is better known as “Alternate Login ID”.

As the TechNet documentation on this topic describes, it is now possible to use an attribute other than the User Principal Name to identify federated users in Office 365. This helps customers who aren’t able to change their UPNs from the current value (e.g. domain.local or domain.corp) to an internet-routable domain (like domain.com). Even though in many situations changing the UPN isn’t that big of a deal, some customers leverage the existing UPN in third-party applications and therefore might not be able to make this change easily.

If you want to deploy this feature, you’ll have to figure some things out by yourself: the documentation that is currently available doesn’t explain all the steps. At least, that is if you want to implement it right away; I expect full documentation to become available shortly. Also note that I haven’t seen any official statement that the use of “Alternate Login ID” is already supported by Office 365 today, but the documentation certainly hints at it and, if I recall correctly, it was also announced at the Microsoft Exchange Conference last week.

The configuration itself requires you to jump through a few hoops, including modifying DirSync to refer to the new attribute you’ve selected as the Alternate Login ID instead of the UPN. Personally, I would still recommend changing the UPN – if possible. But there’s an alternative now, and having an alternative is always a good thing, isn’t it?
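For what it’s worth, the AD FS side of the configuration appears to boil down to a single cmdlet. Below is a sketch; the mail attribute and the contoso.com lookup forest are placeholders, so substitute whatever applies to your environment:

# Run on the primary AD FS server (Windows Server 2012 R2 with the update installed).
Set-AdfsClaimsProviderTrust -TargetIdentifier "AD AUTHORITY" `
    -AlternateLoginID mail `
    -LookupForests contoso.com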

I’ll definitely have a go at this later this week and will post my findings here.

-Michael

Help! Where do I put my Hybrid server?


As part of a hybrid Exchange deployment, you also deploy the so-called Hybrid Server(s). The name itself might be a little misleading, though. After all, it’s not some sort of new Exchange server role, nor is it an Exchange server that you deploy specifically to be able to configure a hybrid environment – at least not if you’re already running Exchange 2010 or Exchange 2013 on-premises.

In fact, once you configure a hybrid environment, every Exchange server in your organization becomes part of that hybrid deployment and will perform one or more functions in that regard. However, when referring to Hybrid Exchange servers, we actually mean the Exchange servers which are directly involved in hybrid functions – more specifically, the servers that you select in the Hybrid Configuration Wizard.

Exchange 2003 / 2007

If you still have Exchange 2003 on-premises (shame on you!), then your only option is to deploy at least one Exchange 2010 SP3 server and use that one to set up a hybrid deployment. The reason you have to use an Exchange 2010 server is that Exchange 2013 cannot coexist with Exchange 2003.

Once you’ve installed the Exchange 2010 server, it is the only server capable of understanding the hybrid logic and is therefore considered to be the Hybrid Server. There’s also another reason why a server would be referred to as your Hybrid Server, but more about that later when we talk about the free Hybrid Server license key.

Hybrid Server License Key

Microsoft offers eligible customers free Hybrid Edition/Server licenses. Yes, indeed: multiple licenses if needed. In fact, you’ll get a single license key which you are allowed to deploy on multiple Exchange servers, as long as you abide by the license requirements. This allows you to maintain high availability – also for hybrid functionality.

The license requirements state that you cannot use these ‘dedicated’ Hybrid Servers for anything but hybrid functionality: you should not host any mailboxes on them. If you do, you are required to purchase a proper Exchange Server license. Once you’ve assigned a Hybrid License to an Exchange server, that server also becomes a Hybrid Server in the pure sense of the word.

Hybrid Server Placement

When you are doing things by the book, introducing a new Exchange Server version can be a rather disruptive action. First, you have to prepare your environment for it (Active Directory schema updates etc.) and then, once you have deployed the server, you are expected to point all client access traffic to it. This means that you will have to consider everything involved in setting up coexistence. In smaller environments this might be a trivial task, but the larger the environment gets, the bigger the implications might be.

Although I prefer this “by the book” approach, there are times when it isn’t appropriate. What’s more, it might cause all sorts of issues which you might want to avoid – especially if you’re just looking for a quick way to move to the cloud. If so, the placement of the Hybrid Exchange server can become a game changer.

One approach that I have used in the past is to install the new server into the Exchange organization and provide it with its own hybrid namespace. This hybrid namespace is nothing more than a dedicated namespace for hybrid functionality. By doing so, I prevent having to point client access traffic to the new servers and possibly disrupt my existing environment. I can then use the Hybrid Server(s) only for mailbox moves, hybrid mail flow etc.

Multiple Internet-Connected sites

One of the tasks of hybrid servers is to facilitate mailbox moves to and from Exchange Online. The endpoint that you use for mailbox moves is normally discovered automatically using AutoDiscover. However, sometimes you might want to use Exchange servers in a different location to perform the mailbox move – for instance because that other server is closer to the mailbox or has more bandwidth available.

When you want to use other internet-facing Exchange servers for mailbox moves, you must make sure that the MRS Proxy is enabled on those internet-facing servers. You can enable the MRS Proxy on each of these servers by executing the following command:

Set-WebServicesVirtualDirectory <identity> -MRSProxyEnabled:$true
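If you have several internet-facing servers, you can also pipe the virtual directories through in one go. A minimal sketch, assuming your internet-facing servers are named EX01 and EX02:

# Enable the MRS Proxy on the EWS virtual directory of each internet-facing server.
"EX01","EX02" | ForEach-Object {
    Get-WebServicesVirtualDirectory -Server $_ | Set-WebServicesVirtualDirectory -MRSProxyEnabled:$true
}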

Secondly, you can define additional migration endpoints, which also allows you to pick your desired endpoint from the Mailbox Migration wizard (see image below). You create new migration endpoints through PowerShell, using the New-MigrationEndpoint cmdlet.

Once you have defined multiple migration endpoints, this is how it looks in the GUI:

One thing to note here is that – regardless of the number of migration endpoints you create – the sum of the “MaxConcurrentMigrations” values across all endpoints cannot exceed 100. The default endpoint (created automatically) will already have that set to 100, so make sure you lower it first before creating additional endpoints.
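A sketch of what this could look like from an Exchange Online PowerShell session; the endpoint names, host name and concurrency split below are examples:

# Make room under the 100-concurrent-migrations cap by lowering the default endpoint...
Set-MigrationEndpoint "outlook.domain.com" -MaxConcurrentMigrations 80

# ...then create the additional remote move endpoint with the remaining capacity.
New-MigrationEndpoint -ExchangeRemoteMove -Name "MigrationEndpoint2" `
    -RemoteServer migrationendpoint2.domain.com `
    -Credentials (Get-Credential) -MaxConcurrentMigrations 20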

The following image depicts the primary endpoint (outlook.domain.com) and the new secondary (and manually created) endpoint “migrationendpoint2.domain.com”:

Alternatively – if you don’t want to create additional endpoints, or you plan on using an endpoint only once – you can create the move requests with PowerShell and specify the -RemoteHostName parameter manually.
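A one-off onboarding move with a manually specified endpoint could look like the sketch below; the mailbox, host name and target delivery domain are placeholders:

# Run from an Exchange Online PowerShell session; prompts for on-premises admin credentials.
New-MoveRequest -Identity "user@domain.com" -Remote `
    -RemoteHostName migrationendpoint2.domain.com `
    -RemoteCredential (Get-Credential) `
    -TargetDeliveryDomain domain.mail.onmicrosoft.com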

Conclusion

Either approach outlined above should work just fine. Which one you choose greatly depends on your current deployment and the effort that goes with introducing a newer Exchange version into your environment. Whenever possible, try to take the by-the-book approach as it might save you some headaches further down the road.


New Hybrid Configuration Wizard features in Exchange 2013 CU5


As posted here, Microsoft today released Cumulative Update 5 for Exchange 2013. At first sight, this update doesn’t appear to make lots of changes – at least not visibly. However, it does contain a lot of fixes and, as you will find out, there have been some changes to the Hybrid Configuration Wizard as well.

New options in the Hybrid Configuration Wizard

Whenever you enable an organization for a hybrid deployment in CU5, you will find the following new option:

21Vianet is the Microsoft partner that offers Office 365 in China. You could say that they “host” Office 365 for Chinese customers, as outlined in this press release.

MRS Proxy now configured automatically

This has been one of my personal asks for quite a long time now. Although the HCW already did an excellent job configuring all the components of a hybrid deployment, it did not enable the MRS Proxy on the Exchange Web Services virtual directory. Even though you could do it yourself with only a single command, I’m a big fan of having the HCW take care of it – it’s one less thing you can forget!

OAuth now configured automatically

You’ll also notice that towards the end, the Hybrid Configuration Wizard will now prompt you to configure oAuth automatically:

The wizard will then automatically redirect you to a webpage where you’ll be asked to start the configuration (again):

Once you click Configure, you will be asked to download an application which will automatically configure oAuth for you. Because it seems to be browser-integrated, you cannot run this step from a computer other than your Exchange server and then copy over the executable. Beware: make sure that you run the HCW from the Exchange server itself instead of from a remote workstation, like I tried the first time…

Once the first application has downloaded, you’ll be asked to run it:

Note: make sure that *.configure.office.com is added to your trusted sites or that you at least allow content to be downloaded from that website.

Then, after the first application has run, you’ll be prompted for a second, nearly identical application. Only this time the application (or assistant, if you will) is a bit bigger: 22.2 MB instead of 18 MB.

Once the second assistant completed successfully, you’ll see the following:

In fact, all that these “applications” do, is configure oAuth as outlined in the following article: http://technet.microsoft.com/en-us/library/dn594521(v=exchg.150).aspx

Note: the configuration of the Intra-Organization Connector is the only part that’s already handled by the Hybrid Configuration Wizard itself.
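If you want to verify the result afterwards, a quick sanity check from the Exchange Management Shell could look like the sketch below; the mailbox and target URI are examples:

# Confirm the Intra-Organization Connector was created.
Get-IntraOrganizationConnector | Format-List Name,TargetAddressDomains,DiscoveryEndpoint

# Test oAuth connectivity towards Exchange Online for an on-premises mailbox.
Test-OAuthConnectivity -Service EWS `
    -TargetUri https://outlook.office365.com/ews/exchange.asmx `
    -Mailbox user@domain.com -Verbose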

It’s definitely a good thing this is now done automatically. However, I would love to see it be more integrated with the HCW. At the moment, these changes don’t show up in the Hybrid Configuration Wizard logs.

Conclusion

It was already clear that Microsoft is moving forward with oAuth, potentially to replace other technologies currently used in hybrid deployments. Personally, I wouldn’t be too surprised to see oAuth take over the duties of Microsoft’s Federation Gateway in the future. I’m not sure if this will actually happen, but it seems like a good thing – if you have ever been in a discussion with a pesky security administrator, you’ll understand why… Don’t expect it to happen in a few months’ time though: as long as Exchange 2010 is officially supported, I reckon Microsoft will have to keep the MFG around.

It’s surely a good thing to move forward with oAuth as it has the potential to solve some long-standing issues regarding the handling of authentication and security in a cross-premises scenario like a hybrid deployment.

