Beginning IoT – Installing Windows 10 IoT Core on an x86/x64 Device

This is the second part in a 2-part series on how to install Microsoft Windows 10 IoT Core on an Internet-of-Things (IoT) device. Part 1 described how to install Windows 10 IoT Core on a Raspberry Pi 3 device. This article will focus on the steps required to install Windows 10 IoT Core on an x86/x64 device.


What is Windows 10 IoT Core?

Windows 10 IoT Core is a version of Microsoft’s Windows 10 operating system that has been optimized for smaller devices and runs on either ARM or x86/x64 hardware. These devices can run with or without a display.

When we talk about the different IoT devices, the processor type needs some explanation. ARM stands for Advanced RISC Machine, with RISC standing for Reduced Instruction Set Computer. This means the processor has been slimmed down to a reduced set of instructions it can execute. While the processor can’t do certain things, it requires relatively little power to do what it can, which translates to increased battery life. The Raspberry Pi is classified as an ARM device.

Devices with the x86/x64 architecture are classified as CISC processors, with CISC standing for Complex Instruction Set Computer. These processors do not have slimmed-down instruction sets, so they can perform more complex operations, at the cost of increased power consumption (and therefore lower battery life). Intel’s Baytrail devices running the Intel Atom E3800 processor are an example of x64 devices.



Before you can install on an x86/x64 device, you need to make sure you have a PC that is running Windows 10 1507 (version 10.0.10240) or higher. You can find out what version you are running by clicking on the search box (next to the Start button) and typing ‘winver’. This will display a dialog as shown here:



For certain x86/x64 devices you can use IoT Core Dashboard to run through the installation process. However, since that process is similar to the one I already covered in Part 1, I am going to walk through the installation steps using the Windows ADK IoT Core Add-Ons instead.

You will need an IoT device to install on – here are some options that Microsoft supports:


We will be using the Intel Atom E3800 (aka Baytrail) built into an industrial PC for the examples in this article. Typically we would need a micro SD card for storage, but since the industrial PC comes with onboard storage, we do not need an SD card.


For software, we will need to install the following on the PC we build the image on:


Installation Steps

Once you have the prerequisites installed, you are now ready to begin the installation process. We will be building a basic image, which involves combining the Windows IoT Core packages along with a board support package for the target hardware (Baytrail device) into a flashable file (FFU file).

First off, you need to set your OEM name, which will help you distinguish your created packages from packages other manufacturers have created. Edit the setOEM.cmd file, located at C:\IoT-ADK-AddonKit, and set the OEM_NAME variable accordingly. Please note you can only use alphanumeric characters.
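For example, the relevant line in setOEM.cmd might look like this (“Contoso” is just a placeholder for your own OEM name):

```cmd
set OEM_NAME=Contoso
```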



You should now open the IoTCoreShell.cmd file, a specialized command-line window in which you will do much of the work of building the image. This file is located in the directory where you installed the IoT Core ADK Add-Ons (C:\IoT-ADK-AddonKit). Open a command window with elevated (administrator) privileges, navigate to the C:\IoT-ADK-AddonKit directory, and type IoTCoreShell.cmd to open the IoT Core Shell. This application will prompt you to select the architecture you are working with (1 for ARM, 2 for x86, 3 for x64). We are creating an x64 image, so select x64.



At this point, you need to install certificates which will be used to sign the package binaries. Since this article is focused on a test image, you can run the installoemcerts.cmd command to install test certificates in the root certificate store of the PC you are building the image on. This only needs to be done the first time you are building an image.



The next step is to extract the board support package (BSP) files for the device you are building an image for, and run the buildpkg.cmd command to build the package files used in creating the image. Download the BSP zip file, extract it, and copy its contents to the C:\IoT-ADK-AddonKit\Source-x64\BSP directory so they can be used when building the image.



You can now begin to create the packages and build the FFU image for the Baytrail x64 device. Go back to the IoTCoreShell.cmd command window and enter buildpkg all, which will build the .cab files for all the BSP directories the program sees under C:\IoT-ADK-AddonKit\Source-x64\BSP. Please note that if you had selected x86 when you ran IoTCoreShell.cmd, this command would instead look in C:\IoT-ADK-AddonKit\Source-x86\BSP and build any BSP files located there.



Once the program finishes building all the BSP packages, you can now create a new project by entering the following, where Product_Name and BSP_Name are the name of the product you would like and the BSP name, respectively.

newproduct <Product_Name> <BSP_Name>

So, for example, entering newproduct MyBayTrailDevice BYTx64 will create a project and its files under C:\IoT-ADK-AddonKit\Build\amd64\MyBayTrailDevice for the BYTx64 board support package files.



You are now ready to build the actual flashable FFU image file. This can be done by entering buildimage <Product_Name> Test, replacing Product_Name with your product name (MyBayTrailDevice in our example). The second parameter specifies whether you are building a Test or Retail image. This process takes about 20-30 minutes to complete, and once it is finished you will have a file named flash.ffu under the C:\IoT-ADK-AddonKit\Build\amd64\MyBayTrailDevice\Test subdirectory.
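For reference, the whole build sequence from this walkthrough (using our example names, and including the one-time certificate install) boils down to a handful of commands inside the IoT Core Shell:

```cmd
:: Run inside IoTCoreShell.cmd with x64 selected
installoemcerts.cmd
:: Build the .cab packages for every BSP under Source-x64\BSP
buildpkg all
:: Create the product workspace, then build the Test FFU
newproduct MyBayTrailDevice BYTx64
buildimage MyBayTrailDevice Test
```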



If you encounter any errors, the buildimage process will error out and specify a log file that has detailed information on the error.

Now that you have a flashable FFU image file, you will need to flash it onto the IoT device you are working with. For our example this is the Baytrail device, and since it has onboard storage we need to use a bootable USB thumbdrive with the FFU file on it. To create this, we will use Windows PE (WinPE) to create a bootable drive and then copy the flash.ffu file onto it. Here are instructions to create a bootable WinPE thumbdrive:
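The full details are in Microsoft's WinPE documentation, but the core of it, run from the Windows ADK's Deployment and Imaging Tools Environment (the paths and the F: drive letter below are illustrative), is just two commands:

```cmd
:: Stage the x64 WinPE files, then write them to the USB drive at F:
copype amd64 C:\WinPE_amd64
MakeWinPEMedia /UFD C:\WinPE_amd64 F:
```

Note that MakeWinPEMedia reformats the target drive, so use an empty thumbdrive.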


Copy the flash.ffu file to the root of this bootable drive once you’ve created it. You are now ready to insert this USB thumbdrive into the IoT device and power it up. Make sure you specify in the IoT device’s BIOS to boot first from a USB drive.


WinPE will boot up and open a command window for you at the x: drive. Change to the d: drive and enter dir to see your flash.ffu file. WinPE comes with DISM, the Deployment Image Servicing and Management tool, and we will use it to flash the FFU file onto the IoT device. Enter the following on the command line to flash the FFU file:

dism.exe /apply-image /ImageFile:Flash.ffu /ApplyDrive:\\.\PhysicalDrive0 /skipplatformcheck



Once DISM has successfully completed the flashing process, you can power down the IoT device and remove the USB thumbdrive. Turn on the IoT device and have it boot normally off its storage. After a few minutes you should see the Windows 10 IoT Core startup screen and it should prompt you to select a language and whether you want Cortana activated. Once you make these selections the default application will appear.



Congratulations! You have successfully installed Windows 10 IoT Core on an x64 IoT device! Future steps can be to modify the image to include your custom application, or to add drivers to the image if you need other functionality (such as Bluetooth or serial communications).




Beginning IoT – Installing Windows 10 IoT Core on a Raspberry Pi

This is the first part in a 2-part series on how to install Microsoft Windows 10 IoT Core on an Internet-of-Things (IoT) device. This article will focus on the steps required to install Windows 10 IoT Core on a Raspberry Pi 3. Part 2 will focus on installation on an x86/x64 device.


What is Windows 10 IoT Core?

Windows 10 IoT Core is a version of Microsoft’s Windows 10 operating system that has been optimized for smaller devices and runs on either ARM or x86/x64 hardware. These devices can run with or without a display.

When we talk about the different IoT devices, the processor type needs some explanation. ARM stands for Advanced RISC Machine, with RISC standing for Reduced Instruction Set Computer. This means the processor has been slimmed down to a reduced set of instructions it can execute. While the processor can’t do certain things, it requires relatively little power to do what it can, which translates to increased battery life. The Raspberry Pi is classified as an ARM device.

Devices with the x86/x64 architecture are classified as CISC processors, with CISC standing for Complex Instruction Set Computer. These processors do not have slimmed-down instruction sets, so they can perform more complex operations, at the cost of increased power consumption (and therefore lower battery life). Intel’s Baytrail devices running the Intel Atom E3800 processor are an example of x64 devices.



Before you can install on the Raspberry Pi, you need to make sure you have a PC that is running Windows 10 1507 (version 10.0.10240) or higher. You can find out what version you are running by clicking on the search box (next to the Start button) and typing ‘winver’. This will display a dialog as shown here:



You will also need to download and install the Windows 10 IoT Core Dashboard from here.

Of course you will also need a Raspberry Pi device to install onto. There are several different kits available on Amazon – I used the Canakit Starter Kit here.

Finally, you will need an SD card reader so that you can write the installation files to the SD card that will be placed in the Raspberry Pi.


Installation Steps

Once you have the prerequisites, you are now ready to begin the installation process.

First off, run the Windows 10 IoT Core Dashboard program, and click on Set up a new device from the menu on the left. This will display a screen that allows you to select the Device Type, OS Build and other information to configure as part of the installation.


Select Broadcomm [Raspberry Pi 2 & 3] as the Device Type, and Windows 10 IoT Core (17134) as the OS Build. You can also select Windows Insider Preview or Custom if you want to install a preview build of Windows 10 IoT Core, or a custom image file (flash.ffu file).

Next, insert your SD card into the Windows 10 PC you are using. Be aware that your SD card should be at least 8GB in size, and I prefer formatting it prior to this step (this is optional; the installation process will overwrite any pre-existing data on the SD card anyway). The IoT Core Dashboard program should recognize the SD card and display it in the Drive selection.

You can then enter values for the Device Name and Password, and whether you want to use a Wi-Fi Network Connection when the Raspberry Pi starts up with our installation.


Check the box to accept the software license terms and click on the Download and Install button. This will begin the installation process. During this process, Windows 10 IoT Core will be downloaded and the installation process will flash the files to the SD card.


If you see a User Account Control (UAC) prompt, click Yes to continue. The process may then open a command window to clean the SD card (if you had something on it previously) before it flashes the new installation onto it.


The installation process will then run the DISM program (Deployment Image Servicing and Management tool) to flash the installation files onto your SD card.


Once this is complete, the command window will close, and IoT Core Dashboard will state that your SD card is ready to be placed in the Raspberry Pi and started up.


Eject the SD card from your PC and place it in your Raspberry Pi. Connect an HDMI cable to a display source (monitor) and then plug in the power to start the device. If you didn’t choose to use a Wi-Fi network connection on startup you will need to plug in an Ethernet cable if you want Internet access.



You will first see the Windows logo with a spinner when the device is first powered on:



Let the device boot; it usually takes a minute or two (and it might reboot itself). Once you get to the following screen, plug in a USB mouse and make your language selection.


Clicking Next will display a screen asking if you want to configure Cortana – I selected Maybe Later.


Windows 10 IoT Core will then run the default application, which looks like this:


Congratulations! You have now completed the installation process and you have a standard Windows 10 IoT Core installation on your Raspberry Pi! You are now ready to begin deploying your applications to this device!

Happy coding!



Using IoT on a Beer Kegerator

Being born and raised in the great state of Wisconsin, beer has been a part of most of my adult life. Couple that with my love of technology, and I always wondered how I could leverage some cool tech with a beer theme. With the proliferation of inexpensive hardware and the Internet of Things (IoT), it has become easy (and cheap!) to build solutions that can monitor (among other things) beer-related activities. This article will describe and detail the steps I took to create a solution for monitoring beer consumption on a beer kegerator.

The first thing I needed to do before building anything was to understand and design what it was I wanted to build. Since I wanted to monitor beer consumption from a kegerator, I needed to draw out the major parts of my solution. Once I knew those, I could begin to build and test the different parts of the system. The drawing below shows the major parts of my solution:


As you can see, when someone taps a beer from the kegerator, an inline flow meter sensor sends information to an IoT device, which then processes the information and sends it to the cloud, where it is stored for data analysis.

Now that I have an idea of my overall architecture, I can begin to think about what hardware and software I need to create my solution.



For hardware, I chose to use a Raspberry Pi as my IoT device. The Pi is a low-power, inexpensive device that met my needs for this project (built-in Ethernet, multiple GPIO pins, easy to install apps). Please note that I also considered using the ESP8266 chip for this project – this little chip is great for simple IoT projects as it’s really cheap, has built-in wireless networking (with a full TCP/IP stack!), and multiple GPIO pins. The main drawback for this project is that the chip only provides 3.3V and I needed 5V for the flow sensor, so it was easier to use the Pi. The other drawback is that I can’t install Windows 10 IoT Core on the ESP8266, so using a Pi simplified my design.

The other piece of hardware I needed was a flow sensor to measure the flow of beer through the line when it’s being tapped. Initially I chose a really cheap sensor designed for coffee makers, but found out that these won’t work for measuring beer flow (see the Testing section), so I went with a more expensive sensor: the Swissflow SF-800 (link), which is about $60 USD. This flow sensor sends digital pulses as liquid flows through it, which lets me measure how much beer is being dispensed. The sensor requires +5VDC to power it properly, which is another reason to use a Raspberry Pi (it provides +5VDC).



The software selections I made were driven (in part) by my hardware choices, but also by what apps I wanted to provide. I wanted an app that runs on the Raspberry Pi, processes the incoming pulse data from the SF-800 sensor, and then sends that data to Azure. I also wanted this app to have a user interface that displays how much beer is left in the current keg, along with the ability for an administrator to “reset” the app (when the keg is empty and is changed out for a full one).

Windows 10 IoT Core provides the operating system for the Raspberry Pi, and this also allows me to easily deploy and manage any apps I want running on the device. Please review this link on how to install Windows 10 IoT Core on the Raspberry Pi.


The app that I am creating for this solution is a Universal Windows Platform (UWP) app and is designed for running on IoT devices that have Windows 10 IoT Core on them. This app will process the incoming digital pulses from the SF-800 and send them to Azure IoT Hub.


I receive the incoming digital pulses from the SF-800 flow sensor through a GPIO event: the sensor is connected to GPIO pin 5 on the Raspberry Pi, so when the value on that pin changes it triggers an event in my app signaling that a pulse was sent by the SF-800.


I also have a timer on another thread that ticks every 0.5 seconds and checks whether any pulses have been received from the SF-800 flow sensor. If there have, it sends them off to Azure IoT Hub for storage.
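My actual app is a C# UWP app, but the pulse-counting pattern is easy to sketch in language-neutral terms. Here it is in Python, with the GPIO pin-change event simulated by calling on_pulse() directly; the class and names are illustrative, not taken from the real app.

```python
import threading

class FlowMeter:
    """Counts SF-800 pulses; a periodic tick flushes the count downstream."""

    def __init__(self):
        self._count = 0
        self._lock = threading.Lock()
        self.sent = []  # stands in for messages sent to Azure IoT Hub

    def on_pulse(self):
        # In the UWP app, this logic lives in the pin-change event handler
        # for GPIO pin 5: each edge is one pulse from the SF-800.
        with self._lock:
            self._count += 1

    def tick(self):
        # In the UWP app, a timer on another thread fires every 0.5 s.
        with self._lock:
            pulses, self._count = self._count, 0
        if pulses:
            self.sent.append(pulses)  # real app: send to Azure IoT Hub

meter = FlowMeter()
for _ in range(42):
    meter.on_pulse()  # simulate 42 pulses arriving between ticks
meter.tick()
print(meter.sent)  # [42]
```

The lock matters because the pulse events and the timer run on different threads; swapping the count out and resetting it atomically means no pulse is ever counted twice or lost.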


The software in the Azure cloud that I am leveraging is Azure IoT Hub, Stream Analytics and Azure SQL Database. Azure IoT Hub provides the mechanism to receive incoming telemetry data from my IoT device and route it for processing and storage. I am having Azure IoT Hub route my data to Stream Analytics, which then processes it and saves it in an Azure SQL database. Once it is in the database, I am free to consume it in a number of ways, such as PowerBI or any custom app that can read from SQL.


As incoming telemetry data is received from the Raspberry Pi, Azure IoT Hub accepts that data and Stream Analytics processes it and saves it in an Azure SQL database. This is done through the Stream Analytics interface by setting up an input (Azure IoT Hub) and an output (Azure SQL database) and configuring a query to do any processing needed along the way.
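A Stream Analytics job ties these together with a SQL-like query. A minimal pass-through example (the input/output names and columns here are illustrative, not my actual job definition) looks like this:

```sql
-- 'KegeratorHub' is the IoT Hub input; 'KegSql' is the Azure SQL output
SELECT
    deviceId,
    pulses,
    System.Timestamp AS receivedAt
INTO
    KegSql
FROM
    KegeratorHub
```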



Once I had created the software components and connected the hardware, it was time to test the functionality of my solution. I first tested the solution by connecting my Raspberry Pi (with my UWP app installed) to a breadboard where I had the SF-800 flow sensor connected. I also added a couple of LEDs to indicate a heartbeat pulse (green) and flow sensor pulses (red).


I configured Azure IoT Hub and started my Stream Analytics job so that incoming data from my IoT device will be received and processed properly. Testing this way involved blowing air through the SF-800 device (I used my breath – GENTLY!), making sure the air flow was in the proper direction (going the wrong way can damage the sensor).

Once I knew this was working, I wanted to validate the accuracy of the SF-800’s digital pulses. To do this, I got some plastic tubing of the same size used in the kegerator, along with a funnel. I then measured out 1 cup of water and poured it through the flow sensor while everything was running.
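Turning a raw pulse count into a volume needs the sensor's K-factor, which for the SF-800 is roughly 5,600 pulses per litre (treat that figure as an assumption; calibrating your own unit is exactly what this 1-cup test is for). A quick sanity check of the math:

```python
PULSES_PER_LITRE = 5600.0  # approximate SF-800 K-factor (assumed; calibrate!)

def pulses_to_ml(pulses: int) -> float:
    """Convert a raw SF-800 pulse count to millilitres."""
    return pulses / PULSES_PER_LITRE * 1000.0

# 1 US cup is about 236.6 ml, so a clean pour should register
# roughly 236.6 / 1000 * 5600, i.e. around 1325 pulses.
print(round(pulses_to_ml(1325)))  # 237
```

If the measured pour comes out meaningfully different from the cup you poured in, adjust the constant to match your sensor.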



Now that I had tested my solution, it was ready for deployment! This involved placing the flow sensor inline with the actual kegerator tubing on the line I wanted to monitor. I kept the breadboard, as this was not a fully productized solution (meaning I didn’t move the wiring onto a PCB).


I ran into a testing issue I failed to catch until after I deployed my solution for the first time. I was originally using a cheap flow sensor designed for coffee makers, and when I deployed it to the beer line I noticed that it made the beer foam as it passed through the sensor. This was something I hadn’t tested for prior to deployment, so it forced me to rethink my design (and which sensor to use). I eventually found the SF-800 sensor, and this worked much better when I deployed it with my solution.

In conclusion, now that this solution is connected to the kegerator, I can monitor how much beer is left in the current keg! I can also enhance my solution by leveraging an Azure Webjob to send an email notification when the keg is getting low. How great is that? No more tapping a beer just to find out that there isn’t any left!



Disabling Windows Update in Windows 10 IoT Core

If you’re working with Windows 10 IoT Core on your devices and have wondered how you can disable Windows Update, wonder no more! This article will detail the steps needed to disable this service.

First off, I’d like to state that I don’t recommend that you disable Windows Update on your devices running Windows 10 IoT Core, as this will prevent any future updates from being installed on your devices. Doing so may expose your devices to security vulnerabilities that would be potentially resolved in an update of IoT Core.

However, there may be some situations where you need to disable Windows Update on your devices, such as controlling the amount of data downloaded to your device (if connected to the Internet through a metered connection), or you need a completely stable environment that doesn’t ever change (giving an administrator the ability and control to update only when absolutely necessary).

Having said the above, here are the steps needed to disable Windows Update on devices running Windows 10 IoT Core:


  1. Install and Run IoT Core Dashboard

Download and install IoT Core Dashboard (link) to a Windows 10 PC that is on the same network that your target device is connected to. Once you have installed IoT Core Dashboard you can run it by typing ‘IoT Core Dashboard’ in the Windows 10 Start menu search box:


  2. Launch Remote PowerShell session on Device

Once IoT Core Dashboard is running, you should see a device entry (under ‘My Devices’) for the target device. Right-click this entry to display a context menu and click on ‘Launch PowerShell’. This will launch a remote PowerShell window to the target device – you will be prompted for a username and password to access the device.


  3. Verify PowerShell session window is displayed

Once you enter a username and password for the target device, you will see a PowerShell window for the target device, similar to this:


  4. Enter PowerShell commands to disable Windows Update

The following PowerShell commands will disable Windows Update on the target device:

sc.exe config wuauserv start= disabled

sc.exe stop wuauserv


Here is a screenshot of the results of running these commands. This is what you should see if the commands have executed successfully:



  5. Verify Windows Update is disabled

The following PowerShell commands will verify that Windows Update has been disabled on the target device:

sc.exe query wuauserv

reg.exe query HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\wuauserv /v Start


Here are some screenshots of the results of running these commands. The first command should show a STATE of 1 (STOPPED), while the reg.exe query should show a REG_DWORD value of 0x4:




You are finished! Windows Update should now be disabled on the target device.

There is another option for disabling Windows Update on your devices running Windows 10 IoT Core, and that involves disabling it when creating the OS image for your device (the .FFU file). Disabling Windows Update with this option means that after flashing IoT Core on the device it is permanently disabled right from the get-go, so you would not have to run the PowerShell commands described above.

If you are using Windows Configuration Designer (link) when creating your IoT Core image, there are options that allow you to disable Windows Update and other types of updates on the device. These settings are under Runtime Settings/Policies/Update and there are 4 different settings to control updates (see highlighted areas in the screenshot below):


I hope this article helps in understanding the options and steps in disabling Windows Update on your devices running Windows 10 IoT Core.


Thanks for reading!



Displaying and Automating Legacy Data using PowerBI

One of the more common requests involving business intelligence projects is surfacing data from legacy, line-of-business systems in an automated fashion. I recently was involved with a project where the client was making this exact request. This blog will explain how I tackled the problem and used a number of Microsoft technologies to come up with a solution.

The Problem

The client had a large set of data from their AS/400 mainframe system that they needed to present on a weekly basis to their executives. This data is used to make critical business decisions, so the accuracy and timeliness of presenting this data is crucial to the client’s business.

A manual process, performed by an employee on a weekly basis, was implemented to extract this data using Microsoft Excel 2013. The employee used PowerPivot to create tabular list representations of the necessary data, and a PowerPivot data model was created as part of this manual process.

There were two main issues causing pain with this process. First, the data used to create the PowerPivot data model was very large and necessitated multiple extraction passes from the AS/400 system into temporary Excel spreadsheets, as the in-memory limits of Excel were being reached, causing application crashes. Second, the resulting tabular data was not visual, and the client wanted a graphical, interactive experience when presenting this data.

The Solution

In order to solve these problems, as well as automate the process, I utilized the PowerBI technology stack, along with SQL Server 2012 and SharePoint 2013. The PowerBI tools (PowerPivot and PowerView) provide the data model and interactive graphical dashboards. SharePoint provides a central area for the PowerView dashboards. SQL Server Integration Services (SSIS) and SQL Server Analysis Services (SSAS) provide the ability to extract, transform and load the data from the AS/400 system for consumption by the PowerBI tools. Finally, SQL Server Agent jobs automate the update of the data, so that any data changes on the AS/400 side are propagated to the SSAS data model.

The first step in the solution was to surface the data from the AS/400 system to SQL Server, so that other SQL Server technologies could be leveraged to manipulate the data. A linked server in SQL was created to do this. The steps to do this are beyond the scope of this post, but here is a link that explains the steps:

Once the linked server to the AS/400 data was in place, I needed to use SQL Server Integration Services (SSIS) to massage the data coming from the AS/400. There were a number of data fields that needed to be formatted properly (mostly dates), so I decided to create a dedicated SQL Server database to hold this formatted data. SSIS allows you to create a package that you can design to perform various actions, including the data formatting that was required.

The SSIS package references the linked server I created, performs the data manipulation steps, and then loads the results into tables in the dedicated SQL database. This package is published to the SQL Server environment and can be run on demand by an administrator. The extraction and transformation were then automated by creating a SQL Server Agent job that schedules the execution of the SSIS package.

Now that I had the formatted data in my dedicated SQL Server database, I was ready to process this data for use with the PowerBI tools. In order to do this, I needed to create a data model using SQL Server Analysis Services (SSAS) that can be used by the PowerBI tools.

Using SQL Server Data Tools (SSDT) I created a tabular data model, which allowed me to reference the dedicated SQL Server database that was created as a source. Using Data Analysis Expressions (DAX) along with measures, the data model was designed to perform the necessary calculations to present the data in its final form in the PowerView dashboards. Once the data model was compiled and published to SQL Server, I automated the processing of these calculations using a SQL Server Agent job on a scheduled basis (very similar to the agent job created for the SSIS package).

Finally, I used SharePoint 2013 as a central location to access the PowerView reports I created, displaying the data. The PowerPivot add-in for SharePoint needed to be installed, and SharePoint Server 2013 was also required.

Before I could create a PowerView report in SharePoint, I needed to create a data connection to the SSAS data model I created in the previous step. Since I’m hosting these reports in SharePoint, I used the Microsoft BI Semantic Model for Power View connection type when creating the data connection.

Once the data connection was created, I am now able to create the PowerView report. I did this by clicking the ‘Create Power View Report’ in the menu for the data connection, as shown here:

This creates a blank PowerView report and displays the report editor, ready for design of the report. At this point, I was able to create different views for the report I needed using the PowerView designer. Below is an example report that can be created with this designer.

As you can see, this process involved a number of Microsoft technologies that came together to provide a scalable, automated process for displaying legacy data in SharePoint using PowerBI. SQL Server Analysis Services and Integration Services provide the ETL capabilities and calculation power for manipulating the data for consumption by PowerBI. Users viewing the PowerView reports can be guaranteed the displayed data is fresh, since the reports query the Analysis Services model data, which is kept up-to-date by SQL Server Agent jobs.


Dynamics 365 – A First Look

Microsoft recently announced their next generation of Azure-hosted business services, titled Dynamics 365. General availability is scheduled for November 1st, 2016. Here is a first look at the features and functionality being provided as part of this new offering.

If you would like further information on how to prepare for the upcoming release of Dynamics 365, please contact us.

Using TraversedPath with Branching Business Process Flows


I recently ran into an issue with Opportunity records and branching Business Process Flows (BPFs) where all the stages were being displayed on the record, instead of only the stages the record had gone through. The solution was interesting, and I’d like to share the issue and the resolution I came up with.



The scenario I was working with involved using an external SQL Server Integration Services (SSIS) package that was creating new Opportunity records that were associated with a branched BPF in Dynamics 365. This external process would create the opportunity record and attempt to set the current stage to one of the stages in a particular branch. This was being done by setting the StageID field to the GUID of the stage that I wanted to make active.

The problem was that when viewing the created opportunity record, the BPF would display ALL the stages from both branches (I had 2 separate branches). Obviously, this was not desirable and was very confusing.



After analyzing the fields in Dynamics 365 for the opportunity record I had created, it turns out this issue arises because of a field on the Opportunity entity called TraversedPath. This field is a comma-separated list of GUIDs that represents the BPF stages the opportunity record has “traversed” through. I was not setting this field at all when creating the opportunity record, and with no entry in this field, Dynamics 365 was displaying all the BPF stages from both branches.

Once I modified my SSIS process to grab all the BPF stages I wanted the opportunity record to traverse through and set the TraversedPath field, the BPF stages (for the specific branch) were set and displayed properly.
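As a sketch of the fix, the update amounts to building the comma-separated stage list and setting both fields together. The GUIDs below are placeholders, and the helper functions are illustrative, not part of the SSIS package itself:

```javascript
// Build the TraversedPath value from the ordered list of BPF stage GUIDs
// the record should appear to have passed through (placeholder GUIDs).
function buildTraversedPath(stageIds) {
  return stageIds.join(",");
}

// Sketch of the attribute values the external process sets on the new record.
function buildOpportunityUpdate(stageIds) {
  const activeStage = stageIds[stageIds.length - 1];
  return {
    stageid: activeStage,                        // the stage to make active
    traversedpath: buildTraversedPath(stageIds)  // every stage up to and including it
  };
}

const stages = [
  "11111111-0000-0000-0000-000000000001",
  "11111111-0000-0000-0000-000000000002"
];
console.log(buildOpportunityUpdate(stages).traversedpath);
```

The key point is that TraversedPath must contain the full ordered chain of stages ending with the active stage, not just the active stage on its own.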

I hope this article helps others in quickly determining the cause of the Business Process Flow issue I ran across.

Thanks for reading!



No-Code Configurations for Dynamics CRM – Part 2

This is the second article on how you can configure Dynamics CRM without writing code. In Part 1, I described the configuration options for System and Custom entities, as well as how to use Business Rules. In this article, I will describe how you can configure Dynamics CRM using Workflows and Dialogs, Business Process Flows, and Dashboards/Reports.


Workflows and Dialogs

Dynamics CRM provides an engine that can run business processes automatically or interactively. This engine allows administrators to create and manage these automated business processes in order to provide customized functionality to the CRM environment.

Workflows are business processes that run automatically, with little or no interaction with the user. These processes can be configured to run in the background or in real-time, and can be triggered automatically (based on a specified condition). They can also be started manually by a user.

For example, I can create a workflow by opening a solution in CRM and navigating to the Processes area. Once there, I can click on the New button to create a new process (see Figure 1).


Figure 1: Creating a new Workflow


New workflows start in a Draft state, which means they are not activated in the system. This allows you to create the workflow and add your logic to it, before you activate it. Figure 2 shows an example of a workflow in Draft state.


Figure 2: Workflow in Draft state

Several areas of the workflow configuration define what it does and how it runs. First, give the workflow a name you will recognize. Under the Available to Run section, a few checkboxes define how the workflow will run in the CRM system. Selecting the Run this workflow in the background option specifies that the workflow runs asynchronously in the background. Unchecking this box makes it a real-time workflow, which runs synchronously (immediately).

The checkbox under the Workflow Job Retention section controls whether completed workflow job records are kept or deleted automatically after each run. Checking this box clears these records immediately, saving disk space in your system. Be careful though – I have found it's nice to keep these records around for a period of time, as they let you verify that the workflow is running successfully.

The Entity field shows which CRM entity the workflow runs against, and the Category field confirms that the process is a Workflow.

Under the Options for Automatic Processes section, you can define the workflow scope. This tells the workflow to run for a specific set of records, as listed below:

  • User – workflow only runs against the records owned by the current user
  • Business Unit – workflow only runs against the records owned by the business unit the current user belongs to
  • Parent: Child Business Units – workflow only runs against records owned by the current user's business unit and its child business units
  • Organization – workflow runs against any record in the organization

There are also checkboxes that specify what triggers the workflow. You can select when a record is created, record status changes, record is assigned, or when a record is deleted. You can also select when specific fields on the entity record change.

Once you've configured how the workflow should run, the main part of the workflow is defining what it does when it runs. This is done by creating one or more steps. Figure 2 shows the area in the workflow that allows you to create steps. Here you can add steps and their conditions, along with the actions each step performs. For example, Figure 2 shows a step indicating that if the Opportunity:Presales Resource field = Yes and contains data, the workflow should then check if Opportunity:Probability <= 20. If it is, the action of sending an email is performed.
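Expressed as plain code, the logic of that example step is roughly the following. The field names are illustrative, and the real evaluation happens inside the CRM workflow engine, not in a script:

```javascript
// Rough equivalent of the workflow step in Figure 2: decide whether
// the "send an email" action should fire for a given opportunity.
function shouldSendEmail(opportunity) {
  const hasPresales = opportunity.presalesResource === "Yes";
  const probabilityLow = opportunity.probability <= 20;
  return hasPresales && probabilityLow;
}

console.log(shouldSendEmail({ presalesResource: "Yes", probability: 15 })); // true
```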

Once you've added all your steps and saved the workflow, you can activate it in the CRM system. At this point, you would activate your workflow in a development system to test its functionality (before deploying it to production). You can also see when your workflow has executed (in the solution that contains it), as shown in Figure 3. This view shows which instances of your workflow have run, along with the status of their execution. You can drill into each record to see the details of the workflow execution, such as which steps completed successfully, or where it failed.


Figure 3: Workflow Process Sessions


Dialogs are very similar to workflows, as they are also business processes. However, these processes are designed to run interactively with the user. You create a dialog in the same way as a workflow process (except you select the Dialog category instead of Workflow), and you will notice in Figure 4 that there are fewer configuration options than for workflows.


Figure 4: Configuring Dialogs

You still create steps in a dialog with actions, but you will notice there is a Prompt and Response selection, under a Page step. This is where you create and configure the dialog prompts that the users will see when the dialog runs. Figure 5 shows an example of a dialog prompt. You also have the ability to define what the prompt will say and how the user will respond.


Figure 5: Dialog Response Prompt Configuration


Business Process Flows

Business Process Flows are another type of process in Dynamics CRM, but they differ in how they provide functionality to users. I like to think of Business Process Flows (BPFs) as a guide in Dynamics CRM that helps users follow a set of steps to complete a process. BPFs have a visual component as well as back-end functionality that performs tasks or actions while the user interacts with the BPF. They differ from workflows in that they have a UI component that users interact with while using the system. They also differ from dialogs in that they persist on specific record types, meaning they are part of the displayed form. They are not triggered by an action and therefore do not pop up a window to request information from the user at that point.

Administrators can create custom BPFs in Dynamics CRM, or use one of the standard BPFs that come with the product. Some examples of standard BPFs are:

  • Lead to Opportunity Sales Process
  • Opportunity Sales Process
  • Phone to Case Process

The standard BPFs are good for general processes that you can have CRM users follow, but the real power of BPFs is revealed when you create a custom BPF that matches your business process. Figure 6 shows the Opportunity Sales Process that comes with Dynamics CRM.


Figure 6: Opportunity Sales Process BPF


Creating a Custom Business Process Flow

Creating a BPF doesn’t require specific coding skills, as CRM has provided an editor that allows administrators to create these processes without having to write code. You can navigate to the Settings area in CRM and click on Processes. This will navigate you to the My Processes view, which will show you a listing of processes owned by you:


Figure 7: My Processes View

From here, you can see the processes you own, including both workflows and BPFs. You can create a new BPF by clicking the New button in the upper right. Once you click this, a designer is displayed that allows you to create a custom Business Process Flow (see Figure 8).

Business Process Flows consist of stages and steps. Stages are the different parts of the BPF that users can navigate back and forth from. Stages also contain one or more steps. Each step represents a piece of data that can be entered for the entity record the BPF is associated with. Figure 8 shows an example of a custom BPF with stages and steps:


Figure 8: Custom BPF with Stages and Steps

Note that when you create steps for a stage, you specify a field on the entity record the BPF is associated with. You can also designate whether the step is required. This means that the user cannot advance to the next stage until a value is entered for each of the required step fields in the stage (commonly known as stage-gating).

You will also notice from Figure 8 that there is a branched stage in the BPF. Branching allows a stage to be displayed (and active) only if a certain condition is met. In our example, the Procurement Approval stage is only displayed if the Budget Amount value (a step in Stage 1) is greater than $100,000. Figures 9 and 10 show what this branching looks like when the Budget Amount is above or below that threshold.
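The branch rule in this example can be sketched as a simple predicate over the record. The $100,000 threshold and the Procurement Approval stage come from the example above; the other stage names are placeholders, and the function is purely illustrative:

```javascript
// Decide which stages of the custom BPF are shown for a given opportunity.
// The Procurement Approval stage only appears when the budget exceeds $100,000.
function visibleStages(budgetAmount) {
  const stages = ["Qualify", "Develop"]; // placeholder stage names
  if (budgetAmount > 100000) {
    stages.push("Procurement Approval"); // the branched stage
  }
  stages.push("Close");
  return stages;
}

console.log(visibleStages(250000)); // includes "Procurement Approval"
console.log(visibleStages(50000));  // branch stage hidden
```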


Figure 9: Custom BPF with Branch Hidden


Figure 10: Custom BPF with Branch Displayed

Once you've created your custom BPF, you must activate it for it to be used in the system. All new BPFs you create start in a Draft state. You can have more than one BPF active in the system for an entity, and if you do, there is an order that determines which BPF is attached to the entity record (based on the first BPF the user has access to, via security roles). Figure 11 shows the Process Flow Order dialog for a custom BPF:


Figure 11: Process Flow Order Dialog


Dashboards and Reports

There are two different ways Dynamics CRM displays data in the system: Dashboards or Reports. This section will describe each and how they are used.



Dashboards display CRM data as an overview of business information. Users can see data they can act on, usually aggregated across the organization. If you wish to see CRM data at a glance, creating a dashboard will help you do that.

There are two types of dashboards in CRM: System Dashboards and User Dashboards. System Dashboards are created by a System Administrator or Customizer and are visible to the entire organization. User Dashboards are created by CRM users and are only visible to the user who created them, although they can be shared with other users.

For either type, CRM users can choose the default dashboard they wish to see. System dashboards can be marked as the default dashboard for a specific area (Sales, Marketing, etc.), but if a user marks a different dashboard as their default (System or User) it will override the System dashboard marked as the default. Figure 12 shows an example of a default System dashboard.


Figure 12: Dashboards in Dynamics CRM

Dashboards can also be created to display either tabular data (like a view) or graphical data (like charts). Other types that can be included in a dashboard are Web Resources or IFrames. Users also have the ability to drill down into a dashboard chart to see the tabular data it represents. This gives the user the added functionality of being able to examine specific sets of data in a dashboard in more detail. Figure 13 shows a System dashboard with both tabular data and a chart:


Figure 13: Charts and Tabular Data in Dashboards in Dynamics CRM

One thing to remember when viewing dashboards: The data it displays must be accessible by the user viewing the dashboard. What this means is that if a user doesn’t have the security role to view a particular set of data, the dashboard itself will still be accessible but the data it displays will not be. Usually an error is displayed in the specific dashboard window that states the user doesn’t have the proper permissions to view the displayed data.



Dynamics CRM also allows users to create and run reports. These differ from dashboards in that they are a snapshot of data at the time the report is run. They are also typically used to print out (or export) data from the CRM system. Creating reports in CRM is a large topic, so I will focus on how users can run reports.

Users can run any report in the system by navigating to the Reports section (e.g. Sales -> Reports), which displays a view of the Available Reports the user can see:


Figure 14: Available Reports View in Dynamics CRM

Clicking on one of these reports displays a dialog that allows the user to modify the (pre-defined) criteria for the report, and then run the report (by clicking the Run Report button in the lower right):


Figure 15: Running a Report in Dynamics CRM

Once the Run Report button is clicked, CRM processes the request and will display the report results. At this point, the user has the ability to perform a number of actions on the report, such as modifying the filtering criteria (depends on the report), saving the report results to a file (Excel, PDF, etc.), or refreshing the data. Figure 16 shows an example of a displayed report:


Figure 16: Report Results in Dynamics CRM

The above describes reports that run against all data in the system. Dynamics CRM also supports running a report against a single record, which is useful for displaying a snapshot of detailed information about that specific record. Open a record (an Account, for example) and click the command bar at the top of the page; you will see a section titled Run Report (see Figure 17). Clicking it expands a sub-menu listing all the reports you can run against this record. Note that this list may be empty if no record-level reports have been created in the system. Clicking a selection displays the report results, and the user can then perform the same actions against the results as described earlier.


Figure 17: Selecting a Current Record Report in Dynamics CRM



As you can see, there are a multitude of different ways users and administrators of Dynamics CRM can configure the system without having to write a single line of code. These no-code configurations make Dynamics CRM a very powerful platform for tailoring the system to a business’s needs, and can take you very far into the realm of a custom CRM system for your organization.



No-Code Configurations for Dynamics CRM – Part 1

Microsoft has architected the Dynamics CRM platform to allow administrators to customize the environment to fit business needs, without requiring complex coding skills of a developer. This no-code configuration option is very powerful and allows the CRM environment to be built with a large portion of custom functionality before having to rely on custom scripting or coding practices.

This is the first of two articles where I detail the different areas of no-code configuration, to give you an idea of what you can do with this type of customization in Dynamics CRM. Keep in mind that this list is not all-inclusive; I am only covering the larger areas of no-code configuration. I will cover System and Custom Entities, along with Business Rules, in this first part. Part 2 will cover Workflows and Dialogs, Business Process Flows, and Dashboards/Reports.


Configuring System Entities

Dynamics CRM comes with a set of system entities when installed. These entities make up the core of an out-of-the-box install. While you cannot delete a system entity (which is a good thing), you are able to configure them to extend their functionality. Here are some of the things you can do with system entities:

  • Create custom fields
  • Modify the Display Name of system fields
  • Change whether system fields are required/optional
  • Change the minimum/maximum range values on fields of certain data types (decimal number, for example)

There are some restrictions when configuring system entities, which are in place to protect and maintain system stability. Here are some of the things you cannot do:

  • Delete a system entity
  • Delete a non-custom field on a system entity
  • Change the data type of a field
  • Change the name of a field once it has been created

You can also create custom forms and views to extend how a system entity is displayed in the environment. Custom forms can be created to replace system forms to provide a custom look-and-feel. Custom fields can be added to system forms if a totally new form is not needed. System views can be configured to display additional fields or their filter criteria can be modified to provide custom data results. Custom views can also be created to be used in place of system views if other criteria or entity fields need to be displayed in the system. Please note that the filter criteria you can create are limited to what you can configure in the dialog window.


Figure 1: Example of System Form editor with Custom Fields

Another item to keep in mind is to make sure you publish any changes you make to the system, when you are ready to test them. Dynamics CRM allows you to make changes to the system without actually committing them. This allows you to load/add your changes and make edits before you actually apply them to the system. If you make changes and don’t publish them, they will not work as expected.


Figure 2: Example of Default Solution with Publish All Customizations button

Creating Custom Entities

If the system entities that Dynamics CRM provides are not enough for your requirements, you can also create your own custom entities. This is where CRM shines in terms of no-code customizations – you can create fully custom entities in the system without writing a single line of code! This allows you to configure your CRM system to fit almost any complex business requirement you may have.

There are a vast number of configuration possibilities here, but the most common I have seen is where custom entities are created to support the standard entities in the system. Sure, you can use custom entities completely, but why not leverage the functionality that Microsoft provides with Dynamics CRM, and then extend that functionality by creating custom entities that fit your need?

Please note that you are still not able to do certain things in the system, even with custom entities. For example, you are not able to change the data type on a field once it has been created. You are also not able to change the name of a field once it has been created.

Let's go through an example. Suppose we want our CRM system to display and keep track of automobiles owned by contacts. Since CRM doesn't provide an out-of-the-box entity to record automobiles (ignore the Products entity for this example), I can extend the functionality by creating a custom entity of my own.

The first thing I want to do is to design my custom entity, so it meets my business requirements. I would like to provide the following:

  • Allow a contact in CRM to own one or more automobiles
  • An automobile record in CRM has the following properties:
    • Make
    • Model
    • Year
    • Mileage
    • Owner
  • An automobile record in CRM can only be owned by one contact

Once I have these requirements, I can create the custom entity in CRM. This is done by creating an unmanaged solution (in the Settings->Solutions area) – I will call the solution AutomobileTest:


Figure 3: Creating a custom entity

Once you save your new custom entity, the dialog then allows you to create fields for the entity. Dynamics CRM creates some fields by default for you, such as the Name, CreatedBy, CreatedOn and Status.

As you create custom fields on your entity, note that the system prepends a publisher prefix ("cncy_" in our example). A publisher is required in order to create solutions in Dynamics CRM, and publishers are used to help manage customizations (managed or unmanaged).


Figure 4: Custom AutomobileTest solution with custom Automobile entity

Once the fields for our custom entity are defined, we can define the relationship between the Automobile and Contact entities. Since the requirements state that one Contact can own multiple automobiles, and that each automobile record can be owned by only one Contact, that defines an N:1 relationship on our custom entity.


Figure 5: N:1 relationship between Automobile and Contact entities

Note that when creating this relationship, you must define the Primary and Related entities, along with a Display name. You can specify other attributes about the relationship, such as if it’s searchable, or the type of behavior (parental or referential).

There are 3 types of relationships available in Dynamics CRM:

  • 1:N (One-to-Many) – relates one entity record (the Primary) to one or more different entity records (the Related).
  • N:1 (Many-to-One) – the converse of 1:N relationships. This is really the same type of relationship as 1:N, just from the perspective of the related entity.
  • N:N (Many-to-Many) – relates many entity records to many different related entity records. Sometimes referred to as an Intersect entity.

Entity relationships are an important part of the Dynamics CRM schema, as they allow you to relate records to each other. This also allows the ability to define the schema to fit your business processes.
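In data terms, an N:1 relationship simply means each child record carries a lookup to its parent. A minimal sketch of the Automobile-to-Contact example (the field names and sample data are illustrative, not the actual CRM schema names):

```javascript
// Each automobile (the N side) holds a single lookup to its owning contact
// (the 1 side), satisfying the "owned by only one contact" requirement.
const contacts = [{ id: "c1", name: "Pat Smith" }];
const automobiles = [
  { id: "a1", make: "Honda", model: "Civic", year: 2014, mileage: 42000, ownerId: "c1" },
  { id: "a2", make: "Ford",  model: "F-150", year: 2016, mileage: 9000,  ownerId: "c1" }
];

// From the contact's perspective the same relationship reads as 1:N:
// one contact relates to many automobiles.
function automobilesOwnedBy(contactId) {
  return automobiles.filter(a => a.ownerId === contactId);
}

console.log(automobilesOwnedBy("c1").length); // 2
```

This is why the article calls N:1 "really the same type of relationship as 1:N" – the same lookup field serves both perspectives.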


Business Rules

We now know how to configure standard or custom entities in Dynamics CRM so that they function in a standard and consistent manner. What do we do if we want to apply custom logic to these entities? In the past, the answer would have been to write JavaScript code to implement this logic. Fortunately, Microsoft has introduced a declarative interface that allows you to create custom logic without writing JavaScript. These are called Business Rules, and they are new as of CRM 2013.

Business Rules provide an easy way to evaluate business logic in Dynamics CRM without custom code scripts, on either the client or the server side. CRM provides a declarative interface for creating these rules, which can be applied against entities or entity fields. Be aware that Business Rules do not replace custom scripting in CRM, as they cannot do everything custom scripting can. They do cover some of the more common things that scripting has traditionally provided, as listed here:

  • Set field values
  • Clear field values
  • Set field requirement levels
  • Show or hide fields
  • Enable or disable fields
  • Validate data and show error messages

Also note that Business Rules only work for updated entities or custom entities.

In order to create and configure a Business Rule in CRM, you need either the System Administrator or System Customizer security role. You will also need the Activate Business Rules privilege in order to activate a Business Rule. The rules you create are not applied until you activate them. Conversely, if you wish to edit a rule, you need to deactivate it first.

There are a few different ways you can access Business Rules in Dynamics CRM.

  1. Via Solution (Entity level)
  2. Via Solution (Entity field level)
  3. Form Editor (Entity level)
  4. Form Editor (Entity field level)


Business Rules have a scope – this determines the context of where the rule runs in Dynamics CRM. The different scope selections are:

  • Entity – runs on all forms and server
  • All Forms – runs on all forms
  • Specific Form – runs on that specific form

Note that you cannot select a scope of multiple specific forms. If you choose Specific Form, the rule will run on that one form only. This is also the default scope if you create a Business Rule via the form editor. Choosing All Forms means the rule will run on all the Main forms as well as the Quick Create forms, provided all the referenced fields are present on those forms.

If/Else branches and AND/OR logic are supported in Business Rules. You can create logic in your rules to check for certain conditions and perform specific actions when those conditions are met.


Figure 6: Business Rule with If/Else branch

There are a few limitations to be aware of. Nested If/Else statements are not supported – you can only have one level of If/Else. Grouping of expressions in a condition is not supported: expressions can be combined using AND or OR, but not both. You are also limited to 10 If/Else conditions in a single Business Rule.
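To make the condition limits concrete: a Business Rule condition is effectively a flat list of expressions joined by one connector – all AND or all OR, never a mix. This toy evaluator (an illustration, not CRM code) captures that restriction:

```javascript
// Evaluate a flat Business Rule-style condition: every expression is combined
// with a single connector ("AND" or "OR"); grouping and mixing are not allowed.
function evaluateCondition(expressions, connector) {
  if (connector === "AND") return expressions.every(e => e());
  if (connector === "OR")  return expressions.some(e => e());
  throw new Error("Only one connector is allowed per condition");
}

const exprs = [() => 5 > 2, () => "a" === "a"];
console.log(evaluateCondition(exprs, "AND")); // true
```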

Business Rules are built from conditions and actions. Conditions are the checks you create to determine when specific actions execute. Once a condition is met, the actions you created run as part of the rule. If a condition is not met, the rule skips the actions associated with that condition.

There are a number of actions available:

  • Show error message – used to set error message text when a field is not valid
  • Set field value – used to set the value of the field. There are 3 types here:
    • Field – sets the value of one form field with the value of another field
    • Value – sets the value of one form field with an entered value
    • Formula – sets the value of one form field with a simple calculated field. Only valid for numeric or date data types
  • Set business required – used to change the requirement level for the field. Options are Business Required and Not Business Required
  • Set visibility – used to change whether the field is displayed/not displayed on the form. Options are Show Field and Hide Field
  • Lock or unlock field – used to change whether the field is enabled/disabled on the form. Options are Lock or Unlock
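Each of the actions above changes one aspect of a form field's state. Using a tiny stand-in object for a field (not the actual CRM client API), the actions map roughly to:

```javascript
// A minimal stand-in for a form field, just to show what each action changes.
function makeField(value) {
  return { value, required: false, visible: true, disabled: false, error: null };
}

const field = makeField(null);
field.value = 100;                 // Set field value
field.value = null;                // Clear field value
field.required = true;             // Set business required
field.visible = false;             // Set visibility (Hide Field)
field.disabled = true;             // Lock field
field.error = "Value is required"; // Show error message

console.log(field);
```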

There are some general limitations when creating business rules:

  • Business Rules only run when the form loads or when field values change. They don’t fire on save, with the exception of when the scope is set to entity level
  • Business Rules only work with fields. If you wish to perform logic against tabs and sections you have to use custom scripts
  • If you set a field via a Business Rule, the system will not fire that field's OnChange event. This is to protect against circular references
  • Business Rules that reference fields not present on a form will not run. No error will be displayed, so use caution if you’re removing fields from a form – always check that they are not being used on a Business Rule

Finally, you should be aware of the order in which logic is applied, in relation to system and custom scripts. Any system scripts are applied first. Next, logic in custom form scripts is applied. Finally, logic in Business Rules is applied. If there are multiple Business Rules, they execute in the order in which they were activated.
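That ordering can be sketched as a simple pipeline. This is an illustration of the sequence described above, not how CRM actually dispatches handlers:

```javascript
// Apply form logic in the documented order: system scripts, then custom
// form scripts, then Business Rules sorted by activation time.
function executionOrder(systemScripts, customScripts, businessRules) {
  const orderedRules = [...businessRules]
    .sort((a, b) => a.activatedOn - b.activatedOn) // earliest activation first
    .map(r => r.name);
  return [...systemScripts, ...customScripts, ...orderedRules];
}

const order = executionOrder(
  ["sysInit"],
  ["onLoadCustom"],
  [{ name: "ruleB", activatedOn: 2 }, { name: "ruleA", activatedOn: 1 }]
);
console.log(order); // ["sysInit", "onLoadCustom", "ruleA", "ruleB"]
```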

In conclusion, you can see that using no-code configurations in Dynamics CRM has enormous power and flexibility in tailoring the environment to almost any business need. Part 2 of this series will discuss workflows and dialogs, business process flows, as well as dashboards and reports.






Customization Options for Dynamics CRM

This is the first post in a multi-part series on how to create customizations in Dynamics CRM. If you’ve ever used Dynamics CRM, you quickly learn that while it’s got a lot of features and functionality out-of-the-box, the system really needs to be customized to your specific environment to fully meet your business needs.

Once you realize this, your next questions are probably “How do I customize CRM?”, “What are my options to customize CRM?”, or “Do I have the skills to customize CRM?”. These are all great questions, and you should be asking them to ensure your efforts are set on a path to success. I will list out the different ways you can customize your Dynamics CRM environment at a higher level, so you can get a general idea of what area fits your level of customization (and skill).

There are three main areas of customization for Dynamics CRM, described below:

No-Code Configurations

No-code configuration is the first place to start when you are considering customizing your Dynamics CRM environment. Dynamics CRM provides a very robust platform for customizing the out-of-the-box entities, forms, and fields without having to write complex code.

You can get very far with these types of no-code configurations. I’ve seen full CRM implementations that use no-code configurations as their only customizations, and they don’t have one line of custom code at all! Here are some of the things you can do with no-code configurations:

  • Extend standard entities to add custom fields
  • Create your own custom forms to extend standard entities
  • Create your own custom entities
  • Extend standard views to display additional information
  • Create your own custom views
  • Create your own custom business process flows
  • Create workflows that provide custom logic
  • Create business rules that enforce data integrity

Part 1 can be found here.

Javascript Customizations

If you find that no-code configurations do not satisfy the customizations you need, the next area to consider is custom JavaScript files. Dynamics CRM uses JavaScript to control how forms look and behave, as well as other aspects of the UI.

If you need Dynamics CRM to run logic when a form loads, or when a field's value changes, then JavaScript files are a good fit. The drawback is that you need JavaScript skills to implement this type of customization properly, so it is not as easy as the no-code configuration option.

Please note that Business Rules were introduced in Dynamics CRM 2013 as Microsoft's way to "lower the bar" for creating JavaScript customizations. Business Rules are a no-code way to perform the same (or similar) functionality that JavaScript customizations provide. CRM offers a designer "surface" for creating these rules, so CRM administrators can build them without knowing JavaScript.

Some of the things you can do with JavaScript files or Business Rules are listed here:

  • Set the value of a field on a form, based on some criteria
  • Make a field on a form required when a value in another field is changed
  • Show an error message when an invalid entry is entered for a field
  • Show or hide a field depending on criteria
  • Enable or disable a field depending on criteria
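As a sketch of what such a form script looks like: the items above are typically wired to field OnChange handlers. Here a minimal stub stands in for the form's attribute API (whose shape loosely mirrors the classic client API's attribute methods), so the logic is self-contained; the field names are hypothetical:

```javascript
// Minimal stand-in for a form attribute, loosely mirroring the shape of the
// classic client API (getValue / setValue / setRequiredLevel).
function makeAttribute(value) {
  return {
    _value: value,
    _requiredLevel: "none",
    getValue() { return this._value; },
    setValue(v) { this._value = v; },
    setRequiredLevel(level) { this._requiredLevel = level; }
  };
}

// Example handler: when the hypothetical "industry" field changes to
// "Government", make the hypothetical "contract number" field required.
function onIndustryChange(industry, contractNumber) {
  if (industry.getValue() === "Government") {
    contractNumber.setRequiredLevel("required");
  } else {
    contractNumber.setRequiredLevel("none");
  }
}

const industry = makeAttribute("Government");
const contractNumber = makeAttribute(null);
onIndustryChange(industry, contractNumber);
console.log(contractNumber._requiredLevel); // "required"
```

In a real deployment the same handler would be registered on the field's OnChange event in the form editor, and would read the attributes from the form context instead of stubs.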

Code Customizations

If no-code configurations or Javascript customizations do not fit the custom functionality you need to implement, the final option you have is code customizations. This requires the highest level of skills for customizing Dynamics CRM, as the person implementing the customizations (usually a developer) needs to know how to write code.

Code customizations are usually created as plugins, custom workflow activities, or custom web applications that can be embedded in CRM forms. Many companies offer 3rd party application solutions that you can add to your Dynamics CRM environment to extend the out-of-the-box functionality. These 3rd party solutions are custom-coded by their own developers, but if you purchase their application you don't have to worry about writing code for the functionality (as they have already done that) – you just install the application and you're good to go!

Custom plugins are designed to execute when an event or action occurs in Dynamics CRM, such as an update or delete of a certain type of record (for example, when an Account record is deleted). The plugin uses the software development kit (SDK) provided by Microsoft to access the CRM environment and perform whatever custom logic is needed.

Custom workflow activities are similar to plugins, but they are not executed when an event or action occurs in the CRM system. They are designed to be added to workflows in CRM, so that when a CRM administrator is creating a no-code workflow they can select a custom workflow activity to perform specific functionality. This extends CRM workflows to perform almost anything you can imagine in the system!

Custom web applications are server-hosted web applications written with custom code (ASP.NET for example) that are hosted on a web server that your Dynamics CRM environment can access. These custom web application pages can be embedded into a standard CRM form, to customize that form in CRM. There are many possibilities here on what can be created, so I will address that in a later post.


In conclusion, I hope you can see that the customization options for Dynamics CRM are numerous, and these choices help make Dynamics CRM an extremely customizable platform that can fit almost any business need.

Please stay tuned for further details on customizing Dynamics CRM, as I will be updating this article with links to future blog articles. Happy customizing!