Develop 1 Limited Blog | Microsoft Dynamics 365 Solutions

  1. React version 18 has recently been published to npm, which is great if all of your components support it. However, if you are working with Fluent UI, you may stumble across the following error:

    npm ERR! code ERESOLVE
    npm ERR! ERESOLVE unable to resolve dependency tree
    npm ERR!
    npm ERR! While resolving: my-app20@0.1.0
    npm ERR! Found: @types/react@18.0.8
    npm ERR! node_modules/@types/react
    npm ERR!   @types/react@"^18.0.8" from the root project
    npm ERR!
    npm ERR! Could not resolve dependency:
    npm ERR! peer @types/react@">=16.8.0 <18.0.0" from @fluentui/react@8.67.2
    npm ERR! node_modules/@fluentui/react
    npm ERR!   @fluentui/react@"*" from the root project
    npm ERR!
    npm ERR! Fix the upstream dependency conflict, or retry
    npm ERR! this command with --force, or --legacy-peer-deps
    npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
    npm ERR!
    npm ERR! See C:\Users\...\AppData\Local\npm-cache\eresolve-report.txt for a full report.
    npm ERR! A complete log of this run can be found in:
    npm ERR!     C:\Users\...\AppData\Local\npm-cache\_logs\....-debug-0.log

    This might happen if you are doing either of the following:

    1. Creating a standard PCF project using pac pcf init and then running npm install react followed by npm install @fluentui/react
    2. Using create-react-app with the standard typescript template, followed by npm install @fluentui/react

    The reason in both cases for the error is that once React 18 is installed, Fluent UI will not install since it requires a version less than 18. The Fluent UI team are working on React 18 compatibility but I do not know how long it will be until Fluent UI supports React 18. 

    These kinds of issues often crop up when node module dependencies are set to automatically take the newest major version of packages.
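    Conceptually, npm is checking each installed package version against the peer ranges its dependents declare. The sketch below is a toy illustration of that check - it is NOT npm's real resolver, just a way to see why React 18.x can never satisfy Fluent UI's declared peer range of ">=16.8.0 <18.0.0":

    ```typescript
    // Toy illustration (not npm's actual resolver) of the peer-range check that
    // fails above: @fluentui/react@8.67.2 declares ">=16.8.0 <18.0.0" for React,
    // so any React 18.x version can never satisfy it.
    function satisfiesFluentPeerRange(version: string): boolean {
      const [major, minor] = version.split(".").map(Number);
      const atLeast16_8 = major > 16 || (major === 16 && minor >= 8);
      const below18 = major < 18;
      return atLeast16_8 && below18;
    }

    console.log(satisfiesFluentPeerRange("18.0.8")); // false -> ERESOLVE conflict
    console.log(satisfiesFluentPeerRange("17.0.2")); // true  -> installs cleanly
    console.log(satisfiesFluentPeerRange("16.8.6")); // true
    ```

    This is why pinning react@17 (inside the allowed range) resolves the conflict, as shown in the steps below.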

    How to fix the issue?

    Fundamentally the fix is to downgrade the version of React and the related libraries before installing Fluent UI:

    pac pcf init

    If you are using standard controls - you might consider moving to virtual controls.
    Doing this actually requires a specific version of React and Fluent UI to be installed and so there is no issue.
    Check out my blog post on how to convert to a virtual control and install the specific versions required.

    Alternatively, if you are installing react after using pac pcf init with a standard control you can install version 17 specifically using:

    npm install react@17 react-dom@17 @types/react@17 @types/react-dom@17

    After you've done that, you can install fluent as usual using:

    npm install @fluentui/react@latest


    Create-react-app is a command-line utility that is commonly used to quickly create a React app - and is often used for testing purposes when building PCF components. Now that React 18 has been released, using create-react-app will also install React 18. The scripts and templates have all been updated accordingly.

    Unfortunately, you can't use an older version of create-react-app that installed the older version of React (e.g. npx create-react-app@5.0.0) because you will receive the error:

    You are running `create-react-app` 5.0.0, which is behind the latest release (5.0.1).
    We no longer support global installation of Create React App.

    The Fluent UI team are actually working on a create-react-app template for Fluent that specifically installs React 17 - but until then you will need to follow these steps:

    1. Use create-react-app as usual:
      npx create-react-app my-app --template typescript
    2. After your app has been created use:
      cd my-app
      npm uninstall react react-dom @testing-library/react @types/react @types/react-dom
      npm install react@17 react-dom@17 @testing-library/react@12 @types/react@17 @types/react-dom@17
    3. Since the latest template is designed for React 18 you will need to make some minor modifications to index.tsx:
      Replace import ReactDOM from 'react-dom/client'; with import ReactDOM from 'react-dom';
      Replace the following code:

      const root = ReactDOM.createRoot(
        document.getElementById('root') as HTMLElement
      );
      root.render(<App />);

      With the code:

      ReactDOM.render(<App />, document.getElementById('root'));

      This is required because React 17 does not support the ReactDOM.createRoot method introduced in React 18.

    Once Fluent UI has been updated to support React 18, these steps will not be required - however, if you are using Virtual Controls, then until the platform is updated, your controls will continue to need to use React 16.8.6.

    Hope this helps!


  2. The long-awaited 'virtual control' feature is finally in preview which means you can start to try converting your controls to be virtual - but what does this actually mean?

    What are virtual code component PCF controls?

    Virtual controls are probably better named React code components since this is their defining feature. Using them has the following benefits:

    1. Uses the host virtual DOM - The code component is added natively to the hosting app's 'Virtual DOM' instead of creating its own. This has performance benefits when you have apps that contain many code components. See more about the React virtual DOM in the React docs: Virtual DOM and Internals – React.
    2. Shared libraries - When using React and Fluent UI (which is the best practice for creating code components), the libraries are normally bundled into the code component's bundle.js using webpack. If you have many different types of code components on your page, each with its own bundled version of these libraries, it can lead to a heavy footprint, even when using path-based imports. With shared libraries, you can re-use the existing React and Fluent UI libraries that are already made available by the platform and reduce the memory footprint.

    You can create a new virtual control to see this in action using the Power Platform CLI with:

    pac pcf init -ns SampleNamespace -n VirtualControl -t field -npm -fw react

    The key parameter is -fw react, which indicates to use the new virtual control template.

    But how do you convert your existing code-components to virtual controls?

    If you have a code component that uses React and Fluent UI today, then you can follow the steps below to convert them and benefit from the points above. If you would prefer a video of how to do this you can check out my youtube tutorial on react virtual controls.

    1. Set control-type to virtual

    Inside the ControlManifest.Input.xml, update the attribute control-type on the control element from standard to virtual.

    For example, from:

    <control namespace="SampleNamespace" constructor="CanvasGrid" version="1.0.0" display-name-key="CanvasGrid" description-key="CanvasGrid description" control-type="standard" >

    To:

    <control namespace="SampleNamespace" constructor="CanvasGrid" version="1.0.0" display-name-key="CanvasGrid" description-key="CanvasGrid description" control-type="virtual" >

    2. Add platform-library references

    Again, inside the ControlManifest.Input.xml, locate the resources element and add the platform libraries for React and Fluent. This will tell the platform that the component needs these libraries at runtime.

      <resources>
        <code path="index.ts" order="1"/>
        <platform-library name="React" version="16.8.6" />
        <platform-library name="Fluent" version="8.29.0" />
      </resources>

    Note: It is important to ensure that the version of React and Fluent that you are using is supported by the platform.

    3. Ensure you are using the same version of Fluent and React as the platform

    To ensure you are using the correct versions of React and Fluent you can uninstall your previous ones and then add the specific version referenced above:

    npm uninstall react react-dom @fluentui/react
    npm install react@16.8.6 react-dom@16.8.6 @fluentui/react@8.29.0

    Note: If you are using deep path based imports of fluent - check you are using the root library exports as I describe in a previous post - this is to ensure the exports will be picked up correctly.

    4. Implement ComponentFramework.ReactControl

    The key part of the change to index.ts is that now we must implement a new interface ComponentFramework.ReactControl<IInputs, IOutputs>  instead of ComponentFramework.StandardControl<IInputs, IOutputs>

    Locate the class implementation in index.ts and update the implements interface to be:

    export class YourControlName implements ComponentFramework.ReactControl<IInputs, IOutputs>

    5. Update the signature of updateView

    The old method signature of updateView returned void, but now you must return a ReactElement so that it can be added to the virtual DOM of the parent app. Update the signature to be:

    updateView(context: ComponentFramework.Context<IInputs>): React.ReactElement

    6. Remove ReactDOM.render

    Since we are using the virtual DOM of the parent app, we no longer need to use ReactDOM. You will normally have code similar to the following (MyComponent and this.container are example names - yours will vary):

    ReactDOM.render(React.createElement(MyComponent), this.container);

    Replace this now with simply:

    return React.createElement(MyComponent);

    7. Remove calls to unmountComponentAtNode

    Previously you would have had to unmount the React virtual DOM elements in the code component's destroy method. Locate the destroy method and remove the line (this.container is an example name):

    ReactDOM.unmountComponentAtNode(this.container);
    8. Make sure you are using the latest version of the Power Apps CLI

    To ensure that your Power Apps CLI supports virtual controls, ensure it is updated to the latest version. I recommend doing this using the VSCode extension if you are not already using it, and removing the old MSI installed version. You will also need to run npm update pcf-scripts pcf-start to grab the latest npm modules that support React virtual controls!
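    Putting steps 4 to 7 together, the converted index.ts takes roughly this shape. This is only a sketch with locally stubbed types and a hypothetical component name - in a real project IInputs/IOutputs come from the generated ManifestTypes, React.createElement comes from the platform-provided React library, and the class implements ComponentFramework.ReactControl<IInputs, IOutputs>:

    ```typescript
    // Sketch of a converted virtual control. Types are stubbed locally so the
    // example is self-contained; in a real PCF project they come from
    // ComponentFramework and ./generated/ManifestTypes.
    interface IInputs { sampleProperty?: string }
    interface ReactElementStub { type: string; props: Record<string, unknown> }

    // Stand-in for React.createElement: updateView now RETURNS an element
    // description for the host's virtual DOM instead of rendering into a
    // container owned by the control.
    const createElement = (
      type: string,
      props: Record<string, unknown>
    ): ReactElementStub => ({ type, props });

    class VirtualControl /* implements ComponentFramework.ReactControl<IInputs, IOutputs> */ {
      // Step 5: the signature returns a ReactElement rather than void
      public updateView(context: { parameters: IInputs }): ReactElementStub {
        // Step 6: no ReactDOM.render - just return the element
        return createElement("MyComponent", {
          value: context.parameters.sampleProperty ?? "",
        });
      }

      // Step 7: destroy no longer calls ReactDOM.unmountComponentAtNode
      public destroy(): void {
        /* the host unmounts for us */
      }
    }
    ```

    The key design point is that the control describes what to render and hands it back to the host, rather than owning its own DOM subtree.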

    That's it!

    It really is that simple. If you now use npm start watch you'll see your component rendered, but the bundle.js size will be smaller, and when you deploy, it'll be faster in apps that contain many components.

    Check out the official blog post about this feature for more info.

    Hope this helps!



  3. If you are using Fluent UI in your code components (PCF) you probably are also using path-based imports to reduce your bundle size. This technique ensures that when you build your code component, the bundle doesn't include the entire Fluent UI library, but instead just the components that you need. With the recent update to Fluent UI, you might receive an error similar to the following:

    ERROR in ./somefile.tsx 
    Module not found: Error: Package path ./lib/components/CommandBar is not exported from package C:\src\CommandBar\node_modules\@fluentui\react (see exports field in C:\demo4\CommandBar2\node_modules\@fluentui\react\package.json)

    This is probably caused by your paths pointing to a folder that is not included in the new explicit export paths that have been added to the Fluent UI react package.

    To ensure that you maintain compatibility with each update to the Fluent UI library, instead of using:

    import { CommandBar } from '@fluentui/react/lib/components/CommandBar';

    You should instead use:

    import { CommandBar } from '@fluentui/react/lib/CommandBar';

    See more information in the docs: Best practices for code components - Power Apps | Microsoft Docs

    That's all for now!


  4. One of the biggest causes of unexpected bugs in canvas apps is the delegation of queries. For instance, if you want to sort by the owner of an account, you can use the Power Fx query:

    Sort(Accounts,'Created By'.'Full Name', Ascending)

    You will get a delegation warning on this since the sorting will only happen in memory and not on the server. This means if you have the delegation limit set to 500, only the first 500 records will be sorted instead of sorting the entire dataset on the server-side. This may not show up whilst you are developing the app, but it might not work as expected in production once deployed.

    Perhaps more concerningly, if you are using data-shaping to add a column (AddColumns) that is then sorted on, the delegation warning will not even show up.

    When using PCF (code-components) inside canvas apps, a nice feature is that we have much more control over the paging/filtering/sorting/linking of Dataverse queries. This is part of the declarative vs imperative story (but that’s for another time).

    When binding a dataset to a Dataverse connector, we can use OData style operators to manipulate the query, rather than the Power Fx Sort and Filter functions and their associated delegation challenges.

        onSort = (name: string, desc: boolean): void => {
            const sorting = this.context.parameters.records.sorting;
            while (sorting.length > 0) {
                sorting.pop();
            }
            sorting.push({
                name: name,
                sortDirection: desc ? 1 : 0, // 1 = Descending, 0 = Ascending
            });
            this.context.parameters.records.refresh();
        };

        onFilter = (name: string, filter: boolean): void => {
            const filtering = this.context.parameters.records.filtering;
            if (filter) {
                filtering.setFilter({
                    conditions: [
                        {
                            attributeName: name,
                            conditionOperator: 12, // Does not contain Data
                        },
                    ],
                } as ComponentFramework.PropertyHelper.DataSetApi.FilterExpression);
            } else {
                filtering.clearFilter();
            }
        };
    This is awesome since we don’t need to worry about any delegation provided that the query can be translated into an OData query.

    But…watch out

    If you have a grid that performs a dynamic sort operation using each column name, and the user sorts on the account.createdby column which is of type Lookup.Simple - you might think that this would be a matter of using the column name:

            sorting.push({
                name: "createdby",
                sortDirection: desc ? 1 : 0,
            });

    After all, createdby is the column name that is given to us by Power Apps in the dataset metadata:

    {
      "name": "createdby",
      "displayName": "createdby",
      "order": 5,
      "dataType": "Lookup.Simple",
      "alias": "createdby",
      "visualSizeFactor": 1,
      "attributes": {
        "DisplayName": "Created By",
        "LogicalName": "createdby",
        "RequiredLevel": -1,
        "IsEditable": false,
        "Type": "Lookup.Simple",
        "DefaultValue": null
      }
    }

    Strangely, this does not cause any errors at run-time and the data is actually sorted, so it looks like it's working - but on closer examination, the query sent to the server sorts using $orderby=createdby.

    Seems legit? But the response is actually an exception:

    The $orderby expression must evaluate to a single value of primitive type.

    The reason is that the createdby logical name is not expected by the WebApi when sorting; instead, it expects the name _createdby_value.
    What appears to be happening is that after the query fails, canvas apps use a fallback approach of performing the sorting in-memory in a non-delegated fashion - but this is not reported in an obvious way. The only indicators are the network trace and the somewhat confusing errorMessage on the dataset object of Invalid array length.

    To get around this, we can't pass the column name that is used in our dataset - instead we must use the OData name expected:

            sorting.push({
                name: "_createdby_value",
                sortDirection: desc ? 1 : 0,
            });

    This seems slightly unusual until you remember that we are simply hitting the OData endpoint - bringing us back into the imperative world with a bit of a bump! Remember it won’t do all that fancy caching and performance optimizations that Power Fx does for you!
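    If your grid sorts dynamically on any column, a small helper can centralise this quirk. The function below is hypothetical (it is not part of the PCF API) and derives the OData sort name from the column's logical name and the dataType exposed in the dataset metadata shown earlier:

    ```typescript
    // Hypothetical helper: map a dataset column to the name the OData endpoint
    // expects in $orderby. Lookup columns must use the _<logicalname>_value form;
    // other columns can be sorted by their logical name directly.
    function getODataSortName(logicalName: string, dataType: string): string {
      return dataType.startsWith("Lookup") ? `_${logicalName}_value` : logicalName;
    }

    console.log(getODataSortName("createdby", "Lookup.Simple")); // _createdby_value
    console.log(getODataSortName("name", "SingleLine.Text"));    // name
    ```

    You could call this when building the sort expression so that lookup columns are always translated before the query is sent.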

    Hope this helps,

  5. At some point, over the last few months, a change was introduced to the Power Platform CLI such that if you have the ESLint VS Code add-in installed, after using pac pcf init, you may see an error in VS code:

    • 'ComponentFramework' is not defined.eslint(no-undef)

    This might look something like this in the index.ts file:

    The reason for this is that the pac pcf init template now includes an .eslintrc.json; however, it is configured to use JavaScript rules rather than TypeScript ones.

    To fix this you simply need to edit the .eslintrc.json file.

    Find the extends section and replace the standard ruleset with the TypeScript one, for example:

    "extends": [
        "plugin:@typescript-eslint/recommended"
    ],
    You also might see some other odd errors such as:

    • Unexpected tab character.eslint(no-tabs)
    • Mixed spaces and tabs.eslint(no-mixed-spaces-and-tabs)

    The reason for this is that the template used to create the index.ts file contains a mix of tabs and spaces for indentation. eslint is warning about this - so you can either add the following lines to the top of the file, or you can change the indentation to use spaces using the Format Document command.

    /* eslint-disable no-mixed-spaces-and-tabs */
    /* eslint-disable no-tabs */

    Hope this helps!

  6. You might have seen the announcement about Modernized Business Units in Microsoft Dataverse.

    I made a video on it as well to show you the opportunities that it opens up when designing Microsoft Dataverse security models for both model-driven and canvas apps.

    In summary, the change can be broken down into two parts:

    1. You can now assign a Security Role from a business unit outside of the user's own business unit - this allows users to access records in different business units as though they were a member of that business unit. This could replace more traditional approaches that might have previously involved sharing and/or team membership.
    2. Records can now have an owning business unit that is different from the business unit of the Owning Users/Team. This means that when users move between business units, there are potentially fewer scenarios where you need to re-assign ownership of those records, and the user can maintain access to their records without complex workarounds.

    Check out my video and the official docs for more info.

    Whilst I was exploring this new feature it occurred to me that this was perhaps the way that the Security Role assignment and Owning Business Unit was always meant to work from the start. Here are my reasons:

    1. The owningbusinessunit field has always been there in table definitions - but shrouded in mystery!
      1. It was automatically updated by the platform to match the business unit of the owning user or team.
      2. You couldn't add this field to forms.
      3. You couldn't always use it in view search criteria because it was set to being non-searchable for some entities - but enabled for others.
      4. There was always mystery surrounding this field since there were limitations to its use - but if you wrote a plugin, you could set it inside the plugin pipeline to a value different to the owning user/team's business unit - but with unknown consequences!
      5. Alex Shlega even wrote a blog post about this mysterious field a few years ago.
    2. Security Roles have always been business unit specific:
      1. If you have ever had to programmatically add a Security Role to a user - you'll have had to first find the specific Security Role for the user's Business Unit since each Security Role created is copied for every single business unit - with a unique id for each.
      2. When moving a user between Business Units, their Security Roles were removed, because they were specific to the old business unit (This is now changing with this new feature thankfully!)
      3. I can't be 100% certain - but I have some dim-and-distant memory that when using a beta version of CRM 4.0 or maybe CRM 2011, there was the option to select the business unit of a Security Role when editing it - as it is today, you can't do this and instead receive the message 'Inherited roles cannot be updated or modified':

        Now that would introduce some interesting scenarios where you could vary the privileges of the same role inside different business units.

    Maybe I dreamt that last point - but it certainly seems that whoever originally designed the data model for Business Units and Security Role assignment wanted to allow for users to have roles assigned from different business units - or at least supporting varying role privileges across different business units. Or maybe, it was a happy coincidence that the data model already supported this new feature!
    I wonder if there is anyone who worked on the original code who can comment!



  7. As you know, I'm 'super excited'* about the new Power Fx low-code Command Bar buttons (First Look Video) (Ribbon Workbench compared to Power Fx) - especially the ease with which you can update multiple records in a grid. Allowing the user to select a number of records in a grid and then perform an operation on each in turn would have taken plenty of pro-code TypeScript/JavaScript before, but now it can be turned into a simple ForAll expression.

    * That one's for you @laskewitz 😊

    The one thing that always gets left out - Error Handling!

    It's easy to assume that every operation will always succeed - but my motto is always "if it can fail, it will fail"!

    Just because we are writing low-code, doesn't mean we can ignore 'alternative flows'. With this in mind, let's add some error handling to our updates. 

    Step 1 - Add your Grid Button

    Using the Command Bar editor, add a button to an entity grid and use a formula similar to:

    With({updated:ForAll(Self.Selected.AllItems As ThisRecord, 
        If(Text(ThisRecord.'Credit Hold')="No", 
            Patch(Accounts, ThisRecord, { 'Credit Hold':'Credit Hold (Accounts)'.Yes });"Updated",
            "Skipped"))},
        With({updatedCount: CountRows(Filter(updated,Value="Updated")),
              skippedCount: CountRows(Filter(updated,Value="Skipped"))},
            Notify("Updated " & Text(updatedCount) & " record(s) [" & Text(skippedCount) & " skipped ]")))

    In this example code, we are updating the selected Account records and marking them as on 'Credit Hold' - but only if they are not already on hold. 

    Imagine if we had some logic that ran inside a plugin when updating accounts that performed checks and threw an error if the checks failed. This code would silently fail and the user would not know what had happened. To work around this we can use the IsError function and update the code accordingly:

    With({updated:ForAll(Self.Selected.AllItems As ThisRecord, 
        If(Text(ThisRecord.'Credit Hold')="No", 
            If(IsError(Patch(Accounts, ThisRecord, { 'Credit Hold':'Credit Hold (Accounts)'.Yes })),"Error","Updated"),
            "Skipped"))},
        With({updatedCount: CountRows(Filter(updated,Value="Updated")),
              errorCount: CountRows(Filter(updated,Value="Error")),
              skippedCount: CountRows(Filter(updated,Value="Skipped"))},
            Notify("Updated " & Text(updatedCount) & " record(s) [ " & Text(errorCount) & " error(s) " & Text(skippedCount) & " skipped ]")))

    Save and Publish your Command Button. This creates/updates a component library stored in the solution that contains your model-driven app.

    Step 2- Open the component library and enable 'Formula-level error management'

    Since we are using the IsError function we need to enable this feature inside the component library. This along with the IfError function can be used to check for errors when performing Patch operations.

    Inside your solution, edit the command bar Component Library (it will end with _DefaultCommandLibrary), then select Settings and toggle on the Formula-level error management feature.

    Make sure you save, then publish your component library.

    Step 3 - Re-open the Command Bar Editor and publish

    After editing the Component library, it seems to be necessary to always re-publish inside the command bar editor (You will need to make a small change to make the editor enable the Save and publish button). You will also need to refresh/reload your model-driven app to ensure the new command bar button is picked up.

    Done! You should now have error handling in your command bar buttons 😊

    Hope this helps,


    P.S. SUMMIT NA 2021 is next week! I can't believe it! I'll be speaking about custom pages and Power FX command buttons - if you are able, come and check out my session on Next Gen Commanding.


  8. If you have canvas apps that use code components then you will be used to the hard link between the namespace of the code component and the canvas apps that use it. Also, if you have your canvas apps in a solution, then there are now solution dependencies added for the code components used to ensure that they are installed before you import the solution to a target environment. You can read more about code component ALM in the Microsoft docs.

    How do we swap out components easily?

    Occasionally, you may need to change the namespace of a control (or perhaps change a property name) but it is very time-consuming to remove the control from all the apps, then re-add it after the update. This is especially painful if you have lots of Power Fx code referencing it.

    Power Apps CLI to the rescue

    I have long been an advocate of storing everything as code - and canvas apps are no exception here. The once experimental canvas app source code tool (Power Apps as source code) is now all grown-up and is included in the Power Platform CLI. 😊

    In this post, I am going to show you how to take a solution, unpack it, update the code components, and then re-pack so that they will use the new components. This is made possible largely by canvas apps now being unpacked into Power Fx.

    Step 1 - Export your solution

    Export your solution that contains the canvas apps - but don't include the code components. Let's imagine you have two apps that both use two code components.

    Step 2 - Unpack your solution

    Make sure you have the Power Platform CLI installed - I find it really easy to use the Power Platform Tools VSCode extension.

    Using PowerShell (assuming you are in the root folder that contains the solution zip):

    pac solution unpack --zipfile <YourSolution>.zip --folder Solution

    This will create a folder named Solution that contains all the unpacked elements. There will also be a folder named CanvasApps that contains the msapp files and the corresponding metadata for your apps.

    Step 3 - Unpack the canvas apps to Power Fx

    Now we can unpack our two apps into the Power Fx that we will need to edit. There are lots of files created in addition to the Power Fx source code that contain metadata about the elements used in the app.

    Using PowerShell (assuming you are in the root folder that contains the solution zip):

    Get-ChildItem -Recurse -Force -Path $dir | Where-Object { $_.Extension -eq '.msapp' } | ForEach-Object { 
        pac canvas unpack --msapp $_.FullName --sources "CanvasAppsSrc\$($_.BaseName)"
    }

    You will now have your Power Fx source for each of your apps under the CanvasAppsSrc folder!

    There is an entropy folder that is created for round-trip rebuilding, but we can safely delete these folders using PowerShell:

    Get-ChildItem -Recurse -Directory -Path $dir | Where-Object { $_.BaseName -eq 'Entropy' } | ForEach-Object { 
        Write-Host "Removing Entropy Folder $($_.FullName)"
        Remove-Item $_.FullName -Force -Recurse
    }
    Step 4 - Search/Replace

    This step is the more tricky part - we need to change all the references to the code component's old namespace/solution prefix and replace it with the new one. This is easier if you have names that are unique!

    Open up the root folder in VSCode and use the global search and replace function (with case sensitivity turned on) to preview what you are doing. You will find the replacements needed in json, yaml, and xml files.

    Step 5 - Rename files

    When changing the namespace/publisher prefix of your code components, it is also necessary to rename the resource files that are part of your code components since each file is prefixed with the namespace. Additionally, the app resource file names are prefixed with the solution publisher.

    You can use the PowerShell:

    $oldnamespace = "Samples"
    $newnamespace = "MyNamespace"
    $oldpublisher = "samples_"
    $newpublisher = "myprefix_"
    Get-ChildItem -Recurse -Force -Path $dir | Where-Object { $_.Name.StartsWith($oldnamespace) } | ForEach-Object {
        Rename-Item -LiteralPath $_.FullName $_.FullName.Replace($oldnamespace,$newnamespace)
    }
    Get-ChildItem -Recurse -Force -Path $dir | Where-Object { $_.Name.StartsWith($oldpublisher) } | ForEach-Object {
        Rename-Item -LiteralPath $_.FullName $_.FullName.Replace($oldpublisher,$newpublisher)
    }

    Step 6 - Repacking

    You can now re-pack the apps and the solution!

    Get-ChildItem -Recurse -Force -Path $dir | Where-Object { $_.Extension -eq '.msapp' } | ForEach-Object { 
        pac canvas pack --msapp $_.FullName --sources "CanvasAppsSrc\$($_.BaseName)"
    }
    pac solution pack --zipfile <YourSolution>.zip --folder Solution

    Step 7 - Update code components & Import solution

    Ensure that you have published the new versions of your re-built code components to your environment, making sure to increment the control versions. It is very important to increment the version of the code components so that the canvas apps will detect the new version upon opening after the update. This will force the update to the new code-component resource files.

    You can now import the solution into your environment.

    When you open each canvas app you will get the upgrade component prompt and your app will be using the new code components under the updated namespace!

    If you updated the code-component resource files then theoretically you could perform the same on a managed solution and remove the need to open and republish each app, but I've not tried this!

    Hope this helps!



  9. If you are developing code components (PCF) for canvas apps you'll be used to using the 'Get more components' panel. When adding the code-component to the canvas app, occasionally you will receive the somewhat unhelpful error:

    Couldn't import components

    There are no more details provided in the expanded down arrow area.

    I'm putting this here for my future self (who always seems to forget about this issue) and anyone else who comes across this error.

    The cause is usually because you have a property-set that has the same name as an in-built property or one of your own (e.g. Enabled/Selected).

    The resolution is simply to prefix your property-set names with the name of the data-set:

    <data-set name="Items"... >
    <property-set name="ItemsSelected" .. />
    <property-set name="ItemsEnabled" .. />
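    The rule above can be expressed as a tiny sanity check - hypothetical code, purely to make the convention concrete: every property-set name should start with its data-set's name.

    ```typescript
    // Hypothetical sanity check for the naming convention above: property-set
    // names prefixed with the data-set name cannot clash with built-in property
    // names such as Enabled or Selected.
    function propertySetNamesAreSafe(dataSetName: string, propertySetNames: string[]): boolean {
      return propertySetNames.every((n) => n.startsWith(dataSetName));
    }

    console.log(propertySetNamesAreSafe("Items", ["ItemsSelected", "ItemsEnabled"])); // true
    console.log(propertySetNamesAreSafe("Items", ["Selected", "Enabled"]));           // false
    ```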

    Hope this helps!



  10. One of the longest-standing mottos of the Power Platform has been ‘no cliffs’. This somewhat odd phrase has definitely stood the test of time due to its simplicity and its powerful message. That is - you shouldn’t ever find yourself developing solutions using the Power Platform’s ‘low-code’ features and suddenly find that you have hit an impassable ‘cliff’. Instead, there are multiple avenues forwards using various degrees of ‘pro-code’ features. I think the message also hints at being able to start as a small app and grow into a larger enterprise one without hitting cliffs – although this story is still unfolding.

    In parallel to the Power Platform ‘no-cliffs’ story is the saga of the 1st party Dynamics 365 apps that sit on top of the Dataverse and Power Platform (tale of three realms). Originally, Dynamics 365 was the platform, but since the two have mostly separated, they are on somewhat separate development cycles. That said, of course, many features are built by the Power Platform team for a specific need of the Sales/Service/Marketing apps – or at least, they use the features before they are publicly available. This creates a rather nice testing ground and promotes a component-based architecture – rather than the original monolithic platform approach of the Dynamics CRM days.

    Black-box user interfaces

    But here’s the thing. With each release of the Dynamics 365 apps come user interface features that are beautiful but, alas, painfully out of reach of pure Power Platform apps unless you want to do some awesome pro-coding!

    There have been some amazing steps made recently that make it much easier to create beautiful low-code model-driven app user interfaces:

    1. App convergence - custom pages
    2. Power Fx commanding
    3. Model-driven side panels
    4. Fluent UI controls in canvas apps
    5. PCF code components in canvas apps

    These awesome features allow us to create beautiful low-code user interfaces and then extend them using pro-code components. Using PCF code components inside Custom Pages makes it possible to create some really complex user interfaces using libraries like React and Fluent UI – but it’s certainly not low-code!


    Take the new Deal Manager in 2021 Wave 2 of the Dynamics 365 Sales App. It has a rather juicy-looking user interface. Underneath all that beauty is some awesome productivity functionality such as opportunity scoring & custom metrics.

    My point? I would love it if the platform allowed us to build low-code user interfaces with ease and efficiency that look just like this – or at least similar. If you have the appetite and the willingness to build/support custom user interfaces, then the components used by the 1st party apps should be there to use in custom apps instead of having to revert to pro-code. The primary reasons to buy licenses for the 1st party apps should be the functionality and features that they provide. The user interface should be provided by the platform. Furthermore, if we wanted to customise the 1st party user interface, it should be easily extendable. The Deal Manager is currently one monolithic closed Power Apps Component Framework control that has very limited customisability.

    I would love for the ethos of the 1st party apps to be about delivering as much functionality as possible using low-code features rather than reverting to pro-code – this would benefit both the Platform and its customers.

    Extendable using the low-code platform

    I hope in future releases there will be more focus on component-based user interfaces where screens like the 1st party Deal Manager screens are actually composite screens built using mostly standard components that are provided by the platform – with only very exceptional 1st party specific user interfaces being inside PCF components.

    This would make these screens editable/extendable by customers if desired instead of the black-box that they mostly are today. If a completely different user interface is required that looks similar, then the same components should be able to be added using low-code to glue them together.

    This is needed so that we don’t return to the days when it was very common for ‘out of the box’ features to be black-boxes and not usable or extendable by customizers.

    Starting from Primitives is hard to maintain

    Canvas Apps often get referred to as ‘pixel perfect’ apps. You can certainly build beautiful user interfaces using primitives such as lines, rectangles, galleries, and buttons. But this comes at a cost. The Dataverse for teams sample apps are visually stunning – but if you look at how they are written you will see what I mean. Starting from primitives in this way to build apps that look like model-driven apps is very complex and hard to maintain. The trouble is that it starts to undermine the benefits of a low-code platform when you have apps of this level of complexity. When we build business apps, we need components that hide us from most of the complexity and leave us to write the functionality needed. This is what app convergence is really about – being able to have the best of both worlds - model-driven and canvas apps:

    1. Build complex metadata-driven user interfaces quickly that are consistent and easy to maintain
    2. Create custom layouts using drag-and-drop components with Power Fx code to glue functionality together.

    So, at what point can we say that the apps have converged?

    I don’t think converged apps mean the end of stand-alone canvas apps – there is always going to be a time and place for ‘pixel-perfect’ low-code apps that have a very custom look and feel. Instead, let’s hope we see canvas components such as forms, editable grids, command bars, tab controls & model-driven charts that can be glued together with Power Fx to create beautiful custom composed pages, so we can focus on building business features rather than having to hand-craft every grid, command bar and form. This is not a new idea - check out my app convergence predictions from back in 2019 (Why do we have 2 types of apps) where I describe my idea of being able to ‘unlock’ a model-driven page so that it turns into a canvas page, composed with all the model-driven app components – now wouldn’t that be cool!

     UG Summit NA

    Those nice people at Dynamic Communities have asked me to be a Community Ambassador for SUMMIT NA 2021. This is going to be an exciting in-person event (fingers crossed) where I'll be speaking about custom pages and Power FX command buttons - it would be great to see you there if you are at all able to travel to Houston, Texas. I will be talking more about app convergence and doing some demos of where we are today - App Convergence is Almost Here with Customer Pages in Model-Driven Apps.

    If you are in North America and feel comfortable with an in-person event, you can register using the promo code Durow10 to get a 10% discount if you've not already registered!






  11. Until recently I didn’t consider attending in-person events at all. I saw calls for speakers for live events - but I just moved on. Sound familiar? I have been avoiding the idea of in-person events altogether!

    Thanks to the nudge from a few folks, I realised that now is the time to re-boot – and suddenly I realised how much I’ve missed attending this kind of in-person event. Somehow, I’d convinced myself that virtual was all we will ever need. Back in 2020, I was booked to attend Community Summit in Europe just before the world closed down, and like many of you, was left out of pocket when the event was cancelled. This naturally left a bit of a sour taste, and in-person events became tainted by the scramble that followed. Of course, I presented at the virtual UG summit, but it really didn’t feel that different from any of the other user group virtual meetings that I regularly attend. 

    Was it the end?

    Was this the end of the Dynamics Community UG events as we knew them? Here is a photograph from the archives - a UG meeting back in 2011 held at the Microsoft Reading UK Offices. Can you spot a very fresh-faced me? Can you spot yourself?! Anyone else you know?

     Crowd of people at 2011 CRMUG meeting - Scott is looking fresh-faced and young!

    Here is a picture of Simon Whitson with me, trying to re-create the updated UG logo!

     Scott and Simon are chest bumping to look like the Dynamics Community logo that is two arcs with a circle.

    Looking back at past community summits has reminded me how important these events are to being part of the community, connecting, learning and sharing together in person - something that I now realise I had forgotten about amongst the focus on running virtual events.

    ⏩Fast-forward to today, and I’m so excited to be attending the Community Summit NA on October 12-15 that is being held in Houston, Texas. Having been part of community summits in Europe and helping run the UK community days for so many years, these types of events have played a huge part in my professional development.

    I moved to Canada last year, and so now it’s my opportunity to attend the North American version of this yearly community event. I think it’s also time to start planning some face-to-face meetings with the Vancouver Power Apps User Group.

    Mileage may vary 

    I realise that there are some people that are still not ready (or even able) to attend in-person meetings, and I really don’t want to offend anyone. I especially feel for those that are entering into yet another lock-down. If this is you, then please accept my sincerest apologies for even talking about in-person events. I think the success and benefits we have enjoyed from virtual events during these difficult months will mean that they will continue to be successful alongside in-person ones.

    So what can we learn? πŸ€”

    Community events should have a true sense of ‘belonging’, where we can all come together, connect, and learn from one another. We have learnt so much over these past months about how to be more inclusive and more accessible during events, and I’m looking forward to applying this experience to make in-person community events even better! Here are a few topics that I’ve started to think about:

    • Medic Stations – how can we make these more approachable and less intimidating? Not everyone wants to engage in this way – how can we offer alternatives so you can get your questions answered without having to compete with others and wait in line?
    • Breakout sessions – are these as accessible as they could be? Is the language always inclusive? Q&A time can sometimes be such a rush at the end, especially where there is another session starting afterwards. Is the session description an accurate description of the content so that attendees can easily identify if it’s for them or not?
    • Expo halls – these can be crazy loud, busy and sometimes intimidating. How can it be made easier to navigate the maze to find what you want? Is there a way to find a time that is quieter so that everyone can get the most out of what is on offer?
    • R & R – When we are at home, we usually can find space to relax and recharge. Conversely, in-person can be so exhausting with moving from room to room whilst our brains are being crammed full of new information, not to mention the social anxiety that you might feel. How can breaks be made to count more? Are refreshments provided to suit everyone’s needs?
    • Hybrid Sessions  - Should all sessions be streamed for those who are unable to make the physical event - or is it sufficient to have the sessions recorded and then available on-demand later?

    Perhaps you have more ideas – I would love to hear them - do get in touch and let me know!

    See you there?

    Those nice people at Dynamic Communities have asked me to be a Community Ambassador for SUMMIT NA 2021. This is going to be an exciting in-person event (fingers crossed) where I'll be speaking about custom pages and Power FX command buttons - it would be great to see you there if you are at all able to travel to Houston, Texas. Here are my sessions:

    If you are in North America, do you feel comfortable yet with an in-person event? You can also register using the Promo code Durow10 to get a 10% discount if you've not already registered!

    The most important part is - they are offering a 100% money-back guarantee should the event be cancelled due to COVID-19, and naturally there is a significant focus on Health and Safety.

    Looking forward to meeting IRL 😊


  12. As a follow-on to my last post on adding custom page dialogs to model-driven forms, in this post, I'm going to show you how you can easily do the same using the Next Gen Commanding Buttons.

    Ribbon Workbench Smart Buttons have two parts:

    • Smart Button Manifest - the manifest file included in the smart button solution that defines the templates that are picked up by the Ribbon Workbench
    • JavaScript library - the actual run-time JavaScript library that is called by the smart buttons themselves

    Since the JavaScript library is just a normal web resource, it can also be called from a Commanding V2 button, because you can use JavaScript actions as well as Power Fx actions!

    1. Ensure you have the latest version of the smart button solution installed and add a custom page to your model-driven app as described by my last post.

    2. Edit the command bar using the new command bar editor.

    3. Add a new button to the command bar and select Run JavaScript instead of Run formula for the Action parameter.

    4. Use the + Add Library button. Search for smartbuttons and select the dev1_/js/SmartButtons.ClientHooks.js library.

    5. Set the function parameter to be SmartButtons.ClientHooks.SmartButtons.OpenDialog

    6. Add the following parameters:

    Parameter 1 (On a Grid): SelectedControlSelectedItemIds

    Parameter 1 (On a Form): PrimaryItemIds

    Parameter 2: The unique name of your custom page

    Parameter 3: The width of your dialog (or zero for a sidebar)

    Parameter 4: The height of your dialog (or zero for a sidebar)

    Parameter 5: The title of your dialog

    Parameter 6: PrimaryControl

    Parameter 7: SelectedControl

    Parameter 1 is a dynamic property that passes the id of the currently selected record - Parameters 6 & 7 give the code a context to use when calling the client-side API.
    Once this is done you will see something like the following:

    Commanding V2 designer with show dialog smart button

    7. If you are adding a button to a grid, you will also need to set Visibility to Show on condition from formula with an expression such as:

    CountRows(Self.Selected.AllItems)=1

    This will ensure that the button is only shown when a single record is selected in the grid.

    8. Save and Publish

    ...and that's all there is to it! Using Smart Buttons in the Ribbon Workbench has the advantage that it will set up much of this for you and only ask you for the parameters needed, but the new commanding designer is so easy to use it makes using the Smart Button library really straightforward. 
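For those curious what happens under the hood: opening a custom page as a dialog is done via the documented client API Xrm.Navigation.navigateTo. The following is only a simplified sketch of how the smart button parameters above might map onto that call - the helper name and its exact shape are my assumptions, not the actual SmartButtons source:

```javascript
// Hypothetical sketch only: shows how the button parameters could map onto the
// documented Xrm.Navigation.navigateTo custom page input and navigation options.
function buildNavigateToArgs(recordId, pageName, width, height, title) {
  const pageInput = {
    pageType: "custom",  // tells navigateTo to open a custom page
    name: pageName,      // the unique name of the custom page
    recordId: recordId,  // surfaced to the page via Param("recordId")
  };
  const sidebar = !width || !height; // zero width/height => open as a sidebar
  const navigationOptions = {
    target: 2,                 // 2 = open as a dialog rather than inline
    position: sidebar ? 2 : 1, // 1 = centered dialog, 2 = side pane
    width: sidebar ? undefined : { value: width, unit: "px" },
    height: sidebar ? undefined : { value: height, unit: "px" },
    title: title,
  };
  return { pageInput, navigationOptions };
}

// Inside the model-driven app this would then be invoked as something like:
//   const args = buildNavigateToArgs(ids[0], "dev1_mypage", 470, 350, "Credit Check");
//   Xrm.Navigation.navigateTo(args.pageInput, args.navigationOptions);
```

This is also why a width/height of zero gives a sidebar: the dialog and side-pane variants are just different position values on the same navigateTo call.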

    P.S. There is a bug that will be fixed by Microsoft in the coming weeks where commanding v2 JavaScript buttons do not show up correctly on forms.

    See more at community Summit NA!

    Those nice people at Dynamic Communities have asked me to be a Community Ambassador for SUMMIT NA 2021. This is going to be an exciting in-person event (fingers crossed) where I'll be speaking about custom pages and Power FX command buttons - it would be great to see you there if you are at all able to travel to Houston, Texas. Can't wait to show you all the cool new features. You can also register using the Promo code Durow10 to get a 10% discount if you've not already registered!

    @ScottDurow 😊

  13. Now that custom pages are released (in preview), we are one step closer to the convergence towards a single app type that has the best of model-driven apps and canvas apps.

    Previously, I had released a Ribbon Workbench smart button that allowed opening a canvas app as a dialog via a command bar button. With the latest release of the smart buttons solution you can add a button to open a custom page as a dialog box or sidebar. This creates a really native feel to the dialog since it's included inside the page rather than in an embedded IFRAME, and the good news is that it's really easy to upgrade from the previous Canvas App dialog smart button!

    Demo of Custom Page Dialog

    Step 1: Add a custom page to your solution

    Open the Power Apps maker portal, and open the solution that contains your model-driven app.

    Inside the solution, select  + New -> App -> Page. 

    Add Page

    The page editor will open, which is essentially a single Screen canvas app. In this example, I create a dialog to update the Credit Hold flag on a record and add some notes to the Description. In order to do this, we need to get a reference to the record that the dialog is being run on. Inside the App OnStart event, add the following code:

    Set(varRecordId, If(
        IsBlank(Param("recordId")),
        // Hard-coded record id used only when testing inside the designer
        GUID("00000000-0000-0000-0000-000000000000"),
        GUID(Param("recordId"))
    ));
    Set(varSelectedRecord, LookUp(Accounts, Account = varRecordId))

    Notice that there is a hard-coded GUID there - this is simply for testing purposes when running inside the designer. You can replace it with the GUID of a record in your dev environment - or you could use First(Account) to get a test record. When the dialog is opened inside the app, the recordid parameter will contain the GUID of the current record.

    The size of the screen needs to be adjusted to accommodate the borders of the dialog - so edit the Screen Width and Height properties to be:

    Height: Max(App.Height, App.MinScreenHeight)-24
    Width: Max(App.Width, App.MinScreenWidth)-10

    Now we can add a root container with a width and height set to Parent.Width & Parent.Height - this will result in a responsive layout. You can then add child layout containers that hold the dialog controls. The layout might look like the following:

    Custom Page designer

    Notice the nested horizontal and vertical layout containers which work great for creating responsive layouts. This is especially important because we want our dialog to work both as a popup modal dialog as well as a sidebar dialog. The sidebar dialog will fill the available height and so our dialog content should also expand to fill the available height. 

    We can display the name of the selected account by using the variable we set in the App OnStart, setting the Text of a label to the expression:

    Concatenate("Are you sure you want to submit the account '",varSelectedRecord.'Account Name',"' for credit check?")

    The Cancel button can close the dialog using:

    Back()
    Note: This is slightly different from a canvas app smart button that would call Exit().

    The Confirm button can run the expression:

    Patch(Accounts,varSelectedRecord,{'Credit Hold':'Credit Hold (Accounts)'.Yes,Description:txtNotes.Value});
    Back();

    This will update the Credit Hold and Description columns for the selected record and then close the dialog.

    You can download my example dialog from here - 

    When you Save and Publish your custom page, it will be given a unique name that we will use when creating the smart button:

    Custom Page Unique Name

    Unfortunately, you can't copy this unique name from the solution editor, but in the next step once it is added to the app designer it can be selected and copied!

    Step 2: Add a custom page to the app

    The custom page preview allows you to add the custom page to the app in the model-driven app navigation, but we can also add it to the app without it being visible. This is required to enable opening the custom page as a dialog.

    Open your model-driven app in the preview editor (Open in preview) and select Pages -> Add Page -> Custom (preview) -> Next -> Use an existing Page

    Select the page you created in step 1. Uncheck Show in navigation, and then click Add.

    You can now easily copy the unique name of the custom page that you'll need in the next step when adding the smart button.

    Unique Name in App Designer

    You now need to Save and Publish the app.

    Note: You will need to Save and Publish each time you make a change to your custom page.

    Step 3: Install the latest smart button solution

    You will need the latest smart buttons solution –

    Step 4: Add dialog smart button

    When you open the Ribbon Workbench for the environment that the Smart Button solution and Canvas App is installed into, you can then drop the ‘Open Dialog’ button on either a Form, SubGrid, or Home Grid.

    Set the smart button properties to be:

    Title: The text to display on the button
    Dialog Url/Custom Page Unique name: The unique name copied from the app designer. E.g. dev1_pageaccountsubmitforcreditcheck_e748f
    Width: 470
    Height: 350 (or zero to show the dialog as a sidebar)
    Dialog Title: The name to show at the top of the dialog. E.g. Credit Check 

    Now you just need to save and publish and that's it!

    Note: You might need to enable Wave 2 2021 depending on which release your environment is on. I have seen some environments not work correctly when using custom pages due to the recordId parameter not being correctly passed to the custom page.

    Migrating from canvas app dialog smart buttons

    If you have been using the canvas app dialog smart button approach, then you can very easily migrate to this custom page technique by performing the following:

    1. Create a custom page as described above, but copy and paste the screen contents from your existing canvas app. It's cool that you can copy and paste controls between custom pages and canvas apps!
    2. Update the layout to use the new responsive containers.
    3. Add the custom page to your model-driven app.
    4. Update the Open Dialog smart button with the unique name of the custom page instead of the canvas url.

    Remember that this feature is still in preview and does not work inside the native mobile/tablet apps at this time. You can read more about how this smart button works in the docs topic: Navigating to and from a custom page using client API (preview).

    In my next post, I'll show you how to do this using the Commanding V2 designer rather than the Ribbon Workbench!

    See more at community Summit NA!

    Those nice people at Dynamic Communities have asked me to be a Community Ambassador for SUMMIT NA 2021. This is going to be an exciting in-person event (fingers crossed) where I'll be speaking about custom pages and Power FX command buttons - it would be great to see you there if you are at all able to travel to Houston, Texas. Can't wait to show you all the cool new features. You can also register using the Promo code Durow10 to get a 10% discount if you've not already registered!

  14. Power Fx command bar buttons (Commanding V2) is the latest exciting feature to be released into preview by the Power Platform team! Check out Casey's blog post and my first look video where I show how amazingly easy it is to add complex functionality to your model-driven command bars!

    The Ribbon Workbench marked its 10-year anniversary this year and so it's fitting that the new Power Fx command buttons for model-driven apps have been released. This exciting new feature is part of the continued journey of converging the goodness of both model-driven apps and canvas apps into a single app that gives the best of both worlds! In this post, I'll identify some of the differences in functionality. This initial release provides the foundation for Power Fx and as you'll see there are still gaps - but I am confident that the functionality will be developed over the coming months.

    Key Design differences

    The Ribbon Workbench (and the underlying RibbonXml that supports it) has many legacy components that are baggage from the days when there was a Ribbon rather than a command bar. Things like Groups, Tabs & Templates have no meaning in the command bar as we see it today. For this reason, the new Power Fx command buttons have greatly simplified the model for customizing the model-driven app command bar.

    Here are some of the key differences in the design:

    • Buttons, Commands & Visibility Rules are linked - In the Ribbon Workbench, you would create a button and then associate it with a command. With Power Fx commands, the button, command, and visibility rules are all linked together as a single unit.
    • Localized Labels are part of the solution translations - In the Ribbon Workbench, button label translations were part of the RibbonXml, whereas with Power Fx commands you can use the standard export/import translations feature for the solution to provide translations.
    • Customizations are deployed via separate solution components - In the Ribbon Workbench, your command buttons were deployed via the entity/table solution component. With Power Fx commands, you add the Component Library to your solution to deploy command buttons. At this time, the Component Library must be shared with users separately from the model-driven app.
    • No need for a solution import - Since the Power Fx commands are deployed using Component Libraries, there is no need for the lengthy export/unpack/update/rezip/import cycle that happens when you publish from inside the Ribbon Workbench. This makes working with the Power FX Command buttons much quicker!
    • Power Fx solution packager required to see command details - When exporting the solution that contains the Command Component Libraries, the expressions are inside the .msapp files. To see the details, you will need to use the new Power Fx Solution Packager functionality to extract into yaml files and add this to source control. The great news is that canvas app unpacking/packing is now included in the PAC CLI.
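As a sketch of that last point, the unpack/pack round-trip looks something like the following from the PAC CLI (the file and folder names here are placeholders):

```shell
# Extract the command expressions from the .msapp into source files
# (including the Power Fx as yaml) so they can be added to source control
pac canvas unpack --msapp CommandLibrary.msapp --sources src/CommandLibrary

# ...review or edit the extracted sources, then re-pack:
pac canvas pack --msapp CommandLibrary.msapp --sources src/CommandLibrary
```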

    You can still use JavaScript Commands!

    Possibly one of the most important features of the new commanding feature is that you can still call your existing JavaScript for commands (but not Enable rules at this time). Why is this important? Because it makes the path to migrate to Version 2 commands easier where the functionality is not yet possible in Power Fx expressions.
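To illustrate, an existing web resource command handler of the familiar shape can be wired up to a V2 button unchanged. The function and attribute below are illustrative (creditonhold is the account table's Credit Hold column), not taken from any particular solution:

```javascript
// Illustrative only: a typical command handler that receives the PrimaryControl
// parameter (the form context), exactly as a V1 Ribbon Workbench command would.
function setCreditHold(primaryControl) {
  const formContext = primaryControl;
  // Standard client API calls: set the Credit Hold column and save the form
  formContext.getAttribute("creditonhold").setValue(true);
  formContext.data.entity.save();
}
```

The same PrimaryControl parameter is passed whether the button is a V1 command or a Commanding V2 JavaScript action, which is what makes gradual migration possible.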

    Common Requirements

    The following table shows common requirements that I find needed when customizing the command bar using the Ribbon Workbench. You'll see that there are still gaps that will require the Ribbon Workbench for the time being - but these will be addressed over time.

    • Hide an existing OOTB button
      Ribbon Workbench: Hide Action
      Commanding V2: Not yet available

    • Move an existing OOTB button
      Ribbon Workbench: Customize the button and drag it to the new location
      Commanding V2: Not yet available

    • Change the label/icon of an existing OOTB button
      Ribbon Workbench: Customize the button and edit its properties
      Commanding V2: Not yet available

    • Change the command of an existing OOTB button
      Ribbon Workbench: Customize the command and edit its actions
      Commanding V2: Not yet available

    • Pass CommandValueId to the JavaScript context when the same command is used on multiple buttons
      Ribbon Workbench: Set the CommandValueId property
      Commanding V2: Not applicable, since the command is not separate from the button

    • Update a form value and then save the record
      Ribbon Workbench: 'QuickJS' Smart Button or custom JavaScript. The PrimaryControl parameter provides the event context, which can be used to access the form context.
      Commanding V2: Patch(Accounts,Self.Selected.Item,{'Credit Hold':'Credit Hold (Accounts)'.Yes});
      Note: The form is automatically saved and refreshed!

    • Update/create a related record
      Ribbon Workbench: 'QuickJS' Smart Button or custom JavaScript that uses the WebApi and then calls refresh on the formContext provided by a PrimaryControl parameter.
      Commanding V2: Use Patch to update or create the related record (an additional data source must be added to the component library).
      Note: The form is automatically refreshed!

    • Add buttons to a flyout button
      Ribbon Workbench: Use the Flyout or SplitButton toolbox control with a MenuSection
      Commanding V2: Not yet available

    • Dynamically populate a flyout button (e.g. from a WebApi call)
      Ribbon Workbench: Use the PopulateQueryCommand with a custom JavaScript action
      Commanding V2: Not yet available

    • Add buttons to the Application Ribbon so that they appear in multiple locations (including the global command bar)
      Ribbon Workbench: Add the Application Ribbon to the solution loaded into the Ribbon Workbench. The entity type can be used in an EntityRule to show buttons for multiple entities.
      Commanding V2: Not yet available

    • Run a command on multiple selected records on a grid
      Ribbon Workbench: Use a custom JavaScript command that accepts SelectedControlSelectedItemIds as a parameter, and then iterate over the array, performing an action for each record.
      Commanding V2: New! To apply an update to multiple selected records, use something similar to:
      ForAll(Self.Selected.AllItems As ThisRecord, Patch(Accounts, ThisRecord, { 'Credit Hold':'Credit Hold (Accounts)'.Yes }));

    • Display a blocking wait spinner whilst a long-running task is in progress
      Ribbon Workbench: Use showProgressIndicator inside custom JavaScript
      Commanding V2: Not yet available in Power Fx command expressions

    • Run a Workflow on the current record
      Ribbon Workbench: 'Run Workflow' Smart Button or custom JavaScript
      Commanding V2: Trigger a workflow on change of a form field

    • Run a Report on the current record
      Ribbon Workbench: 'Run Report' Smart Button or custom JavaScript
      Commanding V2: Use a custom JavaScript function

    • Run a flow on the current record
      Ribbon Workbench: 'Run Webhook' Smart Button or custom JavaScript
      Commanding V2: Use a custom JavaScript function - but watch this space!

    • Open a dialog from a button
      Ribbon Workbench: 'Open dialog' Smart Button linked to a Canvas App
      Commanding V2: Use a custom JavaScript function - but watch this space!

    Visibility Rules

    Perhaps the biggest gap in functionality at this time is in the area of visibility rules:

    • Show a button only for a specific form
      Ribbon Workbench: Use a custom JavaScript EnableRule or add the RibbonXml to the FormXml
      Commanding V2: Not yet available

    • Show a button only for a specific app
      Ribbon Workbench: Use a custom JavaScript EnableRule that returns true for specific app unique names
      Commanding V2: Commands are added to specific apps. One button cannot be shared between apps at this time.

    • Show a button only for the Web or Outlook client
      Ribbon Workbench: Use the CrmClientTypeRule
      Commanding V2: Not yet available

    • Show a button only when online/offline
      Ribbon Workbench: Use the CrmOfflineAccessStateRule
      Commanding V2: Not yet available

    • Show a button based on the user's security privileges
      Ribbon Workbench: Use the RecordPrivilegeRule or MiscellaneousPrivilegeRule
      Commanding V2: Not yet available

    • Show a button based on certain entity metadata (e.g. IsActivity)
      Ribbon Workbench: Use the EntityPropertyRule in a DisplayRule
      Commanding V2: Not yet available

    • Show a button only for existing or read-only records
      Ribbon Workbench: Use the FormStateRule in a DisplayRule
      Commanding V2: Not yet available

    • Show a button only when a single record is selected in the grid
      Ribbon Workbench: Use the SelectionCountRule inside an EnableRule
      Commanding V2: Visibility expression: CountRows(Self.Selected.AllItems)=1
      I prefer the CountRows version because it's more consistent with other situations, such as counting related records below.

    • Show a button based on a form field value
      Ribbon Workbench: Use a ValueRule inside an EnableRule. refreshRibbon must be called inside the onchange event of the form field.
      Commanding V2: Visibility expression: Self.Selected.Item.'Credit Hold'='Credit Hold (Accounts)'.Yes
      NOTE: refreshRibbon still must be called if you want the button to show/hide when the field is changed.
      Currently, there is an issue when using optionsets/status reasons like this where you will need to cast to a string and compare using:
      Text(Self.Selected.Item.'Credit Hold')="Yes"

    • Show a button only when a related record column has a specific value
      Ribbon Workbench: Use a custom JavaScript EnableRule that performs a WebApi query
      Commanding V2: Self.Selected.Item.'Parent Account'.'Credit Hold'='Credit Hold (Accounts)'.Yes

    • Show a button when a form value matches a complex expression
      Ribbon Workbench: Use a custom JavaScript EnableRule that performs a WebApi query or uses the provided formContext
      Commanding V2: StartsWith(Self.Selected.Item.'Account Name',"a")

    • Show a button when there are a specific number of related records matching a query
      Ribbon Workbench: Use a custom JavaScript EnableRule that performs a WebApi query
      Commanding V2: CountRows(Self.Selected.Item.Contacts)>0
      Note: This does not seem to work consistently at this time and gives a delegation warning.


    I will come back to this page and update it as new features are unlocked. You can also read more in the official documentation. As you'll see from the tables above, there are some gaps (especially with Enable/Display rules) but I have no doubt that they will be filled 'in the fullness of time'. The ease at which you can create complex Power Fx expressions to perform logic that would have previously required some complex JavaScript is very exciting and will unlock many scenarios that were previously off-limits to low-code app makers.


  15. Power Fx command bar buttons in model-driven apps is the latest exciting feature to be released into preview by the Power Platform team! Check out my first look video and Casey’s blog post.

    This post shows you the steps to follow to add a command bar button on a model-driven form to create a related task for the account record and to only show this when the Credit Hold flag is set to Yes. This would normally require custom JavaScript and the Ribbon Workbench but now can be accomplished with a simple expression!

    1. Open the new Model Driven App editor

    First, we must open the new model-driven app editor to access the command bar editor.

    1. Create a new model-driven app and add the account table.
    2. Open the solution that contains the model-driven app.
    3. Using the context menu on the model-driven app, select Edit -> Edit in preview
    4. This will open the new app designer preview. Eventually, this will be the default experience.

    Open Preview Editor

    2. Edit the command bar

    Once the app designer has opened we can edit the command bar on the account table. We will create a form button to create a new task for the selected account.

    1. Inside the Pages panel, select the Account Page context menu -> Edit command bar (preview).
    2. Select the Main form command bar to edit.
    3. The command bar editor will open.

    Edit Command Bar

    3. Add Power Fx Command Button

    The command bar editor will show all the buttons configured for the account main form. Some of these buttons will not be visible by default but are still displayed in the editor. This is very much like the Ribbon Workbench. The existing buttons are considered V1 buttons and cannot be edited at this time.

    1. Select New command.
    2. In the Command properties panel on the right, set the Label and Icon of the button.

    Note: You can also upload your own SVG rather than selecting from the out-of-the-box icons available.

    Add Command

    4. Set Visibility Expression

    This is where Power Fx starts to make an appearance!

    1. In the Visibility section, select Shown on condition from formula at the bottom (you may need to scroll down).
    2. Notice the Expression drop-down now shows Visible rather than OnSelect.
    3. Enter the expression:
      Self.Selected.Item.'Credit Hold'='Credit Hold (Accounts)'.Yes

      You can also use navigation properties to access related records in these kinds of expressions!
    4. Save and Publish and then close the editor window.

    Setting Visibility

    5. Open Component Library and add a data source

    So that we can add a new task, we must add the Tasks data source connection much like we would in a canvas app.

    1. In the solution editor, select Component libraries and then open the CommandV2 component library that should have been created.
    2. Once the editor has opened, select Data in the left-hand panel.
    3. Select Add data.
    4. Select the Tasks table from the Current environment connector.

    Add Task Datasource

    6. Close Component Library to release the lock

    When you open a component library, a lock is taken out to prevent it from being edited in multiple sessions. We must close the editor to release the lock.

    1. Select File -> Save.
    2. Select Publish -> Publish this version.
    3. Select Close.

    Closing Component Library

    7. Add OnSelect Expression to create a task

    Now we can add the Power Fx expression to create the new task related to the account record.

    1. Open the command bar editor again using Edit command bar (preview) from inside the model-driven app editor.
    2. Select the Main Form again.
    3. Select the Credit Check button.
    4. In the OnSelect expression enter:
      Patch(Tasks,Defaults(Tasks),{Regarding:Self.Selected.Item,Subject:"Credit Check Follow Up"});
      Notify("Credit task created",NotificationType.Success);
    5. Select Save and Publish.
    6. Select Play to open the model-driven app.

    Adding Command

    8. ...and the result!

    Once the model-driven app opens, you can open an account record and see the Credit Check button appear only when the Credit Hold column is set to Yes.

    Selecting the button will create a new task related to the current record! Notice that the form is automatically refreshed to show the new record created inside the related records.

    Note: If you wanted to make the button appear as soon as Credit Hold is set to Yes, you would need to add a call to refreshRibbon inside the form field's OnChange event.
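    Such an OnChange handler only needs to refresh the ribbon so the visibility expression is re-evaluated. A minimal sketch (the handler name is illustrative; register it against the Credit Hold column's OnChange event):

    ```javascript
    // Hypothetical OnChange handler for the Credit Hold column: refreshing the
    // ribbon forces the command bar visibility rules to be re-evaluated.
    function onCreditHoldChange(executionContext) {
      var formContext = executionContext.getFormContext();
      formContext.ui.refreshRibbon();
    }
    ```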

    To add this functionality using the Ribbon Workbench would have required JavaScript and would be considerably more complex. The new Power Fx command buttons unlock many customizations for low-code app makers!

    There are still some requirements that are not yet possible to implement using the new Power Fx Commanding, where you will need to continue to use the Ribbon Workbench. One example of this is the more complex display/enable rules you could create such as visibility depending on the user's security privileges - but I am hopeful that these gaps will be filled in the 'fullness of time' 😊 

    Watch out for more posts from me on Power Fx commands!


  16. If you are building code components for Power Apps (PCF) you might be using msbuild to build cdsproj projects:

    msbuild /p:configuration=Release

    This works well on Windows, and requires either Visual Studio or Build Tools for Visual Studio with the .NET build tools workload installed.

    What if you don't want to install Visual Studio, or you are not running on Windows? The good news is that you can still develop and build code components (or run a build inside a non-Windows automated build pipeline) using the .NET Core equivalent:

    dotnet build -c release

    To get developing cross-platform, you need the following:

    1. Power Platform Extension for Visual Studio Code (in preview right now, but a cross-platform alternative to the Power Platform CLI MSI installer, which only works on Windows)
    2. .NET 5.x SDK

    Once you have installed these, you can use both the pac CLI and dotnet build from a terminal right inside VSCode.

    Happy cross-platform PCF developing!

  17. If you are using the latest versions of the Power Apps CLI then much of the implementation now uses the new .NET Core DataverseServiceClient. You may find that you occasionally get the following error when performing pac pcf operations:

    The request channel timed out while waiting for a reply after 00:02:00. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout.

    Previously we could solve this by adding the configuration MaxCrmConnectionTimeOutMinutes – but since the move to the Dataverse Service Client, the key has changed to MaxDataverseConnectionTimeOutMinutes. We can see this from the source code in GitHub.

    To increase the timeout on the PowerApps CLI PCF operations to 10 minutes you need to:

    1. Locate the folder for the latest version of the Power Apps CLI, which will be at a location similar to: C:\Users\YourProfileName\AppData\Local\Microsoft\PowerAppsCLI\Microsoft.PowerApps.CLI.1.6.5

    2. Edit the file \tools\pac.exe.config

    3. Add the following underneath the startup element:

      <add key="MaxDataverseConnectionTimeOutMinutes" value="10"/>

    Note: The value is in minutes!

    4. Save

    5. Ensure you are using the latest version of the Power Apps CLI by using:

    pac install latest
    pac use latest

    Now you should no longer receive a timeout when using pac pcf push! 🚀
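    Putting it together, the edited pac.exe.config would contain something like the following. This is a sketch: the surrounding elements follow the standard .NET configuration file layout, and the placement of the appSettings section is an assumption.

    ```xml
    <configuration>
      <startup>
        <!-- existing startup content unchanged -->
      </startup>
      <appSettings>
        <!-- the value is in minutes -->
        <add key="MaxDataverseConnectionTimeOutMinutes" value="10"/>
      </appSettings>
    </configuration>
    ```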

  18. A hot area of investment from the Dataverse product team in Wave 1 2021 has been the Relevance search experience.

    Quick Actions

    Part of this new search experience brings the command bar to the inline search results as well as the search results screen.

    What's really cool is that you can customize these command bar buttons using the Ribbon Workbench. The relevance search can have up to 3 buttons visible when you hover over a record, and then an additional 2 actions in the overflow (maximum of 5 command buttons).

    The search experience picks up commands from the HomePage Grid command bar, and this is where we can apply our customizations using the Ribbon Workbench.

    Adding new buttons

    To add a custom button to the Search Experience - both the drop-down and the search results grids, follow these steps:

    1. Create a temporary solution that contains just the entities you wish to add a command button to. Don't include any additional components, for performance reasons.
    2. Drag a button onto the Home Command Bar and add your new command.

      Note: Here I am using the Quick JS Smart Button, but you can add whatever you want!

    3. To your command, add a new Enable Rule:
      UnCustomised (IsCore):

      Important: The IsCore property tells the Ribbon Workbench that this rule is an out-of-the-box rule that we don't need to provide an implementation for in our customizations.

      Note: You can also use Mscrm.ShowOnGridAndQuickAction if you want the button to appear both on the Home Page grid AND on the search results.
    4. At the time of writing, it seems that custom SVG icon web resources are not supported, and so your button will appear with a blank icon. To get around this you can either leave the Modern icon blank (your button will be assigned the default icon) or you can use a Modern icon name from one of the other out-of-the-box icons.

    5. Publish and wait (yes, I'd make it quicker if I could!)

    Removing existing out of the box buttons

    Perhaps you don't want to show an existing out-of-the-box button on the search results, or you want to make space for your own. You can do this using another specific EnableRule Mscrm.ShowOnGrid:

    1. Find the button you want to remove from the quick actions (e.g. Assign Button)
    2. Right Click -> Customize Command
    3. Add the Enable Rule Mscrm.ShowOnGrid and again set 'IsCore' to true
      The Mscrm.ShowOnGrid enable rule tells the command bar to only show the command on the home page and not the search results.
    4. Set IsCore to true for all the other out-of-the-box Enable & Display Rules that were added when you customized the command (e.g. Mscrm.AssignSelectedRecord).
    5. Publish!

    The Result!

    Once you've added the new button, and hidden the existing one, you'll see the changes to the command bar after doing a hard refresh on your App in the browser:

    Pretty cool! For more info about the Enable Rules used by Relevance Search, see the official documentation.

    Hope this helps!

  19. If you are creating Cloud Flows in Solutions today, you are using Connection References. Although they are listed as 'Preview', there really is no alternative: when you create a new Cloud Flow, a connection reference is automatically created for you.

    Connection References are essentially a 'pointer' to an actual connection. You include the Connection Reference in your solution so that it can be deployed, and then once imported, you can wire up the Connection Reference to point to a real connection. This means that you can deploy solutions with Flows that do not actually know the details of the connection, and without the need to edit the flows after deployment.

    Here is how it all fits together:


    The key benefit of Connection References is that they avoid having to edit a flow after deployment to 'fix' connections. Previously, if you had 10 flows, you would have to fix each of the flows; with Connection References, you only have to 'fix' the Connection References used by the flows.


    You can find all the connection references that do not have an associated connection using the following query:

      <fetch>
        <entity name="connectionreference" >
          <attribute name="connectorid" />
          <attribute name="connectionreferenceid" />
          <attribute name="statecode" />
          <attribute name="connectionid" />
          <filter>
            <condition attribute="connectionid" operator="null" />
          </filter>
        </entity>
      </fetch>
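    The same query can also be issued through the Dataverse Web API. The following sketch just builds the equivalent OData URL; the org URL is a placeholder, and filtering connectionid (a plain text column on connectionreference) directly is an assumption:

    ```javascript
    // Sketch: the equivalent Dataverse Web API (OData) query for connection
    // references that have no connection. The org URL is a placeholder.
    const orgUrl = "https://yourorg.crm.dynamics.com";
    const query =
      orgUrl +
      "/api/data/v9.1/connectionreferences" +
      "?$select=connectorid,connectionreferenceid,statecode,connectionid" +
      "&$filter=connectionid eq null";
    console.log(query);
    ```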

    When you import a solution with Connection References into a target environment using the new solution import experience, you will be prompted to link to an existing connection, or create a new one, for any newly imported connection references. If they have previously been imported, then they are simply re-used.

    However, we want to automate our deployments...

    Editing Connection References and Turning on Flows using a Service Principal

    So, what about in an ALM automatic deployment scenario? 

    At this time, importing a solution using a Service Principal (e.g. using the Power Platform Build Tools) leaves your flows turned off, since the connection references are not linked to connections.

    You can easily connect your connection references and then turn on a flow programmatically (see at the end of this post for the full PowerShell script):

    # Set the connection on a connection reference:
    Set-CrmRecord -conn $conn -EntityLogicalName connectionreference -Id $connectionreferenceid -Fields @{"connectionid" = $connectorid }
    # Turn on a flow
    Set-CrmRecordState -conn $conn -EntityLogicalName workflow -Id $flow.workflowid -StateCode Activated -StatusCode Activated

    …but if you try to do this using a Service Principal, you will get an error similar to:

    Flow client error returned with status code "BadRequest" and details "{"error":{"code":"BapListServicePlansFailed","message":"{\"error\":{\"code\":\"MissingUserDetails\",\"message\":\"The user details for tenant id … and principal id …' doesn't exist

    Suggested Solution

    My current approach to this (until we have official support in the Power Platform Build Tools) is as follows. Imagine the scenario where a feature branch introduces a new Flow where there previously had been none – let us run through how this works with Connection References.

    1. Adding a new Cloud Flow to the solution

    1. When you add a new Cloud Flow to a Solution, the Connection References that it uses are also added automatically. If you are adding an existing Flow that was created in a different solution, you will need to remember to add the Connection References it uses manually.
    2. Key Point: Connection References do not include any Connection details – they are only a placeholder that will point to an actual Connection via the connectionid attribute.

    2. Solution Unpacked into a new branch

    1. The solution is unpacked and committed to the feature branch.
    2. The Feature branch eventually results in a Pull Request that is then merged into a release branch.
    3. The Connection Reference shows up in the PR unpacked solution:
    4. The new Flow also shows up in the pull request unpacked solution. Notice that the connection reference is referenced via the connectionReferenceLogicalName setting in the Flow json.

    3. Build & Release

    1. When the Pull Request is merged, the Build Pipeline will run automatically.
    2. When the CI Build has run, the Flow will be packed up into the solution – so you can then deploy it to your target environments.

    4. Connection Reference - Set Connections

    1. Once the release has completed – the solution will be deployed.
    2. Key Point: At this stage, the Flow(s) are turned off because the Connection Reference is not yet wired up to a Connection.
    3. Of course, if you were importing this solution by hand, you would be prompted to connect the unconnected connection references.

      This is what the Flow and Connection Reference will look like in the solution explorer:

      Connection References always show the Status of 'Off' - even if they are wired to a connection!

    4. The Owner of the Connection Reference and Flow is the Application User SPN that is used by the Power Platform Build Tools
    5. If you try to turn on a flow that uses any connection other than the Current Environment Dataverse connector, you'll get a message similar to:

    Failed to activate component(s). Flow client error returned with status code "BadRequest" and details "{"error":{"code":"XrmConnectionReferenceMissingConnection","message":"Connection Id missing for connection reference logical name 's365_sharedoffice365_67cb4'."}}".

    5. Turning on Flows 

    1. At this time there is no way to edit a connection reference from inside a managed solution, so you need to create a new solution and add the managed Connection References to it.
    2. Once inside a new solution, you can edit the Connection References and create a new connection or select an existing one.

    3. This will only need to be done once on the first deployment. Once the connection is created and linked to the connection reference it will remain after further deployments. 
    4. If you have already created the connection, you can programmatically set the connection on the connection reference if needed, using the following (you will need to impersonate an actual user rather than using the SPN – see below):
      # Set the connection on a connection reference:
      Set-CrmRecord -conn $conn -EntityLogicalName connectionreference -Id $connectionreferenceid -Fields @{"connectionid" = $connectorid }
    5. Note: Interestingly, you can actually turn on a Cloud Flow that only uses the Current Environment connector without connecting your connection references – this is done automatically for you. For the purposes of this scenario, let's assume that we also have other connectors in use, such as the Office 365 Connector.

    6. Key Point - Turning flows back on after subsequent deployments

    The challenge now is that subsequent automated ALM deployments of this solution using the Service Principal will turn the flows off again. The connection references will stay connected, but the flows will be off. Furthermore, as mentioned above, you can't use the Service Principal to edit connection references or turn flows on, so we need to impersonate a real user (I hope this will be fixed in the future). To do this, we can use the Power Apps Admin PowerShell cmdlets to get the user who created the connections in use (manually, above) and impersonate this user to turn the flows on.

    Here is the full PowerShell script that you can add to your build or release pipeline:

    $connectionString = 'AuthType=ClientSecret;url=$(BuildToolsUrl);ClientId=$(BuildToolsApplicationId);ClientSecret=$(BuildToolsClientSecret)'
    # Login to PowerApps for the Admin commands
    Write-Host "Login to PowerApps for the Admin commands"
    Install-Module  Microsoft.PowerApps.Administration.PowerShell -RequiredVersion "2.0.105" -Force -Scope CurrentUser
    Add-PowerAppsAccount -TenantID '$(BuildToolsTenantId)' -ApplicationId '$(BuildToolsApplicationId)' -ClientSecret '$(BuildToolsClientSecret)' -Endpoint "prod"
    # Login to PowerApps for the Xrm.Data commands
    Write-Host "Login to PowerApps for the Xrm.Data commands"
    Install-Module  Microsoft.Xrm.Data.PowerShell -RequiredVersion "2.8.14" -Force -Scope CurrentUser
    $conn = Get-CrmConnection -ConnectionString $connectionString
    # Get the Orgid
    $org = (Get-CrmRecords -conn $conn -EntityLogicalName organization).CrmRecords[0]
    $orgid =$org.organizationid
    # Get connection references in the solution that are connected
    Write-Host "Get Connected Connection References"
    $connectionrefFetch = @"
    <fetch>
      <entity name='connectionreference'>
        <attribute name='connectionreferenceid' />
        <attribute name='connectionid' />
        <filter><condition attribute='connectionid' operator='not-null' /></filter>
        <link-entity name='solutioncomponent' from='objectid' to='connectionreferenceid'>
          <link-entity name='solution' from='solutionid' to='solutionid'>
            <filter><condition attribute='uniquename' operator='eq' value='$(BuildToolsSolutionName)' /></filter>
          </link-entity>
        </link-entity>
      </entity>
    </fetch>
    "@
    $connectionsrefs = (Get-CrmRecordsByFetch -conn $conn -Fetch $connectionrefFetch -Verbose).CrmRecords
    # If there are no connection references that are connected then exit
    if ($connectionsrefs.Count -eq 0) {
        Write-Host "##vso[task.logissue type=warning]No Connection References that are connected in the solution '$(BuildToolsSolutionName)'"
        Write-Output "No Connection References that are connected in the solution '$(BuildToolsSolutionName)'"
        exit
    }
    $existingconnectionreferences = (ConvertTo-Json ($connectionsrefs | Select-Object -Property connectionreferenceid, connectionid)) -replace "`n|`r",""
    Write-Host "##vso[task.setvariable variable=CONNECTION_REFS]$existingconnectionreferences"
    Write-Host "Connection References:$existingconnectionreferences"
    # Get the first connection reference connector that is not null and load it to find who it was created by
    $connections = Get-AdminPowerAppConnection -EnvironmentName $conn.EnvironmentId  -Filter $connectionsrefs[0].connectionid
    # (assumes the connection's CreatedBy.id property holds the creator's AAD object id)
    $user = Get-CrmRecords -conn $conn -EntityLogicalName systemuser -FilterAttribute azureactivedirectoryobjectid -FilterOperator eq -FilterValue $connections[0].CreatedBy.id
    # Create a new Connection to impersonate the creator of the connection reference
    $impersonatedconn = Get-CrmConnection -ConnectionString $connectionString
    $impersonatedconn.OrganizationWebProxyClient.CallerId = $user.CrmRecords[0].systemuserid
    # Get the flows that are turned off
    Write-Host "Get Flows that are turned off"
    $fetchFlows = @"
    <fetch>
      <entity name='workflow'>
        <attribute name='category' />
        <attribute name='name' />
        <attribute name='statecode' />
        <filter>
          <condition attribute='category' operator='eq' value='5' />
          <condition attribute='statecode' operator='eq' value='0' />
        </filter>
        <link-entity name='solutioncomponent' from='objectid' to='workflowid'>
          <link-entity name='solution' from='solutionid' to='solutionid'>
            <filter><condition attribute='uniquename' operator='eq' value='$(BuildToolsSolutionName)' /></filter>
          </link-entity>
        </link-entity>
      </entity>
    </fetch>
    "@
    $flows = (Get-CrmRecordsByFetch -conn $conn -Fetch $fetchFlows -Verbose).CrmRecords
    if ($flows.Count -eq 0) {
        Write-Host "##vso[task.logissue type=warning]No Flows that are turned off in '$(BuildToolsSolutionName)'"
        Write-Output "No Flows that are turned off in '$(BuildToolsSolutionName)'"
        exit
    }
    # Turn on flows
    foreach ($flow in $flows) {
        Write-Output "Turning on Flow:$(($flow).name)"
        Set-CrmRecordState -conn $impersonatedconn -EntityLogicalName workflow -Id $flow.workflowid -StateCode Activated -StatusCode Activated -Verbose
    }

    Managing connection details

    Since your pipeline will want to run on release pipelines as well as branch environments, I use variable groups to define the connection details.

    Something like this.

    Note: The name is in the format branch-environment-<BRANCH NAME>

    So then in a YAML pipeline, you can bring in the details you want to use for the specific branch using:

    - group: branch-environment-${{ variables['Build.SourceBranchName'] }}

    When you use the script in a Release pipeline, you can simply add the right Variable Group for environments you are deploying to:


    1. When you first deploy your solution with connection references, they must be connected (manually through the Solution Explorer, or programmatically by updating connectionid) before the flows that use them can be turned on.
    2. This connection reference wiring cannot be done by a service principal - the deployment script will need to impersonate a non-application user.
    3. One approach is to use the user that created the connection references to get the user to impersonate - this way you don't need to manually specify the user for each environment. If you have multiple users involved in connection reference authentication, you will likely need to impersonate the user for each connection.
    4. After each subsequent deployment, you will need to turn on the flows again. This also needs to be performed using impersonation.
    5. You can set up variable groups that will be dynamically picked using the current branch (for build pipelines) or the release environment.
    6. I hope at some point, this kind of operation will be supported by the Power Platform Build Tools out of the box.


  20. With the recent experimental announcement of the PowerApps Solution Packager, we now have a much better way of managing Canvas Apps in your source code repository. This moves us much closer to a better ALM story for the whole of the Power Platform so that my top 3 principles can be followed:

    1. Everything as code – The single point of truth for all artifacts (solution metadata, apps, code, data, versioning, releases) should be under source control.
    2. Environments as cattle, not pets – When the entire solution is under source control, environments can be built and deleted for specific purposes – e.g. features, experiments, testing. I wrote a recent post on this.
    3. Define your Branching Strategy – A branching strategy describes how features/releases are represented as branches. Each time a new feature (or group of linked features) is worked on, it should happen in a source code branch. Code can be merged between branches when ready to be integrated, built and released. Branches would normally have an associated Power Platform environment to allow you to work in parallel with other changes without the risk of introducing changes that should not be released. The gitflow branching strategy is a great starting point.

    The Power Apps Solution Packager (pasopa) brings us closer to the 'everything as code' mantra - by unpacking a Canvas App into source code files that allow you to merge in changes from other branches and then re-pack. Eventually, this will make its way into the Power Apps CLI and Solution Packager.

    Here are a couple of videos I've done on the subject of the PowerApps Solution Packager:




  21. If you were thinking that Power Apps Canvas Apps and Dataverse for Teams Canvas Apps are just the same – but with a different license and container – well, whilst that is mostly true, there is a very big difference:
    Dataverse for Teams uses a completely different set of Out of the Box controls. They are based on the Fluent UI library.
    This post will hopefully save someone the time that I've spent investigating why a very common UI design pattern doesn't work in Dataverse for Teams.

    The Toggle Pattern

    A common pattern in Canvas Apps is to bind a variable to the Default property of a Toggle Button, and then use the OnChange event to fire some code when it is changed. This is a very common solution to the problem that components cannot raise events at the time of writing.
    Imagine a scenario where you have a Component that renders a button, that when selected it should raise an event on the hosting screen.
    The common pattern is to toggle an output property from a custom component, and then bind the output to a variable – that is in turn bound to a toggle button. When the variable is toggled, it raises the OnChecked event on the toggle button so you can perform the logic you need. This does seem like a hack – but it is the only mechanism I know of to respond to events from inside components.

    I hope that at some point we will see custom events being able to be defined inside components – but for now, the workaround remains.
    So, the app looks something like this:

    Fluent UI Controls not only look different - they behave differently!

    The problem is that inside Dataverse for Teams, the standard controls have been replaced with the new Fluent UI based controls, and with that, there is a subtle difference.

    The Default property has been replaced by a new set of control-specific properties (e.g. Checked, Value, Text, etc.). With this change, the change events are only fired when the user initiates the change – and not when the app changes the value.

    So in Dataverse for Teams, the App looks very similar, but with the Checked property rather than Default:

    This results in the OnChecked event not being fired and as such, the pattern no longer works.

    If you look carefully, you'll see, in Dataverse for Teams, the label counter only increments when the toggle button is checked but not when the button is clicked. This is because the OnChecked event is not triggered by the varToggle variable being changed by the component.
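    The behavioural difference can be sketched outside of Power Apps. This is a hypothetical model (not real Power Apps APIs): the classic toggle fires its checked event for any change to its bound value, while the Fluent UI toggle only fires it for user-initiated changes - which is exactly what breaks the pattern:

    ```javascript
    // Hypothetical model of the two toggle behaviours (not real Power Apps APIs).
    class Toggle {
      constructor(firesOnProgrammaticChange) {
        this.firesOnProgrammaticChange = firesOnProgrammaticChange;
        this.checked = false;
        this.onCheckedCount = 0; // how many times OnChecked has fired
      }
      // Value pushed from the app (e.g. the bound variable changing)
      setValueFromApp(value) {
        this.checked = value;
        if (this.firesOnProgrammaticChange) this.onCheckedCount++; // classic only
      }
      // User interacts with the control directly
      setValueFromUser(value) {
        this.checked = value;
        this.onCheckedCount++; // both controls fire for user interaction
      }
    }

    const classicToggle = new Toggle(true);  // classic Canvas App control
    const fluentToggle = new Toggle(false);  // Dataverse for Teams Fluent UI control

    // Component output toggles the bound variable -> app pushes the new value
    classicToggle.setValueFromApp(true);
    fluentToggle.setValueFromApp(true);
    console.log(classicToggle.onCheckedCount); // → 1 (event fired)
    console.log(fluentToggle.onCheckedCount);  // → 0 (event NOT fired - pattern breaks)
    ```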

    I really love the Fluent UI controls in Dataverse for Teams - especially with the awesome responsive layout controls - but this drawback is very limiting if you are used to writing Power Apps Canvas Apps. I hope that we will see an update soon that will remove this limitation from Dataverse for Teams Apps.

    Work Around

    Update 2021-02-10: There is a workaround to this - you can enable 'classic' controls - this then gives you the choice between using the Fluent UI OR the classic Toggle control. By using the classic control you then get the OnChecked event being raised!


  22. One of the most requested features of Model-Driven Apps 'back in the day' was the ability to edit the popup dialog boxes that perform actions such as Closing Opportunities or Cases. These were 'special' dialogs that had a fixed user interface.

    There were a few workarounds that involved either using dialogs (now deprecated) or a custom HTML web resource.

    More recently, the ability to customize the Opportunity Close dialog was introduced; however, this is very limited in what you can actually do.

    Canvas Apps are a great way of creating tailored specific purpose user interfaces and are great for this kind of popup dialog type action. If only there was a way to easily open a Canvas App from a Model-Driven Command Bar. Well, now there is!

    Open Dialog

    Open Dialog Smart Button

    I've added a new smart button that allows you to easily provide the URL of the Canvas App to use as a dialog and pass the current record, or the selected record in a grid.

    Step 1. Create a Canvas App Dialog

    Your Canvas App will be responsible for performing the logic that your users need. The information that is passed to it is in the form of the record Id and logical name parameters. You can grab these values in the Canvas App startup script and then load the record that you need:

    Set(varRecordId, If(
        IsBlank(Param("recordId")),
        GUID("00000000-0000-0000-0000-000000000000"),
        GUID(Param("recordId"))
    ));
    Set(varRecordLogicalName, Param("recordLogicalName"));
    Set(varSelectedRecord, LookUp(Accounts, Account = varRecordId))

    Replace the GUID with the id of a record you want to use as a test when running inside the Canvas App Studio.

    Any buttons that perform actions on the data, or a cancel button that just closes the dialog, simply use the Exit() function:

    // Do some stuff, e.g. update the record using the dialog's inputs
    Patch(Accounts, varSelectedRecord, {
        'Invoice Date': dpkInvoiceDate.SelectedDate
    });
    Exit()

    The smart button listens for the result of the Exit() function to close the dialog.

    One of the challenges of adding a Canvas App to a Model-Driven app is styling it to look like the out-of-the-box Model-Driven App dialogs. I have created a sample app that you can import and then use as a template.

    Step 2. Publish and grab the App Url.

    Publish your Canvas App in a solution, and then grab the App Url from the details: select the … menu on the Canvas App and then select 'Details'.

    Get App Url

    Then copy just the Url of the App that is displayed:

    You could create an environment variable to hold this, similar to the WebHook smart button, since the URL of the Canvas App will be different in each environment you deploy to.

    Note: Make sure you share your Canvas App with the users that are going to be using your Model-Driven App!

    Step 3. Install the Smart Buttons solution

    You will need the latest Smart Buttons solution installed.

    Step 4. Open the Ribbon Workbench and add the buttons

    When you open the Ribbon Workbench for the environment that the Smart Buttons solution and Canvas App are installed into, you can then drop the 'Open Dialog' button on either a Form, SubGrid, or Home Grid.

    The properties for the Smart Button might look something like:

    Note: I've used an environment variable reference in the Dialog Url parameter - but equally, you could just paste the URL of your canvas app in there if you didn't want to deploy to multiple environments such that the app URL would be different.

    And that's it!

    It’s really that simple. Now you will have a dialog that allows you to take actions on records from forms or grids using a Canvas App. The data is then refreshed after the dialog is closed.

    Mobile App Support

    At this time, due to cross-domain restrictions inside the Power Apps Mobile App, this technique will not work. The user will simply be presented with a login message, but the button will not do anything. If you would like to unblock this scenario – please vote this suggestion up!

    Let me know how you get on over on GitHub - 


  23. There is a new kid in town! Not long after the PowerApps Build Tools for Azure Dev Ops were released out of beta under the new name of Power Platform Build Tools, the new set of GitHub Actions for Power Platform ALM has been released in public preview. They can be used in your workflows today and will be available in the GitHub Marketplace later in the year.

    Since Microsoft acquired GitHub for $7.5 billion back in 2018 there has been a growing amount of investment – it seems that parity with Azure Dev Ops is inevitable before long. The CI/CD story in the open-source world has been served by products such as Octopus Deploy for a long time, but one of the investments Microsoft have made is in the area of GitHub Actions.

    GitHub Actions for Power Platform ALM

    Actions and Workflows give you a YAML build pipeline with a set of hosted build agents. This provides a significant step towards some degree of parity with Azure Pipelines.

    With the public preview of the Power Platform GitHub actions, we can come some way to moving our CI/CD pipeline to GitHub. At this time, not all of the Azure Dev Ops Power Platform Build Tools are supported yet – with the most notable omission being the Solution Checker and environment management tasks.

    Task                                       Power Platform Build Tools   GitHub Power Platform Actions

    Power Platform Checker                     βœ…                            ❌
    Power Platform Import Solution             βœ…                            βœ…
    Power Platform Export Solution             βœ…                            βœ…
    Power Platform Unpack Solution             βœ…                            βœ…
    Power Platform Pack Solution               βœ…                            βœ…
    Power Platform Publish Customizations      βœ…                            βœ…
    Power Platform Set Solution Version        βœ…                            βœ…
    Power Platform Deploy Package              βœ…                            βœ…
    Power Platform Create Environment          βœ…                            ❌
    Power Platform Delete Environment          βœ…                            ❌
    Power Platform Backup Environment          βœ…                            ❌
    Power Platform Copy Environment            βœ…                            ❌
    An interesting addition to the GitHub Actions is the branch-solution action, which I think is intended to be used when you want a new pro-code or low-code environment to match a GitHub branch so that you can ‘harvest’ the solution xml from any changes automatically. I look forward to seeing documentation on the best practices surrounding this action.

    There are two missing features that I would really like to see in the actions:

    1. Client Secret Authentication
    2. Cross-Platform Support

    When do we move from Azure Dev Ops then?

    Not yet! Personally, I feel the biggest gap is the maturity of release management in GitHub Actions. Azure Dev Ops allows you to create multi-stage deployments with approval gates that can be driven from the output of a specific build, whereas GitHub Actions require you to manage this using release tags and branch merging or external integrations.


    You can see an example of the new GitHub Actions at work in my NetworkView PCF control repo.

    Each time a pull request is merged into the master branch, the PCF control is built, the solution packaged and a release created.

    Since the solution contains more than just the PCF control (forms too!), I have a folder called solution_package that contains the solution as unpacked by the Solution Packager. After the PCF control is built, a script is then used to copy the bundle.js into the solution package and update the version of the artefacts. Then the solution is built using the microsoft/powerplatform-actions/pack-solution@latest action. I chose to use a node script rather than PowerShell/PowerShell Core so that eventually it will be easier to be cross-platform once the Power Platform tools are also cross-platform.
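    The full workflow lives in the repo, but the build-and-pack steps can be sketched roughly as follows (step names, paths, and the solution file name are illustrative; the action inputs follow the microsoft/powerplatform-actions pack-solution documentation):

    ```yaml
    # Illustrative fragment: build the PCF control, then pack the unpacked
    # solution folder into a deployable solution zip.
    - name: Build PCF control
      run: |
        npm ci
        npm run build

    - name: Pack solution
      uses: microsoft/powerplatform-actions/pack-solution@latest
      with:
        solution-folder: solution_package
        solution-file: out/NetworkView.zip
        solution-type: Unmanaged
    ```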

    You can take a look at the build yaml here - 


  24. A very common request I've had for the Ribbon Workbench Smart Button solution is to be able to configure the WebHook/FlowUrl using an Environment Variable. Environment Variables are small pieces of information that can vary between environments without there needing to be a customizations update – this way you can have different endpoints for each environment without making customization changes.

    As of Version 1.2.435.1 you can now put an environment variable (or combination of) into the FlowUrl smart button parameter:

    This screenshot assumes you have added an environment variable to your solution with the schema name dev1_FlowUrl

    The Url is in the format {%schemaname%}. Adding the environment variable to the solution would look like:

    The really awesome part of environment variables is that you are prompted to update them when you import to a new environment inside the new Solution Import experience that came with Wave 1 2020.
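    Conceptually, the substitution the smart button performs is simple: each {%schemaname%} token in the configured Url is swapped for the environment variable's current value. A minimal sketch (the helper name and the lookup map are hypothetical, not the solution's actual code):

    ```typescript
    // Hypothetical helper: replace each {%schemaname%} token in the configured
    // Url with the matching environment variable value; unknown tokens are left as-is.
    function resolveFlowUrl(template: string, environmentVariables: Record<string, string>): string {
      return template.replace(/\{%([^%}]+)%\}/g, (token, schemaName: string) =>
        schemaName in environmentVariables ? environmentVariables[schemaName] : token,
      );
    }

    // e.g. resolveFlowUrl("{%dev1_FlowUrl%}", { dev1_FlowUrl: "https://..." }) returns the dev1_FlowUrl value
    ```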

    If you have any feedback or suggestions for Smart Buttons, please head over to the GitHub project page.


  25. A situation I see very frequently is where there is a ‘special’ PowerApps environment that holds the master unmanaged customizations. This environment is looked after for fear of losing the ability to deploy updates to production since with managed solutions you can’t re-create your unmanaged environment. Sometimes, a new partner starts working with a customer only to find that they have managed solutions in production with no corresponding unmanaged development environment.

    I’m not getting into the managed/unmanaged debate – but let’s assume that you are following the best practices outlined by the PowerApps team themselves “Managed solutions are used to deploy to any environment that isn't a development environment for that solution”[1]+[2]

    There is a phrase that I often use (adapted from its original use [3]):

    “Treat your environments like cattle, not pets”

    This really resonates with the new PowerApps environment management licensing where you pay for storage and not per-environment. You can create and delete environments (provided you are not over DB storage capacity) with ease.

    If you store your master unmanaged solution in an environment – and only there – then you will start to treat it like a pet. You’ll stroke it and tend to its every need. Soon you’ll spend so much time on pet-care that you’ll be completely reliant on it, but it’ll also be holding you back.

    There is another principle I am very vocal about:

    “Everything as code”

    This is the next logical step from “Infrastructure as code” [4]

    In the ‘everything as code’ world, every single piece of the configuration of your development environment is stored as code in source control, such that you can check-out and build a fully functioning unmanaged development environment that includes:

    1. Solution Customisations as XML
    2. Canvas Apps as JSON
    3. Flows as JSON
    4. Workflows as XAML
    5. Plugins as C#
    6. JS Web resources as TypeScript
    7. Configuration data as Xml/JSON/CSV
    8. Package Deployer Code
    9. Test Proxy Stub Code for external integrations
    10. Scripts to deploy incremental updates from an older version
    11. Scripts to extract a solution into its respective parts to be committed to source control
    12. Scripts to set up a new development environment
      1. Deploy Test Proxy Stub Services
      2. Build, Pack and deploy a solution to a new environment
      3. Deploy Reference Data
      4. Configure Environment Variables for the new environment

    There are still areas of this story that need more investment by the PowerApps teams such as connector connection management and noisy diffs – but even if there are manual steps, the key is that everything is there in source control that is needed. If you lose an environment, it’s not a disaster – it’s not like you have lost your beloved pet.

    The advantage of combining these two principles is that every single time you make a change to any aspect of an environment, it is visible in the changeset and Pull Request.

    If you are working on a new feature, the steps you’d take would be:

    1. Create a new branch for the Issue/Bug/User Story
    2. Checkout the branch locally
    3. Create a new development PowerApps environment and deploy to it using the build scripts
    4. Develop the new feature
    5. Use the scripts to extract and unpack the changes
    6. Check that your changeset only contains the changes you are interested in
    7. Commit the changes
    8. Merge your branch into the development/master branch (depending on the branching strategy you are using)
    9. Delete your development environment

    Using this workflow, you can even be working on multiple branches in parallel provided there won’t be any significant merge conflicts when you come to combine the work. Here is an example of a branching strategy for a hotfix and two parallel feature development branches:

    The most common sources of merge conflicts I see are RibbonXml, FormXml, and ViewXml – editing these elements is now supported – and so you can manage merge conflicts inside your code editor! Canvas Apps and Flows are another story – there really isn’t an attractive merge story at this time, and so I only allow a single development branch to work on Canvas Apps, Flows, and Workflows at any one time.

    If you think you have pet environments, you can still keep them around until you feel comfortable letting go, but I really recommend starting to herd your environments and get everything extracted as code. You’ll not look back.



    [1] ALM Basics -

    [2] Solution Concepts -

    [3] Pets vs Cattle -

    [4] Infrastructure as Code -

    [5]  ALM with the PowerPlatform -

    [6] ALM for Developers -

    [7] Supported Customization Xml Edits -

    [9] Healthy ALM -

  26. Linters have been around for ages - it all started back in 1978 apparently - and they have now become a mainstay of modern JavaScript and TypeScript programming.

    Writing code without a linter is like writing an essay without using a spell checker! Sure, there may be some superhumans who can write their code perfectly without linting - but I’m not one of them!

    Much has been written about linting since 1978 and there are plenty of opinions! For me there are two parts:

    1. Enforcing semantic code rules such as not using var in TypeScript or using let when it could be const because the value doesn’t change. These rules are designed to help you trap bugs as early as possible and enforce best practices.
    2. Formatting rules - such as not mixing tabs and spaces and adding spaces before and after keywords.

    For TypeScript, we can enforce rules using eslint - and automatically format our code using prettier.
    There is then a whole raft of style rules that can be applied for different libraries such as React.

    This post shows you how to set up linting quickly and easily for a TypeScript PCF project that uses React.

    Create your PCF project

    Create your pcf project using your CLI/IDE of choice:
    I use:

    pac pcf init --namespace dev1 --name pcflint --template field
    npm install react react-dom @fluentui/react
    yo pcf --force

    Install ESLint, Prettier and the plugins

    Prettier is great for formatting your code, but doesn’t really do any of the semantic code checks. So the configuration we are going to create uses prettier as a plugin from within eslint. This means when you run eslint, not only will it warn about and attempt to fix semantic issues, it’ll also tidy up the formatting for you using prettier.

    npm install eslint --save-dev

    You can use the bootstrapper if you want - but this can lead to a configuration that you don’t really want:

    npx eslint --init
    2. Next up is installing prettier:
    npm install --save-dev --save-exact prettier

    We use --save-exact as recommended by the prettier project, because formatting rules can change slightly between versions and you don’t want your source control diffs to suddenly include formatting differences.

    3. Now install the plugins and configurations needed for our rules:
    npm install --save-dev @typescript-eslint/eslint-plugin @typescript-eslint/parser eslint-plugin-react eslint-config-prettier eslint-plugin-prettier
    4. Next we configure eslint to call prettier when it is run - this uses eslint-plugin-prettier.
      Create a file named .eslintrc.json:
        "parser": "@typescript-eslint/parser",
        "env": {
            "browser": true,
            "commonjs": true,
            "es6": true,
            "jest": true,
            "jasmine": true
        "extends": [
        "parserOptions": {
            "project": "./tsconfig.json"
        "settings": {
            "react": {
              "pragma": "React",
              "version": "detect"
        "plugins": [
        "rules": {
            "prettier/prettier": "error"
        "overrides": [
              "files": ["*.ts"],
              "rules": {
                "camelcase": [2, { "properties": "never" }]


    1. There is an override rule to allow non-camelcase property names since we often use pascal named SchemaNames from CDS.
    2. There is support for jest and jasmine tests.

    Now configure the prettier rules by creating a file called .prettierrc.json:

      {
        "semi": true,
        "trailingComma": "all",
        "singleQuote": false,
        "printWidth": 120,
        "tabWidth": 2
      }

    Let the magic happen!

    There are two ways to get eslint to do its job:

    1. Run from the command line
    2. Use a VSCode extension.

    Note: Both approaches will require you to have set up eslint and prettier already.

    Run from the command line:

    1. You will need to globally install eslint:
    npm install -g eslint
    2. After that you can add a script to your package.json:
    "scripts": {
      "lint": "eslint ./**/*.ts --fix"
    }

    Run from inside VSCode

    This is my day-to-day use of eslint.

    1. Install the eslint VSCode extension -
    2. Lint issues will show up via a code lens - the details and quick fixes show up using Ctrl+.
    3. You can auto-format your code using Shift+Alt+F

    I really recommend getting linting into your workflow early on – because you don’t want to enable it later and then find you have thousands of issues to wade through!

  27. It's been over a year since I last blogged about DateTimes, and nearly a decade since my first post on the subject, CRM DateTimes – so it’s well overdue that I update you on how DateTimes work with PCF.

    My last post on the subject was when the ‘Timezone independent’ and ‘Date Only’ behaviours were introduced - DateTimes - It’s never the last word.

    This made the time zone handling of dates much easier if you needed to store absolute date/times – however, there are always times where you need to store a date that is dependent on the user’s time zone (e.g. the date/time a task is completed, etc.)

    In PCF, it would have been nice if the time zone element of the date was handled for us – but unfortunately not!

    There are 3 places where we have to consider datetime behaviours in PCF:

    • Field Controls

      • Inbound dates - When PCF calls updateView()

      • Outbound dates - When PCF calls getOutputs()

    • Dataset Controls - Inbound dates

    Field Controls - Inbound dates

    When PCF passes our component a date as a bound property via the updateView method, the date will be provided as a formatted date string and also as a raw Date object.

    I have a record with the dateAndTimeField property bound to a DateTime field that has the User Local DateTime behaviour.

    I can get the two values as follows:

    • Raw - parameters.dateAndTimeField.raw

    • Formatted - parameters.dateAndTimeField.formatted

    There are two time zones I can vary, firstly the CDS User Settings (I have it set to GMT+8) and my local browser time zone. In the following table, I vary the browser time zone and keep the CDS time zone constant.

    The formatted date is formatted using my CDS user settings – YYYY/MM/DD HH:mm

    Local Time Zone   GMT                            GMT-3                          GMT+8
    CDS UTC           2020-05-10T04:30:00Z           2020-05-10T04:30:00Z           2020-05-10T04:30:00Z
    Raw               2020-05-10 05:30:00 GMT+0100   2020-05-10 02:30:00 GMT-0200   2020-05-10 12:30:00 GMT+0800
    Formatted         2020/05/10 12:30               2020/05/10 12:30               2020/05/10 12:30

    You’ll notice that the Formatted time is still 12:30 in every column because it shows the CDS UTC+8 value – changing my local time zone shouldn’t change this. However, the Raw date changes with the browser time zone because the Date object is converted to the local time zone – and what makes it more complex is that daylight savings is also applied, depending on the date in the year. JavaScript dates are awkward like this. Although the date is set to the UTC value by PCF – it is provided in the local time zone.

    So why not use the formatted date?

    To work with the date value (bind it to a calendar control etc.) we need it in the user’s CDS local time zone - as shown by the formatted date. If we are just showing the date and not editing it, then the formatted string is the way to go. However, if we want to edit the date, then we need a Date object. This could be done by parsing the formatted date, but that would require us to understand all the possible date formats that CDS allows in the user settings. Instead, we can simply apply the following logic:

    1. Convert to UTC to remove the browser timezone offset:
    getUtcDate(localDate: Date) {
        return new Date(
            localDate.getUTCFullYear(),
            localDate.getUTCMonth(),
            localDate.getUTCDate(),
            localDate.getUTCHours(),
            localDate.getUTCMinutes(),
        );
    }
    2. Apply the user’s time zone offset. This requires access to the user’s time zone settings - luckily they are loaded for us in the PCF context:
    convertDate(value: Date) {
        const offsetMinutes = this.context.userSettings.getTimeZoneOffsetMinutes(value);
        const localDate = addMinutes(value, offsetMinutes);
        return getUtcDate(localDate);
    }
    addMinutes(date: Date, minutes: number): Date {
        return new Date(date.getTime() + minutes * 60000);
    }

    This will now give us a Date that represents the correct Datetime in the browser local time zone - and can be used as a normal date!
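    To put concrete numbers on this (assuming the user's CDS time zone is UTC+8, so getTimeZoneOffsetMinutes would return 480 - the helper bodies mirror the snippets above):

    ```typescript
    // Rebuild the Date from its UTC components to strip the browser time zone
    function getUtcDate(localDate: Date): Date {
      return new Date(
        localDate.getUTCFullYear(),
        localDate.getUTCMonth(),
        localDate.getUTCDate(),
        localDate.getUTCHours(),
        localDate.getUTCMinutes(),
      );
    }

    function addMinutes(date: Date, minutes: number): Date {
      return new Date(date.getTime() + minutes * 60000);
    }

    // CDS stores 2020-05-10T04:30Z and the user's CDS time zone is UTC+8 (offset = 480)
    const raw = new Date(Date.UTC(2020, 4, 10, 4, 30));
    const localDate = getUtcDate(addMinutes(raw, 480));
    // localDate now reads 12:30 whatever the browser time zone is
    ```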

    Because some dates can be set as time zone independent, we can conditionally run this logic depending on the metadata provided:

    convertToLocalDate(dateProperty: ComponentFramework.PropertyTypes.DateTimeProperty) {
        if (dateProperty.attributes?.Behavior == DateBehavior.UserLocal) {
            return this.convertDate(dateProperty.raw);
        } else {
            return this.getUtcDate(dateProperty.raw);
        }
    }

    We still need to convert to UTC even if the date is time zone independent - this is to remove the correction for the browser timezone.

    Fields controls - outbound dates

    Now we have a date time that is corrected for our local browser time zone, we can simply return the Date object from inside the getOutputs().
    So if we wanted to set 12:30 - and our browser timezone is set to GMT-3 (Greenland) - then the date will actually be: 12:30:00 GMT-0200 (West Greenland Summer Time)
    PCF ignores the timezone part of the date and then converts the date to UTC for us.

    NOTE: It does seem odd that we have to convert to local inbound - but not back to UTC outbound.
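    As a sketch, the outbound side can be as simple as returning the converted Date from getOutputs (the IOutputs shape and the property name below would normally come from the control's manifest, so treat them as placeholders):

    ```typescript
    // Placeholder for the manifest-generated output type of a control
    // with one bound date property.
    interface IOutputs {
      dateAndTimeField?: Date;
    }

    class DateControl {
      // Holds the value already corrected into the user's CDS time zone
      // by the inbound conversion described above.
      private localDate = new Date(2020, 4, 10, 12, 30);

      public getOutputs(): IOutputs {
        // Return the Date as-is: the framework ignores the browser time zone
        // part and converts the value back to UTC using the user's CDS settings.
        return { dateAndTimeField: this.localDate };
      }
    }
    ```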

    Dataset controls - inbound dates

    There are two notable differences when binding datasets to tables in PCF compared to the inbound values in their field counterparts.

    1. Dates that are provided by a dataset control binding are similar in that they are provided in the browser timezone - however they are strings, not Date objects.
    2. There is no information about the UserLocal/Timezone Independent behaviour - and so we need to know about this in advance.

    So as before, when binding to a datagrid, it’s easiest to use the formatted value:

    If you need the Date object to edit the value - then you’ll need to convert to the local date as before - but with the added step of converting to a Date object:

    const dateValue = item.getValue("dateAndTimeField") as string;
    const localDate = this.convertDate(new Date(dateValue));

    This isn’t going to be the last I write on this subject I am sure of it! Anything that involves timezones is always tricky!

  28. One of the challenges with PCF controls is getting them to reflow so that they stretch to fill the available space. Doing this using standard HTML involves using flexbox. The really nice aspect of the Fluent UI react library is that it comes with an abstraction of the flexbox called the ‘Stack’.

    The aim of this post is to layout a dataset PCF as follows:

    • Left Panel - A fixed width vertical stack panel that fills 100% of the available space
    • Top Bar - A fixed height top bar that can contain a command bar etc.
    • Footer - A centre aligned footer that can contain status messages etc.
    • Grid - a DetailsList with a sticky header that occupies 100% of the middle area.

    The main challenges of this exercise are:

    1. Expanding the areas to use 100% of the container space - this is done using a combination of verticalFill and height:100%
    2. Ensure that the DetailsList header row is always visible when scrolling - this is done using the onRenderDetailsHeader event of the DetailsList in combination with Sticky and ScrollablePane
    3. Ensure that the view selector and other command bar overlays appear on top of the sticky header.
      This requires a bit of a ‘hack’ in that we have to apply a z-index css rule to the Model-Driven overlays for the ViewSelector and Command Bar flyoutRootNode. If this is not applied then flyout menus will show behind the sticky header:

    Here is the React component for the layout:

    /* eslint-disable @typescript-eslint/no-non-null-assertion */
    /* eslint-disable @typescript-eslint/explicit-function-return-type */
    import * as React from "react";
    import {
      DetailsList,
      IDetailsColumnRenderTooltipProps,
      IDetailsHeaderProps,
      IRenderFunction,
      ScrollablePane,
      ScrollbarVisibility,
      Stack,
      Sticky,
      StickyPositionType,
      TooltipHost,
    } from "office-ui-fabric-react";
    export class DatasetLayout extends React.Component {
      private onRenderDetailsHeader: IRenderFunction<IDetailsHeaderProps> = (props, defaultRender) => {
        if (!props) {
          return null;
        }
        const onRenderColumnHeaderTooltip: IRenderFunction<IDetailsColumnRenderTooltipProps> = tooltipHostProps => (
          <TooltipHost {...tooltipHostProps} />
        );
        return (
          <Sticky stickyPosition={StickyPositionType.Header} isScrollSynced>
            {defaultRender!({ ...props, onRenderColumnHeaderTooltip })}
          </Sticky>
        );
      };
      private columns = [
        {
          key: "name",
          name: "Name",
          isResizable: true,
          minWidth: 100,
          onRender: (item: string) => {
            return <span>{item}</span>;
          },
        },
      ];
      render() {
        return (
          <Stack horizontal styles={{ root: { height: "100%" } }}>
            {/*Left column*/}
            <Stack verticalFill>
              <Stack
                styles={{
                  root: {
                    textAlign: "left",
                    width: "150px",
                    paddingLeft: "8px",
                    paddingRight: "8px",
                    overflowY: "auto",
                    overflowX: "hidden",
                    height: "100%",
                    background: "#DBADB1",
                  },
                }}
              >
                <Stack.Item>Left Item 1</Stack.Item>
                <Stack.Item>Left Item 2</Stack.Item>
              </Stack>
            </Stack>
            <Stack.Item styles={{ root: { width: "100%" } }}>
              {/*Right column*/}
              <Stack styles={{ root: { width: "100%", height: "100%" } }}>
                <Stack.Item verticalFill>
                  <Stack styles={{ root: { height: "100%", width: "100%", background: "#65A3DB" } }}>
                    <Stack.Item>Top Bar</Stack.Item>
                    <Stack.Item styles={{ root: { height: "100%", overflowY: "auto", overflowX: "auto" } }}>
                      <div style={{ position: "relative", height: "100%" }}>
                        <ScrollablePane scrollbarVisibility={ScrollbarVisibility.auto}>
                          <DetailsList
                            items={[...Array(200)].map((_, i) => `Item ${i + 1}`)}
                            columns={this.columns}
                            onRenderDetailsHeader={this.onRenderDetailsHeader}
                          />
                        </ScrollablePane>
                      </div>
                    </Stack.Item>
                    <Stack.Item align="center">Footer</Stack.Item>
                  </Stack>
                </Stack.Item>
              </Stack>
            </Stack.Item>
          </Stack>
        );
      }
    }

    Here is the css:

    /* View selector overlay - illustrative selector; match the ViewSelector element in your app */
    div[id*="ViewSelector"] {
        z-index: 20;
    }
    #__flyoutRootNode .flexbox {
        z-index: 20;
    }

    Hope this helps!


  29. One of the recent additions to PCF for Canvas Apps is the ability to bind dataset PCF controls to datasets in a Canvas App. A challenge that faces all PCF developers is whether their control should support both Model AND Canvas – so with this in mind you need to be aware of the differences in the way that data is paged.

    This post demonstrates how the paging API works in Model and Canvas and highlights the differences. In my tests, I used an entity that had 59 records and spanned 3 pages of 25 records per page.


    There are two ways of paging through your data:

    1. Incrementally load the data using loadNextPage
    2. Page the data explicitly using loadExactPage

    In Model Apps, when you call loadNextPage, the next page of data will be added on top of the existing dataset.sortedRecordIds – whereas in Canvas Apps, you will get a reset set of records containing just the page that you have loaded.

    This is important if your control aims to load all records incrementally or uses some kind of infinite scrolling mechanism.
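    One way to hide this difference from the rest of your control is to accumulate the record ids yourself, so the control always sees an additive list whether the host appends to sortedRecordIds (Model) or resets it to the current page (Canvas). A sketch with the dataset reduced to plain id arrays (the helper is hypothetical):

    ```typescript
    // Merge the ids from the latest updateView into the ids accumulated so far.
    // Works whether the incoming list is cumulative (Model) or page-only (Canvas),
    // because ids we have already seen are skipped.
    function accumulateRecordIds(loadedSoFar: string[], sortedRecordIds: string[]): string[] {
      const seen = new Set(loadedSoFar);
      return loadedSoFar.concat(sortedRecordIds.filter(id => !seen.has(id)));
    }
    ```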

    This is how nextPage/previousPage works in Canvas Apps

    This is how nextPage/previousPage works in Model Apps

    Notice how the totalRecordsLoaded increases with each page for Model, but for Canvas it shows only the number of records on that page.
    You might think that using this approach would be more efficient because it uses the fetchXml paging cookie – but from what I can see, it doesn't seem to be any different to just specifying the page in the fetchXml, and it has the same performance as loadExactPage...


    When you want to show a specific page – jumping over other pages without loading them – you can use ‘loadExactPage’. This method is not currently documented, but it is mentioned by the PCF team in the forums.
    This method will load the records for the specific page and so dataset.sortedRecordIds will only contain that page – this is the same on both Canvas and Model!

    Notice that if you load a specific page, hasNextPage and hasPreviousPage are updated to indicate whether you can move back or forwards. This only helps when using loadExactPage in Model Apps, because when using loadNextPage in Model Apps you will never get hasPreviousPage == true – you are loading all the records incrementally rather than a specific page.

    This is how loadExactPage works in Canvas Apps

    This is how loadExactPage works in Model Apps

    Notice total records loaded shows only the number of records in that page.
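    A pager built on loadExactPage can then be guarded by these flags. A minimal sketch (the Paging interface is a cut-down stand-in for context.parameters.dataset.paging):

    ```typescript
    // Cut-down stand-in for the dataset paging object
    interface Paging {
      hasNextPage: boolean;
      hasPreviousPage: boolean;
      loadExactPage(pageNumber: number): void;
    }

    // Move one page forwards (direction = 1) or back (direction = -1),
    // but only when the host says we can; returns the page we end up on.
    function moveToPage(paging: Paging, currentPage: number, direction: 1 | -1): number {
      const canMove = direction === 1 ? paging.hasNextPage : paging.hasPreviousPage;
      if (!canMove) {
        return currentPage;
      }
      const target = currentPage + direction;
      paging.loadExactPage(target);
      return target;
    }
    ```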


    The totalResultCount property should give you how many records there are in the current dataset – however, in Canvas it only gives you the number of records that have been loaded via the paging methods. If you look at the comparisons above, you’ll see that the Canvas totalResultCount goes up with each page, but in Model it remains the total record count.
    Interestingly, this property is not actually documented – however it’s in the TypeScript definitions.

    The Future

    It’s not clear if we will see a completely unified experience between Canvas and Model with PCF controls – but I’ll update this post if anything changes!

  30. Those of you who know me will also know that I am a massive Fiddler fan for removing the need to deploy each time you change your JavaScript source.

    Here are some of my previous blog posts on Fiddler -

    The PowerApps docs now even include instructions on it

    Developing PCF Canvas Controls

    When developing PCF controls for Canvas Apps, the process is slightly different and includes an extra step.

    1. Add an autoresponder in the format:

    E.g. Resources0Controls0Develop1.PCFTester.bundle.js?sv=

    It should look something like:

    2. Since the scripts are served from a different domain to PowerApps, configure Fiddler to add the Access-Control-Allow-Origin header.

    In Fiddler, press Ctrl-R to open the rules editor.

    Locate the OnBeforeResponse function and add:

    if (oSession.oRequest.headers.Exists("Host") && oSession.oRequest.headers["Host"].EndsWith("")) {
      if (oSession.oResponse.headers.Exists("Access-Control-Allow-Origin")) {
        oSession.oResponse.headers["Access-Control-Allow-Origin"] = "*";
      }
    }

    It should look something like:


    When you add your PCF component to the Canvas App, it should now be loaded from your local file system just like it is with Model-Driven Apps. To refresh the component, you will need to exit the app and re-open it (rather than just refreshing the window, as in Model-Driven Apps).

    Hope this helps,


  31. Back at the end of 2015, Power Apps wasn’t even a thing. My world revolved around Dynamics 365 and the release cadence that was bringing us updates to the platform that were either keeping up with SalesForce or providing greater customizability. Much has changed since then, not least the way that we write rich UI extensions. With this in mind, I have completely re-written my Network View solution to use TypeScript and the Power Apps Component Framework.

    Mobile App Demo

    This version has some notable improvements on the old version:

    • βœ… Shows details of links
    • βœ… Can be embedded inside the actual form (thanks to PCF)

    There are a few more items TODO to bring parity with the old version:

    • πŸ”³ Loading Activities
    • πŸ”³ Showing the users/connection roles for the network
    • πŸ”³ Support for configurable information cards

    The source can be found at 

    I've not released a pre-compiled solution (yet) - if you would like to test it out, please get in touch!



  32. When applying the 2020 release wave 1 you may see a component such as the Dynamics 365 Core Service fail to complete.
    First, you may want to check that you have correctly followed the steps on how to opt-in for 2020 wave 1.

    To determine the issue - navigate to the solution manager in PowerApps and click 'See History'

    This should then show you the failed upgrade component:

    Clicking on the row will give you the details. In my case it was because the solution was blocked due to a previous upgrade being incomplete:

    Solution manifest import: FAILURE: The solution [FieldService] was used in a LayerDesiredOrder clause,
    but it has a pending upgrade.
    Complete the upgrade and try the operation again.

    To resolve this, you will need to navigate to the solution manager and click 'Switch Classic'. Locate the referenced solution that is still pending an upgrade, select it, and then click 'Apply Solution Upgrade'.

    Wait for the upgrade to be applied, then return to the 2020 wave 1 release area in the admin portal, and click 'Retry'

    If you see a subsequent error, you can repeat these steps for the failed solution.

    Hope this helps!

  33. Technology typically leads to polarized opinions. Always has…Vinyl/CD…Betamax/VHS…HD-DVD/Blu-ray… Of course, our minds know that it depends on the detail, but our hearts have preferences based on our experience. This product over that one. This technique over this new one. You like this tool better than theirs because you know and trust it. You do this, don’t you?!

    Imagine you are implementing a new solution for a customer and you are asked to choose between a Flow or a Plugin for a new piece of functionality. If you are a pro-coder, then naturally you will find the Plugin option the most attractive because you trust it – later you might decide it’s over-kill and decide that it can be done using a Flow. If you are a functional consultant who is only too aware of the total cost of ownership of ‘code’ then you’ll try and achieve the functionality with a Flow, but then you might find it becomes too complex and needs a Plugin. You naturally start with a position that you know best. Am I right?!

    We know there are thousands of variables that affect our ultimate decision – different people will end up at different decisions and the ‘side’ you start from might affect the outcome. But one thing is for sure – building software is far from simple!

    The Microsoft Power Platform 'Code or No Code' battle has been bubbling away for at least a year now. It’s an unfortunate mix of sweeping statements about not needing code anymore, which results in passive-aggressive comments from Pro-Coders about how they got you here in the first place.

    Not everyone gets it

    Sara Lagerquist and I did a mock 'fight' at the recent Scottish Summit 2020. We demonstrated the polarised viewpoints in an attempt to show the futility of it all. But not everyone gets it...

    If you’re from the older Model-Driven Apps space, then you’ll be very used to having to make choices between JavaScript or Business Rules, between Workflows or Plugins. But if you’re from the newer ‘Low Code’ Canvas App space, then it’s possible that you don’t see any of this as a problem! Why would you use code when you are told ‘Less Code – More Power’? It’s not even an option – so what’s the big deal? Why would anyone want to argue? But trust me, they do!

    Human nature

    Why is all this happening? Simple, because of human nature. It’s only natural to react to something that threatens our thoughts and ideas with a response that's at best, defensive, or at worst, passive-aggressive. It has nothing to do with technology, or code/no-code. It has everything to do with the ‘tribal’ attitudes that have started to emerge. This problem is no one's fault - but rather an unfortunate side-effect of successful online community building centered around the different parts of the Microsoft Power Platform.

    I'm guilty too!

    I am guilty of this too. I am an enthusiastic evangelist of the Power Platform and its no-code/low-code aspects – but still when I see the weaponizing of hashtags like #LessCodeMorePower - I get defensive. I’ve worked hard my entire professional career to get proficient at code – now someone is saying that solutions have more power with less of me? No way!

    I’m sure you can see my knee-jerk reaction is misguided. Being condescending towards code is not the intention of the hashtag – but my human psyche kicks in telling me “I don’t like it”.

    The secret to letting go

    So here’s the secret - the #LessCodeMorePower mantra is actually nothing to do with us! That’s right – it’s not about YOU or ME. It’s about how Microsoft is positioning their product in the market. It's how they are selling more licenses. Nothing has changed – this journey has been going on for a long time – it’s just the latest leap in abstraction. Technology will always move on and change – and that’s why we love being in this industry. Right?

    Now, let’s take a step back. We all have a shared love for the Microsoft Power Platform. Building software solutions is hard. Picking the most appropriate technology is hard. The right decision today may not even be true tomorrow! 

    How do we move forwards?

    Pro-coders: When you see #LessCodeMorePower social media posts – work at avoiding being defensive – don’t respond by protecting your corner. This isn’t a criticism of you – you are just experiencing the force of the Microsoft marketing machine. Microsoft is not saying you are no longer needed or that code can’t create powerful solutions. The Microsoft Power Platform needs code as much as it needs no-code - and in fact, that is one of its strengths over competing platforms!

    Low-coder/No-coders: Make sure you use #LessCodeMorePower hashtag appropriately. Be considerate of all perspectives – is it really the right use? Use it to promote specific strengths of the Power Platform but not at the expense of making pro-coders defensive. Don’t just say ‘anyone can write apps’ or ‘it’s simple to develop software’ – put these powerful statements in context! You don’t really believe in those overly simplistic ideals without adding at least some caveats! Promote the platform, but not at the expense of your fellow community members.

    The unbreakable oath

    Overall, let’s all be considerate of the whole spectrum of our software development profession. Pro-Coders, Low-Coders, and No-Coders - encouraging one another rather than creating division. Together, let’s unite and make the Power Platform shine.

    Here is the oath that Sara and I took at #SS2020 – join us!

    I do solemnly swear…on Charles Lamanna’s face…
    To love, honor & respect all those who develop solutions on the Microsoft Power Platform.
    To encourage one another through difficult projects.
    To build mutual respect between no-coders, low-coders, and pro-coders.
    Together, promoting quality through collaboration and cooperation.

    @ScottDurow #ProCodeNoCodeUnite

  34. Continuous Integration and Delivery is standard practice these days, but what is often missed is the need for good tests and analysis in your build pipeline. The PowerApps team has been working hard on the Solution Checker over the last year, and it's become an essential part of every PowerApps solution development process. If you have a solution that is going to be put into App Source, you'll need to make sure it passes a special set of rules specifically for App Source solutions.

    This post shows you how to add the Solution Checker to your Build pipeline.

    Step 1 - Application User

    Before you can run the solution checker PowerShell module, you'll need to create an Application User in your Azure Active Directory tenant. There is a great set of instructions in the PowerApps Solution Checker documentation.

    Step 2 - PowerShell Script

    So that our Build Pipeline can run the Solution Checker, we add a PowerShell script to our repo. 

    Note that you'll need to:

    1. Create a secured variable in your pipeline to store the client secret so it can be passed to the script as a parameter.
    2. Update the script for your tenant and application ID.
    3. Update the path to the solution package that you've built in the pipeline. Mine is shown in the script below.

    Your Script should look something like:

    param (
        # Requires the Application User set up in Step 1
        [Parameter(Mandatory = $true)]
        [string]$clientsecret
    )
    $env:TENANTID = "65483ec4-ac1c-4cba-91ca-83d5b0ba6d88"
    $env:APPID = "2fa068dd-7b61-415b-b8b5-c4b5e3d28f61"
    $ErrorActionPreference = "Stop"
    Install-Module Microsoft.PowerApps.Checker.PowerShell -Force -Verbose -Scope CurrentUser
    $rulesets = Get-PowerAppsCheckerRulesets
    $rulesetToUse = $rulesets | Where-Object Name -NE 'AppSource Certification'
    $analyzeResult = Invoke-PowerAppsChecker -Geography UnitedStates -ClientApplicationId "$env:APPID" -TenantId "$env:TENANTID" -Ruleset $rulesetToUse `
        -FileUnderAnalysis "$env:BUILD_SOURCESDIRECTORY\DeploymentPackage\DeploymentPackage\bin\Release\PkgFolder\" `
        -OutputDirectory "$env:BUILD_SOURCESDIRECTORY" `
        -ClientApplicationSecret (ConvertTo-SecureString -AsPlainText -Force -String $clientsecret)
    # Unzip and rename the results
    Expand-Archive -LiteralPath "$($analyzeResult.DownloadedResultFiles.Get(0))" -DestinationPath "$env:BUILD_SOURCESDIRECTORY"
    $extractedFile = $($analyzeResult.DownloadedResultFiles.Get(0))
    $extractedFile = $extractedFile -replace ".zip", ".sarif"
    Rename-Item -Path $extractedFile -NewName "PowerAppsCheckerResults.sarif"
    if ($analyzeResult.IssueSummary.CriticalIssueCount -ne 0 -or $analyzeResult.IssueSummary.HighIssueCount -ne 0) {
        Write-Error -Message "Critical or High issue in PowerApps Checker" -ErrorAction Stop
    }

    You can change the ruleset and add overrides as per the documentation.

    Step 3 - Call and Collect Results in your build pipeline

    I'm assuming that you are using Azure DevOps YAML pipelines. If not, I'd recommend moving to them because they make source control and versioning of your pipelines so much easier.

    I have three tasks for the Solution Checker as follows:

    # PowerAppsChecker
    - task: PowerShell@2
      displayName: Solution Checker
      inputs:
        filePath: 'BuildTools\BuildScripts\SolutionChecker.ps1'
        arguments: '"$(ClientSecret)"'
        errorActionPreference: 'continue'
    - task: CopyFiles@2
      displayName: Collect - Solution Checker Results
      inputs:
        Contents: '**/PowerAppsCheckerResults.sarif'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: PublishBuildArtifacts@1
      displayName: Publish CodeAnalysisLogs
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)/PowerAppsCheckerResults.sarif'
        ArtifactName: 'CodeAnalysisLogs'
        publishLocation: 'Container'

    The first task runs the PowerShell script, and the second and third collect the results so that we can report on them.

    To ensure that the $(ClientSecret) parameter is provided, you need to add a pipeline variable for the same:
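    If you prefer to keep the variable reference in source control, a sketch of one way to do it is below (the variable group name is illustrative; the secret value itself is set in the pipeline UI or a variable group, never in plain YAML):

```yaml
# 'PowerAppsChecker' is a hypothetical variable group containing the
# ClientSecret secret variable referenced by the tasks above.
variables:
  - group: PowerAppsChecker
```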

    Step 4 - Reporting the results

    The Solution Checker outputs the results in a 'Static Analysis Results Interchange Format' (SARIF) which is a standard format. There are various viewers you can use, but I find having the results directly in the pipeline very useful. 
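    Since SARIF is plain JSON, you can also inspect the results yourself. A minimal sketch (the countByLevel helper and the sample log are my own, for illustration; it assumes the standard SARIF shape of runs[].results[] with an optional level on each result):

```typescript
// Count the results in a SARIF log by severity level.
interface SarifResult { level?: string; ruleId?: string; }
interface SarifLog { runs: { results: SarifResult[] }[]; }

function countByLevel(log: SarifLog): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const run of log.runs) {
    for (const result of run.results) {
      // Per the SARIF spec, a missing level defaults to 'warning'
      const level = result.level ?? "warning";
      counts[level] = (counts[level] ?? 0) + 1;
    }
  }
  return counts;
}

// Hypothetical sample log for illustration
const sample: SarifLog = {
  runs: [{ results: [{ level: "error", ruleId: "il-avoid-parallel-plugin" }, {}] }],
};
console.log(countByLevel(sample)); // { error: 1, warning: 1 }
```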

    You will need to install the 'Sarif Viewer Build Tab' extension for Azure DevOps.

    Once you've got this working, it'll scan your build artifacts for a sarif file and show the results!


    So that's it! When you run your pipeline (which I recommend you do every time a new commit is made to the source branch), the solution will be automatically run through the solution checker, and if there are any critical issues, the build will fail.

    If you do find that there are some critical issues that are false positives (which can happen), you can exclude those rules by modifying your script to something like:

    $overrides = New-PowerAppsCheckerRuleLevelOverride -Id 'il-avoid-parallel-plugin' -OverrideLevel Informational
    $analyzeResult = Invoke-PowerAppsChecker -RuleLevelOverrides $overrides <# ...plus the other parameters as before #>

    Hope this helps!


  35. Happy 21st December!

    The chestnuts are roasting, and the snow is falling (somewhere I'm sure). It's that festive time of year again, and with it, a new year is beckoning. We all know that the biggest event of 2020 will be the retiring of the 'classic' user interface in Power Apps and Dynamics 365. To make sure you are ready for this, my gift is an updated version of Smart Buttons that is fully supported on the Unified Interface. It also includes a new smart button 'WebHook' that can be used to call HTTP Triggered Flows.

    What are Smart Buttons?

    Smart Buttons are a feature I introduced into the Ribbon Workbench a while ago to make it easier to add buttons to the Model Driven App Command Bar without needing to create JavaScript Web resources.

    To enable Smart Buttons in your environment, you will need to install the Smart Button Solution and then it will light-up the Smart Buttons area in the Ribbon Workbench. 

    There are 4 Smart Buttons at the moment (but you could easily create your own if you wanted!):

    • Run Workflow: Create a workflow short cut and then optionally run code when it has finished. Run Workflow can be added to Forms or Grids.
    • Run WebHook: Create a button to run a WebHook (such as an HTTP Flow). Run WebHook can be added to Forms or Grids.
    • Run Report: Create a report short-cut button on forms.
    • Quick JS: Add a quick snippet of JavaScript to run on a button without creating separate web resources. Think of this as the 'low code' way of adding Command Buttons!

    Quick JS

    Megan has used this Smart Button before and asked me if it can support the formContext way of accessing attribute values rather than the deprecated Xrm.Page. Well, the good news is that it now can!

    You could add some JavaScript to set a value on the form and then save and close it:
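    For example, a snippet along these lines (a sketch: the field name 'dev1_approved' and its value are made up, and formContext is mocked here so the logic can run standalone; in a real Quick JS Smart Button the formContext is provided for you):

```typescript
// Minimal mock of the parts of formContext this snippet touches -
// in a real Quick JS button, formContext is supplied by the framework.
const saved: string[] = [];
const fieldValues = new Map<string, unknown>();
const formContext = {
  getAttribute: (name: string) => ({
    setValue: (v: unknown) => { fieldValues.set(name, v); },
    getValue: () => fieldValues.get(name),
  }),
  data: { entity: { save: (option: string) => { saved.push(option); } } },
};

// The 'low code' snippet itself: set a value, then save and close the form.
// 'dev1_approved' is a hypothetical field name.
formContext.getAttribute("dev1_approved").setValue(true);
formContext.data.entity.save("saveandclose");

console.log(saved[0]); // saveandclose
```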


    In the Ribbon Workbench this is easy to do:

    Once you've published, you now have a button to run this 'low code' on the form:

    You could use this in countless scenarios where you need to make a small change to the form before saving it, just when a user clicks a button. You could even trigger a workflow or a Flow on the change of the value!

    Run Workflow

    The Run Workflow button has had a makeover too - it now gives much better feedback when running workflows (both sync and async) and you can run some simple JavaScript if there is a problem:

    The Workflow that this is running simply updates a field on the record with the current date:

    Once you've published, this looks like:

    You can see that now the grid is automatically refreshed for you too! This button can also be added to forms or subgrids on forms.

    Run WebHook

    If you have a Flow that is initiated by an HTTP request, you can use this Smart Button to call the Flow on a list of records. Imagine you had a Flow with a 'When a HTTP request is received' trigger. You can copy the HTTP POST URL and provide an input JSON schema so that the Flow receives an id string value of the record it is being run on.
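    The request body schema for the trigger can be as small as a single id property; a sketch (the property name just needs to match the JSON that the Smart Button posts):

```json
{
  "type": "object",
  "properties": {
    "id": { "type": "string" }
  }
}
```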

    As you can see, this Flow simply updates the account record and then returns OK.

    Inside the Ribbon Workbench, you can then add the WebHook smart button:

    Notice the Url is pasted in from the Flow definition. Eventually, once Environment Variables have come out of preview, I will update this to receive an environment variable schema name so that you can vary the URL with different deployments. That said, I also hope that this kind of functionality will become supported natively by the Flow integration with Model Driven Apps so that we can programmatically run a Flow from a Command Button in a fully supported way. Until then, once you've published you'll be able to run the flow on multiple records:

    Again, once the Flow has been run, the grid is refreshed. This button can also be included on sub-grids on forms or the form command bar itself.

    A little bit of DevOps

    When I first wrote the Smart Buttons solution, I set it up in Azure DevOps to automatically build and pack into a solution. This made it so much easier when I came to do this update. Doing DevOps right from the beginning really pays dividends later on! You can head over to GitHub to check out the code which is now written entirely in TypeScript and uses gulp and spkl to do the packing (If you are into that kind of thing!).

    Well, there you have it - hopefully, this will help you with the move to the UCI if you are already using Smart Buttons, and if you are not, then you might find a need for them in your next demo or when needing to quickly create Command Bar shortcuts. If you are upgrading from the old version, it will mostly work with an in-place update, but you will need to add the extra two parameters on the Run Workflow smart button. The easiest way is to remove the old button and re-add it. Oh yes, and the Run Dialog smart button is no longer included because Dialogs are not part of the UCI!

    >> You can grab the updated Smart Button solution from GitHub too <<

    Merry Christmas to one and all! ❀


  36. Yesterday we announced our new product, SalesSpark, the Sales Engagement platform built natively upon the Power Platform 🚀 I've been working on this product for the last few months and have been really impressed with what the Power Apps Component Framework (PCF) can do for Model Driven Power Apps. In the past, the only way to extend Apps was to include custom HTML Web-resources. My open-source project SparkleXrm made this easier by including libraries for building grids and controls that acted like the out of the box controls. With the availability of PCF, the landscape has shifted and so will the direction of SparkleXrm.

    To build SalesSpark we have used the power of the Office UI Fabric, which is built upon React. Just like SparkleXrm, we use the MVVM pattern to create separation between UI rendering logic and the ViewModel logic.

    In this post, I wanted to share just a few features of SalesSpark that I'm really happy with! 😊

    PCF means breaking free from IFRAMEs!

    At the heart of SalesSpark are Sequences - these are a set of steps that act as your 'Virtual Assistant' when engaging with prospects. SalesSpark connects to your mailbox, and sends and replies to emails directly inside Office 365. We had to build a Sequence Designer that allows adding emails using templates. One of the user experience patterns that has always been impossible when using HTML Web-resources was the popup editor. This was because you were never allowed to interact with the DOM outside your IFRAME. Since the PCF team now support the Office UI Fabric, those constraints have gone away, allowing us to create a really cool sequence editor experience:

    PCF allows Drag and Drop!

    These days, everyone expects things to be drag-and-droppable! This, again, has always been a challenge with 'classic' HTML Web-resources. With PCF we were able to create a variety of drag and drop user experiences:

    Not only can you drag and drop the sequence steps, but you can also add attachments to emails. The attachments can be 'traditional' email attachments or cloud download attachments that allow you to monitor who has opened them from your email. Also, notice how the email can be created without saving it; the attachments are then uploaded when you are ready to send or save the email.

    PCF is great for Visualizations

    In addition to the user experience during data entry, PCF is great for introducing visualizations that make sense for the data you are monitoring. With SalesSpark, when you add contacts to a Sequence, you then want to monitor how things are progressing. We made the sequence editor not only allow you to build sequences but also monitor the progress - allowing you to make changes as it runs.

    PCF and the Data Grid!

    I think the most exciting part of PCF for me is that it allows extending the native Power Apps experience rather than replacing it. With HTML Web-resources, once you were there, you had to do everything. Using PCF fields on a form means that you don't have to worry about the record lifecycle or navigation. Adding a PCF control to a view means you get all the command bar, data loading and paging for 'free'.

    The SalesSpark data grid control implements lots of additional features to extend the native data grids. You get infinite scrolling and grouping, as well as a custom filtering experience.


    Chart Filtering

    And of course, because it's a Grid as far as Power Apps is concerned - you can use the Chart filtering - here I am using a Chart to filter the list to show contacts that have no stage set on them so that I can add them to a Sequence:

    I hope you'll agree that PCF unlocks so much potential in Power Apps Model-Driven Apps that we simply couldn't access before!

    Watch this space for some more exciting things to come! 🚀
    Learn more about SalesSpark


    P.S. If you've any questions about the PCF, just head over to the PCF forums where you'll often find me hanging out with other like-minded PCF developers like Tanguy, Andrew, and Natraj - and what's more, the Microsoft PCF product team is always available to answer those really tough questions!

  37. It is wonderful to see so many PCF controls being built by the community.  This post is a call-to-action for all PCF builders - it's time to make sure your PCF component handles read-only and field-level security! The good news is that it's really easy to do. There isn't much in the documentation about this subject, so I hope this will be of help.

    Read-only or Masked?

    In your index.ts, you first need to determine if your control should be read-only or masked. It will be read-only if the whole form is read-only or the control is marked as read-only in the form properties. It can also be read-only if Field Level Security is enabled. Masked fields are where the user doesn't have access to read the field due to the Field Security Profile settings. Typically masked fields are shown as *****.

    // If the form is disabled because it is inactive or the user doesn't have access,
    // isControlDisabled is set to true
    let readOnly = this._context.mode.isControlDisabled;
    // When a field has FLS enabled, the security property on the attribute parameter is set
    // ('value' here is the name of this control's bound property)
    let masked = false;
    const security = this._context.parameters.value.security;
    if (security) {
      readOnly = readOnly || !security.editable;
      masked = !security.readable;
    }
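    Because this logic is a pure function of the context, it is easy to factor out and unit test. A sketch (getFieldState and SecurityValues are my own names, not part of the PCF API):

```typescript
// The security values exposed on a bound attribute parameter when
// Field Level Security applies (undefined when FLS is not enabled).
interface SecurityValues { editable: boolean; readable: boolean; }
interface FieldState { readOnly: boolean; masked: boolean; }

// Derive the display state from the control-disabled flag and the FLS settings.
function getFieldState(isControlDisabled: boolean, security?: SecurityValues): FieldState {
  let readOnly = isControlDisabled;
  let masked = false;
  if (security) {
    readOnly = readOnly || !security.editable;
    masked = !security.readable;
  }
  return { readOnly, masked };
}

// A field the user cannot read is masked as well as read-only
console.log(getFieldState(false, { editable: false, readable: false }));
// An inactive record makes every field read-only, but never masked
console.log(getFieldState(true, undefined));
```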

    Pass the flags to your control

    I use React for my control development and so this makes it really easy to pass the details into the component. You'll then need to ensure your control is disabled or masked when instructed to.

      React.createElement(PicklistControl, {
        value: this._selectedValue,
        options: options,
        readonly: readOnly,
        masked: masked,
        onChange: this.onChange,
      });


    Testing the result!

    Here I have a simple picklist PCF control. It is associated with two Optionset fields. One normal, and one with Field Level Security:

    The 'Secured Optionset' field is masked because the associated Field Security Profile has 'No' on the 'Read' setting. This causes the readable property to be false.

    If we toggle this to 'Yes' the field will be readable, but not editable because 'Update' is set to 'No':

    If we then set Update to 'Yes' we can then edit both fields:

    Finally, let's deactivate the whole record. This will then show both fields as read-only - irrespective of the Field Security!

    You can see that the record is read-only by the banner at the top of the record:

    Call to action!

    If you have any PCF controls out there, it's time to re-visit them and check they handle read-only and Field Level Security settings.


  38. The road from Classic Workflows to Flows has been a long one. Microsoft has been committed to bringing Flows to parity with Classic Workflows. We are almost there, but this is only half the story, because there is so much more you can do with Flows compared to Classic Workflows. Transaction support is one of those features that Synchronous Workflows inherently supported because they ran inside the execution pipeline, but Asynchronous Workflows left you to tidy up manually if something went wrong halfway through a run. This often led to using custom Actions to perform a set of operations inside a transaction, but wouldn't it be cool if we didn't need to do this? Read on!

    Note: Even though the product that was formerly known as Microsoft Flow is now called Power Automate, Flows are still called Flows!

    So what's a transaction?

    At the risk of teaching you to suck eggs: transactions, simply put, are a way of executing multiple operations where, if one fails, they all 'roll back' as if they never happened. The 'changeset' of operations is said to be 'atomic', which means that until the transaction is 'committed', no one else can see the records that are created/updated/deleted inside the transaction scope.

    Imagine a scenario, where a system needs to transfer a booking from one flight to another where both flights are in very high demand:

    1. ✅ The system cancels the customer's current booking
    2. ❌ The system books the new flight, but this fails because the flight is now full
    3. ❌ The system tries to re-book the previous canceled flight, but someone has already taken the seat
    4. 😒 The customer is left with NO flight 

    What about a different order of events where the system goes offline halfway through:

    1. ✅ The system books the new flight
    2. ❌ The system cancels the previous flight, but this fails because the system is unavailable
    3. ❌ The system tries to cancel the flight just booked in step 1 because the customer now has two flights, this fails because the system is unavailable
    4. 😱 The customer now has TWO flights!

    In both of these situations, without transaction support, we are left having to perform complex 'manual transaction compensation'.  The topic of transactions is fairly complex, and there are lots of other topics such as locking and distributed systems, but simply put, transactions make database consistency easier to manage!
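    To make the difference concrete, here is a sketch of the rollback behaviour that a transaction gives you (the flight-booking objects are entirely made up, and a real CDS changeset performs the rollback server-side - this just models the idea):

```typescript
type Operation = { apply: () => void; undo: () => void };

// Run all operations as one unit: if any fails, undo the ones
// already applied, in reverse order, so nothing is left half-done.
function runChangeset(ops: Operation[]): boolean {
  const applied: Operation[] = [];
  try {
    for (const op of ops) {
      op.apply();
      applied.push(op);
    }
    return true; // committed
  } catch {
    for (const op of applied.reverse()) op.undo(); // roll back
    return false;
  }
}

// Hypothetical flight transfer: cancel the old booking, book the new flight.
const seats = { oldFlight: 1, newFlight: 0 }; // the new flight is already full
const committed = runChangeset([
  { apply: () => { seats.oldFlight--; }, undo: () => { seats.oldFlight++; } },
  {
    apply: () => {
      if (seats.newFlight < 1) throw new Error("Flight full");
      seats.newFlight--;
    },
    undo: () => { seats.newFlight++; },
  },
]);
console.log(committed, seats.oldFlight); // false 1 - the cancellation was rolled back
```

Because the booking step fails, the cancellation is undone and the customer keeps their original flight - exactly the consistency the manual compensation scenarios above struggle to guarantee.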

    How do Flows now support CDS transactions?

    Transactions are called 'changesets' in a Flow. This is a feature that was announced as part of the Wave 2 changes - and it's just landed!

    To use changesets, you will need to be using the CDS Current Environment Connector: