" MicromOne

Pagine

Essential LaTeX Code Snippets and Where to Use Them


LaTeX is one of the most powerful tools for creating professional documents, especially in academic, scientific, and technical fields. Beginners often struggle because LaTeX uses code instead of a graphical editor—but once you understand the main commands and where they are used, it becomes a highly efficient system. This article provides practical LaTeX code examples and explains exactly when and why to use each one.

1. Basic Document Structure

Every LaTeX document starts with a declaration of the document class and required packages. This is the core “skeleton” of your file.

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{lipsum}

\begin{document}

\section{Introduction}
\lipsum[1]

\end{document}

Where to use it:
Use this basic structure for articles, reports, papers, essays, and simple documents. It’s the foundation for almost every LaTeX project.

2. Using Document Classes for Specific Projects

LaTeX has different classes depending on what type of document you want to create.

Article

For papers, homework, essays, and standard documents.

\documentclass{article}

Report

For multi-chapter documents, theses, and dissertations.

\documentclass{report}

Book

For full-length books with front matter and chapters.

\documentclass{book}

Beamer

For presentations (similar to PowerPoint but with LaTeX quality).

\documentclass{beamer}

Where to use them:
Choose the class depending on the type of project—each class formats chapters, sections, headers, and page layout differently.

3. Adding Packages (Extensions)

Packages extend the power of LaTeX. Some of the most important:

\usepackage{graphicx}    % for images
\usepackage{amsmath}     % for advanced math
\usepackage{booktabs}    % for professional tables
\usepackage{hyperref}    % for clickable links

Where to use them:
Whenever you need extra functionality. For example, use graphicx to insert images or amsmath for equations.

4. Inserting Images

One of the most common tasks.

\usepackage{graphicx}

\begin{figure}[h]
  \centering
  \includegraphics[width=0.6\textwidth]{example-image}
  \caption{Sample Image}
\end{figure}

Where to use it:
Scientific papers, presentations, thesis documents, reports—anywhere you need figures.

5. Creating Tables

LaTeX is known for producing beautiful tables, especially with booktabs.

\begin{table}[h]
\centering
\begin{tabular}{lcr}
\toprule
Left & Center & Right \\
\midrule
A & B & C \\
D & E & F \\
\bottomrule
\end{tabular}
\caption{Simple Table Example}
\end{table}

Where to use it:
Use this in scientific papers, data summaries, reports, and presentations where table quality matters.

6. Writing Mathematical Equations

LaTeX is the standard for mathematical writing.

Inline math:

Einstein's equation $E = mc^2$ is well known.

Displayed equation:

\[
a^2 + b^2 = c^2
\]

Advanced equation:

\begin{equation}
\int_0^\infty e^{-x^2} dx = \frac{\sqrt{\pi}}{2}
\end{equation}

Where to use it:
Mathematics, physics, engineering, economics, statistics—any discipline involving formulas.

7. Lists (Itemized and Numbered)

Lists are fundamental for structuring content.

Bullet list

\begin{itemize}
\item First item
\item Second item
\end{itemize}

Numbered list

\begin{enumerate}
\item Step one
\item Step two
\end{enumerate}

Where to use it:
Use lists in lecture notes, instructions, articles, and reports to structure ideas clearly.

8. TikZ Graphics (Diagrams and Drawings)

TikZ is a powerful tool to create custom diagrams.

\usepackage{tikz}

\begin{tikzpicture}
  \draw (0,0) -- (2,2);
  \node at (1,1) {Hello, TikZ};
\end{tikzpicture}

Where to use it:
Flowcharts, plots, geometric drawings, diagrams, illustrations for scientific or educational material.

9. Bibliography and Citations

LaTeX manages references beautifully.

\usepackage{biblatex}
\addbibresource{references.bib}

According to \cite{einstein1905}, the theory of relativity...

\printbibliography

Where to use it:
Research papers, theses, academic essays, scientific articles.

10. Hyperlinks

Add links and clickable references.

\usepackage{hyperref}
\href{https://example.com}{Click here}

Where to use it:
Digital documents, technical manuals, online publications.

LaTeX code may look intimidating at first, but every command has a specific, practical purpose. Once you understand what each snippet does and when to use it, LaTeX becomes a powerful ally for producing highly professional documents. Whether you are creating a thesis, a scientific article, a presentation, or complex diagrams, these examples give you a solid foundation to start building your projects effectively.

Automating Customer Creation in Dynamics 365 Using Playwright and MCP Server

Automating repetitive workflows in Microsoft Dynamics 365 can significantly improve efficiency and reduce human errors. One frequent scenario is creating a new customer and verifying that specific fields, such as the Active Fidelity Card, are automatically populated after saving. In this article, we’ll explore how to automate this entire process using Playwright with the MCP Server.

Objective

The goal of this automated test is to:

  • Navigate to the Dynamics 365 environment

  • Sign in using secure environment-based credentials

  • Create a new customer by filling all required fields across multiple tabs (except Active Fidelity Card)

  • Save the record and confirm the data is properly stored

  • Verify that the Active Fidelity Card field is populated after saving

If the field remains empty, the test should fail.

Step-by-Step Implementation

Navigate to Dynamics 365

Access your Dynamics 365 instance using a URL similar to:

https://your-dynamics-instance.crm.dynamics.com/main.aspx?appid=<your-app-id>

Sign In Securely

Store your credentials in environment variables to avoid exposing sensitive data:

export CRMUSER="your-username"
export CRMPASS="your-password"

Create a New Customer

Fill in all required customer fields, such as:

  • First Name

  • Last Name

  • Email

  • Phone

  • Address

You can skip the Active Fidelity Card field, as it will be validated after saving.

Save the Record

Once all required data is completed, save the record and wait for the page to refresh.

Verify the Active Fidelity Card

After saving, check that the Active Fidelity Card field includes a value.
If it is empty, the automation should mark the test as failed.

Expected Result

The Active Fidelity Card field must contain a value after creating a new customer.
If it remains empty, the test is considered failed.

Sample Playwright Script

Here is a simplified JavaScript example using Playwright:

import { test, expect } from '@playwright/test';

test('Verify Active Fidelity Card after creating customer', async ({ page }) => {
  await page.goto('https://your-dynamics-instance.crm.dynamics.com/main.aspx?appid=<your-app-id>');

  // Login using environment variables
  await page.fill('input[type="email"]', process.env.CRMUSER);
  await page.click('input[type="submit"]');
  await page.fill('input[type="password"]', process.env.CRMPASS);
  await page.click('input[type="submit"]');
  await page.waitForLoadState('networkidle');

  // Create new customer
  await page.click('button:has-text("New Customer")');
  await page.fill('#firstname', 'TestName');
  await page.fill('#lastname', 'TestSurname');
  await page.fill('#email', 'test@example.com');
  await page.fill('#phone', '123456789');
  await page.fill('#address', 'Via Test 123');

  // Save
  await page.click('button:has-text("Save")');
  await page.waitForTimeout(3000);

  // Verify Active Fidelity Card
  const fidelityCardValue = await page.textContent('#activeFidelityCard');
  expect(fidelityCardValue && fidelityCardValue.trim().length > 0).toBeTruthy();
});

Best Practices

  • Use environment variables or secure vaults for credentials

  • Add error handling and screenshot capture for easier debugging

  • Adjust your DOM selectors to match your specific Dynamics 365 layout

  • Consider additional steps if MFA (multi-factor authentication) is enabled


Power Platform and Agentic Development: A New Era Unveiled at PPCC 2025

Microsoft is reshaping the Power Platform and accelerating the rise of Agentic Development. Just hours after PPCC 2025 wrapped, the community was buzzing again: Microsoft quietly introduced a brand-new portal—now in Preview—that has already sparked intense discussion across LinkedIn and within developer circles.

A Portal That Redefines the Experience

This new portal marks a decisive break from the traditional Plan Designer. While previous tools centered around generating Model-Driven Apps, the new experience shifts toward creating Code Apps. For developers, this means more control, deeper customization, and the freedom to extend solutions far beyond the boundaries of low-code design.

But the real innovation lies beneath the surface.

A Multi-Agent Workflow Built for the Future

At the heart of the portal is a coordinated system of intelligent agents, each responsible for a critical part of the development lifecycle. These agents work in parallel, continuously exchanging context and refining output to produce high-quality, production-ready solutions.

  • Requirements Agent – Interprets business goals, user stories, and functional needs.

  • Data Agent – Designs, structures, and optimizes the data layer.

  • Code Agent – Generates application logic and ensures maintainable architecture.

  • Solution Agent – Orchestrates components and packages everything into a unified solution.

This agentic workflow dramatically accelerates development, minimizes human error, and ensures that everything—from requirements to deployment—remains cohesive and aligned.

A Modern Design Philosophy

Another subtle but important detail is Microsoft’s decision to use @radix-ui instead of Fluent UI for the new portal’s interface. This signals a shift toward modern, headless, accessible UI primitives that can be styled flexibly and adapted to diverse design needs. It’s a move that resonates particularly well with developers who value clean, customizable, and scalable interfaces.

Why Developers Should Pay Attention

The new portal is more than a visual refresh—it represents a transformative step for how solutions are built on the Power Platform. Developers can expect tangible benefits across the entire development lifecycle:

  • Faster Development Cycles: Intelligent agents automate repetitive tasks, enabling teams to deliver solutions in a fraction of the time.

  • Enhanced Flexibility: Code Apps unlock advanced logic, rich integrations, and the freedom to work with external services.

  • Lower Cognitive Load: By offloading boilerplate tasks to agents, developers can focus on architecture, creativity, and innovation.

  • Improved Collaboration: Clearly defined agent roles promote better alignment between business stakeholders, developers, and architects.

  • Future-Proof Skills: As AI-assisted development becomes standard, mastering these tools positions developers at the forefront of the industry.

Real-World Scenarios Where This Shines

This new portal isn’t just impressive—it’s practical. Here’s how different industries and teams can benefit.

1. Rapid Prototyping for Startups
Startups often operate under extreme time pressure. With agentic workflows, they can define requirements quickly, generate functional prototypes in moments, and adapt them just as fast based on customer or investor feedback.

2. Enterprise Workflow Automation
Large organizations can streamline complex, multi-step processes with ease.
Examples include:

  • HR Onboarding: The Requirements Agent maps the flow, the Data Agent structures employee records, and the Code Agent automates approvals and notifications.

  • Financial Compliance: Automated reporting, validation logic, and dashboards generated with minimal manual intervention.

3. Industry-Specific Solutions
Specialized industries can build tailored applications faster than ever.

  • Healthcare: Create secure patient portals, automate scheduling, and manage billing workflows.

  • Retail: Deploy inventory apps connected to ERP systems, generate AI-driven personalized recommendations, and unify customer experiences across digital channels.

4. Empowering Citizen Developers
Perhaps one of the most exciting aspects is what this means for non-technical users. They can express business needs in natural language, rely on coordinated agents to generate the underlying logic, and deliver applications without depending on lengthy IT queues.

A Paradigm Shift in How We Build

The new portal represents more than a feature update—it signals a profound evolution in how digital solutions will be conceived, built, and maintained. With agentic workflows, code generation, modern UI principles, and deep integration within the Power Platform ecosystem, Microsoft is paving the way for a new era of development.

This is not simply an upgrade. It’s a reimagining of the entire development lifecycle—smarter, faster, more intuitive, and ready for the AI-first world that lies ahead.

Creating NumPy Arrays Effectively for AI and Machine Learning

NumPy offers a rich set of tools to create arrays quickly, efficiently, and in just one line of code. These capabilities are fundamental not only for data manipulation but also for building machine learning models, preparing datasets, and performing numerical computations.

Below is an expanded and improved guide that explains how NumPy creates arrays and why each technique matters in ML.

Arrays Filled with Zeros, Ones, or Constants

NumPy makes it extremely easy to generate arrays filled with predictable values:

Arrays of zeros
np.zeros(shape) creates arrays initialized to zero.
Useful when you need placeholder matrices or want to reset values during preprocessing.

Arrays of ones
np.ones(shape) creates arrays full of ones.
These can be used when building special matrices, bias vectors, or for debugging.

Arrays filled with a constant
np.full(shape, value) returns an array filled with any number you choose.
Helpful when creating masks, padding values, or constant-weight templates.

All these functions allow you to choose the data type with the dtype argument.
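
A minimal sketch of all three constructors (the shapes, values, and dtypes here are arbitrary):

import numpy as np

zeros = np.zeros((2, 3))            # 2x3 matrix of 0.0 (float64 by default)
ones = np.ones(4, dtype=np.int32)   # length-4 vector of 1s with an explicit dtype
padding = np.full((2, 2), -1.0)     # 2x2 matrix filled with the constant -1.0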

Identity and Diagonal Matrices

Linear algebra concepts like identity and diagonal matrices appear often in ML:

Identity matrix — np.eye(N)
Creates an N×N matrix with 1s on the diagonal.
This type of matrix is used in:

  • regularization (adding λI to control overfitting)

  • gradient-based optimization steps

  • matrix decomposition tasks

Diagonal matrix — np.diag(values)
Places specified values along the main diagonal.
Useful for scaling features, constructing transformation matrices, and representing variance in covariance matrices.
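
A quick sketch of both, including the λI regularization pattern mentioned above (the matrix A and the λ value are placeholders):

import numpy as np

I = np.eye(3)                  # 3x3 identity matrix
D = np.diag([1.0, 2.0, 3.0])   # 1, 2, 3 along the main diagonal

lam = 0.1                      # placeholder regularization strength
A = np.ones((3, 3))            # placeholder square matrix
A_reg = A + lam * np.eye(3)    # the "add lambda * I" pattern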

Generating Sequences with np.arange()

np.arange() generates evenly spaced values:

  • np.arange(stop) → values from 0 to stop−1

  • np.arange(start, stop) → values from start to stop−1

  • np.arange(start, stop, step) → custom step size

You’ll often use this to create:

  • index sequences

  • training steps or iteration counters

  • time axes for simulations

However, when working with floating-point steps, np.arange() may produce small precision errors.
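
For example:

import numpy as np

print(np.arange(5))           # [0 1 2 3 4]
print(np.arange(2, 7))        # [2 3 4 5 6]
print(np.arange(0, 1, 0.25))  # [0.   0.25 0.5  0.75] -- float steps can accumulate rounding error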

Generating Evenly Spaced Values with np.linspace()

np.linspace(start, stop, num) returns a specified number of evenly spaced points between two values.

This is extremely useful because:

  • it avoids floating-point precision issues

  • it produces clean, evenly spaced data

  • it includes both endpoints by default (unless endpoint=False)

Common applications include:

  • creating high-resolution curves for visualization

  • generating synthetic continuous feature values

  • preparing sampling grids for interpolation
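
For comparison with np.arange(), a short example showing the endpoint behavior:

import numpy as np

print(np.linspace(0, 1, 5))                  # [0.   0.25 0.5  0.75 1.  ] -- both endpoints included
print(np.linspace(0, 1, 5, endpoint=False))  # [0.  0.2 0.4 0.6 0.8]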

Reshaping Arrays with reshape()

reshape() lets you change the structure of an array without modifying its data.
It is essential whenever your data must match the input shape of a model.

You can reshape in two ways:

Function form
np.reshape(array, new_shape)

Method form
array.reshape(new_shape)

In machine learning, reshaping is used constantly:

  • converting 1D sequences into matrices

  • flattening images into vectors before feeding them into models

  • creating batches of data

  • rearranging tensors for CNNs or RNNs

The only rule is that the number of elements must remain unchanged.
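
Both forms in action, in a minimal sketch (the shapes are arbitrary):

import numpy as np

a = np.arange(12)          # a 1D array with 12 elements
m = np.reshape(a, (3, 4))  # function form: 3x4 matrix, same 12 elements
flat = m.reshape(-1)       # method form; -1 lets NumPy infer the dimension

# m.reshape(5, 3) would raise a ValueError: 12 elements cannot fill 15 slots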

Creating Random Arrays

Randomness is a key part of machine learning, especially when generating:

  • training samples

  • weight initialization

  • stochastic operations

NumPy provides several ways to generate random data:

Random floats (0 to 1)
np.random.random(shape)
Often used to initialize weights or simulate noise.

Random integers
np.random.randint(start, stop, size)
Useful for categorical data, random indexing, or creating random labels.

Random numbers from a normal distribution
np.random.normal(mean, std, size)
This is particularly important because many ML algorithms assume parameters follow a normal distribution.

Weight initialization in neural networks frequently uses small random values drawn from a normal distribution to help the model converge properly.
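
A short sketch of all three generators (the seed and shapes are arbitrary; seeding only makes the example reproducible):

import numpy as np

np.random.seed(0)                               # reproducible example
noise = np.random.random((2, 3))                # uniform floats in [0, 1)
labels = np.random.randint(0, 3, size=10)       # random integers 0, 1, or 2
weights = np.random.normal(0.0, 0.01, (4, 4))   # small values centered on 0, as in weight initialization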

How These Arrays Are Used in Machine Learning

Initializing Neural Network Weights

Random values from normal or uniform distributions create the initial weights of neural networks. Proper initialization affects learning speed and training stability.

Creating Synthetic or Dummy Datasets

Random arrays allow quick creation of artificial data used for testing algorithms, debugging, or experimenting with preprocessing techniques.

Generating Feature Grids

np.linspace() and np.arange() help create:

  • grids for contour plots

  • time series

  • numerical simulations

  • sampling points for evaluating models

Matrix Operations and Regularization

Identity and diagonal matrices appear in:

  • Ridge Regression (adding λI)

  • covariance matrices

  • linear transformations

  • PCA and eigendecomposition
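
As a concrete sketch of the λI pattern from the first bullet, the closed-form ridge regression solution w = (XᵀX + λI)⁻¹Xᵀy can be written directly with these constructors (X, y, and λ are random placeholders):

import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))   # 100 samples, 5 features (placeholder data)
y = rng.normal(size=100)        # placeholder targets

lam = 0.5
w = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)  # ridge weights
print(w.shape)  # (5,)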

Shaping Data for Models

reshape() is essential in preparing data for algorithms:

  • flattening images

  • building 3D or 4D tensors for CNNs

  • splitting time-series into windows

  • creating batches dynamically

Handling Missing Data in Pandas: A Complete Guide

Before training any machine learning model, one of the most important steps is preparing the data. Real-world datasets often contain errors, outliers, or inconsistent values, but the most common issue is missing data. In Pandas, missing values are usually represented as NaN. In this article, you will learn how to detect, count, remove, and replace NaN values using practical examples.

Creating a DataFrame with Missing Values

To begin, let’s create a simple DataFrame that contains some NaN values.

We start by defining a list of dictionaries and converting it into a DataFrame:

import pandas as pd

items2 = [
    {'bikes': 20, 'pants': 30, 'watches': 35, 'shirts': 15, 'shoes': 8, 'suits': 45},
    {'watches': 10, 'glasses': 50, 'bikes': 15, 'pants': 5, 'shirts': 2, 'shoes': 5, 'suits': 7},
    {'bikes': 20, 'pants': 30, 'watches': 35, 'glasses': 4, 'shoes': 10}
]

store_items = pd.DataFrame(items2, index=['store 1', 'store 2', 'store 3'])
store_items
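
Printed, the DataFrame looks roughly like this (columns appear in the order their keys were first seen, and columns containing NaN are upcast to float):

         bikes  pants  watches  shirts  shoes  suits  glasses
store 1     20     30       35    15.0      8   45.0      NaN
store 2     15      5       10     2.0      5    7.0     50.0
store 3     20     30       35     NaN     10    NaN      4.0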

The resulting DataFrame contains three NaN values.

Detecting Missing Values

When working with large datasets, visually identifying NaN values is not practical. Pandas provides useful methods to detect and count them.

Counting Total NaN Values

You can combine the isnull() method with sum() to count all NaN values:

store_items.isnull().sum().sum()

The isnull() method returns a Boolean DataFrame where True marks missing values. Since True is treated as 1, summing twice gives the total number of NaN values.

Checking Which Values Are Missing

store_items.isnull()

This shows True wherever the DataFrame contains NaN.

Counting Non-Missing Values

If you want the opposite, use the count() method:

store_items.count()

This returns the number of non-NaN values in each column.

Removing NaN Values

Once you detect missing data, you can choose to eliminate it. Pandas allows you to drop rows or columns that contain NaN values.

Dropping Rows with NaN

store_items.dropna(axis=0)

This removes any row that has at least one NaN.

Dropping Columns with NaN

store_items.dropna(axis=1)

This removes any column containing NaN.

By default, dropna() does not change the original DataFrame unless you add inplace=True.

Replacing NaN Values

Instead of deleting data, a common approach is replacing NaN values. Pandas provides several useful methods for this.

Replacing NaN with a Fixed Value

You can fill all missing values with zero:

store_items.fillna(0)

Forward Fill (Using Previous Values)

Forward filling replaces a missing value with the most recent non-missing value.

Down a Column (axis=0)

store_items.ffill(axis=0)

Values are filled from the previous row in the same column.

Across a Row (axis=1)

store_items.ffill(axis=1)

This method fills missing values using the previous value in the same row.

Backward Fill (Using Next Values)

Backward filling uses the next available non-missing value.

Down a Column

store_items.bfill(axis=0)

Across a Row

store_items.bfill(axis=1)

Like the previous methods, forward and backward filling do not modify the original DataFrame unless you set inplace=True.

Interpolating Missing Values

Interpolation estimates missing values by using nearby data points. Linear interpolation is commonly used.

Interpolating Down a Column

store_items.interpolate(method='linear', axis=0)

Interpolating Across a Row

store_items.interpolate(method='linear', axis=1)


Understanding Pandas Series in Python

 

A Pandas Series is a one-dimensional, array-like data structure that can store many different data types, including integers, floats, and strings. What makes a Series particularly useful is its ability to associate each element with a custom index label. This allows you to access data in a more intuitive and descriptive way compared to traditional arrays.

How Pandas Series Differ from NumPy Arrays

Although Pandas is built on top of NumPy, a Pandas Series is more flexible than a NumPy ndarray. One major difference is that each element in a Pandas Series can have its own index label, which you can name freely. Instead of relying on numerical positions, you can refer to items by meaningful names.

Another important difference is that Pandas Series can store mixed data types. A NumPy array typically contains elements of a single data type, while a Series can hold integers, strings, or even Python objects all at once.

Creating a Pandas Series

To begin using Pandas, it is common practice to import it using the alias pd. You can create a Series by using the command pd.Series(data, index), where data contains your values and index contains the labels.

Example: Creating a grocery list as a Series.

import pandas as pd

groceries = pd.Series(data=[30, 6, 'Yes', 'No'],
                      index=['eggs', 'apples', 'milk', 'bread'])

print(groceries)
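
Running this prints (roughly):

eggs       30
apples      6
milk      Yes
bread      No
dtype: object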

This output shows the index labels on the left and the corresponding values on the right. Notice that the Series contains both numbers and strings, demonstrating its ability to handle multiple data types.

Useful Attributes of a Pandas Series

Just like NumPy arrays, Pandas Series include several helpful attributes that provide quick information about the data.

print('Groceries has shape:', groceries.shape)
print('Groceries has dimension:', groceries.ndim)
print('Groceries has a total of', groceries.size, 'elements')

These attributes tell you how many elements the Series contains and confirm that it is a one-dimensional structure.

You can also access the values and index labels separately:

print('The data in Groceries is:', groceries.values)
print('The index of Groceries is:', groceries.index)

This is useful when working with large datasets where index labels are not immediately visible.

Checking for Index Labels

If you are not sure whether a specific index label exists in a Series, you can use the in keyword:

print('Is bananas an index label in Groceries:', 'bananas' in groceries)
print('Is bread an index label in Groceries:', 'bread' in groceries)

This quick check helps you avoid errors when accessing elements by label.

A Pandas Series is a powerful and flexible data structure that combines the simplicity of one-dimensional arrays with the added benefit of custom index labels. Whether you are managing small lists or large datasets, understanding how to create and interact with Series is an essential step in learning data analysis with Python.

Combining Arrays in JavaScript: Different Options Explained


When working with arrays in JavaScript, it’s common to need to merge or extend them with new values. Modern JavaScript provides several ways to accomplish this, each with its own advantages depending on readability, mutability, and style preferences. The following examples illustrate different approaches using an existing array named sharedValueNotAllowedBL and another array called businessLinesToBeRemove.

One of the most popular techniques is the spread operator. By writing:

sharedValueNotAllowedBL = [
  ...sharedValueNotAllowedBL,
  ...businessLinesToBeRemove
];

you create a new array that includes all elements from both arrays. This approach is concise, readable, and avoids mutating the original arrays directly.

Another option is to nest one of the arrays inside the spread expression:

[...sharedValueNotAllowedBL, businessLinesToBeRemove]

In this case, the second array is inserted as a single element rather than being expanded. The result is an array whose last item is itself an array, which may or may not be the intended structure. This approach is useful only when you specifically need to keep the second array intact as a separate element.

A more traditional method for combining arrays is the concat function:

sharedValueNotAllowedBL.concat(businessLinesToBeRemove)

The concat method returns a new array containing the elements of the first array followed by those of the second. Like the spread operator, it does not modify the original arrays. This method is still widely used and may feel more explicit, especially for developers who prefer a functional style or who work in environments where spread syntax is less common.

All of these techniques can be valid depending on your requirements. The spread operator offers modern, flexible syntax and is ideal for cloning and merging. The nested spread example is helpful when you need to preserve an array as a single item. The concat method remains a clear and reliable choice for building new arrays without mutation. Understanding these options allows you to choose the most appropriate pattern for maintaining clean, predictable, and expressive JavaScript code.

Valves vs Transistors vs MOSFETs vs Chips – The Evolution of Modern Electronics

Electronics have come a long way since the early days of glowing vacuum tubes. From the first bulky amplifiers to the ultra-compact silicon chips that power today’s smartphones, each generation of technology has changed the way we build and use electronic devices. Let’s explore the main milestones: Valves, Transistors, MOSFETs, and Chips (Integrated Circuits) — and see how they compare.

1. Vacuum Tubes (Valves): The Pioneers

Before the invention of the transistor, electronic circuits relied on vacuum tubes or valves to amplify or switch signals. These glass devices controlled the flow of electrons between electrodes in a vacuum.

Common Uses

  • Early radios and televisions
  • The first computers (like ENIAC)
  • High-fidelity audio amplifiers

Pros

  • Handle very high voltages
  • Produce a warm, pleasant sound
  • Resistant to radiation and interference

Cons

  • Large and fragile
  • Consume lots of power
  • Limited lifespan

2. Transistors: The Miniature Revolution

The invention of the transistor in 1947 marked the beginning of the modern electronic age. Made from semiconductor materials, transistors perform the same functions as valves — amplification and switching — but in a much smaller and more efficient package.

Common Uses

  • Amplifiers
  • Radios and TVs
  • Logic circuits and early computers

Pros

  • Compact and durable
  • Low power consumption
  • Cheap to produce

Cons

  • Limited voltage handling
  • Noisier in analog circuits

3. MOSFETs: Power and Precision

The MOSFET (Metal-Oxide-Semiconductor Field-Effect Transistor) became the foundation of all modern digital electronics. Instead of using current to control current, a MOSFET uses a voltage on the gate — making it extremely efficient.

Common Uses

  • Power supplies and converters
  • Logic gates in CPUs and GPUs
  • Audio and motor controllers

Pros

  • Very low power consumption
  • High switching speed
  • Ideal for integration in microchips

Cons

  • Sensitive to static discharge
  • Can overheat if not cooled

4. Integrated Circuits (Chips): Billions of Transistors

An Integrated Circuit (IC), often called a chip, combines millions or billions of MOSFETs on a single silicon wafer. This innovation made computers, smartphones, and all modern electronic devices possible.

Common Uses

  • CPUs, GPUs, and memory
  • Microcontrollers and sensors
  • Digital and analog signal processing

Pros

  • Extremely compact and powerful
  • Highly efficient and reliable
  • Mass production reduces cost

Cons

  • Cannot be repaired individually
  • Sensitive to heat and static electricity

Summary: The Timeline of Innovation

Technology | Era | Size | Efficiency | Speed | Typical Use
Vacuum Tube | 1920–1950 | Large | Low | Slow | Audio, radios, early computers
Transistor (BJT) | 1950–Present | Small | Medium | Good | Amplifiers, logic circuits
MOSFET | 1960–Present | Tiny | High | Very High | Digital electronics, power control
Integrated Circuit (Chip) | 1970–Present | Micro/Nano | Ultra High | Extreme | All modern electronics

From the glowing vacuum tubes of the past to the microscopic transistors inside today’s chips, every step in this evolution made electronics smaller, faster, and more efficient. Without these innovations, there would be no smartphones, computers, or smart devices. It’s fascinating to realize that the same principles that powered a 1940s radio now run inside a billion-transistor processor on your desk.

Comparing Collections: JavaScript vs Python

 

When working with data in programming, both JavaScript and Python provide powerful collection types for organizing, storing, and manipulating information. However, while they often serve similar purposes, their behavior and syntax differ in key ways. Let’s explore how each language handles common collection types like maps, objects, sets, and tuples.

JavaScript Collections

In JavaScript, collections come in several forms, each designed for different use cases.

1. Map
A Map is a collection of key-value pairs, where keys can be of any type, including objects, arrays, or functions. Unlike plain objects, a Map maintains insertion order and offers better performance when frequent additions and deletions are required.

const m = new Map();
m.set("name", "Luca");
m.set(42, "number");
console.log(m.get("name")); // "Luca"

2. Object
Objects in JavaScript function similarly to dictionaries in Python. They store key-value pairs, but the keys must be strings or symbols. Objects are useful for representing structured data, such as user profiles or configurations.

const o = { name: "Luca", age: 25 };
console.log(o["name"]); // "Luca"

3. Set
A Set stores unique values, automatically removing duplicates. It’s useful for operations that rely on uniqueness, such as filtering out repeated entries.

const s = new Set([1, 2, 2, 3]);
console.log(s); // Set {1, 2, 3}

4. Tuple (Not Truly Available)
JavaScript does not have a built-in tuple type. Developers often use arrays as tuple-like structures, but these are mutable, so they do not provide the immutability that true tuples in Python guarantee.

const t = [1, "hello"];
// t[0] = 2 // modifiable, not a true tuple

Python Collections

Python offers a more extensive and conceptually simpler approach to collections, with built-in types for common data structures.

1. Dictionary (dict)
A dictionary stores key-value pairs with unique keys. Keys can be strings, numbers, or even tuples, and values can be any type.

d = {"name": "Luca", "age": 25}
print(d["name"])  # Luca

2. Set
A set is an unordered collection of unique items. Like JavaScript’s Set, it automatically removes duplicates.

s = {1, 2, 2, 3}
print(s)  # {1, 2, 3}

3. Tuple
A tuple is an immutable sequence of values, meaning that once created, its contents cannot be changed. Tuples are often used to store fixed collections of heterogeneous data.

t = (1, "hello", 3.5)
print(t[1])  # "hello"

4. map (Function)
Unlike JavaScript’s Map object, Python’s map is a built-in function that applies a transformation to each item in a sequence.

nums = [1, 2, 3]
result = map(lambda x: x * 2, nums)
print(list(result))  # [2, 4, 6]

5. zip
The zip function combines multiple sequences element by element, producing tuples of paired values.

a = [1, 2, 3]
b = ["a", "b", "c"]
zipped = zip(a, b)
print(list(zipped))  # [(1, 'a'), (2, 'b'), (3, 'c')]


While both JavaScript and Python provide robust ways to manage collections, their philosophies differ. JavaScript emphasizes flexibility and object-based structures, while Python focuses on clarity and simplicity with well-defined collection types. Understanding these differences helps developers write cleaner, more efficient code and transition smoothly between the two languages.

Understanding Magic Methods in Python and Their C# Counterparts

 

In the world of object-oriented programming (OOP), both Python and C# offer powerful ways to customize how objects behave. In Python, this flexibility often comes through the use of magic methods, while in C#, similar behavior is achieved through operator overloading and method overriding.

In this article, we’ll explore what magic methods are, how they work, and how their counterparts in C# compare.

What Are Magic Methods in Python?

Magic methods (also called dunder methods because of their “double underscores”) are special methods that define how objects behave with built-in Python operations, functions, and operators. They allow you to control object behavior in a natural, Pythonic way — for example, how your class responds to printing, addition, or comparison.

Here are a few commonly used magic methods:

  • __init__(self, ...): Called when a new object is created (similar to a constructor).

  • __str__(self): Defines how the object is represented as a string (used by print() and str()).

  • __repr__(self): Returns an unambiguous string representation, often used for debugging.

  • __add__(self, other): Defines behavior for the + operator.

  • __eq__(self, other): Defines equality behavior for ==.

Magic methods make your custom classes feel like native Python types.

Example: Magic Methods in Action (Python)

Let’s look at an example using a simple Point class that represents a point in 2D space:

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __str__(self):
        return f"Point({self.x}, {self.y})"

    def __add__(self, other):
        return Point(self.x + other.x, self.y + other.y)

# Create two Point objects
p1 = Point(1, 2)
p2 = Point(3, 4)

# Add the two points
p3 = p1 + p2

# Print the result
print(p3)

Output:

Point(4, 6)

Here, the __add__ method lets us use the + operator directly with Point objects, and __str__ defines how the object is displayed when printed.
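
The comparison and debugging dunders from the earlier list work the same way. A minimal sketch extending the same Point class with __eq__ and __repr__:

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        # Unambiguous representation, shown in the REPL and inside containers
        return f"Point(x={self.x}, y={self.y})"

    def __eq__(self, other):
        # Defines behavior for ==
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

print(Point(1, 2) == Point(1, 2))  # True
print([Point(1, 2)])               # [Point(x=1, y=2)]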

C# Counterparts to Python’s Magic Methods

C# doesn’t use “magic methods” in the same way, but it provides operator overloading and method overriding to achieve similar functionality.

Operator Overloading in C#

Operator overloading lets you define how operators behave when used with custom objects. Here’s how we can replicate the Python example in C#:

using System;

class Point
{
    public int X { get; set; }
    public int Y { get; set; }

    public Point(int x, int y)
    {
        X = x;
        Y = y;
    }

    // Override ToString() (similar to __str__ in Python)
    public override string ToString()
    {
        return $"Point({X}, {Y})";
    }

    // Overload the + operator (similar to __add__ in Python)
    public static Point operator +(Point p1, Point p2)
    {
        return new Point(p1.X + p2.X, p1.Y + p2.Y);
    }
}

class Program
{
    static void Main()
    {
        Point p1 = new Point(1, 2);
        Point p2 = new Point(3, 4);

        // Use overloaded + operator
        Point p3 = p1 + p2;

        // Print the result
        Console.WriteLine(p3);
    }
}

Output:

Point(4, 6)

Just like in Python, we’ve defined how the + operator behaves for our custom Point class. The ToString() method provides a readable string version of the object, similar to Python’s __str__.

Key Differences Between Python’s Magic Methods and C# Features

Feature | Python | C#
Customization mechanism | Magic (dunder) methods like __add__, __str__ | Operator overloading and method overriding
String representation | __str__() or __repr__() | ToString()
Operator overloading | Defined using methods like __add__, __eq__ | Explicitly declared using the operator keyword
Invocation | Automatically triggered by Python syntax | Must be explicitly implemented in the class
Philosophy | Implicit and flexible | Explicit and strongly typed

While both languages achieve similar outcomes, Python’s approach is more implicit and syntactically integrated, whereas C# is more explicit and structured.

Both Python and C# allow developers to customize how objects behave, but they take different approaches.

Python’s magic methods offer a powerful, elegant way to integrate custom behavior directly into built-in syntax.
C#’s operator overloading and method overriding provide a more formal and type-safe mechanism for achieving the same flexibility.

Understanding these concepts helps you write cleaner, more intuitive, and more maintainable object-oriented code — no matter which language you use.

Whether you’re switching between Python and C#, or just deepening your understanding of OOP principles, mastering these features will help you design more expressive and consistent classes.

Handling Asynchronous Logic in Dynamics 365 Using Promises and Async/Await

When building custom client-side scripts in Microsoft Dynamics 365 / Power Apps, you’ll often need to make asynchronous Web API calls — for example, to validate data before saving a record.

This guide explains how to handle asynchronous logic using both the classic Promise syntax (.then()) and the modern async/await approach, with clean, adaptable examples for your own entities.

Scenario: Validate Data Before Save

Suppose you need to check whether a record already exists before saving.
You can call a custom action or Web API endpoint to perform that validation and decide whether to continue with the save.

We’ll look at two approaches:

  1. The classic way using new Promise() and .then()

  2. The modern way using async/await

Classic Approach — Using new Promise() and .then()

CheckRecord: function (formContext, callback) {
    return new Promise((resolve, reject) => {
        // Example: read values from the form
        var fieldA = formContext.getAttribute("fieldA")?.getValue() || "";
        var fieldB = formContext.getAttribute("fieldB")?.getValue() || "";

        // Build the request for your custom action
        var request = {
            CustomAction_RecordData: {
                "@odata.type": "Microsoft.Dynamics.CRM.sampleentity",
                "fieldA": fieldA,
                "fieldB": fieldB
            },
            getMetadata: function () {
                return {
                    boundParameter: null,
                    parameterTypes: {
                        CustomAction_RecordData: { typeName: "mscrm.sampleentity", structuralProperty: 5 }
                    },
                    operationType: 0,
                    operationName: "new_CustomAction"
                };
            }
        };

        // Execute the Web API call
        Xrm.WebApi.execute(request).then(async response => {
            const contentType = response.headers.get("content-type");
            if (contentType && contentType.includes("application/json")) {
                const jsonResponse = await response.json();
                if (jsonResponse && callback) {
                    callback();
                    resolve(true); // duplicate found
                } else {
                    resolve(false); // JSON response but no duplicate payload
                }
            } else {
                resolve(false); // no duplicate
            }
        }).catch(error => {
            console.error("Error in CheckRecord:", error);
            reject(error);
        });
    });
},

Why This Approach Is Verbose

  • You must manually create a new Promise().

  • You need to chain .then() and .catch().

  • Error handling and readability can become messy with nested logic.

Modern Approach — Using async and await

CheckRecord: async function (formContext, callback) {
    try {
        // Read values from the form
        const fieldA = formContext.getAttribute("fieldA")?.getValue() || "";
        const fieldB = formContext.getAttribute("fieldB")?.getValue() || "";

        // Build the request
        const request = {
            CustomAction_RecordData: {
                "@odata.type": "Microsoft.Dynamics.CRM.sampleentity",
                "fieldA": fieldA,
                "fieldB": fieldB
            },
            getMetadata: function () {
                return {
                    boundParameter: null,
                    parameterTypes: {
                        CustomAction_RecordData: { typeName: "mscrm.sampleentity", structuralProperty: 5 }
                    },
                    operationType: 0,
                    operationName: "new_CustomAction"
                };
            }
        };

        // Await the Web API response
        const response = await Xrm.WebApi.execute(request);
        const contentType = response.headers.get("content-type");

        if (contentType && contentType.includes("application/json")) {
            const jsonResponse = await response.json();
            if (jsonResponse && callback) {
                callback();
                return true; // duplicate found
            }
        }

        return false; // no duplicate found
    } catch (error) {
        console.error("Error in CheckRecord:", error);
        return null; // error case
    }
},

Comparison: Classic vs Modern

Aspect | Classic (new Promise + .then()) | Modern (async/await)
Promise creation | Manual (new Promise()) | Automatic (async handles it)
Error handling | .catch() | try...catch
Code style | Nested and verbose | Linear and clean
Readability | Harder to follow | Easy, top-to-bottom flow

Using It in the Save Event

Example with .then()

MyNamespace.CheckRecord(formContext, () => {
    // handle duplicate
}).then(result => {
    if (result === false) {
        formContext.data.save(); // proceed with save
    }
});

Example with await (inside an async function)

const result = await MyNamespace.CheckRecord(formContext, callback);
if (result === false) {
    formContext.data.save();
}

When to Still Use new Promise()

You only need to use new Promise() when working with APIs that do not return a Promise — for example:

function delay(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

In most Dynamics 365 Web API scenarios, Xrm.WebApi.execute() already returns a Promise, so using async/await is cleaner and safer.

Best Practices

  • Every Web API call in Dynamics 365 is asynchronous

  • Use async/await for simpler, cleaner syntax

  • Use new Promise() only with legacy or callback-based APIs

  • Keeping code linear and readable improves debugging, maintenance, and the user experience


Handling Form Save Events with Pre-Save Validation in Dynamics 365 Using JavaScript


When customizing forms in Microsoft Dynamics 365, it’s common to need validation logic before saving a record — for example, checking for duplicates or ensuring data consistency.

This article shows how to manage the OnSave event and perform a custom validation call before allowing the record to be saved. The validation can invoke a custom action or Web API request to check for duplicates or other business rules.

Overview

We’ll use two key components:

OnSave handler — intercepts the save event and prevents it temporarily.

CheckEntity function — executes a validation call (e.g., to a custom action or an API endpoint).

If the validation finds a duplicate or issue, the user will be prompted with a confirmation dialog before proceeding.

The OnSave Function

Here’s a sample implementation of the OnSave event handler:

// Assumes a module-scoped flag declared elsewhere, e.g.: var SAVE = false;
OnSave: function (ExecutionContext) {
    formContext = ExecutionContext.getFormContext();

    if (!SAVE) {
        ExecutionContext.getEventArgs().preventDefault();
        CustomNamespace.Entity.CheckEntity(formContext, function () {
            var confirmStrings = {
                subtitle: "Duplicate record found",
                text: "A similar record already exists. Do you want to continue saving?",
                title: "Duplicate Detected",
                confirmButtonLabel: "CONFIRM",
                cancelButtonLabel: "CANCEL"
            };
            var confirmOptions = { height: 350, width: 500 };

            Xrm.Navigation.openConfirmDialog(confirmStrings, confirmOptions).then(
                function (success) {
                    if (success.confirmed) {
                        SAVE = true;
                        formContext.data.save();
                    } else {
                        return; // cancel save
                    }
                }
            );
        }).then((existsDuplicate) => {
            if (existsDuplicate === false) {
                SAVE = true;
                formContext.data.save();
            }
        });
    }

    SAVE = false;
},

Key Notes:

  • The code uses preventDefault() to stop the default save action until validation completes.

  • It calls a validation function (CheckEntity) to verify data before saving.

  • If a potential duplicate is found, the user is shown a confirmation dialog.

  • If no duplicate exists, the record is saved automatically.

This ensures clean control over the save process and prevents unwanted record duplication.

The Validation Function

The CheckEntity function performs a validation call — for instance, invoking a custom action in Dynamics 365 that returns whether a duplicate exists.

CheckEntity: function (formContext, callback) {
    return new Promise((resolve, reject) => {
        var field1 = formContext.getAttribute("field1")?.getValue() || "";
        var field = formContext.getAttribute("field")?.getValue() || "";
        var field3 = formContext.getAttribute("field3")?.getValue() || "";
        var combinedField = (field1 + " " + field).trim();

        var lookupField = formContext.getAttribute("lookup_field")?.getValue();
        var entityId = formContext.data.entity.getId()?.replace(/[{}]/g, "") || "00000000-0000-0000-0000-000000000000";

        var entityObject = {
            "@odata.type": "Microsoft.Dynamics.CRM.entityname",
            "entityid": entityId,
            "combinedfield": combinedField,
            "field3": field3
        };

        if (lookupField && lookupField.length > 0) {
            entityObject["lookup_field@odata.bind"] = `/relatedentity(${lookupField[0].id.replace(/[{}]/g, "")})`;
        }

        var request = {
            ValidateEntityPreOperation_EntityObject: entityObject,
            getMetadata: function () {
                return {
                    boundParameter: null,
                    parameterTypes: {
                        ValidateEntityPreOperation_EntityObject: { typeName: "mscrm.entityname", structuralProperty: 5 }
                    },
                    operationType: 0,
                    operationName: "custom_ValidateEntityPreOperation"
                };
            }
        };

        Xrm.WebApi.execute(request).then(async response => {
            const contentType = response.headers.get("content-type");
            if (contentType && contentType.indexOf("application/json") !== -1) {
                var jsonResponse = await response.json();

                if (jsonResponse && callback) {
                    callback();
                    resolve(true); // duplicate found
                } else {
                    resolve(false); // JSON response but no duplicate payload
                }
            } else {
                resolve(false); // no duplicate
            }
        }).catch(error => {
            console.error("Error in CheckEntity:", error);
            reject(error); // propagate the actual error
        });
    });
},

How to Use SpiderFoot for Effortless Cybersecurity Recon

SpiderFoot is an open-source OSINT automation tool designed to gather public information about domains, IPs, email addresses, and organizations. It automates dozens of data sources and modules so you can quickly build a comprehensive footprint of a target without manual scraping and juggling multiple tools. SpiderFoot is useful for threat intelligence, attack surface discovery, red team recon, and security assessments. (hackerhaven.io)

What SpiderFoot can do (at a glance)

  • Enumerate DNS records, subdomains, and WHOIS details.

  • Pull leaked credentials and breach data where available.

  • Search social media signals and correlate identities.

  • Discover infrastructure exposed on the internet (IP ranges, open services).

  • Export findings in JSON, CSV or visual formats for further analysis.

These capabilities make SpiderFoot an efficient first step for mapping an organization’s public attack surface.

Quick setup (local/web UI)

  1. Install or pull the repo — SpiderFoot can be run locally (CLI) or via its web UI. If you prefer an all-in-one web interface, run the server locally and open the dashboard (commonly http://127.0.0.1:5001). (InfoSec Train)

  2. Create a new scan — From the web UI click New Scan, enter the target (domain, IP, or organization name) and give it a descriptive label. (InfoSec Train)

  3. Choose a scan profile — Profiles let you balance speed vs coverage:

    • All: every module (slowest, most exhaustive).

    • Footprint: public footprinting modules only.

    • Investigate: adds malicious indicator checks.

    • Passive: avoids active probes (safer/legal for some scenarios). (InfoSec Train)

  4. Select modules and API keys — Configure modules you want (WHOIS, DNS, Shodan, HaveIBeenPwned, social lookups). Add API keys for services that require them to improve results.

  5. Run the scan and monitor — Start the scan and monitor progress in the dashboard; results stream in and are categorized by type.

Interpreting results

SpiderFoot groups findings by categories (domains, IPs, breaches, social handles, etc.). Important tips:

  • Prioritize high-confidence findings first (verified WHOIS, confirmed domain-to-IP mappings).

  • Correlate data — use timestamps, overlapping infrastructure, and repeated identifiers to join otherwise separate results.

  • Export for analysis — JSON or CSV exports let you feed results into other tools (SIEMs, graphing tools, Maltego) for deeper investigation.

Typical use cases

  • Attack surface discovery: Quickly discover subdomains, exposed services and third-party assets.

  • Phishing defense: Identify spoofable domains and leaked credentials that support targeted phishing simulations.

  • Threat intelligence: Map infrastructure and linked identities used by suspicious actors.

  • Pre-engagement recon: Save time during red team or pen test engagements by automating initial discovery.

Best practices & safety

  • Use passive mode for legal safety when you don’t have authorization; active probing can trigger logging or be considered unauthorized access.

  • Respect robots.txt and API terms for external services and rate limits.

  • Limit sensitive exports — treat scan results containing personal data or breached credentials as sensitive: store securely and follow privacy rules and company policy.

  • Enrich, don’t replace — SpiderFoot is powerful, but combine its findings with human analysis and other OSINT tools (Maltego, Shodan, Recon-ng) for the full picture. (hackerhaven.io)

Example quick workflow (practical)

  1. Start SpiderFoot UI → New Scan → target example.com.

  2. Choose Footprint profile + enable WHOIS, DNS, subdomain discovery, certificate transparency modules.

  3. Run scan; export JSON.

  4. Load JSON into a graph tool or spreadsheet to group subdomains, IP ownership, and open ports.

  5. Manually validate top-risk findings and document remediation recommendations.

For hands-on walkthroughs and UI screenshots, community guides and tutorials demonstrate exact clicks and module names. (InfoSec Train)



Building a Custom API in Dynamics 365 with Entity Input and Output Parameters

The Scenario

We’ll create a Custom API ValidateExampleRecord that:

  1. Accepts an Entity as input (ValidateExampleRecord_InputEntity),

  2. Performs server-side validation and duplicate checking,

  3. Returns an Entity as output (ValidateExampleRecord_OutputEntity),

  4. Can be called from:

    • JavaScript (e.g., form OnSave),

    • Another plugin (PreOperation stage).




Create the Custom API in Dataverse

You can create it either via Power Apps Maker Portal or Power Platform CLI (PAC).

Using Power Apps Maker Portal

  1. Go to Solutions → Open or create a new solution.

  2. Click New → Automation → Custom API.

  3. Fill in the main fields as follows:



    • Unique Name: ValidateExampleRecord
    • Display Name: Validate Example Record
    • Binding Type: None (Global)
    • Can be called from: Both (client and server)
    • Enabled for Workflow: No
    • Execute Privilege Name: (leave empty)
    • Description: Validates entity data and checks for duplicates.

  4. Save and publish.




Define Custom API Parameters

Next, add input and output parameters.

Input Parameter



  • Name: ValidateExampleRecord_InputEntity
  • Display Name: Input Entity
  • Type: Entity
  • Entity Logical Name: example_entity
  • Is Optional: No
  • Direction: Input

Output Parameter



  • Name: ValidateExampleRecord_OutputEntity
  • Display Name: Output Entity
  • Type: Entity
  • Entity Logical Name: example_entity
  • Direction: Output

Then save and publish again.



Custom API Plugin (Server-Side Validation)


Here’s a sample C# plugin class that implements the logic behind our Custom API. It accepts an entity as input, validates its content, and outputs a matching entity if found.

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using System;
using System.Collections.Generic;
using System.Linq;

namespace Example.CustomAPIs
{
    public class ValidateExampleRecord : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            // Plugin classes should stay stateless, so services are kept in locals
            var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            var serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            var service = serviceFactory.CreateOrganizationService(null);

            tracing.Trace("START ValidateExampleRecord");

            try
            {
                if (!context.InputParameters.Contains("ValidateExampleRecord_InputEntity") ||
                    context.InputParameters["ValidateExampleRecord_InputEntity"] == null)
                    throw new InvalidPluginExecutionException("The input parameter 'InputEntity' is missing or null.");

                var inputEntity = (Entity)context.InputParameters["ValidateExampleRecord_InputEntity"];
                tracing.Trace($"Entity: {inputEntity.LogicalName}, ID: {inputEntity.Id}");

                // Extract attributes
                var recordName = inputEntity.GetAttributeValue<string>("example_name");
                var recordCode = inputEntity.GetAttributeValue<string>("example_identifier");
                var parentRefId = inputEntity.GetAttributeValue<EntityReference>("example_parentreference")?.Id;

                // Validate required attributes
                var missing = new List<string>();
                if (string.IsNullOrWhiteSpace(recordName)) missing.Add("example_name");
                if (string.IsNullOrWhiteSpace(recordCode)) missing.Add("example_identifier");
                if (parentRefId == null || parentRefId == Guid.Empty) missing.Add("example_parentreference");

                if (missing.Count > 0)
                    throw new InvalidPluginExecutionException($"Missing required attributes: {string.Join(", ", missing)}");

                // Run duplicate search
                var existingRecord = FindExisting(service, tracing, inputEntity.Id, recordName, recordCode, parentRefId.Value);

                context.OutputParameters["ValidateExampleRecord_OutputEntity"] = existingRecord;

                tracing.Trace("END ValidateExampleRecord");
            }
            catch (Exception ex)
            {
                tracing.Trace($"Error: {ex}");
                throw new InvalidPluginExecutionException($"Custom API error: {ex.Message}", ex);
            }
        }

        private Entity FindExisting(IOrganizationService service, ITracingService tracing,
            Guid? entityId, string name, string code, Guid parentId)
        {
            tracing.Trace("Executing duplicate search...");
            Entity result = null;

            var query = new QueryExpression("example_entity")
            {
                ColumnSet = new ColumnSet("example_name", "example_identifier", "example_parentreference")
            };

            query.Criteria.AddCondition("example_name", ConditionOperator.Equal, name);
            query.Criteria.AddCondition("example_parentreference", ConditionOperator.Equal, parentId);

            // Exclude the record being validated so it does not match itself
            if (entityId != null && entityId != Guid.Empty)
                query.Criteria.AddCondition("example_entityid", ConditionOperator.NotEqual, entityId.Value);

            var matches = service.RetrieveMultiple(query).Entities;
            tracing.Trace($"Found {matches.Count} possible duplicates.");

            var existing = matches.FirstOrDefault();
            if (existing != null)
            {
                var existingCode = existing.GetAttributeValue<string>("example_identifier");
                if (!string.IsNullOrWhiteSpace(existingCode) &&
                    string.Equals(existingCode.Trim(), code.Trim(), StringComparison.OrdinalIgnoreCase))
                {
                    result = existing;
                    tracing.Trace("Duplicate record found.");
                }
            }

            return result;
        }
    }
}

Calling the Custom API from JavaScript

You can trigger this Custom API from the form's OnSave event to stop users from creating duplicates. The example below uses Xrm.WebApi.execute() to send the entity data to the Custom API. Because the Web API call is asynchronous, the handler blocks the save up front with preventDefault() and only re-triggers it once validation passes or the user confirms. Register APP.Record.OnSave as the form's OnSave handler with "Pass execution context as first parameter" enabled.

var formContext;
var ALLOW_SAVE = false;

if (typeof (APP) == "undefined") {
    APP = {};
}

APP.Record = {

    OnSave: function (ExecutionContext) {
        formContext = ExecutionContext.getFormContext();

        if (ALLOW_SAVE) {
            // Second pass: validation already done, let the save go through.
            ALLOW_SAVE = false;
            return;
        }

        // Block the save until the asynchronous check completes.
        ExecutionContext.getEventArgs().preventDefault();

        APP.Record.CheckRecord(formContext, function (duplicateFound) {
            if (!duplicateFound) {
                ALLOW_SAVE = true;
                formContext.data.save();
                return;
            }

            var dialogOptions = {
                title: "Duplicate Detected",
                text: "A record with similar values already exists. Do you want to continue?",
                confirmButtonLabel: "Confirm",
                cancelButtonLabel: "Cancel"
            };

            Xrm.Navigation.openConfirmDialog(dialogOptions).then(function (response) {
                if (response.confirmed) {
                    ALLOW_SAVE = true;
                    formContext.data.save();
                }
            });
        });
    },

    CheckRecord: function (formContext, callback) {
        var name = formContext.getAttribute("example_name")?.getValue() || "";
        var code = formContext.getAttribute("example_identifier")?.getValue() || "";
        var parent = formContext.getAttribute("example_parentreference")?.getValue();
        var recordId = formContext.data.entity.getId()?.replace(/[{}]/g, "") ||
            "00000000-0000-0000-0000-000000000000";

        var inputEntity = {
            "@odata.type": "Microsoft.Dynamics.CRM.example_entity",
            "example_entityid": recordId,
            "example_name": name,
            "example_identifier": code
        };

        if (parent && parent.length > 0) {
            inputEntity["example_parentreference@odata.bind"] =
                `/example_parents(${parent[0].id.replace(/[{}]/g, "")})`;
        }

        var request = {
            ValidateExampleRecord_InputEntity: inputEntity,
            getMetadata: function () {
                return {
                    boundParameter: null,
                    parameterTypes: {
                        ValidateExampleRecord_InputEntity: {
                            typeName: "mscrm.example_entity",
                            structuralProperty: 5 // 5 = EntityType
                        }
                    },
                    operationType: 0, // 0 = Action
                    operationName: "ValidateExampleRecord"
                };
            }
        };

        Xrm.WebApi.execute(request)
            .then(async response => {
                if (!response.ok) throw new Error("HTTP Error: " + response.status);
                var type = response.headers.get("content-type");
                var body = type && type.indexOf("application/json") !== -1
                    ? await response.json()
                    : null;
                // The output entity is present only when a duplicate was found.
                callback(!!(body && body.ValidateExampleRecord_OutputEntity));
            })
            .catch(error => console.error("Error in CheckRecord:", error));
    }
};

Calling the Custom API from Another Plugin (with Target + PreImage)

In this version, the plugin executes during the Pre-Operation stage
and uses both the incoming Target entity (the data being updated or
created) and a PreImage (the record as it was before the update).
This ensures consistent behavior whether the operation is a Create or Update.

using Microsoft.Xrm.Sdk;
using System;

namespace Example.Plugins
{
    public class ExampleRecordPreOperation : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            var serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            var service = serviceFactory.CreateOrganizationService(context.InitiatingUserId);

            tracing.Trace("START - ExampleRecordPreOperation");

            // Ensure plugin registration context
            if (context.Stage != 20 || (context.MessageName != "Create" && context.MessageName != "Update"))
            {
                tracing.Trace($"Unexpected context: Stage={context.Stage}, Message={context.MessageName}");
                return;
            }

            // Get target entity
            if (!context.InputParameters.Contains("Target") ||
                !(context.InputParameters["Target"] is Entity target))
                throw new InvalidPluginExecutionException("Target entity is missing or invalid.");

            // Get PreImage for Update
            Entity preImage = null;
            if (context.MessageName == "Update" && context.PreEntityImages.Contains("PreImage"))
                preImage = context.PreEntityImages["PreImage"];

            // Merge data between target and preImage
            string recordName = target.GetAttributeValue<string>("example_name")
                                ?? preImage?.GetAttributeValue<string>("example_name");

            string recordCode = target.GetAttributeValue<string>("example_identifier")
                                ?? preImage?.GetAttributeValue<string>("example_identifier");

            EntityReference parentRef = target.GetAttributeValue<EntityReference>("example_parentreference")
                                        ?? preImage?.GetAttributeValue<EntityReference>("example_parentreference");

            // Prepare the entity for Custom API validation
            var entityToValidate = new Entity("example_entity");

            if (context.MessageName == "Update" && target.Id != Guid.Empty)
                entityToValidate.Id = target.Id;

            if (!string.IsNullOrWhiteSpace(recordName))
                entityToValidate["example_name"] = recordName;

            if (!string.IsNullOrWhiteSpace(recordCode))
                entityToValidate["example_identifier"] = recordCode;

            if (parentRef != null)
                entityToValidate["example_parentreference"] = parentRef;

            tracing.Trace("Calling Custom API: ValidateExampleRecord");

            // Execute Custom API
            var request = new OrganizationRequest("ValidateExampleRecord")
            {
                ["ValidateExampleRecord_InputEntity"] = entityToValidate
            };

            var response = service.Execute(request);
            var duplicate = response.Results["ValidateExampleRecord_OutputEntity"] as Entity;

            // Handle duplicate scenario
            if (duplicate != null && duplicate.Id != Guid.Empty)
                throw new InvalidPluginExecutionException(
                    "A record with the same name and identifier already exists for this parent record.");

            tracing.Trace("END - ExampleRecordPreOperation");
        }
    }
}


Optional: Registering the Plugin

In your plugin registration tool (e.g., Plugin Registration Tool or Power Platform CLI), configure the step as follows:

    • Stage: Pre-Operation (20)

    • Messages: Create and Update

    • Primary Entity: example_entity

    • PreImage Name: PreImage

    • PreImage Attributes: example_name, example_identifier, example_parentreference
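
Note that a PreImage is only delivered on Update; on Create there is no prior record, which is why the plugin above falls back to the Target alone. If you would rather script this than click through the tool, steps and images are themselves Dataverse records (sdkmessageprocessingstep and sdkmessageprocessingstepimage). The sketch below registers only the Update step, assuming service is an authenticated IOrganizationService and pluginTypeId is the ID of the uploaded ExampleRecordPreOperation plugin type; a Create step would be identical minus the image. The attribute names follow the documented schema, but verify them in your environment.

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class StepRegistration
{
    // Sketch: registers the Pre-Operation Update step and its PreImage.
    public static void RegisterUpdateStep(IOrganizationService service, Guid pluginTypeId)
    {
        // Look up the "Update" SDK message...
        var msgQuery = new QueryExpression("sdkmessage") { ColumnSet = new ColumnSet("sdkmessageid") };
        msgQuery.Criteria.AddCondition("name", ConditionOperator.Equal, "Update");
        Guid messageId = service.RetrieveMultiple(msgQuery).Entities[0].Id;

        // ...and its filter for example_entity.
        var filterQuery = new QueryExpression("sdkmessagefilter") { ColumnSet = new ColumnSet("sdkmessagefilterid") };
        filterQuery.Criteria.AddCondition("sdkmessageid", ConditionOperator.Equal, messageId);
        filterQuery.Criteria.AddCondition("primaryobjecttypecode", ConditionOperator.Equal, "example_entity");
        Guid filterId = service.RetrieveMultiple(filterQuery).Entities[0].Id;

        // Register the synchronous Pre-Operation step.
        Guid stepId = service.Create(new Entity("sdkmessageprocessingstep")
        {
            ["name"] = "ExampleRecordPreOperation: Update of example_entity",
            ["plugintypeid"] = new EntityReference("plugintype", pluginTypeId),
            ["sdkmessageid"] = new EntityReference("sdkmessage", messageId),
            ["sdkmessagefilterid"] = new EntityReference("sdkmessagefilter", filterId),
            ["stage"] = new OptionSetValue(20), // Pre-Operation
            ["mode"] = new OptionSetValue(0),   // Synchronous
            ["rank"] = 1
        });

        // Register the PreImage with the attributes the plugin reads.
        service.Create(new Entity("sdkmessageprocessingstepimage")
        {
            ["name"] = "PreImage",
            ["entityalias"] = "PreImage",
            ["imagetype"] = new OptionSetValue(0), // 0 = PreImage
            ["messagepropertyname"] = "Target",
            ["attributes"] = "example_name,example_identifier,example_parentreference",
            ["sdkmessageprocessingstepid"] = new EntityReference("sdkmessageprocessingstep", stepId)
        });
    }
}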