dicom-server-rs v0.1.0 published

Announcing dicom-server-rs v0.1.0: A High-Performance DICOM Store SCP in Rust

We are excited to announce the initial release of dicom-server-rs, a lightweight, high-performance DICOM server (Store SCP) built entirely in Rust.

In the world of medical imaging, reliability and speed are non-negotiable. By leveraging the Rust ecosystem, dicom-server-rs provides a modern alternative for receiving and managing DICOM files with a focus on safety and efficiency.

🚀 Key Features

  • Native Rust Implementation: Built using the dicom-rs ecosystem for robust parsing and protocol handling.

    Read more →

How to Use a Multipart/Related Stream to Upload DICOM Files | Cloud DICOM-WEB Service

How to Use a Multipart/Related Stream to Upload DICOM Files

Implementing Multi-File Upload with curl Scripts or .NET Core Applications

While developing STOW-RS, we needed to exercise the RESTful API, so we wrote a curl script that uploads multiple DICOM files in a single request.

Note that STOW-RS requires the upload body to be multipart/related, e.g. Content-Type: multipart/related; boundary=DICOM_BOUNDARY; type=application/dicom.

CURL Script Implementation

#!/bin/bash

# Check parameters
if [ $# -eq 0 ]; then
    echo "Usage: $0 <dicom_directory>"
    echo "Example: $0 ~/amprData"
    exit 1
fi

DICOM_DIR="$1"

# Check if directory exists
if [ ! -d "$DICOM_DIR" ]; then
    echo "Error: Directory '$DICOM_DIR' does not exist"
    exit 1
fi

BOUNDARY="DICOM_BOUNDARY"
TEMP_FILE="multipart_request_largdata.tmp"

# Collect DICOM files (null-delimited so paths with spaces are handled)
DICOM_FILES=()
while IFS= read -r -d '' f; do
    DICOM_FILES+=("$f")
done < <(find "$DICOM_DIR" -type f -name "*.dcm" -print0)
if [ ${#DICOM_FILES[@]} -eq 0 ]; then
    echo "Warning: No DICOM files found in '$DICOM_DIR'"
    exit 0
fi

echo "Found ${#DICOM_FILES[@]} DICOM files"

# 1. Initialize file (without JSON part)
> "$TEMP_FILE"

# 2. Loop through all DICOM files, writing a boundary before each part
for i in "${!DICOM_FILES[@]}"; do
    dicom_file="${DICOM_FILES[$i]}"

    # Add prefix separator for all files except the first
    if [ $i -gt 0 ]; then
        printf -- "\r\n--%s\r\n" "$BOUNDARY" >> "$TEMP_FILE"
    else
        # First file needs starting separator
        printf -- "--%s\r\n" "$BOUNDARY" >> "$TEMP_FILE"
    fi

    printf -- "Content-Type: application/dicom\r\n\r\n" >> "$TEMP_FILE"

    # Append DICOM file content
    cat "$dicom_file" >> "$TEMP_FILE"

    echo "Added file: $(basename "$dicom_file")"
done

# 3. Write ending separator for request body
printf -- "\r\n--%s--\r\n" "$BOUNDARY" >> "$TEMP_FILE"

# 4. Calculate file size
CONTENT_LENGTH=$(wc -c < "$TEMP_FILE" | tr -d ' ')

echo "Total content length: $CONTENT_LENGTH bytes"

# 5. Send request
curl -X POST http://localhost:9000/stow-rs/v1/studies \
     -H "Content-Type: multipart/related; boundary=$BOUNDARY; type=application/dicom" \
     -H "Accept: application/json" \
     -H "x-tenant: 1234567890" \
     -H "Content-Length: $CONTENT_LENGTH" \
     --data-binary @"$TEMP_FILE"

# 6. Clean up temporary file
rm "$TEMP_FILE"

echo "Upload completed"

.NET Core Implementation

using System.Text;

namespace MakeMultirelate
{
    public class ConstructPostRequest : IDisposable
    {
        private const string Boundary = "DICOM_BOUNDARY";
        private readonly HttpClient _httpClient = new();

        public async Task SendDicomFilesAsync(List<string> dicomFilePaths, string url, string tenantId)
        {
            // Estimate memory stream size for performance optimization
            long estimatedSize = 0;
            foreach (var filePath in dicomFilePaths)
            {
                if (!File.Exists(filePath))
                    throw new FileNotFoundException($"DICOM file not found: {filePath}");

                // Get file size and accumulate
                var fileInfo = new FileInfo(filePath);
                estimatedSize += fileInfo.Length;
            }

            // Add estimated size for boundaries and headers (approximately 200 bytes per file for separators and headers)
            estimatedSize += dicomFilePaths.Count * 200;
            // Add end boundary size
            estimatedSize += Boundary.Length + 10;

            // Create memory stream to build multipart content with estimated size initialization
            using var memoryStream = new MemoryStream((int)Math.Min(estimatedSize, int.MaxValue));

            // Build multipart content
            foreach (var filePath in dicomFilePaths)
            {
                if (!File.Exists(filePath))
                    throw new FileNotFoundException($"DICOM file not found: {filePath}");

                // Add separator and header
                var separator = Encoding.UTF8.GetBytes($"\r\n--{Boundary}\r\n");
                var header = Encoding.UTF8.GetBytes("Content-Type: application/dicom\r\n\r\n");

                if (memoryStream.Length == 0)
                {
                    // First part doesn't need leading separator
                    separator = Encoding.UTF8.GetBytes($"--{Boundary}\r\n");
                }

                await memoryStream.WriteAsync(separator, 0, separator.Length);
                await memoryStream.WriteAsync(header, 0, header.Length);

                // Read and add DICOM file content
                var fileBytes = await File.ReadAllBytesAsync(filePath);
                await memoryStream.WriteAsync(fileBytes, 0, fileBytes.Length);
            }

            // Add ending separator
            var endBoundary = Encoding.UTF8.GetBytes($"\r\n--{Boundary}--\r\n");
            await memoryStream.WriteAsync(endBoundary, 0, endBoundary.Length);

            // Prepare request content
            var content = new ByteArrayContent(memoryStream.ToArray());
            content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("multipart/related");
            content.Headers.ContentType.Parameters.Add(
                new System.Net.Http.Headers.NameValueHeaderValue("boundary", Boundary));
            content.Headers.ContentType.Parameters.Add(
                new System.Net.Http.Headers.NameValueHeaderValue("type", "application/dicom"));

            // Set request headers
            _httpClient.DefaultRequestHeaders.Clear();
            _httpClient.DefaultRequestHeaders.Add("Accept", "application/json");
            _httpClient.DefaultRequestHeaders.Add("x-tenant", tenantId);

            // Send request
            var response = await _httpClient.PostAsync(url, content);

            // Process response
            var responseContent = await response.Content.ReadAsStringAsync();
            Console.WriteLine($"Status Code: {response.StatusCode}");
            Console.WriteLine($"Response: {responseContent}");
        }

        // Overloaded function: Recursively find DICOM files based on directory
        public async Task SendDicomFilesAsync(string dicomDirectory, string url, string tenantId)
        {
            if (!Directory.Exists(dicomDirectory))
                throw new DirectoryNotFoundException($"DICOM directory not found: {dicomDirectory}");

            // Recursively find all files with .dcm extension
            var dicomFiles = Directory.GetFiles(dicomDirectory, "*.dcm", SearchOption.AllDirectories);

            // If no files found, throw exception
            if (dicomFiles.Length == 0)
                throw new FileNotFoundException($"No DICOM files found in directory: {dicomDirectory}");

            // Call original method to send files
            await SendDicomFilesAsync(dicomFiles.ToList(), url, tenantId);
        }

        public void Dispose()
        {
            _httpClient.Dispose();
        }
    }
}

Key Implementation Points

  1. Multipart/Related Format
     • Content-Type must be multipart/related; boundary=DICOM_BOUNDARY; type=application/dicom
     • Each DICOM file requires its own boundary delimiter
     • The first file needs a starting boundary; subsequent files need separator boundaries (see the Rust sketch below)
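
Since the rest of this series is Rust-centric, here is the same body construction sketched in Rust, assuming the reqwest crate (blocking feature); the file names are placeholders and the endpoint matches the script above:

use std::fs;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    const BOUNDARY: &str = "DICOM_BOUNDARY";
    let files = ["a.dcm", "b.dcm"]; // hypothetical input files

    let mut body: Vec<u8> = Vec::new();
    for (i, path) in files.iter().enumerate() {
        if i > 0 {
            // Separator boundaries get a leading CRLF
            body.extend_from_slice(b"\r\n");
        }
        body.extend_from_slice(format!("--{}\r\n", BOUNDARY).as_bytes());
        body.extend_from_slice(b"Content-Type: application/dicom\r\n\r\n");
        body.extend_from_slice(&fs::read(path)?);
    }
    // Closing boundary
    body.extend_from_slice(format!("\r\n--{}--\r\n", BOUNDARY).as_bytes());

    let resp = reqwest::blocking::Client::new()
        .post("http://localhost:9000/stow-rs/v1/studies")
        .header(
            "Content-Type",
            format!("multipart/related; boundary={}; type=application/dicom", BOUNDARY),
        )
        .header("Accept", "application/json")
        .header("x-tenant", "1234567890")
        .body(body)
        .send()?;
    println!("Status: {}", resp.status());
    Ok(())
}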

    Read more →

6.1 WADO-Consumer Service Implementation: Building Scalable Cloud DICOM-WEB Services

WADO-Consumer Module Documentation

Overview

storage_consumer.rs is the core consumption module in the WADO system, responsible for consuming DICOM storage messages from the Kafka message queue, performing batch processing, persistent storage, and republishing to other topics.

Main Functions

Message Consumption and Processing Flow

  • Subscribe to DicomStoreMeta messages from Kafka topics
  • Batch collect messages and process according to time and quantity thresholds
  • Classify processed messages into state metadata and image metadata
  • Persist to databases
  • Republish to corresponding Kafka topics
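
A minimal sketch of this collect-then-flush loop, assuming the rdkafka and tokio crates; the topic name and thresholds are illustrative, and the persistence and republishing steps are placeholders:

use rdkafka::config::ClientConfig;
use rdkafka::consumer::{Consumer, StreamConsumer};
use rdkafka::Message;
use std::time::Duration;

async fn consume_loop() -> Result<(), Box<dyn std::error::Error>> {
    let consumer: StreamConsumer = ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092")
        .set("group.id", "wado-consumer")
        .set("enable.auto.commit", "false")
        .create()?;
    consumer.subscribe(&["dicom-store-meta"])?;

    let (max_batch, max_wait) = (100, Duration::from_secs(5));
    let mut batch: Vec<Vec<u8>> = Vec::new();

    loop {
        // Wait up to `max_wait` for the next message (the time threshold)
        let timed_out = match tokio::time::timeout(max_wait, consumer.recv()).await {
            Ok(Ok(msg)) => {
                if let Some(payload) = msg.payload() {
                    batch.push(payload.to_vec());
                }
                false
            }
            Ok(Err(e)) => {
                eprintln!("kafka error: {e}");
                false
            }
            Err(_) => true,
        };

        // Flush on either threshold: batch is full, or time ran out
        if batch.len() >= max_batch || (timed_out && !batch.is_empty()) {
            // classify into state/image metadata, persist to the databases,
            // republish to downstream topics, then commit offsets (omitted)
            batch.clear();
        }
    }
}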

Core Components

start_process Function

System entry point, responsible for initializing the entire consumption process:

Read more →

5.1 WADO-StoreSCP Service Implementation: Building DICOM C-STORE Services with Rust

Overview

WADO-StoreSCP is a DICOM Storage SCP (C-STORE) service implementation written in Rust that receives DICOM files sent by other DICOM devices. It is built on the dicom-rs ecosystem.

Feature Set

  • Supports DICOM C-STORE protocol
  • Provides both synchronous and asynchronous operation modes
  • Supports multiple transfer syntaxes
  • Automatically transcodes unsupported transfer syntaxes
  • Saves received DICOM files to local storage
  • Sends metadata information via Kafka
  • Supports multi-tenant (hospital/institution) environments
  • Includes certificate validation mechanisms
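
As a rough illustration of the synchronous mode, here is a minimal connection-handling skeleton; the DICOM association logic itself (e.g. via the dicom-ul crate) is left as placeholder comments:

use std::net::TcpListener;
use std::thread;

fn main() -> std::io::Result<()> {
    // 11112 is a commonly used unprivileged DICOM port
    let listener = TcpListener::bind("0.0.0.0:11112")?;
    for stream in listener.incoming() {
        let stream = stream?;
        // One thread per association in synchronous mode
        thread::spawn(move || {
            // 1. negotiate the association and presentation contexts
            // 2. receive C-STORE requests and write datasets to local storage
            //    (transcoding unsupported transfer syntaxes as needed)
            // 3. publish metadata to Kafka and send C-STORE-RSP
            let _ = stream;
        });
    }
    Ok(())
}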

System Architecture

Core Components

  1. Main Program main.rs

    Read more →

æķˆæŊ队列|Kafka Message Processing for Medical Imaging - Scalable Healthcare Data Streaming

Kafka Message Processing System for Medical Imaging

The message_sender_kafka.rs module provides robust Kafka integration for medical imaging systems, enabling scalable, reliable message processing for DICOM metadata and imaging workflows.

Message Processing Architecture

High-Throughput Design

Implements batch processing and compression to handle large volumes of medical imaging metadata with minimal resource consumption.

Reliable Delivery

Features built-in retry mechanisms and error handling to ensure message delivery even in challenging network conditions.
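
A hypothetical producer configuration sketch showing how batching, compression, and retries map onto standard librdkafka settings (the values are illustrative):

use rdkafka::config::ClientConfig;
use rdkafka::producer::{FutureProducer, FutureRecord};
use std::time::Duration;

async fn send_meta(payload: &[u8]) -> Result<(), Box<dyn std::error::Error>> {
    let producer: FutureProducer = ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092")
        .set("compression.type", "lz4")    // compress batches on the wire
        .set("linger.ms", "50")            // wait briefly to form larger batches
        .set("batch.num.messages", "1000") // upper bound on messages per batch
        .set("retries", "5")               // built-in redelivery on transient errors
        .create()?;

    producer
        .send(
            FutureRecord::to("dicom-store-meta").key("meta").payload(payload),
            Duration::from_secs(5),
        )
        .await
        .map_err(|(e, _)| e)?;
    Ok(())
}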

Topic Segregation

Uses separate Kafka topics for different message types:

Read more →

Redis Cache | Redis Key Management for DICOM Server - High Performance Caching

Redis Key Management in DICOM Server

The redis_key.rs module is a critical component of the DICOM server infrastructure, providing high-performance caching capabilities for medical imaging metadata and authentication data. This module implements intelligent key generation and caching strategies to significantly reduce database load and improve response times.

Core Features

Authentication Caching

The module efficiently caches JWKS (JSON Web Key Set) URLs, which are essential for OAuth2/OpenID Connect authentication in medical imaging systems. This reduces authentication overhead and improves security token validation performance.
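
A minimal sketch of that cache-aside lookup using the redis crate; the key scheme, TTL, and fallback URL are hypothetical:

use redis::Commands;

fn jwks_url_for(tenant_id: &str) -> redis::RedisResult<String> {
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut con = client.get_connection()?;
    let key = format!("dicom:jwks_url:{tenant_id}");

    // Cache hit: skip the database entirely
    if let Some(cached) = con.get::<_, Option<String>>(&key)? {
        return Ok(cached);
    }
    // Cache miss: load from the database (placeholder), then cache with a TTL
    let url = String::from("https://idp.example.com/.well-known/jwks.json");
    con.set_ex::<_, _, ()>(&key, &url, 3600)?;
    Ok(url)
}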

Read more →

Medical Imaging Certificate Management - Secure Healthcare System Authentication

Certificate Management System for Medical Imaging

The cert_helper.rs module provides comprehensive certificate management capabilities for securing medical imaging systems, implementing Public Key Infrastructure (PKI) with X.509 certificates and OpenSSL integration.

Security Architecture

Certificate Authority (CA) Management

Implements a complete CA system for issuing and managing certificates within medical imaging environments.

Client Certificate Generation

Generates client certificates with machine-specific bindings for enhanced security.

Server Certificate Management

Creates server certificates for HTTPS and DICOM TLS connections.
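
For orientation, a condensed sketch of generating a self-signed CA certificate with the openssl crate; the subject name and validity period are illustrative, and issuing client/server certificates from this CA follows the same builder pattern:

use openssl::asn1::Asn1Time;
use openssl::bn::BigNum;
use openssl::hash::MessageDigest;
use openssl::pkey::{PKey, Private};
use openssl::rsa::Rsa;
use openssl::x509::{X509, X509NameBuilder};

fn make_ca() -> Result<(PKey<Private>, X509), openssl::error::ErrorStack> {
    // CA key pair
    let key = PKey::from_rsa(Rsa::generate(2048)?)?;

    // Subject == issuer for a self-signed root
    let mut name = X509NameBuilder::new()?;
    name.append_entry_by_text("CN", "dicom-server-ca")?;
    let name = name.build();

    let mut builder = X509::builder()?;
    builder.set_version(2)?; // X.509 v3
    let serial = BigNum::from_u32(1)?.to_asn1_integer()?;
    builder.set_serial_number(&serial)?;
    builder.set_subject_name(&name)?;
    builder.set_issuer_name(&name)?;
    builder.set_pubkey(&key)?;
    let not_before = Asn1Time::days_from_now(0)?;
    builder.set_not_before(&not_before)?;
    let not_after = Asn1Time::days_from_now(3650)?;
    builder.set_not_after(&not_after)?;
    builder.sign(&key, MessageDigest::sha256())?;
    Ok((key, builder.build()))
}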

Read more →

DICOM Storage Configuration and Management | Medical Imaging Storage Management - Efficient DICOM File Organization

Storage Configuration and Management System

The storage_config.rs module provides intelligent storage management for medical imaging systems, optimizing file organization and path management for large-scale DICOM archives.

Storage Architecture

Hierarchical Directory Structure

Implements a logical directory hierarchy based on medical imaging metadata:

  • Tenant-based organization for multi-tenant deployments
  • Date-based grouping for efficient retrieval
  • UID-based separation for unique identification

Path Management

Generates consistent, predictable paths for all DICOM files and associated metadata.

Key Features

UID Hashing

Uses the SeaHash algorithm to create fixed-length hashed identifiers for secure, predictable path generation while maintaining patient privacy.
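
A sketch of how such a path could be assembled, assuming the seahash crate; the directory order and the 16-hex-digit formatting are illustrative:

use std::path::PathBuf;

// Tenant / date / hashed-UID hierarchy, mirroring the layout described above
fn dicom_path(root: &str, tenant: &str, study_date: &str, study_uid: &str, series_uid: &str) -> PathBuf {
    let study_hash = format!("{:016x}", seahash::hash(study_uid.as_bytes()));
    let series_hash = format!("{:016x}", seahash::hash(series_uid.as_bytes()));
    [root, tenant, study_date, study_hash.as_str(), series_hash.as_str()]
        .iter()
        .collect()
}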

Read more →

DICOM Server Configuration Management: Flexible Multi-Environment Setup

DICOM Server Configuration Management System

The server_config.rs module provides a robust configuration management system for DICOM servers, supporting complex multi-environment deployments with flexible configuration sources.

Configuration Architecture

Multi-Source Configuration Loading

The system supports configuration loading from multiple sources:

  • JSON configuration files (application.dev.json, application.test.json)
  • Environment variables with prefix support
  • Secure credential management

Environment-Specific Settings

Different environments (development, testing, production) can have distinct configurations while maintaining consistency through shared structures.
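
A minimal sketch of this layered loading, assuming the config and serde crates; the struct fields and the APP prefix are illustrative:

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct DatabaseSettings {
    host: String,
    port: u16,
    user: String,
    password: String,
}

#[derive(Debug, Deserialize)]
struct Settings {
    database: DatabaseSettings,
}

fn load(env: &str) -> Result<Settings, config::ConfigError> {
    config::Config::builder()
        // e.g. application.dev.json or application.test.json
        .add_source(config::File::with_name(&format!("application.{env}.json")))
        // environment variables override file values, e.g. APP_DATABASE__HOST
        .add_source(config::Environment::with_prefix("APP").separator("__"))
        .build()?
        .try_deserialize()
}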

Core Configuration Components

Database Configuration

Supports multiple database backends including MySQL, PostgreSQL, and Doris with secure password handling and connection string generation.

Read more →

DICOM Server Common Library: Building Scalable Medical Imaging Systems with Rust

DICOM Server Common Library

This is a common library for DICOM medical imaging systems that provides various core functionality modules for building scalable healthcare applications.

Feature Modules

Project Overview

The library is designed to provide reusable components for DICOM servers, including configuration loading, database access, cache management, messaging, and other essential functions for healthcare applications.

Read more →

DICOM File Transfer and Conversion - Medical Imaging Format Transformation



DICOM File Transfer and Conversion System

The change_file_transfer.rs module provides comprehensive DICOM file transfer syntax conversion capabilities, enabling seamless format transformation for medical imaging systems using GDCM integration.
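
One common way to integrate GDCM from Rust is to shell out to its gdcmconv tool; a minimal sketch (verify the flag names against your installed GDCM version):

use std::process::Command;

// Decompress a file's pixel data via gdcmconv (see `gdcmconv --help`)
fn decompress(input: &str, output: &str) -> std::io::Result<bool> {
    let status = Command::new("gdcmconv")
        .args(["--raw", input, output])
        .status()?;
    Ok(status.success())
}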

Read more →

Database Access and DICOM Object Metadata Extraction - Comprehensive Medical Imaging Data Processing

DICOM Object Metadata Extraction System

The dicom_object_meta.rs module provides comprehensive extraction capabilities for DICOM medical imaging metadata, processing hundreds of standardized DICOM tags to create structured data representations.
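
Before the category walkthrough, here is a minimal extraction sketch using the dicom-rs dicom-object crate, pulling a few representative tags by keyword:

use dicom_object::open_file;

fn extract(path: &str) -> Result<(), Box<dyn std::error::Error>> {
    let obj = open_file(path)?;
    // Look up elements by their standard DICOM keywords
    let patient_name = obj.element_by_name("PatientName")?.to_str()?;
    let patient_id = obj.element_by_name("PatientID")?.to_str()?;
    let study_uid = obj.element_by_name("StudyInstanceUID")?.to_str()?;
    let study_date = obj.element_by_name("StudyDate")?.to_str()?;
    println!("{patient_name} {patient_id} {study_uid} {study_date}");
    Ok(())
}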

Metadata Categories

Patient Information

Extracts critical patient identifiers and demographics:

  • Patient name, ID, and demographics
  • Birth date and sex
  • Medical record numbers
  • Accession numbers

Study Metadata

Processes study-level information:

  • Study instance UID and description
  • Study date and time
  • Referring physician information
  • Study ID and accession numbers

Series Information

Handles series-specific data:

Read more →

Building Scalable Cloud DICOM-WEB Services: Multi-Database Support with Rust Implementation

Overview

In this article, we’ll explore how to implement a scalable cloud DICOM-WEB service using Rust that supports multiple database systems, including MySQL, PostgreSQL, and MongoDB. We’ll focus on creating a flexible database interface that allows for seamless switching between different database backends.

Prerequisites: Interface Definition (Traits)

First, let’s define our database provider interface using Rust traits:

dicom_dbprovider.rs

use crate::dicom_meta::{DicomJsonMeta, DicomStateMeta};
use async_trait::async_trait;
use thiserror::Error;

#[derive(Error, Debug)]
pub enum DbError {
    #[error("Database operation failed: {0}")]
    DatabaseError(String),

    #[error("Data record does not exist: {0}")]
    RecordNotExists(String),

    #[error("Record already exists")]
    AlreadyExists,

    #[error("Entity extraction failed: {0}")]
    ExtractionFailed(String),

    #[error("Transaction failed: {0}")]
    TransactionFailed(String),
}

pub fn current_time() -> chrono::NaiveDateTime {
    chrono::Local::now().naive_local()
}

#[async_trait]
pub trait DbProvider: Send + Sync {
    async fn save_state_info(&self, state_meta: &DicomStateMeta) -> Result<(), DbError>;

    async fn save_state_list(&self, state_meta: &[DicomStateMeta]) -> Result<(), DbError>;

    async fn save_json_list(&self, json_meta: &[DicomJsonMeta]) -> Result<(), DbError>;

    async fn get_state_metaes(
        &self,
        tenant_id: &str,
        study_uid: &str,
    ) -> Result<Vec<DicomStateMeta>, DbError>;

    /// Get metadata for series that still need JSON generation.
    /// `end_time` is the cutoff time.
    async fn get_json_metaes(&self, end_time: chrono::NaiveDateTime) -> Result<Vec<DicomStateMeta>, DbError>;

    async fn get_json_meta(
        &self,
        tenant_id: &str,
        study_uid: &str,
        series_uid: &str,
    ) -> Result<DicomJsonMeta, DbError>;
}
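
Because the trait is object-safe (async_trait boxes the returned futures), callers can hold a Box<dyn DbProvider> and switch backends at startup. A sketch, with a hypothetical make_provider factory (MySqlDbProvider is implemented in 3.1 below):

// Hypothetical factory: choose the backend from configuration at startup.
fn make_provider(kind: &str, conn: String) -> Box<dyn DbProvider> {
    match kind {
        "mysql" => Box::new(MySqlDbProvider::new(conn)),
        // "postgres" => ... (see 3.2 below)
        _ => panic!("unknown database backend: {}", kind),
    }
}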

Implementation

3.1 MySQL Database Implementation

dicom_mysql.rs

use crate::dicom_dbprovider::{DbError, DbProvider};
use crate::dicom_meta::{DicomJsonMeta, DicomStateMeta};
use async_trait::async_trait;
use mysql::prelude::*;
use mysql::*;

pub struct MySqlDbProvider {
    db_connection_string: String,
}

impl MySqlDbProvider {
    pub fn new(db_connection_string: String) -> Self {
        MySqlDbProvider {
            db_connection_string,
        }
    }
}

#[async_trait]
impl DbProvider for MySqlDbProvider {
    async fn save_state_info(&self, state_meta: &DicomStateMeta) -> Result<(), DbError> {
        
        // Open a MySQL connection for this call
        let mut conn = mysql::Conn::new(self.db_connection_string.as_str())
            .map_err(|e| DbError::DatabaseError(format!("Failed to connect to MySQL: {}", e)))?;

        
        let query = r#"
            INSERT INTO dicom_state_meta (
                tenant_id,
                patient_id,
                study_uid,
                series_uid,
                study_uid_hash,
                series_uid_hash,
                study_date_origin,
                patient_name,
                patient_sex,
                patient_birth_date,
                patient_birth_time,
                patient_age,
                patient_size,
                patient_weight,
                study_date,
                study_time,
                accession_number,
                study_id,
                study_description,
                modality,
                series_number,
                series_date,
                series_time,
                series_description,
                body_part_examined,
                protocol_name,
                series_related_instances,
                created_time,
                updated_time
            ) VALUES (
                :tenant_id,
                :patient_id,
                :study_uid,
                :series_uid,
                :study_uid_hash,
                :series_uid_hash,
                :study_date_origin,
                :patient_name,
                :patient_sex,
                :patient_birth_date,
                :patient_birth_time,
                :patient_age,
                :patient_size,
                :patient_weight,
                :study_date,
                :study_time,
                :accession_number,
                :study_id,
                :study_description,
                :modality,
                :series_number,
                :series_date,
                :series_time,
                :series_description,
                :body_part_examined,
                :protocol_name,
                :series_related_instances,
                :created_time,
                :updated_time
            ) ON DUPLICATE KEY UPDATE
                patient_id = VALUES(patient_id),
                study_uid_hash = VALUES(study_uid_hash),
                series_uid_hash = VALUES(series_uid_hash),
                study_date_origin = VALUES(study_date_origin),
                patient_name = VALUES(patient_name),
                patient_sex = VALUES(patient_sex),
                patient_birth_date = VALUES(patient_birth_date),
                patient_birth_time = VALUES(patient_birth_time),
                patient_age = VALUES(patient_age),
                patient_size = VALUES(patient_size),
                patient_weight = VALUES(patient_weight),
                study_date = VALUES(study_date),
                study_time = VALUES(study_time),
                accession_number = VALUES(accession_number),
                study_id = VALUES(study_id),
                study_description = VALUES(study_description),
                modality = VALUES(modality),
                series_number = VALUES(series_number),
                series_date = VALUES(series_date),
                series_time = VALUES(series_time),
                series_description = VALUES(series_description),
                body_part_examined = VALUES(body_part_examined),
                protocol_name = VALUES(protocol_name),
                series_related_instances = VALUES(series_related_instances),
                updated_time = VALUES(updated_time)
        "#;

        
        // Bind the metadata fields by name and execute the upsert
        conn.exec_drop(
            query,
            params! {
                "tenant_id" => &state_meta.tenant_id,
                "patient_id" => &state_meta.patient_id,
                "study_uid" => &state_meta.study_uid,
                "series_uid" => &state_meta.series_uid,
                "study_uid_hash" => &state_meta.study_uid_hash,
                "series_uid_hash" => &state_meta.series_uid_hash,
                "study_date_origin" => &state_meta.study_date_origin,
                "patient_name" => &state_meta.patient_name,
                "patient_sex" => &state_meta.patient_sex,
                "patient_birth_date" => &state_meta.patient_birth_date,
                "patient_birth_time" => &state_meta.patient_birth_time,
                "patient_age" => &state_meta.patient_age,
                "patient_size" => &state_meta.patient_size,
                "patient_weight" => &state_meta.patient_weight,
                "study_date" => &state_meta.study_date,
                "study_time" => &state_meta.study_time,
                "accession_number" => &state_meta.accession_number,
                "study_id" => &state_meta.study_id,
                "study_description" => &state_meta.study_description,
                "modality" => &state_meta.modality,
                "series_number" => &state_meta.series_number,
                "series_date" => &state_meta.series_date,
                "series_time" => &state_meta.series_time,
                "series_description" => &state_meta.series_description,
                "body_part_examined" => &state_meta.body_part_examined,
                "protocol_name" => &state_meta.protocol_name,
                "series_related_instances" => &state_meta.series_related_instances,
                "created_time" => &state_meta.created_time,
                "updated_time" => &state_meta.updated_time,
            },
        )
        .map_err(|e| DbError::DatabaseError(format!("Failed to execute query: {}", e)))?;

        Ok(())
    }

    async fn save_state_list(&self, state_meta_list: &[DicomStateMeta]) -> Result<(), DbError> {
        if state_meta_list.is_empty() {
            return Ok(());
        }

        
        let mut conn = mysql::Conn::new(self.db_connection_string.as_str())
            .map_err(|e| DbError::DatabaseError(format!("Failed to connect to MySQL: {}", e)))?;

       
        conn.query_drop("START TRANSACTION")
            .map_err(|e| DbError::DatabaseError(format!("Failed to start transaction: {}", e)))?;

       
        let query = r#"
            INSERT INTO dicom_state_meta (
                tenant_id,
                patient_id,
                study_uid,
                series_uid,
                study_uid_hash,
                series_uid_hash,
                study_date_origin,
                patient_name,
                patient_sex,
                patient_birth_date,
                patient_birth_time,
                patient_age,
                patient_size,
                patient_weight,
                study_date,
                study_time,
                accession_number,
                study_id,
                study_description,
                modality,
                series_number,
                series_date,
                series_time,
                series_description,
                body_part_examined,
                protocol_name,
                series_related_instances,
                created_time,
                updated_time
            ) VALUES (
                :tenant_id,
                :patient_id,
                :study_uid,
                :series_uid,
                :study_uid_hash,
                :series_uid_hash,
                :study_date_origin,
                :patient_name,
                :patient_sex,
                :patient_birth_date,
                :patient_birth_time,
                :patient_age,
                :patient_size,
                :patient_weight,
                :study_date,
                :study_time,
                :accession_number,
                :study_id,
                :study_description,
                :modality,
                :series_number,
                :series_date,
                :series_time,
                :series_description,
                :body_part_examined,
                :protocol_name,
                :series_related_instances,
                :created_time,
                :updated_time
            ) ON DUPLICATE KEY UPDATE
                patient_id = VALUES(patient_id),
                study_uid_hash = VALUES(study_uid_hash),
                series_uid_hash = VALUES(series_uid_hash),
                study_date_origin = VALUES(study_date_origin),
                patient_name = VALUES(patient_name),
                patient_sex = VALUES(patient_sex),
                patient_birth_date = VALUES(patient_birth_date),
                patient_birth_time = VALUES(patient_birth_time),
                patient_age = VALUES(patient_age),
                patient_size = VALUES(patient_size),
                patient_weight = VALUES(patient_weight),
                study_date = VALUES(study_date),
                study_time = VALUES(study_time),
                accession_number = VALUES(accession_number),
                study_id = VALUES(study_id),
                study_description = VALUES(study_description),
                modality = VALUES(modality),
                series_number = VALUES(series_number),
                series_date = VALUES(series_date),
                series_time = VALUES(series_time),
                series_description = VALUES(series_description),
                body_part_examined = VALUES(body_part_examined),
                protocol_name = VALUES(protocol_name),
                series_related_instances = VALUES(series_related_instances),
                updated_time = VALUES(updated_time)
        "#;

        
        // Execute the upsert once per record; roll back on the first failure
        for state_meta in state_meta_list {
            let result = conn.exec_drop(
                query,
                params! {
                    "tenant_id" => &state_meta.tenant_id,
                    "patient_id" => &state_meta.patient_id,
                    "study_uid" => &state_meta.study_uid,
                    "series_uid" => &state_meta.series_uid,
                    "study_uid_hash" => &state_meta.study_uid_hash,
                    "series_uid_hash" => &state_meta.series_uid_hash,
                    "study_date_origin" => &state_meta.study_date_origin,
                    "patient_name" => &state_meta.patient_name,
                    "patient_sex" => &state_meta.patient_sex,
                    "patient_birth_date" => &state_meta.patient_birth_date,
                    "patient_birth_time" => &state_meta.patient_birth_time,
                    "patient_age" => &state_meta.patient_age,
                    "patient_size" => &state_meta.patient_size,
                    "patient_weight" => &state_meta.patient_weight,
                    "study_date" => &state_meta.study_date,
                    "study_time" => &state_meta.study_time,
                    "accession_number" => &state_meta.accession_number,
                    "study_id" => &state_meta.study_id,
                    "study_description" => &state_meta.study_description,
                    "modality" => &state_meta.modality,
                    "series_number" => &state_meta.series_number,
                    "series_date" => &state_meta.series_date,
                    "series_time" => &state_meta.series_time,
                    "series_description" => &state_meta.series_description,
                    "body_part_examined" => &state_meta.body_part_examined,
                    "protocol_name" => &state_meta.protocol_name,
                    "series_related_instances" => &state_meta.series_related_instances,
                    "created_time" => &state_meta.created_time,
                    "updated_time" => &state_meta.updated_time,
                },
            );

           
            if let Err(e) = result {
                conn.query_drop("ROLLBACK").map_err(|rollback_err| {
                    DbError::DatabaseError(format!(
                        "Failed to rollback transaction after error {}: {}",
                        e, rollback_err
                    ))
                })?;

                return Err(DbError::DatabaseError(format!(
                    "Failed to execute query for state meta: {}",
                    e
                )));
            }
        }

        
        conn.query_drop("COMMIT")
            .map_err(|e| DbError::DatabaseError(format!("Failed to commit transaction: {}", e)))?;

        Ok(())
    }

    async fn save_json_list(
        &self,
        json_meta_list: &[DicomJsonMeta],
    ) -> std::result::Result<(), DbError> {
        if json_meta_list.is_empty() {
            return Ok(());
        }

        
        let mut conn = mysql::Conn::new(self.db_connection_string.as_str())
            .map_err(|e| DbError::DatabaseError(format!("Failed to connect to MySQL: {}", e)))?;

       
        conn.query_drop("START TRANSACTION")
            .map_err(|e| DbError::DatabaseError(format!("Failed to start transaction: {}", e)))?;

      
        let query = r#"
            INSERT INTO dicom_json_meta (
                tenant_id,
                study_uid,
                series_uid,
                study_uid_hash,
                series_uid_hash,
                study_date_origin,
                created_time,
                flag_time,
                json_status,
                retry_times
            ) VALUES (
                :tenant_id,
                :study_uid,
                :series_uid,
                :study_uid_hash,
                :series_uid_hash,
                :study_date_origin,
                :created_time,
                :flag_time,
                :json_status,
                :retry_times
            ) ON DUPLICATE KEY UPDATE
                study_uid_hash = VALUES(study_uid_hash),
                series_uid_hash = VALUES(series_uid_hash),
                study_date_origin = VALUES(study_date_origin),
                created_time = VALUES(created_time),
                flag_time = VALUES(flag_time),
                json_status = VALUES(json_status),
                retry_times = VALUES(retry_times)
        "#;

        
        for json_meta in json_meta_list {
            let result = conn.exec_drop(
                query,
                params! {
                    "tenant_id" => &json_meta.tenant_id,
                    "study_uid" => &json_meta.study_uid,
                    "series_uid" => &json_meta.series_uid,
                    "study_uid_hash" => &json_meta.study_uid_hash,
                    "series_uid_hash" => &json_meta.series_uid_hash,
                    "study_date_origin" => &json_meta.study_date_origin,
                    "created_time" => &json_meta.created_time,
                    "flag_time" => &json_meta.flag_time,
                    "json_status" => &json_meta.json_status,
                    "retry_times" => &json_meta.retry_times,
                },
            );

            
            if let Err(e) = result {
                conn.query_drop("ROLLBACK").map_err(|rollback_err| {
                    DbError::DatabaseError(format!(
                        "Failed to rollback transaction after error {}: {}",
                        e, rollback_err
                    ))
                })?;

                return Err(DbError::DatabaseError(format!(
                    "Failed to execute query for json meta: {}",
                    e
                )));
            }
        }

       
        conn.query_drop("COMMIT")
            .map_err(|e| DbError::DatabaseError(format!("Failed to commit transaction: {}", e)))?;

        Ok(())
    }

    async fn get_state_metaes(
        &self,
        tenant_id: &str,
        study_uid: &str,
    ) -> Result<Vec<DicomStateMeta>, DbError> {
       
        let mut conn = mysql::Conn::new(self.db_connection_string.as_str())
            .map_err(|e| DbError::DatabaseError(format!("Failed to connect to MySQL: {}", e)))?;

        
        let query = r#"
            SELECT
                tenant_id,
                patient_id,
                study_uid,
                series_uid,
                study_uid_hash,
                series_uid_hash,
                study_date_origin,
                patient_name,
                patient_sex,
                patient_birth_date,
                patient_birth_time,
                patient_age,
                patient_size,
                patient_weight,
                study_date,
                study_time,
                accession_number,
                study_id,
                study_description,
                modality,
                series_number,
                series_date,
                series_time,
                series_description,
                body_part_examined,
                protocol_name,
                series_related_instances,
                created_time,
                updated_time
            FROM dicom_state_meta
            WHERE tenant_id = :tenant_id AND study_uid = :study_uid
        "#;

       
        // Map each returned row onto a DicomStateMeta
        let result: Vec<DicomStateMeta> = conn
            .exec_map(
                query,
                params! {
                    "tenant_id" => tenant_id,
                    "study_uid" => study_uid,
                },
                |row: mysql::Row| {
                    
                    DicomStateMeta {
                        tenant_id: row.get("tenant_id").unwrap_or_default(),
                        patient_id: row.get("patient_id").unwrap_or_default(),
                        study_uid: row.get("study_uid").unwrap_or_default(),
                        series_uid: row.get("series_uid").unwrap_or_default(),
                        study_uid_hash: row.get("study_uid_hash").unwrap_or_default(),
                        series_uid_hash: row.get("series_uid_hash").unwrap_or_default(),
                        study_date_origin: row.get("study_date_origin").unwrap_or_default(),
                        patient_name: row.get("patient_name").unwrap_or_default(),
                        patient_sex: row.get("patient_sex").unwrap_or_default(),
                        patient_birth_date: row.get("patient_birth_date").unwrap_or_default(),
                        patient_birth_time: row.get("patient_birth_time").unwrap_or_default(),
                        patient_age: row.get("patient_age").unwrap_or_default(),
                        patient_size: row.get("patient_size").unwrap_or_default(),
                        patient_weight: row.get("patient_weight").unwrap_or_default(),
                        study_date: row.get("study_date").unwrap_or_default(),
                        study_time: row.get("study_time").unwrap_or_default(),
                        accession_number: row.get("accession_number").unwrap_or_default(),
                        study_id: row.get("study_id").unwrap_or_default(),
                        study_description: row.get("study_description").unwrap_or_default(),
                        modality: row.get("modality").unwrap_or_default(),
                        series_number: row.get("series_number").unwrap_or_default(),
                        series_date: row.get("series_date").unwrap_or_default(),
                        series_time: row.get("series_time").unwrap_or_default(),
                        series_description: row.get("series_description").unwrap_or_default(),
                        body_part_examined: row.get("body_part_examined").unwrap_or_default(),
                        protocol_name: row.get("protocol_name").unwrap_or_default(),
                        series_related_instances: row
                            .get("series_related_instances")
                            .unwrap_or_default(),
                        created_time: row.get("created_time").unwrap_or_default(),
                        updated_time: row.get("updated_time").unwrap_or_default(),
                    }
                },
            )
            .map_err(|e| DbError::DatabaseError(format!("Failed to execute query: {}", e)))?;

        Ok(result)
    }

    async fn get_json_metaes(
        &self,
        end_time: chrono::NaiveDateTime,
    ) -> std::result::Result<Vec<DicomStateMeta>, DbError> {
        
        let mut conn = mysql::Conn::new(self.db_connection_string.as_str())
            .map_err(|e| DbError::DatabaseError(format!("Failed to connect to MySQL: {}", e)))?;

        
        let query = r#"
            Select tenant_id,
                patient_id,
                study_uid,
                series_uid,
                study_uid_hash,
                series_uid_hash,
                study_date_origin,
                patient_name,
                patient_sex,
                patient_birth_date,
                patient_birth_time,
                patient_age,
                patient_size,
                patient_weight,
                study_date,
                study_time,
                accession_number,
                study_id,
                study_description,
                modality,
                series_number,
                series_date,
                series_time,
                series_description,
                body_part_examined,
                protocol_name,
                series_related_instances,
                created_time,
                updated_time
            FROM (SELECT dsm.*
                  FROM dicom_state_meta dsm
                           LEFT JOIN dicom_json_meta djm
                                     ON dsm.tenant_id = djm.tenant_id
                                         AND dsm.study_uid = djm.study_uid
                                         AND dsm.series_uid = djm.series_uid
                  WHERE djm.tenant_id IS NULL AND dsm.updated_time < :end_time
                  UNION ALL
                  SELECT dsm.*
                  FROM dicom_state_meta dsm
                           INNER JOIN dicom_json_meta djm
                                      ON dsm.tenant_id = djm.tenant_id
                                          AND dsm.study_uid = djm.study_uid
                                          AND dsm.series_uid = djm.series_uid
                  WHERE dsm.updated_time != djm.flag_time
                    AND dsm.updated_time < :end_time
                  ) AS t
                  ORDER BY t.updated_time ASC LIMIT 10;
        "#;

     
        let result: Vec<DicomStateMeta> = conn
            .exec_map(query, params! { "end_time" => end_time }, |row: mysql::Row| {
              
                DicomStateMeta {
                    tenant_id: row.get("tenant_id").unwrap_or_default(),
                    patient_id: row.get("patient_id").unwrap_or_default(),
                    study_uid: row.get("study_uid").unwrap_or_default(),
                    series_uid: row.get("series_uid").unwrap_or_default(),
                    study_uid_hash: row.get("study_uid_hash").unwrap_or_default(),
                    series_uid_hash: row.get("series_uid_hash").unwrap_or_default(),
                    study_date_origin: row.get("study_date_origin").unwrap_or_default(),
                    patient_name: row.get("patient_name").unwrap_or_default(),
                    patient_sex: row.get("patient_sex").unwrap_or_default(),
                    patient_birth_date: row.get("patient_birth_date").unwrap_or_default(),
                    patient_birth_time: row.get("patient_birth_time").unwrap_or_default(),
                    patient_age: row.get("patient_age").unwrap_or_default(),
                    patient_size: row.get("patient_size").unwrap_or_default(),
                    patient_weight: row.get("patient_weight").unwrap_or_default(),
                    study_date: row.get("study_date").unwrap_or_default(),
                    study_time: row.get("study_time").unwrap_or_default(),
                    accession_number: row.get("accession_number").unwrap_or_default(),
                    study_id: row.get("study_id").unwrap_or_default(),
                    study_description: row.get("study_description").unwrap_or_default(),
                    modality: row.get("modality").unwrap_or_default(),
                    series_number: row.get("series_number").unwrap_or_default(),
                    series_date: row.get("series_date").unwrap_or_default(),
                    series_time: row.get("series_time").unwrap_or_default(),
                    series_description: row.get("series_description").unwrap_or_default(),
                    body_part_examined: row.get("body_part_examined").unwrap_or_default(),
                    protocol_name: row.get("protocol_name").unwrap_or_default(),
                    series_related_instances: row
                        .get("series_related_instances")
                        .unwrap_or_default(),
                    created_time: row.get("created_time").unwrap_or_default(),
                    updated_time: row.get("updated_time").unwrap_or_default(),
                }
            })
            .map_err(|e| DbError::DatabaseError(format!("Failed to execute query: {}", e)))?;

        Ok(result)
    }

    async fn get_json_meta(
        &self,
        tenant_id: &str,
        study_uid: &str,
        series_uid: &str,
    ) -> std::result::Result<DicomJsonMeta, DbError> {
       
        let mut conn = mysql::Conn::new(self.db_connection_string.as_str())
            .map_err(|e| DbError::DatabaseError(format!("Failed to connect to MySQL: {}", e)))?;

      
        let query = r#"
        SELECT
            tenant_id,
            study_uid,
            series_uid,
            study_uid_hash,
            series_uid_hash,
            study_date_origin,
            created_time,
            flag_time,
            json_status,
            retry_times
        FROM dicom_json_meta
        WHERE series_uid = :series_uid and tenant_id = :tenant_id and study_uid = :study_uid
    "#;

       
        // Fetch at most one row and map it onto DicomJsonMeta
        let result: Option<DicomJsonMeta> = conn
            .exec_first(
                query,
                params! {
                    "series_uid" => series_uid,
                    "tenant_id" => tenant_id,
                    "study_uid" => study_uid,
                },
            )
            .map_err(|e| DbError::DatabaseError(format!("Failed to execute query: {}", e)))?
            .map(|row: mysql::Row| DicomJsonMeta {
                tenant_id: row.get("tenant_id").unwrap_or_default(),
                study_uid: row.get("study_uid").unwrap_or_default(),
                series_uid: row.get("series_uid").unwrap_or_default(),
                study_uid_hash: row.get("study_uid_hash").unwrap_or_default(),
                series_uid_hash: row.get("series_uid_hash").unwrap_or_default(),
                study_date_origin: row.get("study_date_origin").unwrap_or_default(),
                created_time: row.get("created_time").unwrap_or_default(),
                flag_time: row.get("flag_time").unwrap_or_default(),
                json_status: row.get("json_status").unwrap_or_default(),
                retry_times: row.get("retry_times").unwrap_or_default(),
            });

        
        match result {
            Some(json_meta) => Ok(json_meta),
            None => Err(DbError::RecordNotExists(format!(
                "DicomJsonMeta with series_uid {} not found",
                series_uid
            ))),
        }
    }
}

3.2 PostgreSQL Implementation

Here’s the PostgreSQL implementation:

Read more →

Building a DICOM Gateway with fo-dicom: A Guide to Batch Sending DICOM Files

DICOM Gateway: A Middleware System for Medical Imaging Data Conversion and Routing

A DICOM Gateway is a specialized middleware system designed to convert, route, adapt, or process DICOM data streams between different medical information systems. It is typically deployed at network boundaries or system integration points, serving as a “protocol converter,” “data router,” and “data preprocessor.”

What is a DICOM Gateway?

Conceptually, a DICOM Gateway is the specific application of traditional network “gateways” in the medical imaging domain:

Read more →

Rust Type-Safe Wrappers for DICOM Medical Imaging Systems: Building Scalable Cloud DICOM-WEB Services

Introduction

In DICOM medical imaging systems, data accuracy and consistency are critical. This article explores how to use Rust’s type system to create custom type-safe wrappers that prevent runtime errors and improve code maintainability.

BoundedString: Length-Limited Strings

BoundedString<N> is a generic struct that ensures strings do not exceed N characters in length.

FixedLengthString: Fixed-Length Strings

FixedLengthString<N> ensures strings are exactly N characters long, which is useful when handling certain DICOM fields.
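
A minimal sketch of the BoundedString&lt;N&gt; idea, using a const generic parameter; the error type and the 64-character example are illustrative:

// Construction fails if the input exceeds N characters, so over-long
// values can never reach the database layer.
#[derive(Debug, Clone)]
pub struct BoundedString<const N: usize>(String);

impl<const N: usize> BoundedString<N> {
    pub fn new(s: impl Into<String>) -> Result<Self, String> {
        let s = s.into();
        if s.chars().count() > N {
            return Err(format!("string exceeds {} characters", N));
        }
        Ok(Self(s))
    }

    pub fn as_str(&self) -> &str {
        &self.0
    }
}

// Example: a DICOM long-string (LO) field is limited to 64 characters.
type StudyDescription = BoundedString<64>;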

Read more →

Rust Implementation of DICOM Medical Imaging Systems: Type-Safe Database Design | Building Scalable Cloud DICOM-WEB Services

Rust Implementation of DICOM Medical Imaging Systems: Type-Safe Database Design

This project demonstrates how to build a robust, type-safe database access layer in Rust, specifically designed for healthcare applications requiring strict data validation. Through abstract interfaces and concrete implementation separation, the system can easily scale to support more database types while ensuring code maintainability and testability.

Core Design Concepts

1. Type-Safe Data Structures

Several key type-safe wrappers are defined in the project:

Read more →

Preparations for Building Scalable Cloud DICOM-WEB Services

Preparations for Building Scalable Cloud DICOM-WEB Services

This article introduces the architectural design of a DICOM medical imaging system developed using Rust, which employs a modern technology stack including PostgreSQL as the primary index database, Apache Doris for log storage, RedPanda as the message queue, and Redis for caching. The system design supports both standalone operation and distributed scaling, fully leveraging the safety and performance advantages of the Rust programming language.

Read more →

How to Use fo-dicom to Build DICOM C-Store SCU Tool for Batch Sending DICOM Files

How to Use fo-dicom to Build DICOM C-Store SCU Tool for Batch Sending DICOM Files

Building a DICOM C-Store SCU (Service Class User) tool is essential for medical imaging applications that require sending DICOM files to storage systems. This tutorial will guide you through creating a robust batch DICOM file sender using fo-dicom, a powerful open-source DICOM library written in C#.

fo-dicom provides comprehensive features for working with DICOM data, including support for reading, writing, and manipulating DICOM files. This guide demonstrates how to create a DICOM C-Store SCU tool capable of batch sending DICOM files for testing and verification purposes in medical imaging environments.

Read more →

How to Build Scalable Cloud DICOM-WEB Services | DICOM Cloud

How to Build Scalable Cloud DICOM-WEB Services

Learn how to build distributed DICOM-WEB services using open-source projects.

Overall Architecture

  1. Apache Kafka as the message queue; RedPanda can be used as an alternative during development.
  2. Apache Doris as the data warehouse, storing DicomStateMeta, DicomImageMeta, and WadoAccessLog and providing query and statistical analysis.
  3. PostgreSQL as the database, providing data storage and indexing. It stores only patient-, study-, and series-level metadata to fully leverage the ACID properties of a relational database, and can be scaled later with Citus.
  4. Redis as the cache, providing data caching functionality.
  5. Nginx as the reverse proxy server, providing load balancing, static file serving, and TLS termination.

Files received by the storage service are first stored locally; their metadata is then published to the message queue via Kafka.

Read more →

How to Use NLog in .NET Core

Benefits of Using NLog for Logging in .NET Projects

NLog is a powerful, flexible, and high-performance logging framework widely used in .NET applications (including .NET Framework, .NET Core, and .NET 5/6/7/8+). Integrating NLog into .NET projects can significantly enhance system observability, maintainability, and debugging efficiency. Here are the main advantages of using NLog.


I. Core Advantages

1. High Performance with Low Overhead

  • NLog is highly optimized, utilizing asynchronous writing, buffering, and batch processing mechanisms that have minimal impact on application performance.
  • Supports asynchronous logging (async="true") to prevent I/O from blocking the main thread.

2. Flexible Configuration Options

  • Supports externalized configuration through XML configuration files (like nlog.config), allowing adjustments to logging behavior without recompiling code.
  • Also supports code-based configuration for dynamic scenarios or cloud-native environments.

3. Rich Output Targets

NLog supports writing logs to multiple targets simultaneously, including:

Read more →

How to Modify DICOM Transfer Syntax Using the fo-dicom Library: An Encoding Format Conversion Guide

Purpose and Benefits of Modifying DICOM File Transfer Syntax

In medical image processing and exchange, Transfer Syntax is a key component of the DICOM standard that defines data encoding methods. It determines the byte order of data elements in DICOM files, whether Value Representation (VR) is explicitly declared, and whether pixel data is compressed. Therefore, modifying the transfer syntax of DICOM files is a common and important operation with clear purposes and significant benefits.

Read more →

DICOM Core Information Structure and Explanation: Understanding Medical Imaging Standards

DICOM Basic Concepts

DICOM (Digital Imaging and Communications in Medicine) is the international standard (ISO 12052) for medical imaging and related information, used for storing, exchanging, and transmitting medical images and related data. DICOM files contain not only pixel data (the image itself) but also extensive metadata describing the image, organized as “data elements” within “Information Object Definitions” (IODs).

✅ 1. Core Structure of DICOM Files

File Header (File Meta Information)

  • 128-byte fixed-length preamble (usually all 0x00, optional)
  • 4-byte DICOM prefix “DICM”
  • File meta information group (Group 0002), including:
    • (0002,0000) File Meta Information Group Length
    • (0002,0001) File Meta Information Version
    • (0002,0002) Media Storage SOP Class UID
    • (0002,0003) Media Storage SOP Instance UID
    • (0002,0010) Transfer Syntax UID
    • (0002,0012) Implementation Class UID
    • (0002,0013) Implementation Version Name
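
This layout gives a quick structural check; a sketch in Rust using only the standard library:

use std::fs::File;
use std::io::Read;

// Skip the 128-byte preamble, then check the 4-byte "DICM" prefix
fn is_dicom_file(path: &str) -> std::io::Result<bool> {
    let mut f = File::open(path)?;
    let mut header = [0u8; 132];
    if f.read_exact(&mut header).is_err() {
        return Ok(false); // shorter than preamble + prefix
    }
    Ok(&header[128..132] == b"DICM")
}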

Dataset

  • Contains actual image information and metadata, following specific IODs (such as CT Image IOD, MR Image IOD, etc.)
  • Data elements organized by tags, formatted as (gggg,eeee), where gggg is the group number and eeee is the element number
  • Each data element includes: Tag, VR (Value Representation), Value Length, Value

✅ 2. Core Information Object (IOD) Structure Explanation

Each DICOM image belongs to a SOP Class (Service-Object Pair Class), for example:

Read more →