DICOM Server Configuration Management System
The server_config.rs module provides a robust configuration management system for DICOM servers, supporting complex multi-environment deployments with flexible configuration sources.
Configuration Architecture
Multi-Source Configuration Loading
The system supports configuration loading from multiple sources:
- JSON configuration files (application.dev.json, application.test.json)
- Environment variables with prefix support
- Secure credential management
Environment-Specific Settings
Different environments (development, testing, production) can have distinct configurations while maintaining consistency through shared structures.
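The profile-to-file mapping is simple enough to sketch with the standard library alone. The file name format mirrors the one load_config() builds in the listing below; the helper name is illustrative:

```rust
/// Build the configuration file name for a given profile.
/// load_config() derives the profile from the APP_ENV variable,
/// defaulting to "dev" when it is unset.
fn config_file_for(profile: &str) -> String {
    format!("application.{}.json", profile)
}

fn main() {
    let profile = std::env::var("APP_ENV").unwrap_or_else(|_| "dev".into());
    // With APP_ENV unset this prints "application.dev.json".
    println!("{}", config_file_for(&profile));
}
```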
Core Configuration Components
Database Configuration
Supports multiple database backends including MySQL, PostgreSQL, and Doris with secure password handling and connection string generation.
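A minimal std-only sketch of the connection-string assembly (helper names are illustrative). A per-character match sidesteps the ordering pitfall of chained replace() calls, where '%' must be escaped before any other character:

```rust
/// Percent-encode characters that are unsafe inside the password
/// component of a database URL. Matching per character means '%'
/// escapes can never be accidentally double-encoded.
fn encode_url_password(raw: &str) -> String {
    let mut out = String::new();
    for c in raw.chars() {
        match c {
            '%' => out.push_str("%25"),
            '@' => out.push_str("%40"),
            ':' => out.push_str("%3A"),
            '/' => out.push_str("%2F"),
            '?' => out.push_str("%3F"),
            '&' => out.push_str("%26"),
            '#' => out.push_str("%23"),
            _ => out.push(c),
        }
    }
    out
}

/// Assemble a MySQL-style connection URL from its parts.
fn mysql_url(user: &str, password: &str, host: &str, port: u16, db: &str) -> String {
    format!("mysql://{}:{}@{}:{}/{}", user, encode_url_password(password), host, port, db)
}

fn main() {
    println!("{}", mysql_url("dicomstore", "xDicm#123", "192.168.1.14", 9030, "dicomdb"));
}
```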
Redis Configuration
Manages Redis connections with optional password authentication and TLS support for secure caching layers.
Security Configuration
Handles OAuth2/OpenID Connect settings including JWKS URLs, issuer URLs, and role-based access control configurations.
Storage Configuration
Manages local storage paths for DICOM files and JSON metadata with automatic directory creation and validation.
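The trailing-slash and length normalization can be sketched as a small std-only helper (illustrative name; the real loader additionally creates the directory and performs a write-and-delete probe to verify writability):

```rust
/// Normalize and validate a storage root path the way load_config() does:
/// strip a single trailing slash, then enforce the 64-character limit that
/// the downstream path layout assumes.
fn normalize_store_path(path: &str) -> Result<String, String> {
    let trimmed = path.strip_suffix('/').unwrap_or(path).to_string();
    if trimmed.len() > 64 {
        return Err(format!("store path exceeds 64 characters: {}", trimmed));
    }
    Ok(trimmed)
}

fn main() {
    println!("{:?}", normalize_store_path("/media/store/dcm/"));
}
```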
Message Queue Configuration
Configures Kafka integration for distributed processing with buffering, compression, and topic management settings.
DICOM SCP Configuration
Manages DICOM Store SCP settings including AE titles, transfer syntax support, and tenant identification.
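In the module below, transfer syntax UIDs are validated against dicom-rs's TransferSyntaxRegistry. A purely syntactic check of the DICOM UID rules can be sketched with the standard library (illustrative helper, not a registry lookup):

```rust
/// Syntactic check for a DICOM UID per the UI value-representation rules:
/// dot-separated numeric components, no leading zeros, at most 64 characters.
/// The real module checks the UID against TransferSyntaxRegistry instead.
fn is_valid_uid(uid: &str) -> bool {
    if uid.is_empty() || uid.len() > 64 {
        return false;
    }
    uid.split('.').all(|c| {
        !c.is_empty()
            && c.chars().all(|ch| ch.is_ascii_digit())
            && (c == "0" || !c.starts_with('0'))
    })
}

fn main() {
    // Explicit VR Little Endian, the fallback used by unsupported_ts_change_to.
    println!("{}", is_valid_uid("1.2.840.10008.1.2.1"));
}
```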
Security Features
- Credential Protection: Secure handling of database passwords and API keys
- Environment Isolation: Separate configurations prevent cross-environment contamination
- Validation: Built-in configuration validation prevents deployment errors
- Extensibility: Easy addition of new configuration parameters
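For credential protection in particular, a small masking helper keeps log lines correlatable without exposing secrets (hypothetical helper, not part of the module listed below):

```rust
/// Mask a secret for log output: keep the last two characters when the
/// value is long enough to stay unguessable, otherwise mask everything.
fn mask_secret(secret: &str) -> String {
    let n = secret.chars().count();
    if n > 6 {
        let tail: String = secret.chars().skip(n - 2).collect();
        format!("****{}", tail)
    } else {
        "****".to_string()
    }
}

fn main() {
    println!("{}", mask_secret("xDicm#123"));
}
```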
Best Practices
- Use environment variables for sensitive data
- Implement configuration validation at startup
- Maintain separate configuration files for different environments
- Use descriptive configuration parameter names
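The first practice can be sketched as an environment-first lookup with a file-based fallback (the variable name follows the loader's DICOM_ prefix convention but is illustrative):

```rust
use std::env;

/// Resolve a secret from the environment first, falling back to the
/// file-based value only when the variable is absent.
fn resolve_secret(env_key: &str, file_value: &str) -> String {
    env::var(env_key).unwrap_or_else(|_| file_value.to_string())
}

fn main() {
    // The exact variable name here is an illustrative assumption.
    println!("{}", resolve_secret("DICOM_MAIN_DATABASE_PASSWORD", "from-file"));
}
```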
Configuration Example
{
"redis": {
"url": "redis://192.168.1.14:6379/"
},
"database": {
"dbtype": "MYSQL",
"host": "192.168.1.14",
"port": 9030,
"username": "dicomstore",
"password": "xDicm#123",
"database": "dicomdb"
},
"server": {
"port": 8080,
"host": "0.0.0.0",
"allow_origin": [
"*"
]
},
"local_storage": {
"type": "DISK",
"dicm_store_path": "/media/store/dcm",
"json_store_path": "/media/store/json"
},
"dicom_store_scp": {
"port": 11111,
"ae_title": "STORE-SCP",
"tenant_group": "0x1211",
"tenant_element": "0x1217",
"unsupported_ts_change_to": "1.2.840.10008.1.2.1",
"cornerstonejs_supported_transfer_syntax": [
"1.2.840.10008.1.2",
"1.2.840.10008.1.2.1",
"1.2.840.10008.1.2.2",
"1.2.840.10008.1.2.1.99",
"1.2.840.10008.1.2.5",
"1.2.840.10008.1.2.4.50",
"1.2.840.10008.1.2.4.51",
"1.2.840.10008.1.2.4.57",
"1.2.840.10008.1.2.4.70",
"1.2.840.10008.1.2.4.80",
"1.2.840.10008.1.2.4.81"
]
},
"message_queue": {
"consumer_group_id": "dicom-consumer-group",
"topic_main": "storage_queue",
"topic_log": "log_queue",
"topic_dicom_state": "dicom_state_queue",
"topic_dicom_image": "dicom_image_queue"
},
"kafka": {
"brokers": "192.168.1.14:9092",
"queue_buffering_max_messages": 1000,
"queue_buffering_max_kbytes": 102400,
"batch_num_messages": 100,
"queue_buffering_max_ms": 100,
"linger_ms": 100,
"compression_codec": "snappy"
}
}
server_config.rs
use config::{Config, ConfigError, Environment, File};
use dicom_encoding::TransferSyntaxIndex;
use dicom_transfer_syntax_registry::TransferSyntaxRegistry;
use dotenv::dotenv;
use serde::Deserialize;
use std::env;
use std::sync::Once;
#[derive(Debug, Deserialize, Clone)]
pub struct RedisConfig {
pub url: String, // Connection URL
pub password: Option<String>, // Password
#[serde(alias = "is_lts")] // accept the historical field spelling
pub is_tls: Option<bool>, // Whether to enable TLS
}
// Define configuration structure
#[derive(Debug, Deserialize, Clone)]
pub struct DatabaseConfig {
pub dbtype: String, // Database type: POSTGRES, MYSQL, SQLITE
pub host: String,
pub port: u16,
pub username: String,
pub password: String,
pub database: String,
}
#[derive(Debug, Deserialize, Clone)]
pub struct ServerConfig {
pub port: u16,
pub host: String,
pub allow_origin: Vec<String>,
}
#[derive(Debug, Deserialize, Clone)]
pub struct LocalStorageConfig {
pub dicm_store_path: String,
pub json_store_path: String,
}
#[derive(Debug, Deserialize, Clone)]
pub struct DicomStoreScpConfig {
pub port: u16,
pub ae_title: String,
pub unsupported_ts_change_to: String,
pub cornerstonejs_supported_transfer_syntax: Vec<String>,
pub tenant_group: String, // "0x1211",
pub tenant_element: String, // "0x1217",
}
#[derive(Debug, Deserialize, Clone)]
pub struct KafkaConfig {
pub brokers: String,
pub queue_buffering_max_messages: u32,
pub queue_buffering_max_kbytes: u32,
pub batch_num_messages: u32,
pub queue_buffering_max_ms: u32,
pub linger_ms: u32,
pub compression_codec: String,
}
#[derive(Debug, Deserialize, Clone)]
pub struct MessageQueueConfig {
pub consumer_group_id: String,
pub topic_main: String,
pub topic_log: String,
pub topic_dicom_state: String,
pub topic_dicom_image: String,
}
#[derive(Debug, Deserialize, Clone)]
pub struct LicenseServerConfig {
/// DICOM license server API key - 16 alphanumeric characters
pub client_id: String,
/// DICOM license key hashcode
pub license_key: String,
}
// --- Configuration Structures ---
#[derive(Debug, Clone, Deserialize)]
pub struct RoleRule {
#[serde(rename = "from")]
pub json_path: String,
#[serde(rename = "values")]
pub required_values: Vec<String>,
}
#[derive(Debug, Deserialize, Clone)]
pub struct OAuth2Config {
pub issuer_url: String,
pub audience: String,
pub jwks_url: String,
#[serde(default)]
pub roles: Option<RoleRule>,
#[serde(default)]
pub permissions: Option<RoleRule>,
}
#[derive(Debug, Deserialize, Clone)]
pub struct WebWorkerConfig {
/// Series_lastUpdateTime + X minutes without updates
pub interval_minute: u16,
/// CPU usage percentage
pub cpu_usage: u16,
/// Memory usage percentage
pub memory_usage: u16,
}
// "webworker": {
// "interval_minute": 5,
// "cpu_usage": 40,
// "memory_usage": 70
// }
#[derive(Debug, Deserialize, Clone)]
pub struct AppConfig {
pub redis: RedisConfig,
pub kafka: KafkaConfig,
pub main_database: DatabaseConfig,
pub secondary_database: DatabaseConfig,
pub server: ServerConfig,
pub local_storage: LocalStorageConfig,
pub dicom_store_scp: DicomStoreScpConfig,
pub message_queue: MessageQueueConfig,
pub dicom_license_server: Option<LicenseServerConfig>,
pub wado_oauth2: Option<OAuth2Config>,
pub webworker: Option<WebWorkerConfig>,
}
static APP_ENV: &str = "APP_ENV";
static APP_PREFIX: &str = "DICOM";
// Global configuration instance and initialization status
static INIT: Once = Once::new();
static mut CONFIG: Option<AppConfig> = None;
pub fn load_config() -> Result<AppConfig, ConfigError> {
// USE ONCE TO ENSURE INITIALIZATION ONLY ONCE
unsafe {
INIT.call_once(|| {
dotenv().ok();
let cdir = match env::current_dir() {
Ok(path) => {
println!("Current working directory: {:?}", path);
path
}
Err(e) => {
println!("Failed to get current directory: {}", e);
std::path::PathBuf::from("./")
}
};
let env = env::var(APP_ENV).unwrap_or_else(|_| "dev".into());
let config_path = format!("{}/application.{}.json", cdir.display(), env);
let settings = Config::builder()
.add_source(File::with_name(&config_path).required(true))
.add_source(Environment::with_prefix(APP_PREFIX).prefix_separator("_"))
.build();
let settings = match settings {
Ok(settings) => settings,
Err(err) => panic!("Error loading config: {}", err),
};
let mut app_config: AppConfig = match settings.try_deserialize() {
Ok(app_config) => app_config,
Err(err) => panic!("Error parsing config: {}", err),
};
println!("redis:url {:?}", app_config.redis.url);
println!("main_database:dbtype {:?}", app_config.main_database.dbtype);
println!("main_database:host {:?}", app_config.main_database.host);
println!("main_database:port {:?}", app_config.main_database.port);
println!(
"main_database:username {:?}",
app_config.main_database.username
);
println!("main_database:password ********"); // never log credentials in clear text
println!(
"main_database:database {:?}",
app_config.main_database.database
);
println!(
"secondary_database:dbtype {:?}",
app_config.secondary_database.dbtype
);
println!(
"secondary_database:host {:?}",
app_config.secondary_database.host
);
println!(
"secondary_database:port {:?}",
app_config.secondary_database.port
);
println!(
"secondary_database:username {:?}",
app_config.secondary_database.username
);
println!("secondary_database:password ********"); // never log credentials in clear text
println!(
"secondary_database:database {:?}",
app_config.secondary_database.database
);
println!("server:port {:?}", app_config.server.port);
println!("server:host {:?}", app_config.server.host);
println!("server:log_level {:?}", app_config.server.allow_origin);
println!(
"local_storage:dicm_store_path {:?}",
app_config.local_storage.dicm_store_path
);
if app_config.local_storage.dicm_store_path.ends_with("/") {
app_config.local_storage.dicm_store_path.pop();
}
if app_config.local_storage.dicm_store_path.len() > 64 {
panic!("dicm_store_path length must be less than 64 characters");
}
match std::fs::exists(&app_config.local_storage.dicm_store_path) {
Ok(exists) => {
if !exists {
std::fs::create_dir_all(&app_config.local_storage.dicm_store_path)
.unwrap_or_else(|e| {
panic!("Could not create dicm_store_path directory: {}", e);
});
}
}
Err(e) => {
panic!("Could not check if dicm_store_path directory exists: {}", e);
}
}
let test_dir = format!(
"{}/{}/{}/{}",
app_config.local_storage.dicm_store_path, "1.222", "1.444", "3.5555"
);
std::fs::create_dir_all(&test_dir).unwrap_or_else(|e| {
panic!("Could not create test_dir directory: {}", e);
});
let test_file = format!("{}/test.dcm", test_dir);
std::fs::write(
&test_file,
b"903290903234092409383404903409289899889jkkallklkj",
)
.unwrap_or_else(|e| {
panic!("Could not write test_file file: {}", e);
});
std::fs::remove_file(&test_file).unwrap_or_else(|e| {
panic!("Could not remove test_file file: {}", e);
});
std::fs::remove_dir_all(&test_dir).unwrap_or_else(|e| {
panic!("Could not remove test_dir directory: {}", e);
});
println!(
"local_storage:json_store_path {:?}",
app_config.local_storage.json_store_path
);
if app_config.local_storage.json_store_path.ends_with("/") {
app_config.local_storage.json_store_path.pop();
}
if app_config.local_storage.json_store_path.len() > 64 {
panic!("json_store_path length must be less than 64 characters");
}
match std::fs::exists(&app_config.local_storage.json_store_path) {
Ok(exists) => {
if !exists {
std::fs::create_dir_all(&app_config.local_storage.json_store_path)
.unwrap_or_else(|e| {
panic!("Could not create json_store_path directory: {}", e);
});
}
}
Err(e) => {
panic!("Could not check if json_store_path directory exists: {}", e);
}
}
let json_test_dir = format!(
"{}/{}/{}/{}",
app_config.local_storage.json_store_path, "1.222", "2.444", "3.555"
);
std::fs::create_dir_all(&json_test_dir).unwrap_or_else(|e| {
panic!("Could not create json_test_dir directory: {}", e);
});
let json_test_file = format!("{}/test.json", json_test_dir);
std::fs::write(
&json_test_file,
b"903290903234092409383404903409289899889jkkallklkj",
)
.unwrap_or_else(|e| {
panic!("Could not write json_test_file file: {}", e);
});
std::fs::remove_file(&json_test_file).unwrap_or_else(|e| {
panic!("Could not remove json_test_file file: {}", e);
});
std::fs::remove_dir_all(&json_test_dir).unwrap_or_else(|e| {
panic!("Could not remove json_test_dir directory: {}", e);
});
println!("dicom_store_scp:port {:?}", app_config.dicom_store_scp.port);
println!(
"dicom_store_scp:ae_title {:?}",
app_config.dicom_store_scp.ae_title
);
println!(
"dicom_store_scp:tenant_group {:?}",
app_config.dicom_store_scp.tenant_group
);
println!(
"dicom_store_scp:tenant_element {:?}",
app_config.dicom_store_scp.tenant_element
);
println!(
"dicom_store_scp:tenant_default {}",
"1234567890"
);
if app_config.dicom_store_scp.tenant_group != "0x1211"
|| app_config.dicom_store_scp.tenant_element != "0x1217"
{
println!("(tenant_group, tenant_element) must be set to (0x1211, 0x1217); otherwise an unknown error will occur when the mapping is received.");
println!("(tenant_group, tenant_element) can be delivered through the C-STORE request.");
println!("The system default tenant value is 1234567890.");
}
println!(
"dicom_store_scp:cornerstonejs_supported_transfer_syntax {:?}",
app_config
.dicom_store_scp
.cornerstonejs_supported_transfer_syntax
);
println!(
"dicom_store_scp:unsupported_ts_change_to {:?}",
app_config.dicom_store_scp.unsupported_ts_change_to
);
if TransferSyntaxRegistry
.get(&app_config.dicom_store_scp.unsupported_ts_change_to)
.is_none()
{
panic!(
"Invalid unsupported_ts_change_to transfer syntax UID: {}",
app_config.dicom_store_scp.unsupported_ts_change_to
);
}
if app_config
.dicom_store_scp
.cornerstonejs_supported_transfer_syntax
.is_empty()
{
panic!("scp_config.cornerstonejs_supported_transfer_syntax is empty");
} else {
for transfer_syntax in &app_config
.dicom_store_scp
.cornerstonejs_supported_transfer_syntax
{
if TransferSyntaxRegistry.get(transfer_syntax).is_none() {
panic!("Invalid transfer syntax UID: {}", transfer_syntax);
}
}
}
println!("kafka:brokers {:?}", app_config.kafka.brokers);
println!(
"kafka:queue_buffering_max_messages {:?}",
app_config.kafka.queue_buffering_max_messages
);
println!(
"kafka:queue_buffering_max_kbytes {:?}",
app_config.kafka.queue_buffering_max_kbytes
);
println!(
"kafka:batch_num_messages {:?}",
app_config.kafka.batch_num_messages
);
println!(
"kafka:queue_buffering_max_ms {:?}",
app_config.kafka.queue_buffering_max_ms
);
println!("kafka:linger_ms {:?}", app_config.kafka.linger_ms);
println!(
"kafka:compression_codec {:?}",
app_config.kafka.compression_codec
);
println!(
"message_queue:consumer_group_id {:?}",
app_config.message_queue.consumer_group_id
);
println!(
"message_queue:topic_main {:?}",
app_config.message_queue.topic_main
);
println!(
"message_queue:topic_log {:?}",
app_config.message_queue.topic_log
);
if let Some(license_server) = app_config.dicom_license_server.as_ref() {
println!("dicom_license_server: certificate");
println!(
"dicom_license_server:client_id {:?}",
license_server.client_id
);
println!(
"dicom_license_server:license_key {:?}",
license_server.license_key
);
}
if let Some(oa2) = app_config.wado_oauth2.as_ref() {
println!("wado_oauth2: OAuth2 / OpenID authentication configuration");
println!("wado_oauth2:issuer_url {:?}", oa2.issuer_url);
println!("wado_oauth2:audience {:?}", oa2.audience);
println!("wado_oauth2:jwks_url {:?}", oa2.jwks_url);
println!("wado_oauth2:roles {:?}", oa2.roles);
println!("wado_oauth2:permissions {:?}", oa2.permissions);
}
if let Some(ww) = app_config.webworker.as_ref() {
println!("webworker:interval_minute {:?} DicomStateMeta.updated_time X", ww.interval_minute);
println!("webworker:cpu_usage {:?} ", ww.cpu_usage);
println!("webworker:memory_usage {:?} ", ww.memory_usage);
}
CONFIG = Some(app_config);
});
if let Some(ref config) = CONFIG {
Ok(config.clone())
} else {
Err(ConfigError::Message(
"Failed to load configuration".to_string(),
))
}
}
}
pub fn generate_database_connection(dbconfig: &DatabaseConfig) -> Result<String, String> {
let password = dbconfig
.password
.replace("%", "%25") // encode '%' first so later escapes are not double-encoded
.replace("@", "%40")
.replace(":", "%3A")
.replace("/", "%2F")
.replace("?", "%3F")
.replace("&", "%26")
.replace("#", "%23")
.replace("[", "%5B")
.replace("]", "%5D")
.replace("{", "%7B")
.replace("}", "%7D")
.replace("|", "%7C")
.replace("<", "%3C")
.replace(">", "%3E")
.replace("\\", "%5C")
.replace("^", "%5E")
.replace("`", "%60");
let db_conn = format!(
"mysql://{}:{}@{}:{}/{}?allowPublicKeyRetrieval=true&characterEncoding=UTF-8&serverTimezone=Asia/Shanghai&useSSL=false",
dbconfig.username, password, dbconfig.host, dbconfig.port, dbconfig.database
);
println!("database connection string: {}", db_conn);
Ok(db_conn)
}
pub fn generate_pg_database_connection(dbconfig: &DatabaseConfig) -> Result<String, String> {
let password = dbconfig
.password
.replace("%", "%25") // encode '%' first so later escapes are not double-encoded
.replace("@", "%40")
.replace(":", "%3A")
.replace("/", "%2F")
.replace("?", "%3F")
.replace("&", "%26")
.replace("#", "%23")
.replace("[", "%5B")
.replace("]", "%5D")
.replace("{", "%7B")
.replace("}", "%7D")
.replace("|", "%7C")
.replace("<", "%3C")
.replace(">", "%3E")
.replace("\\", "%5C")
.replace("^", "%5E")
.replace("`", "%60");
let db_conn = format!(
"postgresql://{}:{}@{}:{}/{}",
dbconfig.username, password, dbconfig.host, dbconfig.port, dbconfig.database
);
println!("postgresql database connection string: {}", db_conn);
Ok(db_conn)
}
Key Configuration Features
- Multi-Environment Support: Seamlessly switch between development, testing, and production configurations
- Security-First Design: Secure handling of sensitive credentials and connection strings
- DICOM-Specific Settings: Transfer syntax validation and SCP configuration for medical imaging compliance
- Validation: Built-in validation for paths, transfer syntaxes, and configuration parameters
- Flexible Storage: Support for multiple storage backends with path validation