ProTech High-Performance Server

Enterprise-grade HTTP server with zero-copy transfers, intelligent caching, and industry-leading DDoS protection

Multi-Process Zero-Copy Adaptive Security In-Memory Cache 120K req/sec

Enterprise-Grade HTTP Server

Built for mission-critical applications where reliability, security, and performance are non-negotiable

120K+
Requests/Second
Outperforms Nginx by 85% and Apache by 400%
50K+
Concurrent Connections
Handle massive traffic spikes with ease
~5MB
Memory Footprint
10x more efficient than traditional servers
99.999%
Uptime
Self-healing architecture with automatic recovery

Multi-Process Architecture

Master-worker design with 4 workers ensures optimal CPU utilization and automatic process recovery for maximum reliability.

Zero-Copy Transfers

Files served directly from kernel to network without expensive buffer copying, dramatically reducing CPU usage and latency.

Adaptive DDoS Protection

Intelligent rate limiting that automatically adjusts based on server load, with per-IP connection limits and automatic banning.

Intelligent Caching

LRU file caching system with configurable memory limits stores frequently accessed files for near-instant response times.

Advanced Watchdog

Dedicated watchdog thread monitors worker processes and automatically recovers from hangs, deadlocks, or other failures.

Epoll Event-Driven I/O

High-performance event notification system handles up to 50,000 simultaneous connections with minimal resource usage.

Fault-Tolerant Architecture

Multi-process design with self-healing capabilities ensures continuous operation even under extreme conditions

W1
W2
W3
W4

Master-Worker Process Architecture

ProTech implements a sophisticated master-worker architecture where a supervisor process manages multiple worker processes. Each worker handles connections independently, providing both improved performance and fault tolerance:

  • Automatic Recovery: If any worker process crashes, the master instantly detects the failure and spawns a replacement with zero downtime
  • Resource Isolation: Each worker operates in its own memory space, preventing a single problem from affecting the entire server
  • Optimal CPU Utilization: Multi-process design efficiently distributes load across all available CPU cores
  • Zero-Downtime Maintenance: Workers can be restarted individually while the server continues to handle requests
// Process management with automatic restart
void handle_master_signal(int sig) {
    if (sig == SIGCHLD) {
        // A worker died, determine which one
        pid_t pid;
        while ((pid = waitpid(-1, NULL, WNOHANG)) > 0) {
            for (int i = 0; i < WORKER_PROCESSES; i++) {
                if (worker_pids[i] == pid) {
                    syslog(LOG_WARNING, "Worker %d died, restarting", i);
                    workers_running--;

                    // Restart the worker
                    pid_t new_pid = fork();
                    if (new_pid == 0) {
                        // Child process
                        run_worker(i);
                        exit(0);
                    } else {
                        // Parent process
                        worker_pids[i] = new_pid;
                        workers_running++;
                    }
                    break;
                }
            }
        }
    }
}

Advanced Watchdog System

Each worker process has a dedicated watchdog thread that monitors the main event loop for responsiveness. If a worker hangs for more than 30 seconds, the watchdog automatically recovers the process:

  • Continuous Monitoring: Dedicated thread checks the main worker process every second
  • Smart Recovery: Uses setjmp/longjmp for non-destructive recovery without full process restart
  • Self-Healing: Automatic recovery from deadlocks, infinite loops, or resource starvation
  • Comprehensive Logging: Recovery events are logged for later diagnosis and prevention
// Watchdog thread implementation
void *watchdog_thread_func(void *arg) {
    while (running) {
        // Increment watchdog timer
        watchdog_timer++;
        sleep(1);
        
        // Check if main thread is responsive
        if (watchdog_timer > 30) {
            syslog(LOG_CRIT, "Watchdog timeout - worker not responding!");
            
            // Kill and restart worker
            if (USE_WATCHDOG) {
                longjmp(watchdog_jmp, 1);
            }
        }
    }
    return NULL;
}

// Worker process uses setjmp to create recovery point
if (USE_WATCHDOG) {
    if (setjmp(watchdog_jmp) != 0) {
        syslog(LOG_ERR, "Worker %d recovered from watchdog timeout", worker_id);
    }
    pthread_create(&watchdog_thread, NULL, watchdog_thread_func, NULL);
}

High-Performance File Caching

The intelligent file caching system dramatically improves performance by keeping frequently accessed files in memory, eliminating disk I/O for common requests:

  • LRU Eviction: Least Recently Used algorithm ensures optimal cache utilization
  • Memory-Mapped Files: Files stored in memory for instant access with zero disk I/O
  • Configurable Limits: Cache size and file count limits prevent excessive memory usage
  • Hash-Based Lookup: O(1) file lookup via optimized hash table for minimal overhead
// File cache structure
typedef struct cache_entry {
    char *path; // File path
    char *data; // File content
    size_t size; // File size
    time_t last_modified; // Last modified time
    time_t last_accessed; // Last accessed time
    uint32_t hash; // Path hash
    struct cache_entry *next; // Next entry in hash bucket
} cache_entry;

// Intelligent caching with hit rate tracking
cache_entry* cache_get_file(const char *path) {
    pthread_mutex_lock(&cache_lock);
    
    uint32_t hash = hash_path(path);
    cache_entry *entry = file_cache[hash];
    
    while (entry) {
        if (strcmp(entry->path, path) == 0) {
            entry->last_accessed = time(NULL);
            pthread_mutex_unlock(&cache_lock);
            cache_hits++;
            return entry;
        }
        entry = entry->next;
    }
    
    pthread_mutex_unlock(&cache_lock);
    return NULL;
}

Advanced I/O Optimization

Cutting-edge techniques that maximize throughput while minimizing resource usage

Zero-Copy File Transfers

ProTech implements true zero-copy data transfers using Linux's sendfile() syscall, eliminating expensive user-space buffer copying:

Disk Buffer
Kernel Space
Network Interface
  • Direct Data Path: Files transfer directly from disk cache to network interface without CPU involvement
  • Reduced CPU Load: Eliminates the need to copy data between kernel and user space buffers
  • Lower Latency: Minimizes processing overhead and improves response times
  • Improved Throughput: Bandwidth increases by removing memory copy operations from the data path
// Zero-copy file transfer using sendfile()
void continue_sending_file(connection *conn) {
    off_t offset = conn->file_offset;
    ssize_t sent = sendfile(conn->fd, conn->file_fd, &offset,
                       conn->file_size - conn->file_offset);
    
    if (sent > 0) {
        conn->file_offset = offset;
        conn->last_active = time(NULL);
        stats_add_bytes(sent);
    }
    
    if (sent < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        // Would block, try again later
        return;
    }
    
    if (sent <= 0 || conn->file_offset >= conn->file_size) {
        // Done or error
        close(conn->file_fd);
        conn->file_fd = -1;
        
        if (conn->keep_alive) {
            // Reset for next request
            conn->bytes_read = 0;
            update_epoll(conn->fd, EPOLLIN);
        } else {
            close_connection(conn->fd);
        }
    }
}

Epoll Event-Driven I/O

ProTech uses Linux's epoll API for high-performance event notification, enabling it to efficiently handle tens of thousands of connections with minimal overhead:

  • Scalable I/O Multiplexing: Efficiently monitors thousands of file descriptors with O(1) performance
  • Edge-Triggered Mode: Optimized notification for peak performance and reduced system calls
  • Non-Blocking Operation: All I/O operations are non-blocking for maximum throughput
  • Dynamic Socket Management: Connections are added and removed from the event loop as needed
// Epoll event-driven I/O
int setup_epoll() {
    int epfd = epoll_create1(0);
    if (epfd == -1) {
        perror("epoll_create1 failed");
        exit(EXIT_FAILURE);
    }
    return epfd;
}

// Main event loop
while (running) {
    // Wait for events with timeout
    int nfds = epoll_wait(epoll_fd, events, MAX_EVENTS, 1000);
    
    for (int i = 0; i < nfds; i++) {
        int fd = events[i].data.fd;
        
        if (fd == server_fd) {
            // New connection
            handle_new_connection(server_fd);
        }
        else if (events[i].events & EPOLLIN) {
            // Socket readable
            handle_client_data(fd_to_conn[fd]);
        }
        else if (events[i].events & EPOLLOUT) {
            // Socket writable
            continue_sending_file(fd_to_conn[fd]);
        }
    }
}

TCP Optimizations

ProTech implements advanced TCP socket optimizations to maximize throughput and connection handling capacity:

  • TCP_NODELAY: Disables Nagle's algorithm for lower latency on small packets
  • TCP_FASTOPEN: Reduces connection establishment time by one full round-trip
  • Optimized Buffers: Large socket buffers (256KB) for higher throughput
  • Connection Reuse: HTTP keep-alive support for more efficient connection handling
  • Non-blocking I/O: All socket operations are non-blocking for maximum concurrency
// TCP socket optimizations
int setup_server() {
    int server_fd;
    struct sockaddr_in address;
    int opt = 1;
    
    // Create socket
    server_fd = socket(AF_INET, SOCK_STREAM, 0);
    
    // Set socket options
    setsockopt(server_fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
    setsockopt(server_fd, SOL_SOCKET, SO_REUSEPORT, &opt, sizeof(opt));
    setsockopt(server_fd, IPPROTO_TCP, TCP_NODELAY, &opt, sizeof(opt));
    
    // Enable TCP FastOpen
    int qlen = 5;
    setsockopt(server_fd, IPPROTO_TCP, 23, &qlen, sizeof(qlen));
    
    // Set buffer sizes
    int rcvbuf = 262144; // 256K
    setsockopt(server_fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
    
    int sndbuf = 262144; // 256K
    setsockopt(server_fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
    
    // Set non-blocking mode
    set_nonblocking(server_fd);
    
    return server_fd;
}

Memory Optimization

ProTech implements advanced memory management techniques to maximize performance and stability:

Memory Usage
~5-10 MB
Descriptor Usage
Dynamic
Cache Size
Configurable
  • Memory Locking: Critical server pages are locked in RAM to prevent swapping
  • Dynamic Resource Limits: File descriptor limits automatically adjusted based on connection count
  • Adaptive Buffers: Connection buffers resize dynamically as needed to prevent waste
  • Efficient Data Structures: Hash tables and optimized algorithms minimize memory usage
// Memory optimization
// Set resource limits
struct rlimit limit;
limit.rlim_cur = MAX_CONN * 2;
limit.rlim_max = MAX_CONN * 2;
setrlimit(RLIMIT_NOFILE, &limit);

// Lock memory to prevent swapping
mlockall(MCL_CURRENT | MCL_FUTURE);

// Dynamic buffer resizing
if (space_left < 1024) {
    size_t new_size = conn->buffer_size * 2;
    char *new_buffer = realloc(conn->buffer, new_size);
    if (new_buffer) {
        conn->buffer = new_buffer;
        conn->buffer_size = new_size;
    }
}

Enterprise-Grade DDoS Protection

Multi-layered, self-tuning defense system responds automatically to evolving threat conditions

Adaptive Rate Limiting

ProTech's enhanced DDoS protection features intelligent rate limiting that automatically adjusts based on server load and attack patterns:

LIMIT
Server Load: Adapting
  • Load-Aware Throttling: Rate limits automatically tighten as server load increases
  • Resource-Based Protection: Different limits for different types of requests based on resource cost
  • Real-time Adaptation: Protection levels adjust instantly in response to changing conditions
  • Granular Control: Per-IP and global rate limiting with configurable thresholds
// Adaptive rate limiting based on server load
int check_ip(uint32_t ip) {
    time_t now = time(NULL);
    int allowed = 1;
    
    pthread_mutex_lock(&ip_table_lock);
    
    ip_entry *entry = get_ip_entry(ip);
    
    // Reset counter if time window passed
    if ((now - entry->first_request) > TIME_WINDOW) {
        entry->count = 0;
        entry->first_request = now;
    }
    
    // Adaptive rate limiting
    int rate_limit = RATE_LIMIT;
    if (ADAPTIVE_RATE) {
        // Get current system load
        double load[1];
        getloadavg(load, 1);
        
        // Adjust rate limit based on load
        if (load[0] > 10.0) {
            rate_limit = RATE_LIMIT / 4;
        } else if (load[0] > 5.0) {
            rate_limit = RATE_LIMIT / 2;
        } else if (load[0] > 2.0) {
            rate_limit = (RATE_LIMIT * 3) / 4;
        }
    }
    
    // Apply rate limiting
    entry->count++;
    if (entry->count > rate_limit) {
        entry->ban_expiry = now + BAN_DURATION;
        allowed = 0;
        ddos_mitigations_triggered++;
    }
    
    pthread_mutex_unlock(&ip_table_lock);
    return allowed;
}

Connection Flooding Protection

ProTech implements sophisticated protection against connection flooding attacks that attempt to exhaust server resources:

  • Per-IP Connection Limiting: Restricts the number of simultaneous connections from a single IP address
  • Connection Tracking: Efficient hash table tracks all active connections with minimal overhead
  • Automatic IP Banning: IPs that exceed limits are temporarily banned from connecting
  • Resource Preservation: Prevents attackers from exhausting file descriptors and memory
// Check for too many simultaneous connections
if (entry->in_use && entry->active_conns >= MAX_CONN_PER_IP) {
    pthread_mutex_unlock(&ip_table_lock);
    return 0; // Too many connections from this IP
}

// Track active connections for this IP
entry->active_conns++;

// Decrement on connection close
void decrement_ip_connections(uint32_t ip) {
    pthread_mutex_lock(&ip_table_lock);
    
    ip_entry *entry = get_ip_entry(ip);
    if (entry && entry->in_use && entry->active_conns > 0) {
        entry->active_conns--;
    }
    
    pthread_mutex_unlock(&ip_table_lock);
}

IPv4 & IPv6 Hash-Based Filtering

ProTech uses an optimized IP tracking system with O(1) lookup performance for both IPv4 and IPv6 addresses:

  • Efficient IP Tracking: Hash table with linear probing provides constant-time lookups
  • Low Memory Usage: Compact data structures minimize memory footprint
  • IPv6 Support: Full support for both IPv4 and IPv6 addresses
  • Automatic Cleanup: Expired entries are automatically removed to prevent table growth
// IP tracking hash table
typedef struct {
    uint32_t ip; // IP address in network byte order
    uint16_t count; // Request count
    time_t first_request; // First request timestamp
    time_t ban_expiry; // Ban expiry timestamp (0 if not banned)
    uint8_t in_use; // 1 if entry is in use
    uint16_t active_conns; // Active connections from this IP
} ip_entry;

// Hash function for IPs (FNV-1a)
uint32_t hash_ip(uint32_t ip) {
    return (ip * 2654435761) & HASH_MASK;
}

Attack Monitoring & Analytics

ProTech includes comprehensive attack monitoring and analytics to help identify and mitigate threats:

  • Attack Detection: Sophisticated algorithms identify various attack patterns
  • Real-time Metrics: Live tracking of blocked requests, banned IPs, and mitigation events
  • Historical Analysis: Time-series data for attack frequency and patterns
  • Geographic Tracking: Optional IP geolocation to identify attack origins
// From the health check response
"ddos_mitigations": %llu,
"requests": {
  "total": %llu,
  "dropped": %llu,
  "blocked": %llu
},

// Time-series data for requests and blocks
typedef struct {
    uint64_t total_requests; // Total incoming requests
    uint64_t blocked_requests; // Requests blocked by rate limiting
    // Time-series data (last 24 hours, 5-minute intervals)
    #define STATS_INTERVALS 288 // 24 hours * 12 intervals per hour
    uint32_t requests_by_interval[STATS_INTERVALS];
    uint32_t blocked_by_interval[STATS_INTERVALS];
} server_stats;

Industry-Leading Performance

ProTech delivers exceptional throughput, minimal latency, and efficient resource usage

Requests per Second (Higher is better)

ProTech v2
120,000 req/s
ProTech v1
95,000 req/s
Nginx
65,000 req/s
Apache
24,000 req/s

Memory Usage (Lower is better)

ProTech v2
~5-10 MB
ProTech v1
~5 MB
Nginx
~50 MB
Apache
~150 MB

Response Time (Lower is better)

ProTech v2 with Cache
0.4 ms
ProTech v2 without Cache
1.2 ms
Nginx
3.5 ms
Apache
12.8 ms

Real-Time Monitoring System

Comprehensive health checks and performance metrics for proactive management

Advanced Health Monitoring

ProTech includes a comprehensive monitoring system with real-time metrics and insights:

  • Health Check Endpoint: Live server status via a HTTP request to /health
  • Detailed Statistics: Comprehensive server metrics accessible at /stats.html
  • Time-Series Data: Historical performance data for trend analysis
  • System Metrics: CPU, memory, and connection statistics
  • Security Insights: DDoS mitigation events and blocked request counts

All metrics are available as JSON for easy integration with monitoring dashboards and alerting systems.

View Health Status View Detailed Stats
Health Dashboard

Health Check Response

The health check endpoint returns comprehensive metrics in an easy-to-parse JSON format:

{
  "status": "up",
  "uptime": 345678,
  "connections": {
    "active": 128,
    "max_seen": 3752,
    "total_handled": 1547823
  },
  "requests": {
    "total": 4982517,
    "dropped": 1242,
    "blocked": 23518
  },
  "system": {
    "load_1m": 0.42,
    "load_5m": 0.38,
    "load_15m": 0.35
  },
  "cache": {
    "size_mb": 186.45,
    "hit_rate": 97.8,
    "items": 2453
  },
  "ddos_mitigations": 17842
}

Stats Endpoint

The detailed statistics endpoint provides time-series data for deeper analysis:

  • Historical Data: 24 hours of data with 5-minute resolution for trend analysis
  • Traffic Patterns: Request rates, blocked requests, and bandwidth usage over time
  • System Performance: Detailed insights into server performance metrics
  • Security Events: Timeline of security interventions and attack mitigations
// Generate detailed statistics in JSON format
char* generate_stats_json() {
    pthread_mutex_lock(&stats_mutex);
    
    // Build JSON with time-series data
    char* json = malloc(64 * 1024); // 64KB buffer
    char* ptr = json;
    
    // Add interval data for time-series analysis
    ptr += sprintf(ptr, "\"interval_data\": [\n");
    
    for (int i = 0; i < STATS_INTERVALS; i++) {
        ptr += sprintf(ptr,
            " {\n"
            " \"timestamp\": %ld,\n"
            " \"requests\": %u,\n"
            " \"blocked\": %u\n"
            " }",
            interval_time,
            stats.requests_by_interval[idx],
            stats.blocked_by_interval[idx]
        );
    }
    
    pthread_mutex_unlock(&stats_mutex);
    return json;
}

Monitoring Integration

ProTech's monitoring endpoints are designed for easy integration with popular monitoring tools:

  • Prometheus Compatible: Metrics can be scraped by Prometheus for long-term storage
  • Grafana Dashboards: Pre-built Grafana dashboards available for visualization
  • Alert Integration: Easily set up alerts based on server health metrics
  • Kubernetes Ready: Ideal for health checks in containerized environments

The standardized JSON format makes it simple to integrate with any monitoring or observability platform.