Denial-of-Service (DoS) Attacks: What They Are, How They Work, and How to Defend Your Site

If your website suddenly crawls to a halt, pages time out, or customers report they can’t log in, you might be staring down a Denial-of-Service (DoS) attack. These incidents don’t require exotic zero-days or deep levels of access. More often, they’re brutally simple: overwhelm the target with traffic or requests until legitimate users can’t get through. Whatever the technique, the end result for online businesses is the same: lost revenue, support tickets piling up, and shaken trust.

Below we’ll go over some DoS basics: what a DoS attack is, how it differs from distributed variants (DDoS), what happens under the hood, common techniques, the warning signs, and practical steps to reduce your risk and respond effectively.

What is a Denial-of-Service (DoS) attack?

A Denial-of-Service (DoS) attack is a cyberattack designed to make a system (website, API, or application) unavailable to its intended users. The attacker overwhelms the target with requests or otherwise disrupts normal operations so that legitimate traffic can’t be processed. The defining trait of a traditional DoS is its origin: it typically comes from a single source.

In practice, attackers aim to exhaust some bottleneck, whether it’s CPU, memory, disk I/O, network bandwidth, application thread pools, or database connections. Once that bottleneck is saturated, every legitimate request starts to feel sluggish or fails outright. Even short outages can translate into real losses for ecommerce, SaaS, or media sites.

DoS vs. DDoS (and why the distinction matters)

You’ll often hear “DoS” and “DDoS” used interchangeably, but they’re not the same:

  • DoS: Traffic originates from one host or a small number of hosts acting together.
  • DDoS (Distributed Denial-of-Service): Traffic originates simultaneously from multiple sources, typically a botnet of compromised devices. This distribution makes filtering far more challenging and amplifies the total volume.

From a defender’s point of view, the mechanics of denial (exhausting resources) are similar in both cases. The difference is scale and complexity. A single-source DoS may be blocked or rate-limited at the edge; a widespread DDoS typically requires layered mitigation and, often, outside help (e.g., a scrubbing provider). It’s also common for a smaller DoS to evolve into a DDoS once the attacker sees initial success.

What actually happens during a DoS attack?

The goal in a DoS attack is to push the target past normal operating capacity. Attackers will:

  1. Generate excessive requests or packets: Far more than the system is designed to handle.
  2. Exploit resource limits: For example, filling a memory buffer or tying up the CPU with expensive operations (think: costly database queries or heavy dynamic rendering).
  3. Trigger instability: Slowdowns, timeouts, crashes, or watchdog restarts. Even if your infrastructure auto-recovers, the repeated churn can keep you effectively offline.

Although the public often associates DoS with “big traffic numbers,” many attacks don’t need astronomical bandwidth. A precisely crafted request that forces the app to do a lot of work (server-side rendering, large search aggregation, or repeated cache misses) can deny service with surprisingly little traffic.
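
To make that asymmetry concrete, here’s a deliberately naive, hypothetical endpoint (the framework and names are ours, purely for illustration): a request of a few dozen bytes forces the server to build and serialize an arbitrarily large response.

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def search_products(query, limit):
        # Stand-in for an expensive database or index scan; cost grows with limit.
        return [{"id": i, "name": f"{query}-{i}"} for i in range(limit)]

    @app.route("/search")
    def search():
        # No upper bound on page_size: a ~60-byte request such as
        # /search?q=a&page_size=1000000 makes the server do enormous work.
        page_size = request.args.get("page_size", default=20, type=int)
        return jsonify(search_products(request.args.get("q", ""), page_size))

The fix isn’t more bandwidth; it’s a cap on page_size, exactly the kind of guardrail covered later in this post.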

The main categories of DoS attacks

While there are many ways to cause denial-of-service, most tactics fall into two broad categories.

1) Buffer overflow–style attacks

Buffer overflow attacks aim to overwhelm a memory buffer (or related resource) until the system misbehaves: slowed performance, service crashes, or outright kernel panics. Historically, buffer overflows were also a path to arbitrary code execution; in the DoS context, the attacker’s goal is instability, not necessarily control. Side effects often include spikes in CPU and memory usage, disks filled by log storms or runaway file growth, and persistent service restarts that keep the site unavailable.

2) Flood-style attacks

Flood attacks bombard a system with more packets or requests than it can handle. Success depends on the attacker’s ability to outpace the target’s capacity for accepting, processing, and responding. Floods can happen at multiple layers:

  • Network/transport layer: Massive packet floods that saturate bandwidth or exhaust connection tracking.
  • Application layer: HTTP floods that look superficially legitimate but force expensive work (e.g., search endpoints, login flows, or dynamic pages).

Even a single host with more outbound bandwidth than your inbound capacity can deny service on its own if your edge isn’t prepared.

Common forms of DoS attacks you’ll hear about

Below are classic examples you may encounter in the wild or in incident retrospectives. Some are “oldies,” but they’re still worth knowing for context or for spotting echoes in modern variants.

  • Ping Flood: A simple flood of ICMP Echo Request (“ping”) packets that overwhelms the target’s ability to respond. If the target or upstream network gear can’t rate-limit effectively, packet processing becomes the bottleneck and service degrades.
  • Ping of Death: An attack that sends malformed or oversized packets to trigger crashes on systems with older or unpatched network stacks. Modern systems are generally resilient here, but poorly maintained or legacy devices may still be vulnerable.
  • Smurf Attack: The attacker spoofs the victim’s IP and sends ICMP requests to a broadcast address. Every host on that network replies to the spoofed source, amplifying traffic back at the victim. Properly configured routers and hosts generally mitigate Smurfing today, but misconfigurations still exist.

Note: In today’s web-app landscape, you’ll also see HTTP floods (e.g., expensive page routes or API endpoints), SYN floods that exhaust connection tables, and slow-request tactics that keep server connections open just long enough to starve workers. The core idea is the same: exhaust a critical resource to deny service.

How to recognize you’re under a DoS attack

Distinguishing a true attack from a traffic spike (say, a flash sale or viral post) can be tricky. That said, a handful of patterns are strong indicators:

  • Sudden, widespread connectivity loss across multiple systems or services.
  • Unusually slow performance: Pages or APIs that normally respond in milliseconds now take seconds or time out.
  • Unexpected inability to access specific sites or routes while others remain fine (e.g., dynamic pages fail but static assets still load).
  • Sharp, sustained increase in specific metrics:
    • Edge: Connection attempts, SYN backlog, ICMP or UDP spikes.
    • App: Requests per second (RPS) to a single endpoint, surge in 5xx/429 responses, increased queue depth, worker saturation, cache miss storms.
    • Infra: CPU pegged, memory pressure, noisy log bursts, or disk filling abnormally fast.

If your telemetry shows a highly concentrated request pattern (same path, same query shape, same user-agent with minor variations) or an implausible distribution of source IPs, you’re likely under attack.
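
As a rough illustration of that concentration check (the common/combined access-log format is assumed, and the output is only a starting point, not a verdict), a few lines of scripting can surface it quickly:

    # Rough sketch: how concentrated is traffic by client IP and by path?
    # Assumes a common/combined-format access log; run as:
    #   python concentration.py access.log
    import re
    import sys
    from collections import Counter

    LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:\S+) (\S+)')

    ips, paths, total = Counter(), Counter(), 0
    with open(sys.argv[1]) as log:
        for raw in log:
            match = LINE.match(raw)
            if not match:
                continue
            total += 1
            ips[match.group(1)] += 1
            paths[match.group(2).split("?")[0]] += 1

    if total:
        top_ip, ip_hits = ips.most_common(1)[0]
        top_path, path_hits = paths.most_common(1)[0]
        print(f"top client {top_ip}: {ip_hits / total:.0%} of {total} requests")
        print(f"top path   {top_path}: {path_hits / total:.0%} of {total} requests")
        # One path or a narrow set of IPs dominating traffic is a strong signal.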

Business impact (and why minimal downtime isn’t good enough)

Even a “small” DoS that lasts 10–20 minutes can translate to missed conversions, failed checkouts, and increased churn. On the ops side, firefighting consumes person-hours and interrupts roadmaps. Repeated incidents also make performance tuning and capacity planning harder, as organic peaks get confounded with hostile traffic. Beyond the immediate impact, remember the soft costs: support burden, brand damage, and lower search-engine confidence if uptime suffers.

Defensive strategies: Layered, practical, and realistic

You can’t prevent every hostile packet from reaching your perimeter, but you can make denial much harder and recovery much faster. Think in layers.

1) Reduce attack surface and collapse points

  • Decouple critical services so one noisy tier doesn’t take down everything else.
  • Use caching and CDNs aggressively for static and semi-static content; this offloads the origin and narrows the dynamic “blast radius” (see the cache-header sketch after this list).
  • Harden defaults: Disable unnecessary ICMP responses on public interfaces, avoid exposing your origin IP directly (force traffic through controlled edge components), and ensure broadcast traffic is blocked where appropriate.
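
Here is that cache-header sketch: marking semi-static responses as publicly cacheable lets a CDN or reverse proxy absorb repeat traffic instead of your origin (the route, TTL, and framework are illustrative choices, not a recommendation for any particular stack).

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/pricing")
    def pricing():
        resp = make_response("<h1>Pricing</h1>")  # stand-in for a rendered page
        # Shared caches (CDN, proxy) may serve this for 5 minutes; even a short
        # TTL collapses thousands of identical requests into a single origin hit.
        resp.headers["Cache-Control"] = "public, max-age=300"
        return resp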

2) Rate-limit and shape traffic at the edge

  • Network-level controls: ACLs, ingress filtering, and if available, SYN cookies to protect connection tables.
  • Application-aware rate limits: Per-IP, per-token, and per-endpoint quotas; a minimal sketch follows this list. Focus on routes with expensive back-end work (login, search, report generation, etc.).
  • Backoff and circuit breakers: Automatic 429/503 responses under pressure to protect core dependencies.
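
Here is that minimal sketch: a per-client token bucket in plain Python. The rate, burst, and client key are illustrative; production setups usually enforce this at the CDN/WAF or in a shared store such as Redis so limits hold across servers.

    import time
    from collections import defaultdict

    class TokenBucket:
        """Per-client token bucket: allow short bursts, cap the sustained rate."""

        def __init__(self, rate_per_sec=5.0, burst=10.0):
            self.rate, self.burst = rate_per_sec, burst
            self.tokens = defaultdict(lambda: burst)   # tokens left per client
            self.stamp = defaultdict(time.monotonic)   # last refill per client

        def allow(self, client_id):
            now = time.monotonic()
            elapsed = now - self.stamp[client_id]
            self.stamp[client_id] = now
            # Refill based on elapsed time, never exceeding the burst ceiling.
            self.tokens[client_id] = min(self.burst,
                                         self.tokens[client_id] + elapsed * self.rate)
            if self.tokens[client_id] >= 1:
                self.tokens[client_id] -= 1
                return True
            return False  # caller should answer with HTTP 429

    limiter = TokenBucket(rate_per_sec=5, burst=10)
    print(limiter.allow("203.0.113.7"))  # True until that client's bucket drains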

3) Make the application cheaper to serve

  • Cache the right things: HTML fragments, API responses with safe TTLs, feature-flag decisions. Anything that avoids repeated heavy work.
  • Precompute expensive views: Report snapshots, recommendations, or aggregation layers updated on a schedule rather than on every request.
  • Guardrails at the code level: Input validation that rejects pathological parameters early (e.g., page size caps, search depth limits).
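
As a sketch of that last point (the limits and parameter names are invented for illustration), reject pathological inputs before any expensive code runs:

    MAX_PAGE_SIZE = 100
    MAX_QUERY_LEN = 256

    def validate_search_params(query: str, page_size: int) -> tuple[str, int]:
        """Cheap checks first, so a hostile request can't buy expensive work."""
        if len(query) > MAX_QUERY_LEN:
            raise ValueError("query too long")          # map to HTTP 400 upstream
        if not 1 <= page_size <= MAX_PAGE_SIZE:
            raise ValueError("page_size out of range")  # map to HTTP 400 upstream
        return query.strip(), page_size

    # Usage: validate before touching the database, cache, or renderer.
    q, size = validate_search_params("running shoes", 50)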

4) Observe, detect, and alert fast

  • Baseline performance so you can tell “viral traffic” from “hostile traffic.”
  • Dashboards keyed to denial indicators: RPS by route, error rates by class (4xx vs 5xx), queue depth, worker utilization, cache hit ratio, and network counters at the edge.
  • Automatic anomaly detection that alerts on sudden deviations with sensible cooldowns to avoid pager fatigue.
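
A toy version of that baseline-plus-deviation idea, with an invented window, threshold, and sample data; a real deployment would live in your metrics and alerting stack rather than application code:

    from collections import deque
    from statistics import mean, stdev

    class RpsAnomalyDetector:
        def __init__(self, window=60, sigma=4.0):
            self.history = deque(maxlen=window)  # recent per-interval RPS samples
            self.sigma = sigma

        def observe(self, rps: float) -> bool:
            anomalous = False
            if len(self.history) >= 10:  # wait for a minimal baseline
                baseline, spread = mean(self.history), stdev(self.history)
                anomalous = rps > baseline + self.sigma * max(spread, 1.0)
            self.history.append(rps)
            return anomalous  # True -> alert, tighten limits, or both

    detector = RpsAnomalyDetector()
    for sample in [120, 115, 130, 118, 125, 122, 119, 121, 124, 117, 2400]:
        if detector.observe(sample):
            print(f"RPS anomaly: {sample}")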

5) Prepare an incident runbook

  • Containment steps: Flip WAF rulesets, tighten rate limits, route dynamic pages through a lightweight “safe mode” (see the sketch after this list), or temporarily disable non-critical features that create heavy load.
  • Communication templates: Internal updates for stakeholders and a brief, factual status page note for customers. Clarity reduces ticket volume.
  • Escalation paths: Who can change network policies, who can adjust autoscaling, who owns the CDN/WAF knobs, and how to contact your upstream provider if you need temporary filtering.
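
Here is the safe-mode sketch referenced above (the flag and feature names are hypothetical): one switch sheds optional, expensive work while critical paths keep serving.

    SAFE_MODE = False  # flipped by an operator or an automated load trigger

    EXPENSIVE_FEATURES = {"recommendations", "live_search_suggestions", "export_reports"}

    def feature_enabled(name: str) -> bool:
        # Under safe mode, optional but costly features are shed; checkout,
        # login, and other critical paths are untouched.
        if SAFE_MODE and name in EXPENSIVE_FEATURES:
            return False
        return True

    def render_homepage() -> str:
        blocks = ["<header>Store</header>"]
        if feature_enabled("recommendations"):
            blocks.append("<section>personalized picks</section>")    # expensive path
        else:
            blocks.append("<section>top sellers (static)</section>")  # cheap fallback
        return "\n".join(blocks)

    print(render_homepage())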

6) After the storm: learn and harden

  • Post-incident review: What was saturated first? Which metric would have given 5–10 minutes’ earlier warning? What quick tuning (cache, limits, indexes) would have blunted the impact?
  • Permanent controls: Convert ad-hoc mitigations into codified rules, add synthetic probes to catch regressions, and expand capacity where justified.

Quick FAQ for teams and stakeholders

Isn’t DoS a solved problem?

Not exactly. Many legacy vectors (like classic Ping of Death) are mitigated on modern stacks, but attackers adapt. Application-layer DoS remains effective because it piggybacks on legitimate protocols (HTTP/HTTPS) and targets expensive work that’s hard to distinguish from normal behavior without context.

We’re small, so are we really a target?

Yes. Motives range from extortion and competitive sabotage to simple mischief. Automated tooling makes it easy to attack sites of all sizes, and smaller teams often have less capacity and fewer safeguards.

How do we tell DoS from a marketing spike?

Look for consistency and concentration in the traffic patterns (same user-agent families, identical request shapes, non-human behavior like hitting only one expensive endpoint). Pair traffic analysis with business telemetry: if sales/referrals aren’t up, but RPS is, be suspicious.

Will a CDN or WAF alone stop DoS?

They help a lot, especially for static assets and obvious abuse patterns, but they’re not silver bullets. You still need sane application limits, good caching, and observability. Defense in depth wins.

Key takeaways and next steps

  • A DoS attack makes your site or service unavailable by overwhelming some resource; it typically originates from a single source, unlike DDoS, which uses many.
  • The two broad modes are buffer overflow–style (crash/instability via resource exhaustion) and flood-style (overwhelming packets/requests).
  • Classic forms you’ll encounter include Ping Flood, Ping of Death, and Smurf, alongside modern application-layer floods.
  • Warning signs include sudden connectivity loss, unusual slowness, route-specific failures, and sharp metric anomalies.
  • Defense is layered: Shrink the attack surface, rate-limit aggressively, make expensive work cheaper, monitor what matters, and practice your response.

Where Sucuri fits into your layered defense

A DoS storm doesn’t have to become a revenue-draining outage. Pair sound architecture with the Sucuri Website Firewall and monitoring platform, and hostile traffic becomes just another metric on the dashboard: visible, contained, and handled.
