
Fragscapy: Fuzzing protocols to evade firewalls and IDS

Fragscapy is a tool that aims at detecting flaws in firewalls and IDS by fuzzing the network messages sent through them. This open source project is available on Amossys' Github.

Hammering your network data

What is this so-called "Fragscapy"?

At Amossys, we frequently analyse firewalls and IDS, and testing their configurations can be time-consuming. Of course, some basic tools and scripts had been developed to automate these tests. Fragscapy aims at modernizing and improving them, replacing them with a Python 3 project that is automated, modular, extensible and high-level. Fragscapy, as its name suggests, relies on Scapy for the network packet mangling.

In short, Fragscapy is a tool that runs successive tests against a firewall's protections. The principle behind it is actually pretty simple:

Fragscapy general principle
Figure 1: General principle of Fragscapy

The process run by Fragscapy is:

  1. Start an application that will interact with the network (typically wget if testing HTTP-based rules or ping for testing lower layers).
  2. Intercept the outgoing packets.
  3. Apply a set of modifications to the packets. This is the most important step in the process: it is where the fuzzing is actually defined. The goal is to obtain a packet that is modified enough to bypass the firewall checks, yet still understandable by the service behind the firewall.
  4. Send the modified packets and wait for an answer.
  5. Restart from step 1 but with a different modification set.
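The five steps above can be sketched as a simple loop. This is a simplified stand-in, not Fragscapy's actual code: interception and sending are hidden behind a `run_test` callback, and the modifications are plain list-in/list-out functions:

```python
def apply_mods(mods, packets):
    """Chain the modifications: each one is a function
    from a list of packets to a list of packets (step 3)."""
    for mod in mods:
        packets = mod(packets)
    return packets

def run_campaign(mod_sets, run_test):
    """Steps 1-5: for each modification set, run one test
    (steps 1-4) and record the command's exit code (0 = pass)."""
    results = {}
    for i, mods in enumerate(mod_sets):
        results[i] = run_test(mods)
    return results

# Toy stand-ins: a 'test' passes (exit code 0) only if at least
# one packet survives the modification chain.
keep_all = lambda pkts: pkts          # identity modification
drop_all = lambda pkts: []            # drops every packet
fake_test = lambda mods: 0 if apply_mods(mods, ["pkt"]) else 1
print(run_campaign([[keep_all], [drop_all]], fake_test))  # {0: 0, 1: 1}
```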

What's a "modification"?

Ok then, the most interesting part of the tool happens in the modifications of step 3; the rest is simply about orchestrating them. So what exactly happens in those modifications? Well, the answer is: anything can be done...

Indeed, a modification is, technically speaking, a function that receives a list of packets and returns another list of packets. In between, anything can happen: the packets can be modified, delayed, dropped, replaced, sliced, regrouped... Some basic modifications are included with the tool to perform generic operations like:

  • dropping packets (either with a fixed probability for each or specific ones),
  • delaying packets,
  • filtering and reordering of the packets,
  • fragmenting IPv4 and IPv6 packets (with some exotic options),
  • segmenting TCP packets,
  • modifying any field of the packets.
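To make this concrete, here is what a few of these generic modifications boil down to, written as self-contained, list-in/list-out Python functions. These are illustrative sketches, not Fragscapy's actual code, and plain strings stand in for Scapy packets:

```python
import random

def drop_indices(packets, indices):
    """'Drop specific ones': remove the packets at the given positions."""
    bad = set(indices)
    return [p for i, p in enumerate(packets) if i not in bad]

def drop_proba(packets, proba, rng=random.random):
    """'Fixed probability': drop each packet independently with probability proba."""
    return [p for p in packets if rng() >= proba]

def reorder_reverse(packets):
    """'Reordering': send the packets in reverse order."""
    return list(reversed(packets))

print(drop_indices(["a", "b", "c", "d"], [1, 3]))  # ['a', 'c']
print(reorder_reverse(["a", "b", "c"]))            # ['c', 'b', 'a']
```

The fragmentation and field-editing modifications follow the same contract, but operate on actual Scapy layers.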

For more details on how these modifications are implemented, have a look at the Python code in fragscapy/modifications/.

How to use this awesome tool?

Example 1: How to discover undocumented behaviour in Linux

For this first usage example, let's take a simple case: we have the following setup, and we want to access a web page on ports 80 and 8080 of the server, but an iptables firewall blocks requests to port 8080.

Example 1 situation
Figure 2: Initial situation for the first example

Let's configure Fragscapy

We will now try to bypass the firewall by fuzzing around with IPv6 fragmentation. For that, we need to configure Fragscapy. Everything is configured through:

  • a JSON configuration file (for behaviour of the tests)
  • the command line options (for output, logs and aesthetics)

Thus, the following JSON file (ipv6_frag.json) defines 3 aspects of the tests:

  1. the command to run: a curl that fetches the web page on both ports and exits with a 0 status code only if the page on port 80 could be fetched and the one on port 8080 could not, i.e. the firewall did its job,
  2. the netfilter rules used to catch only the relevant packets: the tool cannot guess them, so by default everything is caught,
  3. the list of modifications to apply: parameters can be specified, and the modifications may differ between the INPUT and OUTPUT chains.
{
  "cmd": "/usr/bin/curl -6 -f -m 1 http://www.example.com:80 -o results/http_{i}_{j}.html; e1=$?; /usr/bin/curl -6 -f -m 1 http://www.example.com:8080 -o results/alt_{i}_{j}.html; e2=$?; if [ $e1 -eq 0 ] && [ $e2 -ne 0 ]; then exit 0; else exit 1; fi",

  "nfrules": [
    {"host": "www.example.com", "port": 80, "ipv4": false, "input_chain": false}
  ],

  "input": [
  ],

  "output": [
    {
      "mod_name": "ipv6_frag",
      "mod_opts": "range 10 3000 10"
    }
  ]
}

Here, we plan to fragment all the packets with a fragmentation size ranging from 10 to 3000 bytes, in steps of 10. All that is left is to start the tests with the following command:

fragscapy start ipv6_frag.json \
  -o run/std/stdout_{i}_{j}.txt \
  -e run/std/stderr_{i}_{j}.txt \
  -W run/pcap/local_{i}_{j}.pcap \
  -w run/pcap/remote_{i}_{j}.pcap

This command uses several options to specify where to write the logs of each test (standard output and error, packet captures), so that we can easily inspect in detail what happened whenever a test triggers an interesting behaviour. The {i} and {j} placeholders use the Python format syntax: they are replaced by the test number and the repetition number (when a test is non-deterministic, it can be repeated multiple times).
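The substitution is plain Python `str.format`; for instance, for test number 42, repetition 3:

```python
# The {i}/{j} placeholders in the output templates are expanded with str.format.
template = "run/std/stdout_{i}_{j}.txt"
print(template.format(i=42, j=3))  # run/std/stdout_42_3.txt
```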

For more about the available options, run the following commands to get the details of each possible parameter:

fragscapy --help
fragscapy start --help

The "surprising" results

There are multiple ways to check whether the tests succeeded. First, the output of the fragscapy start command shows a summary of the tests that were run, based on the exit code of the command (0 = pass, anything else = fail).

100%|██████████████████████████████████████████████████████████████████|300/300
Results (300 tests done over 300 scenarios)
==================
Pass : 174
    n°0_0, n°127_0, n°128_0, n°129_0, n°130_0, n°131_0, n°132_0, n°133_0, ...
Fail : 126
    n°1_0, n°2_0, n°3_0, n°4_0, n°5_0, n°6_0, n°7_0, n°8_0, n°9_0, n°10_0, ...
Not Done : 0

Ok, so what does all of this mean? First, there is the progress bar, which is used... well... to indicate progression during the tests. As we can see, there is a total of 300 tests to run with this configuration. The lines after it only appear once all the tests are finished. They contain 3 sections:

  • the number of tests that passed (here 174) followed by the first test numbers that passed
  • the number of tests that failed (here 126) followed by the first test numbers that failed
  • the number of tests that were not done (here 0), e.g. when the run is interrupted in the middle of the tests, followed by the first test numbers that were not done (if any)

So, as we can see in our little example, 174 tests passed, meaning the page on port 80 was fetched and the one on port 8080 was not. But, strangely, in the first 126 cases (fragmentation sizes between 10 and 1270 bytes), something went wrong. That's where the logs we specified on the command line come in handy.

By looking at the results/ output directory, we can see that the page on port 8080 was never retrieved (\o/ iptables did its job); however, the pages on port 80 are missing for tests #1 to #126. What happened? Let's continue the investigation.

In run/pcap/ we have the packet captures of what happened. After a short study, it appears that fragments under 1280 bytes are somehow rejected... which may seem strange. However, 1280 may ring a bell: it is the minimum MTU of IPv6, and that is probably not a coincidence.
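Since the scenarios simply walk the `range 10 3000 10` option, mapping a test number back to its fragmentation size is a one-liner (assuming, as the results suggest, that the sizes are enumerated in order):

```python
def frag_size(n, start=10, step=10):
    """Fragmentation size (in bytes) used by test number n for 'range 10 3000 10'."""
    return start + n * step

print(frag_size(126))  # 1270: the last failing size, just below 1280
print(frag_size(127))  # 1280: the first passing size, the IPv6 minimum MTU
```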

And indeed, after a bit of searching, it appears that rejecting fragments smaller than 1280 bytes was introduced in Linux without any documentation or specification, "to avoid pathological cases". So that's where everything came from: the Linux kernel was dropping these fragments, but no documentation mentioned it. This behaviour was, however, recently reverted in Linux 5.0 for compatibility reasons.

Well, that's how Fragscapy is used: configure, run the tests, analyze the logs and results. Of course, such a simple example did not reveal any flaw in iptables, but it did unveil a strange, undocumented behaviour of the Linux kernel.

Example 2: Asserting the correct handling of fragmentation

Now, let's go deeper and do a proper test with Fragscapy. This example comes from a real case, so the results are only reproducible on that specific equipment. The test situation is the following:

Example 2 situation
Figure 3: Virtual configuration of the real-case example

We are going to fuzz the web server in the DMZ from the "User" zone. The Web Application Firewall is configured so that we are not supposed to be able to send requests containing a given parameter. Let's run a lot of tests and see what happens:

fragment_ipv4.json

{
  "cmd": "/usr/bin/curl -f -m 1 http://www.example.com/index.html?azerty -o results/{i}_{j}.html",

  "nfrules": [
    {"host": "www.example.com", "port": 80, "ipv6": false, "input_chain": false}
  ],

  "input": [
  ],

  "output": [
    {
      "mod_name": "ipv4_frag",
      "mod_opts": "range 1 1000"
    },
    {
      "mod_name": "drop_proba",
      "mod_opts": "seq_float 0.1 0.2 0.3 0.4 0.5",
      "optional": true
    },
    {
      "mod_name": "duplicate",
      "mod_opts": "seq_str first last random",
      "optional": true
    },
    {
      "mod_name": "reorder",
      "mod_opts": "seq_str reverse random",
      "optional": true
    }
  ]
}

segment_tcp.json

{
  "cmd": "/usr/bin/curl -f -m 1 http://www.example.com/index.html?azerty -o results/{i}_{j}.html",

  "nfrules": [
    {"host": "www.example.com", "port": 80, "input_chain": false}
  ],

  "input": [
  ],

  "output": [
    {
      "mod_name": "tcp_segment",
      "mod_opts": "range 1 1000"
    },
    {
      "mod_name": "drop_proba",
      "mod_opts": "seq_float 0.1 0.2 0.3 0.4 0.5",
      "optional": true
    },
    {
      "mod_name": "duplicate",
      "mod_opts": "seq_str first last random",
      "optional": true
    },
    {
      "mod_name": "reorder",
      "mod_opts": "seq_str reverse random",
      "optional": true
    }
  ]
}
fragscapy start fragment_ipv4.json segment_tcp.json \
  -o run/std/stdout_{i}_{j}.txt \
  -e run/std/stderr_{i}_{j}.txt \
  -W run/pcap/local_{i}_{j}.pcap \
  -w run/pcap/remote_{i}_{j}.pcap \
  --append
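Before launching, it is worth checking the size of the campaign. The 72000 scenarios per configuration file are simply the cartesian product of the modification options, each optional modification contributing one extra "absent" case:

```python
# Number of values per modification in fragment_ipv4.json / segment_tcp.json.
sizes = 1000   # "range 1 1000": 1000 fragment/segment sizes
drop  = 5 + 1  # "seq_float 0.1 0.2 0.3 0.4 0.5", optional -> +1 absent case
dup   = 3 + 1  # "seq_str first last random", optional
order = 2 + 1  # "seq_str reverse random", optional

print(sizes * drop * dup * order)  # 72000 scenarios per file
```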

So basically, we are running Fragscapy to fetch index.html with the azerty parameter in the request (which should not be possible), while messing with IPv4 fragmentation and TCP segmentation in a lot of different configurations. And the results, after the many hours needed to run the 144,000 scenarios, were quite interesting:

100%|██████████████████████████████████████████████████████████████|72000/72000
Results (666000 tests done over 72000 scenarios)
==================
Pass : 0

Fail : 666000
    n°0_0, n°1_0, n°2_0, n°2_1, n°2_2, n°2_3, n°2_4, n°2_5, n°2_6, n°2_7, ...
Not Done : 0

100%|██████████████████████████████████████████████████████████████|72000/72000
Results (666000 tests done over 72000 scenarios)
==================
Pass : 98
    n°0_0, n°1_0, n°2_0, n°2_1, n°2_2, n°2_3, n°2_4, n°2_5, n°2_6, n°2_7, ...
Fail : 665987
    n°22_5, n°23_3, n°25_5, n°32_1, n°32_6, n°33_8, n°34_1, n°35_6, n°35_7, ...
Not Done : 0

Ok, lots of results here. The first half shows the results of the IPv4 fragmentation: nothing fancy, every test failed, meaning that either the firewall blocked the packets or the packets that got through were meaningless to the web server. The second half, however, deals with TCP segmentation and shows that some tests managed to retrieve the web page. Indeed, a look at the results/ directory shows that segment sizes ranging from 1 to 46 bytes were not handled well by the firewall: the blocking rule was not triggered, while the web server perfectly reconstructed the data and responded to the requests.

So yes, here are the results for this particular firewall: one can transmit valid data through it using small-sized TCP segments. And this was one of the findings of the product analysis: the firewall does not properly ensure the promised protections.

Side note: the case where the firewall lets some malformed packets through (which are then not understood by the web server) is plausible, but not really meant to be detected by Fragscapy. Although it is clearly a security issue that can easily be exploited, Fragscapy has a higher-level interpretation: did the traffic bypass the firewall while still being valid? One could, however, imagine a scenario with a client and a server on each side of the firewall exchanging malformed packets. The test command would then send data from the client side and check what was received and interpreted on the server side (just like an attacker exfiltrating data). Since Fragscapy does not interpret what the command means, this makes no difference to it: it is a command, no matter what it does.

Example 3: Hijacking Fragscapy for fun stuff

Let's have a last and fun example to demonstrate what else can be done with Fragscapy. We now know it can be used to run tests against a security product and assert its conformity, but the same mechanisms can also be leveraged for other purposes. We are going to configure Fragscapy to look like an HTTP proxy, to talk like an HTTP proxy, but not to be an HTTP proxy.

This is actually pretty simple: just start Fragscapy with the following configuration file (http_proxy.json):

{
  "cmd": "while true; do sleep 1; done",

  "nfrules": [
    {"port": 80, "output_chain": false},
    {"port": 8080, "input_chain": false}
  ],

  "input": [
    {
      "mod_name": "field",
      "mod_opts": ["TCP", "sport", 8080]
    },
    {
      "mod_name": "field",
      "mod_opts": ["TCP", "chksum", "none"]
    }
  ],

  "output": [
    {
      "mod_name": "field",
      "mod_opts": ["TCP", "dport", 80]
    },
    {
      "mod_name": "field",
      "mod_opts": ["TCP", "chksum", "none"]
    }
  ]
}

What does this configuration do? The command is simply an infinite loop, so no real test is run. The netfilter rules intercept all packets going out to port 8080 and all packets coming in from port 80. The modifications then rewrite the outgoing packets so that they go to port 80, and do the reverse with the incoming packets. To test it, start Fragscapy (no need to save outputs and packet captures here):

fragscapy start http_proxy.json

And that's all: all websites can now be accessed on port 8080 instead of 80, at least from the point of view of your browser and local HTTP tools. You can go to http://www.example.com:8080 and it works, because you are in reality sending packets to port 80.
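The two `field` modifications per chain can be mimicked without Scapy. In this sketch packets are plain dicts, and clearing `chksum` stands in for the "none" option, on the assumption that it forces the checksum to be recomputed after the port change:

```python
def set_field(packets, layer, field, value):
    """Minimal stand-in for the 'field' modification: set (or clear)
    one field on every packet. Packets are dicts of {layer: {field: value}}."""
    for pkt in packets:
        if value is None:                # "none": delete so it gets recomputed
            pkt[layer].pop(field, None)
        else:
            pkt[layer][field] = value
    return packets

# OUTPUT chain: the browser talks to port 8080, we rewrite to port 80.
out = [{"TCP": {"dport": 8080, "chksum": 0x1234}}]
out = set_field(out, "TCP", "dport", 80)
out = set_field(out, "TCP", "chksum", None)
print(out)  # [{'TCP': {'dport': 80}}]

# INPUT chain: answers come back from port 80, rewritten to look like 8080.
inc = [{"TCP": {"sport": 80, "chksum": 0x5678}}]
inc = set_field(inc, "TCP", "sport", 8080)
inc = set_field(inc, "TCP", "chksum", None)
print(inc)  # [{'TCP': {'sport': 8080}}]
```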

I want to contribute now, how can I improve Fragscapy?

Whoa, this tool is awesome, it can do so many amazing tricks!! Yes, but... it still needs to improve, and you can help on many aspects. Everything is available on Amossys's Github, so go check it out for an in-depth understanding and for improvements.

Adding modifications

The main contribution is certainly new modifications. Indeed, the tool was designed to be easily extensible through new modifications and the precise behaviours that come with them. To add a new modification that covers your needs, simply have a look at how the existing modifications are defined and follow the instructions in the documentation (about the fragscapy.modifications.Mod class).
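To give an idea of the shape such a contribution takes, here is a self-contained sketch. The real base class is fragscapy.modifications.Mod and its exact interface may differ, so the `Mod` stub and the `apply` method name below are illustrative assumptions, not the project's actual API:

```python
class Mod:
    """Illustrative stand-in for fragscapy.modifications.Mod (not the real API)."""
    name = "base"

    def apply(self, pkt_list):
        """Receive a list of packets, return the (modified) list."""
        raise NotImplementedError

class DuplicateLast(Mod):
    """Hypothetical new modification: send the last packet twice."""
    name = "duplicate_last"

    def apply(self, pkt_list):
        return pkt_list + pkt_list[-1:] if pkt_list else pkt_list

print(DuplicateLast().apply(["syn", "ack", "data"]))  # ['syn', 'ack', 'data', 'data']
```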

Improving the core engine

Fragscapy can be seen as a big orchestration tool around modifications. Thus, the second part of the tool that can be improved is its core: the code that sets everything up and runs all the tests successively. Let's be honest, this part is more difficult to grasp and less intuitive than adding a modification. But any contribution is of course welcome, so if you feel up to going through this code, feel free: it should at least be documented and commented well enough to be understandable by someone new to the project.