2.04b: See changelog or extended commit message

- Option -V eliminated in favor of -W / -S.
- Option -l added to limit the maximum requests per second
  (contributed by Sebastian Roschke)
- Option -k added to limit the maximum duration of a scan (contributed
  by Sebastian Roschke)
- Support for #ro, -W-; related documentation changes.
- HTTPS -> HTTP form detection.
- Added more diverse traversal and file disclosure tests (including
  file:// scheme tests)
- Improved injection detection in <script> sections, where a ' or "
  is all we need to inject JS code.
- Added check to see if our injection strings end up in the server's
  Set-Cookie, Set-Cookie2, and Content-Type response headers.
- URLs that give us a JavaScript response are now tested with a
  "callback=" parameter to find JSONP issues (illustrated below).
- Fixed "response varies" bug in 404 detection where a stable page
  would be marked unstable.
- Bugfix to es / eg handling in dictionaries.
- Added the "complete-fast.wl" wordlist which is an es / eg optimized
  version of "complete.wl" (resulting in 20-30% fewer requests).
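An illustration of the JSONP item above, with a made-up host and parameter
value: a JSONP-style endpoint wraps its JSON payload in an attacker-chosen
callback name, e.g.

  $ curl 'http://www.example.com/api/user?callback=hello'
  hello({"name": "admin", "mail": "admin@example.com"})

Since any third-party page can load such a URL in a <script> tag and read
the data through its own callback function, skipfish now re-requests
JavaScript-looking responses with a "callback=" parameter attached, so that
the usual XSSI checks also run against the JSONP variant of the URL.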
This commit is contained in:
Steve Pinkham 2012-03-17 09:59:08 -04:00
parent 987151620c
commit a46315b1ec
23 changed files with 2744 additions and 219 deletions

ChangeLog

@@ -1,3 +1,35 @@
+Version 2.04b:
+--------------
+- Option -V eliminated in favor of -W / -S.
+- Option -l added to limit the maximum requests per second (contributed by Sebastian Roschke)
+- Option -k added to limit the maximum duration of a scan (contributed by Sebastian Roschke)
+- Support for #ro, -W-; related documentation changes.
+- HTTPS -> HTTP form detection.
+- Added more diverse traversal and file disclosure tests (including file:// scheme tests)
+- Improved injection detection in <script> sections, where a ' or " is all
+  we need to inject JS code.
+- Added check to see if our injection strings end up in the server's
+  Set-Cookie, Set-Cookie2, and Content-Type response headers
+- URLs that give us a JavaScript response are now tested with a "callback="
+  parameter to find JSONP issues.
+- Fixed "response varies" bug in 404 detection where a stable page would be marked unstable.
+- Bugfix to es / eg handling in dictionaries.
+- Added the "complete-fast.wl" wordlist which is an es / eg optimized
+  version of "complete.wl" (resulting in 20-30% fewer requests).
 Version 2.03b:
 --------------

Makefile

@@ -20,7 +20,7 @@
 #
 PROGNAME = skipfish
-VERSION = 2.03b
+VERSION = 2.04b
 OBJFILES = http_client.c database.c crawler.c analysis.c report.c
 INCFILES = alloc-inl.h string-inl.h debug.h types.h http_client.h \

README

@@ -4,8 +4,14 @@ skipfish - web application security scanner
 http://code.google.com/p/skipfish/
-* Written and maintained by Michal Zalewski <lcamtuf@google.com>.
-* Copyright 2009, 2010, 2011 Google Inc, rights reserved.
+* Written and maintained by:
+    Michal Zalewski <lcamtuf@google.com>
+    Niels Heinen <heinenn@google.com>
+    Sebastian Roschke <s.roschke@googlemail.com>
+* Copyright 2009 - 2012 Google Inc, rights reserved.
 * Released under terms and conditions of the Apache License, version 2.0.
 --------------------
@@ -111,7 +117,7 @@ A rough list of the security checks offered by the tool is outlined below.
 * Stored and reflected XSS vectors in document body (minimal JS XSS support).
 * Stored and reflected XSS vectors via HTTP redirects.
 * Stored and reflected XSS vectors via HTTP header splitting.
-* Directory traversal / RFI (including constrained vectors).
+* Directory traversal / LFI / RFI (including constrained vectors).
 * Assorted file POIs (server-side sources, configs, etc).
 * Attacker-supplied script and CSS inclusion vectors (stored and reflected).
 * External untrusted script and CSS inclusion vectors.
@@ -130,6 +136,7 @@ A rough list of the security checks offered by the tool is outlined below.
 * Attacker-supplied embedded content (stored and reflected).
 * External untrusted embedded content.
 * Mixed content on non-scriptable subresources (optional).
+* HTTPS -> HTTP submission of HTML forms (optional).
 * HTTP credentials in URLs.
 * Expired or not-yet-valid SSL certificates.
 * HTML forms with no XSRF protection.
@@ -228,20 +235,25 @@ behavior there.
 To compile it, simply unpack the archive and try make. Chances are, you will
 need to install libidn first.
-Next, you need to copy the desired dictionary file from dictionaries/ to
-skipfish.wl. Please read dictionaries/README-FIRST carefully to make the
-right choice. This step has a profound impact on the quality of scan results
-later on.
+Next, you need to read the instructions provided in dictionaries/README-FIRST
+to select the right dictionary file and configure it correctly. This step has a
+profound impact on the quality of scan results later on, so don't skip it.
-Once you have the dictionary selected, you can try:
+Once you have the dictionary selected, you can use -S to load that dictionary,
+and -W to specify an initially empty file for any newly learned site-specific
+keywords (which will come in handy in future assessments):
-$ ./skipfish -o output_dir http://www.example.com/some/starting/path.txt
+$ touch new_dict.wl
+$ ./skipfish -o output_dir -S existing_dictionary.wl -W new_dict.wl \
+http://www.example.com/some/starting/path.txt
+You can use -W- if you don't want to store auto-learned keywords anywhere.
 Note that you can provide more than one starting URL if so desired; all of
 them will be crawled. It is also possible to read URLs from file, using
 the following syntax:
-$ ./skipfish -o output_dir @../path/to/url_list.txt
+$ ./skipfish [...other options...] @../path/to/url_list.txt
 The tool will display some helpful stats while the scan is in progress. You
 can also switch to a list of in-flight HTTP requests by pressing return.
@@ -373,6 +385,18 @@ restricted on HTTP/1.1 level, but no explicit HTTP/1.0 caching directive is
 given on specifying -E in the command-line causes skipfish to log all such
 cases carefully.
+On some occasions, you may want to limit the requests per second to reduce
+the load on the target server (or possibly to bypass DoS protection). The
+-l flag can be used to set this limit; the value given is the maximum
+number of requests per second you want skipfish to perform.
+Scans typically should not take weeks. In many cases, you probably want to
+limit the scan duration so that it fits within a certain time window. This
+can be done with the -k flag, which accepts the number of hours, minutes,
+and seconds in H:M:S format. Use of this flag can reduce the scan coverage
+if the timeout occurs before all pages have been tested.
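As a concrete illustration of the two new flags (the limit values below are
made up, not defaults): a scan capped at 50 requests per second and forced
to wrap up within 12 hours could be started as

$ ./skipfish -o output_dir -S dictionaries/minimal.wl -W new_dict.wl \
    -l 50 -k 12:00:00 http://www.example.com/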
 Lastly, in some assessments that involve self-contained sites without
 extensive user content, the auditor may care about any external e-mails or
 HTTP links seen, even if they have no immediate security impact. Use the -U
@@ -380,10 +404,10 @@ option to have these logged.
 Dictionary management is a special topic, and - as mentioned - is covered in
 more detail in dictionaries/README-FIRST. Please read that file before
-proceeding. Some of the relevant options include -W to specify a custom
-wordlist, -L to suppress auto-learning, -V to suppress dictionary updates, -G
-to limit the keyword guess jar size, -R to drop old dictionary entries, and
--Y to inhibit expensive $keyword.$extension fuzzing.
+proceeding. Some of the relevant options include -S and -W (covered earlier),
+-L to suppress auto-learning, -G to limit the keyword guess jar size, -R to
+drop old dictionary entries, and -Y to inhibit expensive $keyword.$extension
+fuzzing.
 Skipfish also features a form auto-completion mechanism in order to maximize
 scan coverage. The values should be non-malicious, as they are not meant to
@@ -423,21 +447,25 @@ Oh, and real-time scan statistics can be suppressed with -u.
 A standard, authenticated scan of a well-designed and self-contained site
 (warns about all external links, e-mails, mixed content, and caching header
-issues):
+issues), including gentle brute-force:
-$ ./skipfish -MEU -C "AuthCookie=value" -X /logout.aspx -o output_dir \
+$ touch new_dict.wl
+$ ./skipfish -MEU -S dictionaries/minimal.wl -W new_dict.wl \
+-C "AuthCookie=value" -X /logout.aspx -o output_dir \
 http://www.example.com/
 Five-connection crawl, but no brute-force; pretending to be MSIE and
 trusting example.com content:
-$ ./skipfish -m 5 -LV -W /dev/null -o output_dir -b ie -B example.com \
+$ ./skipfish -m 5 -L -W- -o output_dir -b ie -B example.com \
 http://www.example.com/
-Brute force only (no HTML link extraction), limited to a single directory and
-timing out after 5 seconds:
+Heavy brute force only (no HTML link extraction), limited to a single
+directory and timing out after 5 seconds:
-$ ./skipfish -P -I http://www.example.com/dir1/ -o output_dir -t 5 -I \
+$ touch new_dict.wl
+$ ./skipfish -S dictionaries/complete.wl -W new_dict.wl \
+-P -I http://www.example.com/dir1/ -o output_dir -t 5 -I \
 http://www.example.com/dir1/
 For a short list of all command-line options, try ./skipfish -h.
@@ -516,8 +544,6 @@ know:
 * Scheduling and management web UI.
-* QPS throttling and maximum scan time limit.
 * A database for banner / version checks or other configurable rules?
 -------------------------------------
@@ -560,4 +586,4 @@ Skipfish is made possible thanks to the contributions of, and valuable
 feedback from, Google's information security engineering team.
 If you have any bug reports, questions, suggestions, or concerns regarding
-the application, the author can be reached at lcamtuf@google.com.
+the application, the primary author can be reached at lcamtuf@google.com.

analysis.c

@@ -111,7 +111,7 @@ void pivot_header_checks(struct http_request* req,
 /* Helper for scrape_response(). Tries to add a previously extracted link,
 also checks for cross-site and mixed content issues and similar woes.
 Subres is: 1 - redirect; 2 - IMG; 3 - IFRAME, EMBED, OBJECT, APPLET;
-4 - SCRIPT, LINK REL=STYLESHEET; 0 - everything else. */
+4 - SCRIPT, LINK REL=STYLESHEET; 5 - form; 0 - everything else. */
 static void test_add_link(u8* str, struct http_request* ref,
 struct http_response* res, u8 subres, u8 sure) {
@@ -187,8 +187,11 @@ static void test_add_link(u8* str, struct http_request* ref,
 if (ref->proto == PROTO_HTTPS && n->proto == PROTO_HTTP &&
 subres > 2 && warn_mixed)
-problem((subres == 4) ? PROB_MIXED_SUB : PROB_MIXED_OBJ,
-ref, res, str, ref->pivot, 0);
+switch (subres) {
+case 4: problem(PROB_MIXED_SUB, ref, res, str, ref->pivot, 0); break;
+case 5: problem(PROB_MIXED_FORM, ref, res, str, ref->pivot, 0); break;
+default: problem(PROB_MIXED_OBJ, ref, res, str, ref->pivot, 0);
+}
 } else if (!ref->proto) {
@@ -829,7 +832,7 @@ void scrape_response(struct http_request* req, struct http_response* res) {
 if (!parse_url(clean_url, n, base ? base : req) && url_allowed(n) &&
 R(100) < crawl_prob && !no_forms) {
 collect_form_data(n, req, res, tag_end + 1, (parse_form == 2));
-maybe_add_pivot(n, NULL, 2);
+maybe_add_pivot(n, NULL, 5);
 }
 destroy_request(n);
@@ -1279,6 +1282,16 @@ static void check_js_xss(struct http_request* req, struct http_response* res,
 problem(PROB_URL_XSS, req, res,
 (u8*)"injected URL in JS/CSS code", req->pivot, 0);
+u8* end_quote = text;
+while(*end_quote && end_quote++ && *end_quote != in_quot)
+if(*end_quote == '\\') end_quote++;
+/* Injected string is 'skip'''"fish""" */
+if(end_quote && !case_prefix(end_quote + 1,"skip'''"))
+problem(PROB_URL_XSS, req, res, (u8*)"injected string in JS/CSS code (single quote not escaped)", req->pivot, 0);
+if(end_quote && !case_prefix(end_quote + 1,"fish\"\"\""))
+problem(PROB_URL_XSS, req, res, (u8*)"injected string in JS/CSS code (double quote not escaped)", req->pivot, 0);
 } else if (in_quot && *text == in_quot) in_quot = 0;
 else if (!in_quot && !case_prefix(text, "sfi") &&
@@ -1414,8 +1427,9 @@ next_elem:
 void content_checks(struct http_request* req, struct http_response* res) {
 u8* tmp;
-u32 tag_id, scan_id;
+u32 off, tag_id, scan_id;
 u8 high_risk = 0;
+struct http_request* n;
 DEBUG_CALLBACK(req, res);
@@ -1461,6 +1475,24 @@ void content_checks(struct http_request* req, struct http_response* res) {
 h10c = 1;
 }
+/* Check if injection strings ended up in one of our cookie name or
+values and complain */
+u32 i = 0;
+while(injection_headers[i]) {
+off = 0;
+do {
+tmp = GET_HDR_OFF((u8*)injection_headers[i], &res->hdr,off++);
+if(tmp && strstr((char*)tmp, "skipfish://invalid/;"))
+problem(PROB_HEADER_INJECT,req, res,
+(u8*)injection_headers[i], req->pivot, 0);
+} while(tmp);
+i++;
+}
 /* Check HTTP/1.1 intent next. Detect conflicting keywords. */
 if (cc) {
@@ -1563,6 +1595,18 @@ void content_checks(struct http_request* req, struct http_response* res) {
 !inl_findstr(res->payload, (u8*)"function(", 1024))
 problem(PROB_JS_XSSI, req, res, NULL, req->pivot, 0);
+/* If the response resembles javascript and a callback parameter does
+not exist, we'll add this parameter in an attempt to catch JSONP
+issues */
+if(is_javascript(res) && !GET_PAR((u8*)"callback", &req->par)) {
+n = req_copy(RPREQ(req), req->pivot, 1);
+SET_PAR((u8*)"callback",(u8*)"hello",&n->par);
+maybe_add_pivot(n, NULL, 2);
+destroy_request(n);
+}
 tmp = res->payload;
 do {
@@ -1884,6 +1928,7 @@ binary_checks:
 static void detect_mime(struct http_request* req, struct http_response* res) {
 u8 sniffbuf[SNIFF_LEN];
+s32 fuzzy_match = -1;
 if (res->sniff_mime_id) return;
@@ -1900,7 +1945,7 @@ static void detect_mime(struct http_request* req, struct http_response* res) {
 while (mime_map[i][j]) {
 if (mime_map[i][j][0] == '?') {
 if (!strncasecmp((char*)mime_map[i][j] + 1, (char*)res->header_mime,
-strlen((char*)mime_map[i][j] + 1))) break;
+strlen((char*)mime_map[i][j] + 1))) fuzzy_match = i;
 } else {
 if (!strcasecmp((char*)mime_map[i][j], (char*)res->header_mime))
 break;
@@ -1912,8 +1957,11 @@ static void detect_mime(struct http_request* req, struct http_response* res) {
 }
-if (i != MIME_COUNT) res->decl_mime_id = i;
+if (i != MIME_COUNT) {
+res->decl_mime_id = i;
+} else if (fuzzy_match != -1) {
+res->decl_mime_id = fuzzy_match;
+}
 }
 /* Next, work out the actual MIME that should be set. Mostly

analysis.h

@@ -25,7 +25,6 @@
 #include "types.h"
 #include "http_client.h"
 #include "database.h"
-#include "crawler.h"
 extern u8 no_parse, /* Disable HTML link detection */
 warn_mixed, /* Warn on mixed content */
@@ -202,6 +201,17 @@ static char* mime_map[MIME_COUNT][8] = {
 };
+/* A set of headers that we check to see if our injection string ended
+up in their value. This list should only contain headers where control
+over the value could potentially be exploited. */
+static const char* injection_headers[] = {
+"Set-Cookie",
+"Set-Cookie2",
+"Content-Type",
+0,
+};
 #endif /* _VIA_ANALYSIS_C */
 #endif /* !_HAVE_ANALYSIS_H */

assets/index.html

@@ -294,10 +294,12 @@ var issue_desc= {
 "30402": "Attacker-supplied URLs in embedded content (lower risk)",
 "30501": "External content embedded on a page (lower risk)",
 "30502": "Mixed content embedded on a page (lower risk)",
+"30503": "HTTPS form submitting to a HTTP URL",
 "30601": "HTML form with no apparent XSRF protection",
 "30602": "JSON response with no apparent XSSI protection",
 "30701": "Incorrect caching directives (lower risk)",
 "30801": "User-controlled response prefix (BOM / plugin attacks)",
+"30901": "HTTP header injection vector",
 "40101": "XSS vector in document body",
 "40102": "XSS vector via arbitrary URLs",

config.h

@@ -36,13 +36,13 @@
 /* Default paths to runtime files: */
 #define ASSETS_DIR "assets"
-#define DEF_WORDLIST "skipfish.wl"
 /* Various default settings for HTTP client (cmdline override): */
 #define MAX_CONNECTIONS 40 /* Simultaneous connection cap */
 #define MAX_CONN_HOST 10 /* Per-host connction cap */
 #define MAX_REQUESTS 1e8 /* Total request count cap */
+#define MAX_REQUESTS_SEC 0.0 /* Max requests per second */
 #define MAX_FAIL 100 /* Max consecutive failed requests */
 #define RW_TMOUT 10 /* Individual network R/W timeout */
 #define RESP_TMOUT 20 /* Total request time limit */
@@ -148,6 +148,17 @@
 #define XSRF_B64_NUM2 3 /* ...digit count override */
 #define XSRF_B64_SLASH 2 /* ...maximum slash count */
+#ifdef _VIA_CRAWLER_C
+/* The URL and string we use in the RFI test */
+#ifdef RFI_SUPPORT
+#define RFI_HOST "http://www.google.com/humans.txt#foo="
+#define RFI_STRING "we can shake a stick"
+#endif
+#endif /* _VIA_CRAWLER_C */
 #ifdef _VIA_DATABASE_C
 /* Domains we always trust (identical to -B options). These entries do not
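A note on how this RFI probe works, judging from the crawler changes below:
RFI_HOST points at a benign, publicly hosted file with stable contents, and
RFI_STRING is a substring known to appear in that file. The scanner injects
the URL as a parameter value; if the marker string then shows up in the
response while being absent from the baseline page, the target evidently
fetched and included the remote resource.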

crawler.c

@@ -598,7 +598,8 @@ static void secondary_ext_start(struct pivot_desc* pv, struct http_request* req,
 struct http_response* res, u8 is_param) {
 u8 *base_name, *fpos, *lpos, *ex;
-s32 tpar = -1, i = 0, spar = -1;
+s32 tpar = -1, spar = -1;
+u32 i = 0;
 DEBUG_HELPER(req->pivot);
 DEBUG_HELPER(pv);
@@ -720,12 +721,25 @@ static void inject_start(struct pivot_desc* pv) {
 if (pv->type == PIVOT_DIR || pv->type == PIVOT_SERV) {
 struct http_request* n;
+/* First a PUT request */
 n = req_copy(pv->req, pv, 1);
 if (n->method) ck_free(n->method);
 n->method = ck_strdup((u8*)"PUT");
+n->user_val = 0;
 n->callback = put_upload_check;
 replace_slash(n, (u8*)("PUT-" BOGUS_FILE));
 async_request(n);
+/* Second a FOO for false positives */
+n = req_copy(pv->req, pv, 1);
+if (n->method) ck_free(n->method);
+n->method = ck_strdup((u8*)"FOO");
+n->user_val = 1;
+n->callback = put_upload_check;
+replace_slash(n, (u8*)("PUT-" BOGUS_FILE));
+async_request(n);
 } else {
 inject_start2(pv);
 }
@@ -740,13 +754,22 @@ static u8 put_upload_check(struct http_request* req,
 if (FETCH_FAIL(res)) {
 handle_error(req, res, (u8*)"during PUT checks", 0);
-} else {
-if (res->code >= 200 && res->code < 300 &&
-!same_page(&RPRES(req)->sig, &res->sig)) {
-problem(PROB_PUT_DIR, req, res, 0, req->pivot, 0);
-}
+goto schedule_next;
 }
+req->pivot->misc_req[req->user_val] = req;
+req->pivot->misc_res[req->user_val] = res;
+if ((++req->pivot->misc_cnt) != 2) return 1;
+/* If PUT and FOO of the page does not give the same result. And if
+additionally we get a 2xx code, than we'll mark the issue as detected */
+if(same_page(&MRES(0)->sig, &MRES(1)->sig) &&
+MRES(0)->code >= 200 && MRES(1)->code < 300)
+problem(PROB_PUT_DIR, MREQ(0), MRES(0), 0, req->pivot, 0);
+schedule_next:
+destroy_misc_data(req->pivot, req);
 inject_start2(req->pivot);
 return 0;
@@ -773,8 +796,9 @@ static void inject_start2(struct pivot_desc* pv) {
 static u8 inject_behavior_check(struct http_request* req,
 struct http_response* res) {
 struct http_request* n;
-u32 orig_state = req->pivot->state;
 u8* tmp = NULL;
+u32 orig_state = req->pivot->state;
+u32 i;
 /* pv->state may change after async_request() calls in
 insta-fail mode, so we should cache accordingly. */
@@ -867,29 +891,37 @@ static u8 inject_behavior_check(struct http_request* req,
 if (orig_state != PSTATE_CHILD_INJECT) {
+/* We combine the traversal and file disclosure attacks here since
+the checks are almost identical */
+i = 0;
+while(disclosure_tests[i]) {
+n = req_copy(req->pivot->req, req->pivot, 1);
+ck_free(TPAR(n));
+TPAR(n) = ck_strdup((u8*)disclosure_tests[i]);
+n->callback = inject_dir_listing_check;
+n->user_val = 4 + i;
+async_request(n);
+i++;
+}
+#ifdef RFI_SUPPORT
+/* Optionally try RFI */
 n = req_copy(req->pivot->req, req->pivot, 1);
 ck_free(TPAR(n));
-TPAR(n) = ck_strdup((u8*)"../../../../../../../../etc/hosts");
+TPAR(n) = ck_strdup((u8*)RFI_HOST);
 n->callback = inject_dir_listing_check;
-n->user_val = 4;
-async_request(n);
-n = req_copy(req->pivot->req, req->pivot, 1);
-ck_free(TPAR(n));
-TPAR(n) = ck_strdup((u8*)"..\\..\\..\\..\\..\\..\\..\\..\\boot.ini");
-n->callback = inject_dir_listing_check;
-n->user_val = 5;
+n->user_val = 4 + i;
 async_request(n);
+#endif
 }
 return 0;
 }
@@ -912,7 +944,11 @@ static u8 inject_dir_listing_check(struct http_request* req,
 req->pivot->misc_res[req->user_val] = res;
 if (req->pivot->i_skip_add) {
-if ((++req->pivot->misc_cnt) != 6) return 1;
+#ifdef RFI_SUPPORT
+if ((++req->pivot->misc_cnt) != 12) return 1;
+#else
+if ((++req->pivot->misc_cnt) != 11) return 1;
+#endif
 } else {
 if ((++req->pivot->misc_cnt) != 4) return 1;
 }
@@ -935,13 +971,22 @@ static u8 inject_dir_listing_check(struct http_request* req,
 misc[1] = ./known_val
 misc[2] = ...\known_val
 misc[3] = .\known_val
-misc[4] = ../../../../../../../../etc/hosts
-misc[5] = ..\..\..\..\..\..\..\..\boot.ini
 Here, the test is simpler: if misc[1] != misc[0], or misc[3] !=
 misc[2], we probably have a bug. The same if misc[4] or misc[5]
 contain magic strings, but misc[0] doesn't.
+Finally, we perform some directory traveral and file inclusion tests.
+misc[4] = ../../../../../../../../etc/hosts
+misc[5] = ../../../../../../../../etc/passwd
+misc[6] = ..\..\..\..\..\..\..\..\boot.ini
+misc[7] = ../../../../../../../../WEB-INF/web.xml
+misc[8] = file:///etc/hosts
+misc[9] = file:///etc/passwd
+misc[10] = file:///boot.ini
+misc[11] = RFI (optional)
 */
 if (orig_state == PSTATE_CHILD_INJECT) {
@@ -984,18 +1029,56 @@ static u8 inject_dir_listing_check(struct http_request* req,
 RESP_CHECKS(MREQ(2), MRES(2));
 }
-if (inl_findstr(MRES(4)->payload, (u8*)"127.0.0.1", 512) &&
-!inl_findstr(MRES(0)->payload, (u8*)"127.0.0.1", 512)) {
-problem(PROB_DIR_TRAVERSAL, MREQ(4), MRES(4),
-(u8*)"response resembles /etc/hosts", req->pivot, 0);
-}
+/* Check on the /etc/hosts file disclosure */
+if(!inl_findstr(MRES(0)->payload, (u8*)"127.0.0.1", 1024)) {
+if (inl_findstr(MRES(4)->payload, (u8*)"127.0.0.1", 1024)) {
+problem(PROB_FI_LOCAL, MREQ(4), MRES(4),
+(u8*)"response resembles /etc/hosts (via traversal)", req->pivot, 0);
+} else if(inl_findstr(MRES(8)->payload, (u8*)"127.0.0.1", 1024)) {
+problem(PROB_FI_LOCAL, MREQ(8), MRES(8),
+(u8*)"response resembles /etc/hosts (via file://)", req->pivot, 0);
+}
+}
-if (inl_findstr(MRES(5)->payload, (u8*)"[boot loader]", 512) &&
-!inl_findstr(MRES(0)->payload, (u8*)"[boot loader]", 512)) {
-problem(PROB_DIR_TRAVERSAL, MREQ(5), MRES(5),
-(u8*)"response resembles c:\\boot.ini", req->pivot, 0);
-}
+/* Check on the /etc/passwd file disclosure */
+if(!inl_findstr(MRES(0)->payload, (u8*)"root:x:0:0:root", 1024)) {
+if(inl_findstr(MRES(5)->payload, (u8*)"root:x:0:0:root", 1024)) {
+problem(PROB_FI_LOCAL, MREQ(5), MRES(5),
+(u8*)"response resembles /etc/passwd (via traversal)", req->pivot, 0);
+} else if(inl_findstr(MRES(9)->payload, (u8*)"root:x:0:0:root", 1024)) {
+problem(PROB_FI_LOCAL, MREQ(9), MRES(9),
+(u8*)"response resembles /etc/passwd (via file://)", req->pivot, 0);
+}
+}
+/* Windows boot.ini disclosure */
+if(!inl_findstr(MRES(0)->payload, (u8*)"[boot loader]", 1024)) {
+if (inl_findstr(MRES(6)->payload, (u8*)"[boot loader]", 1024)) {
+problem(PROB_FI_LOCAL, MREQ(6), MRES(6),
+(u8*)"response resembles c:\\boot.ini (via traversal)", req->pivot, 0);
+} else if (inl_findstr(MRES(10)->payload, (u8*)"[boot loader]", 1024)) {
+problem(PROB_FI_LOCAL, MREQ(10), MRES(9),
+(u8*)"response resembles c:\\boot.ini (via file://)", req->pivot, 0);
+}
+}
+/* Check the web.xml disclosure */
+if(!inl_findstr(MRES(0)->payload, (u8*)"<servlet-mapping>", 1024)) {
+if (inl_findstr(MRES(7)->payload, (u8*)"<servlet-mapping>", 1024))
+problem(PROB_FI_LOCAL, MREQ(7), MRES(7),
+(u8*)"response resembles ./WEB-INF/web.xml (via traversal)", req->pivot, 0);
+}
+#ifdef RFI_SUPPORT
+if (!inl_findstr(MRES(0)->payload, (u8*)RFI_STRING, 1024) &&
+inl_findstr(MRES(11)->payload, (u8*)RFI_STRING, 1024)) {
+problem(PROB_FI_REMOTE, MREQ(11), MRES(11),
+(u8*)"remote file inclusion", req->pivot, 0);
+}
+#endif
 }
 schedule_next:
@@ -1232,7 +1315,7 @@ schedule_next:
 if (req->user_val) return 0;
 n = req_copy(RPREQ(req), req->pivot, 1);
-SET_VECTOR(orig_state, n, (u8*)"SKIPFISH~STRING");
+SET_VECTOR(orig_state, n, (u8*)"+/skipfish-bom");
 n->callback = inject_prologue_check;
 async_request(n);
@@ -1255,13 +1338,13 @@ static u8 inject_prologue_check(struct http_request* req,
 goto schedule_next;
 }
-if (res->pay_len && !prefix(res->payload, (u8*)"SKIPFISH~STRING") &&
+if (res->pay_len && !prefix(res->payload, (u8*)"+/skipfish-bom") &&
 !GET_HDR((u8*)"Content-Disposition", &res->hdr))
 problem(PROB_PROLOGUE, req, res, NULL, req->pivot, 0);
 schedule_next:
-/* XSS checks - 3 requests */
+/* XSS checks - 4 requests */
 n = req_copy(RPREQ(req), req->pivot, 1);
 SET_VECTOR(orig_state, n, "http://skipfish.invalid/;?");
@@ -1281,8 +1364,13 @@ schedule_next:
 n->user_val = 2;
 async_request(n);
-return 0;
+n = req_copy(RPREQ(req), req->pivot, 1);
+SET_VECTOR(orig_state, n, "'skip'''\"fish\"\"\"");
+n->callback = inject_redir_check;
+n->user_val = 3;
+async_request(n);
+return 0;
 }
@@ -2755,9 +2843,14 @@ bad_404:
 } else {
 if (req->pivot->type != PIVOT_SERV) {
+/* todo(niels) improve behavior by adding a new pivot */
+n = req_copy(RPREQ(req), req->pivot, 1);
+replace_slash(n, NULL);
+maybe_add_pivot(n, NULL, 2);
 req->pivot->type = PIVOT_PATHINFO;
-replace_slash(req->pivot->req, NULL);
+destroy_request(n);
 } else
 problem(PROB_404_FAIL, RPREQ(req), RPRES(req),
 (u8*)"no distinctive 404 behavior detected", req->pivot, 0);
 }
@@ -3395,4 +3488,3 @@ schedule_next:
 return keep;
 }

crawler.h

@@ -26,6 +26,25 @@
 #include "http_client.h"
 #include "database.h"
+#ifdef _VIA_CRAWLER_C
+/* Strings for traversal and file disclosure tests. The order should
+not be changed */
+static const char* disclosure_tests[] = {
+"../../../../../../../../etc/hosts",
+"../../../../../../../../etc/passwd",
+"..\\..\\..\\..\\..\\..\\..\\..\\boot.ini",
+"../../../../../../../../WEB-INF/web.xml",
+"file:///etc/hosts",
+"file:///etc/passwd",
+"file:///boot.ini",
+0
+};
+#endif
 extern u32 crawl_prob; /* Crawl probability (1-100%) */
 extern u8 no_parse, /* Disable HTML link detection */
 warn_mixed, /* Warn on mixed content? */
@@ -64,11 +83,12 @@ void add_form_hint(u8* name, u8* value);
 /* Macros to access various useful pivot points: */
-#define MREQ(_x) (req->pivot->misc_req[_x])
-#define MRES(_x) (req->pivot->misc_res[_x])
 #define RPAR(_req) ((_req)->pivot->parent)
 #define RPREQ(_req) ((_req)->pivot->req)
 #define RPRES(_req) ((_req)->pivot->res)
+#define MREQ(_x) (req->pivot->misc_req[_x])
+#define MRES(_x) (req->pivot->misc_res[_x])
 /* Debugging instrumentation for callbacks and callback helpers: */

database.c

@@ -82,14 +82,14 @@ struct ext_entry {
 u32 index;
 };
-static struct ext_entry *extension, /* Extension list */
-*sp_extension;
+static struct ext_entry *wg_extension, /* Extension list */
+*ws_extension;
 static u8 **guess; /* Keyword candidate list */
 u32 guess_cnt, /* Number of keyword candidates */
-extension_cnt, /* Number of extensions */
-sp_extension_cnt, /* Number of specific extensions */
+ws_extension_cnt, /* Number of specific extensions */
+wg_extension_cnt, /* Number of extensions */
 keyword_total_cnt, /* Current keyword count */
 keyword_orig_cnt; /* At-boot keyword count */
@@ -165,8 +165,9 @@ void maybe_add_pivot(struct http_request* req, struct http_response* res,
 if (PATH_SUBTYPE(req->par.t[i])) {
-if (req->par.t[i] == PARAM_PATH && !req->par.n[i] && !req->par.v[i][0])
-ends_with_slash = 1;
+if (req->par.t[i] == PARAM_PATH && !req->par.n[i] &&
+req->par.v[i] && !req->par.v[i][0])
+ends_with_slash = 1;
 else
 ends_with_slash = 0;
@@ -898,10 +899,10 @@ static void wordlist_confirm_single(u8* text, u8 is_ext, u8 class, u8 read_only,
 if (!keyword[kh][i].is_ext && is_ext) {
 keyword[kh][i].is_ext = 1;
-extension = ck_realloc(extension, (extension_cnt + 1) *
+wg_extension = ck_realloc(wg_extension, (wg_extension_cnt + 1) *
 sizeof(struct ext_entry));
-extension[extension_cnt].bucket = kh;
-extension[extension_cnt++].index = i;
+wg_extension[wg_extension_cnt].bucket = kh;
+wg_extension[wg_extension_cnt++].index = i;
 }
 return;
@@ -929,20 +930,18 @@ static void wordlist_confirm_single(u8* text, u8 is_ext, u8 class, u8 read_only,
 if (is_ext) {
-extension = ck_realloc(extension, (extension_cnt + 1) *
+wg_extension = ck_realloc(wg_extension, (wg_extension_cnt + 1) *
 sizeof(struct ext_entry));
-extension[extension_cnt].bucket = kh;
-extension[extension_cnt++].index = i;
+wg_extension[wg_extension_cnt].bucket = kh;
+wg_extension[wg_extension_cnt++].index = i;
-if (class == KW_SPECIFIC) {
-sp_extension = ck_realloc(sp_extension, (sp_extension_cnt + 1) *
-sizeof(struct ext_entry));
-sp_extension[sp_extension_cnt].bucket = kh;
-sp_extension[sp_extension_cnt++].index = i;
+/* We only add generic extensions to the ws list. */
+if (class == KW_GENERIC) {
+ws_extension = ck_realloc(ws_extension, (ws_extension_cnt + 1) *
+sizeof(struct ext_entry));
+ws_extension[ws_extension_cnt].bucket = kh;
+ws_extension[ws_extension_cnt++].index = i;
 }
 }
 }
@@ -1065,12 +1064,12 @@ u8* wordlist_get_guess(u32 offset, u8* specific) {
 u8* wordlist_get_extension(u32 offset, u8 specific) {
 if (!specific) {
-if (offset >= extension_cnt) return NULL;
-return keyword[extension[offset].bucket][extension[offset].index].word;
+if (offset >= wg_extension_cnt) return NULL;
+return keyword[wg_extension[offset].bucket][wg_extension[offset].index].word;
 }
-if (offset >= sp_extension_cnt) return NULL;
-return keyword[sp_extension[offset].bucket][sp_extension[offset].index].word;
+if (offset >= ws_extension_cnt) return NULL;
+return keyword[ws_extension[offset].bucket][ws_extension[offset].index].word;
 }
@@ -1089,12 +1088,16 @@ void load_keywords(u8* fname, u8 read_only, u32 purge_age) {
 in = fopen((char*)fname, "r");
 if (!in) {
-PFATAL("Unable to open wordlist '%s'", fname);
-return;
+if (read_only)
+PFATAL("Unable to open read-only wordlist '%s'.", fname);
+else
+PFATAL("Unable to open read-write wordlist '%s' (see dictionaries/README-FIRST).", fname);
 }
 sprintf(fmt, "%%2s %%u %%u %%u %%%u[^\x01-\x1f]", MAX_WORD);
+wordlist_retry:
 while ((fields = fscanf(in, fmt, type, &hits, &total_age, &last_age, kword))
 == 5) {
@@ -1113,12 +1116,24 @@ void load_keywords(u8* fname, u8 read_only, u32 purge_age) {
 fgetc(in); /* sink \n */
 }
+if (fields == 1 && !strcmp((char*)type, "#r")) {
+printf("Found %s (readonly:%d)\n", type, read_only);
+if (!read_only)
+FATAL("Attempt to load read-only wordlist '%s' via -W (use -S instead).\n", fname);
+fgetc(in); /* sink \n */
+goto wordlist_retry;
+}
 if (fields != -1 && fields != 5)
 FATAL("Wordlist '%s': syntax error in line %u.\n", fname, lines);
-if (!lines)
+if (!lines && (read_only || !keyword_total_cnt))
 WARN("Wordlist '%s' contained no valid entries.", fname);
+DEBUG("* Read %d lines from dictionary '%s' (read-only = %d).\n", lines,
+fname, read_only);
 keyword_orig_cnt = keyword_total_cnt;
 fclose(in);
@@ -1151,6 +1166,8 @@ void save_keywords(u8* fname) {
 #define O_NOFOLLOW 0
 #endif /* !O_NOFOLLOW */
+/* Don't save keywords for /dev/null and other weird files. */
 if (stat((char*)fname, &st) || !S_ISREG(st.st_mode)) return;
 /* First, sort the list. */
@@ -1281,7 +1298,7 @@ void database_stats() {
 pivot_serv, pivot_dir, pivot_file, pivot_pinfo, pivot_unknown,
 pivot_param, pivot_value, issue_cnt[1], issue_cnt[2], issue_cnt[3],
 issue_cnt[4], issue_cnt[5], keyword_total_cnt, keyword_total_cnt -
-keyword_orig_cnt, extension_cnt, guess_cnt);
+keyword_orig_cnt, wg_extension_cnt, guess_cnt);
 }
@@ -1486,8 +1503,8 @@ void destroy_database() {
 }
 /* Extensions just referenced keyword[][] entries. */
-ck_free(extension);
-ck_free(sp_extension);
+ck_free(wg_extension);
+ck_free(ws_extension);
 for (i=0;i<guess_cnt;i++) ck_free(guess[i]);
 ck_free(guess);

database.h

@@ -133,7 +133,7 @@ struct pivot_desc {
 /* Injection attack logic scratchpad: */
-#define MISC_ENTRIES 10
+#define MISC_ENTRIES 15
 struct http_request* misc_req[MISC_ENTRIES]; /* Saved requests */
 struct http_response* misc_res[MISC_ENTRIES]; /* Saved responses */
@@ -258,6 +258,7 @@ u8 is_c_sens(struct pivot_desc* pv);
 #define PROB_EXT_OBJ 30501 /* External obj standalone */
 #define PROB_MIXED_OBJ 30502 /* Mixed content standalone */
+#define PROB_MIXED_FORM 30503 /* HTTPS -> HTTP form */
 #define PROB_VULN_FORM 30601 /* Form w/o anti-XSRF token */
 #define PROB_JS_XSSI 30602 /* Script with no XSSI prot */
@@ -266,6 +267,8 @@ u8 is_c_sens(struct pivot_desc* pv);
 #define PROB_PROLOGUE 30801 /* User-supplied prologue */
+#define PROB_HEADER_INJECT 30901 /* Injected string in header */
 /* - Moderate severity issues (data compromise): */
 #define PROB_BODY_XSS 40101 /* Document body XSS */
@@ -297,6 +300,8 @@ u8 is_c_sens(struct pivot_desc* pv);
 #define PROB_SQL_INJECT 50103 /* SQL injection */
 #define PROB_FMT_STRING 50104 /* Format string attack */
 #define PROB_INT_OVER 50105 /* Integer overflow attack */
+#define PROB_FI_LOCAL 50106 /* Local file inclusion */
+#define PROB_FI_REMOTE 50107 /* Remote file inclusion */
 #define PROB_SQL_PARAM 50201 /* SQL-like parameter */
@@ -353,7 +358,7 @@ extern u32 max_depth,
 max_guesses;
 extern u32 guess_cnt,
-extension_cnt,
+wg_extension_cnt,
 keyword_total_cnt,
 keyword_orig_cnt;

dictionaries/README-FIRST

@@ -1,33 +1,50 @@
 This directory contains four alternative, hand-picked Skipfish dictionaries.
-PLEASE READ THIS FILE CAREFULLY BEFORE PICKING ONE. This is *critical* to
-getting good results in your work.
+PLEASE READ THESE INSTRUCTIONS CAREFULLY BEFORE PICKING ONE. This is *critical*
+to getting decent results in your scans.
 ------------------------
 Key command-line options
 ------------------------
-The dictionary to be used by the tool can be specified with the -W option,
-and must conform to the format outlined at the end of this document. If you
-omit -W in the command line, 'skipfish.wl' is assumed. This file does not
-exist by default. That part is by design: THE SCANNER WILL MODIFY THE
-SUPPLIED FILE UNLESS SPECIFICALLY INSTRUCTED NOT TO.
-That's because the scanner automatically learns new keywords and extensions
-based on any links discovered during the scan, and on random sampling of
-site contents. The information is consequently stored in the dictionary
-for future reuse, along with other bookkeeping information useful for
-determining which keywords perform well, and which ones don't.
-All this means that it is very important to maintain a separate dictionary
-for every separate set of unrelated target sites. Otherwise, undesirable
-interference will occur.
+Skipfish automatically builds and maintains dictionaries based on the URLs
+and HTML content encountered while crawling the site. These dictionaries
+are extremely useful for subsequent scans of the same target, or for
+future assessments of other platforms belonging to the same customer.
+Exactly one read-write dictionary needs to be specified for every scan. To
+create a blank one and feed it to skipfish, you can use this syntax:
+$ touch new_dict.wl
+$ ./skipfish -W new_dict.wl [...other options...]
+In addition, it is almost always beneficial to seed the scanner with any
+number of supplemental, read-only dictionaries with common keywords useful
+for brute-force attacks against any website. Supplemental dictionaries can
+be loaded using the following syntax:
+$ ./skipfish -S supplemental_dict1.wl -S supplemental_dict2.wl \
+-W new_dict.wl [...other options...]
+Note that the -W option should be used with care. The target file will be
+modified at the end of the scan. The -S dictionaries, on the other hand, are
+purely read-only.
+If you don't want to create a dictionary and store discovered keywords, you
+can use -W- (an alias for -W /dev/null).
+You can and should share read-only dictionaries across unrelated scans, but
+a separate read-write dictionary should be used for scans of unrelated targets.
+Otherwise, undesirable interference may occur.
 With this out of the way, let's quickly review the options that may be used
 to fine-tune various aspects of dictionary handling:
 -L - do not automatically learn new keywords based on site content.
+The scanner will still use keywords found in the specified
+dictionaries, if any; but will not go beyond that set.
 This option should not be normally used in most scanning
 modes; if supplied, the scanner will not be able to discover
 and leverage technology-specific terms and file extensions
@@ -42,19 +59,12 @@ to fine-tune various aspects of dictionary handling:
 gradually replaced with new picks, and then discarded at the
 end of the scan. The default jar size is 256.
--V - prevent the scanner from updating the dictionary file.
-Normally, the primary read-write dictionary specified with the
--W option is updated at the end of the scan to add any newly
-discovered keywords, and to update keyword usage stats. Using
-this option eliminates this step.
--R num - purge all dictionary entries that had no non-404 hits for
+-R num - purge all entries in the read-write dictionary that had no non-404 hits for
 the last <num> scans.
 This option prevents dictionary creep in repeated assessments,
 but needs to be used with care: it will permanently nuke a
-part of the dictionary!
+part of the dictionary.
 -Y - inhibit full ${filename}.${extension} brute-force.
@@ -77,7 +87,7 @@ associated request cost):
 scanner will not discover non-linked resources such as /admin,
 /index.php.old, etc:
-./skipfish -W /dev/null -LV [...other options...]
+$ ./skipfish -W- -LV [...other options...]
 This mode is very fast, but *NOT* recommended for general use because of
 limited coverage. Use only where absolutely necessary.
@@ -86,8 +96,9 @@ associated request cost):
 will not discover resources such as /admin, but will discover cases such as
 /index.php.old (once index.php itself is spotted during an orderly crawl):
-cp dictionaries/extensions-only.wl dictionary.wl
-./skipfish -W dictionary.wl -Y [...other options...]
+$ touch new_dict.wl
+$ ./skipfish -S dictionaries/extensions-only.wl -W new_dict.wl -Y \
+[...other options...]
 This method is only slightly more request-intensive than #1, and therefore,
 is a marginally better alternative in cases where time is of essence. It's
@@ -98,8 +109,9 @@ associated request cost):
 try fuzzing the file name, or the extension, at any given time - but will
 not try every possible ${filename}.${extension} pair from the dictionary.
-cp dictionaries/complete.wl dictionary.wl
-./skipfish -W dictionary.wl -Y [...other options...]
+$ touch new_dict.wl
+$ ./skipfish -S dictionaries/complete.wl -W new_dict.wl -Y \
+[...other options...]
 This method has a cost of about 2,000 requests per fuzzed location, and is
 recommended for rapid assessments, especially when working with slow
@@ -109,41 +121,27 @@ associated request cost):
 pair will be attempted. This mode is significantly slower, but offers
 superior coverage, and should be your starting point.
-cp dictionaries/XXX.wl dictionary.wl
-./skipfish -W dictionary.wl [...other options...]
+$ touch new_dict.wl
+$ ./skipfish -S dictionaries/XXX.wl -W new_dict.wl [...other options...]
 Replace XXX with:
-minimal - recommended starter dictionary, mostly focusing on backup
-and source files, about 60,000 requests per fuzzed location.
+minimal - recommended starter dictionary, mostly focusing on
+backup and source files, about 60,000 requests
+per fuzzed location.
-medium - more thorough dictionary, focusing on common frameworks,
-about 140,000 requests.
+medium - more thorough dictionary, focusing on common
+frameworks, about 140,000 requests.
 complete - all-inclusive dictionary, over 210,000 requests.
+complete-fast - An optimized version of the 'complete' dictionary
+with 20-30% fewer requests.
 Normal fuzzing mode is recommended when doing thorough assessments of
 reasonably responsive servers; but it may be prohibitively expensive
 when dealing with very large or very slow sites.
-----------------------------------
-Using separate master dictionaries
-----------------------------------
-A recently introduced feature allows you to load any number of read-only
-supplementary dictionaries in addition to the "main" read-write one (-W
-dictionary.wl).
-This is a convenient way to isolate (and be able to continually update) your
-customized top-level wordlist, whilst still acquiring site-specific data in
-a separate file. The following syntax may be used to accomplish this:
-./skipfish -W initially_empty_site_specific_dict.wl -W +supplementary_dict1.wl \
--W +supplementary_dict2.wl [...other options...]
-Only the main dictionary will be modified as a result of the scan, and only
-newly discovered site-specific keywords will be appended there.
 ----------------------------
 More about dictionary design
 ----------------------------
@@ -204,7 +202,7 @@ files such as ${this_word}.php or ${this_word}.class. If not, tag the keyword
 as 'ws'.
 Similarly, to decide between 'eg' and 'es', think about the possibility of
-encoutering cgi-bin.${this_ext} or formmail.${this_ext}. If it seems unlikely,
+encountering cgi-bin.${this_ext} or zencart.${this_ext}. If it seems unlikely,
 choose 'es'.
 For your convenience, all legacy keywords and extensions, as well as any entries
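To make the four tags concrete, here is a small illustrative fragment in the
wordlist format (the keywords and counters are made up; the layout follows
the test.wl sample added in this commit):

eg 1 1 1 bak
es 1 1 1 php
wg 1 1 1 backup
ws 1 1 1 cgi-bin

On this reading of the guidance above, 'bak' is worth trying as
${keyword}.bak for almost any keyword (eg), 'php' only in selected pairings
(es), 'backup' is also worth fuzzing as backup.${extension} (wg), and
'cgi-bin' should be tried standalone, without extensions appended (ws).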
@@ -222,6 +220,9 @@ Other notes about dictionaries:
 and all keywords should be NOT url-encoded (e.g., 'Program Files', not
 'Program%20Files'). No keyword should exceed 64 characters.
+- Any valuable dictionary can be tagged with an optional '#ro' line at the
+beginning. This prevents it from being loaded in read-write mode.
 - Tread carefully; poor wordlists are one of the reasons why some web security
 scanners perform worse than expected. You will almost always be better off
 narrowing down or selectively extending the supplied set (and possibly

dictionaries/complete-fast.wl (new file): diff suppressed because it is too large.

dictionaries/complete.wl

@@ -1,3 +1,4 @@
+#ro
 e 1 1 1 7z
 e 1 1 1 as
 e 1 1 1 asmx

dictionaries/extensions-only.wl

@@ -1,3 +1,4 @@
+#ro
 e 1 1 1 7z
 e 1 1 1 as
 e 1 1 1 asmx

dictionaries/medium.wl

@@ -1,3 +1,4 @@
+#ro
 e 1 1 1 asmx
 e 1 1 1 asp
 e 1 1 1 aspx

dictionaries/minimal.wl

@@ -1,3 +1,4 @@
+#ro
 e 1 1 1 bak
 e 1 1 1 cfg
 e 1 1 1 class

dictionaries/test.wl (new file)

@@ -0,0 +1,5 @@
+es 1 2 2 php
+ws 1 2 2 cgi-bin
+eg 1 2 2 old
+wg 1 2 2 admin
+w? 1 0 0 localhost
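For readers unfamiliar with the .wl layout, the five columns appear to line
up with the fscanf() pattern in load_keywords() above ("%2s %u %u %u ..."):
a type tag ('e' = extension, 'w' = keyword, combined with 'g' = generic,
's' = specific, or '?' for what looks like an unconfirmed guess), then a hit
count, two age counters, and the keyword itself.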

http_client.c

@@ -48,22 +48,26 @@
 /* Assorted exported settings: */
 u32 max_connections = MAX_CONNECTIONS,
 max_conn_host = MAX_CONN_HOST,
 max_requests = MAX_REQUESTS,
 max_fail = MAX_FAIL,
 idle_tmout = IDLE_TMOUT,
 resp_tmout = RESP_TMOUT,
 rw_tmout = RW_TMOUT,
 size_limit = SIZE_LIMIT;
 u8 browser_type = BROWSER_FAST;
 u8 auth_type = AUTH_NONE;
+float max_requests_sec = MAX_REQUESTS_SEC;
 struct param_array global_http_par;
 /* Counters: */
+float req_sec;
 u32 req_errors_net,
 req_errors_http,
 req_errors_cur,
@@ -81,18 +85,21 @@ u32 req_errors_net,
 u64 bytes_sent,
 bytes_recv,
 bytes_deflated,
-bytes_inflated;
+bytes_inflated,
+iterations_cnt = 0;
 u8 *auth_user,
 *auth_pass;
 #ifdef PROXY_SUPPORT
 u8* use_proxy;
 u32 use_proxy_addr;
 u16 use_proxy_port;
 #endif /* PROXY_SUPPORT */
-u8 ignore_cookies;
+u8 ignore_cookies,
+idle;
 /* Internal globals for queue management: */
@@ -127,7 +134,6 @@ u8* get_value(u8 type, u8* name, u32 offset,
 }
 /* Inserts or overwrites parameter value in param_array. If offset
 == -1, will append parameter to list. Duplicates strings,
 name and val can be NULL. */
@@ -1483,6 +1489,7 @@ u8 parse_response(struct http_request* req, struct http_response* res,
 *val = 0;
 while (isspace(*(++val)));
+SET_HDR(cur_line, val, &res->hdr);
 if (!strcasecmp((char*)cur_line, "Set-Cookie") ||
 !strcasecmp((char*)cur_line, "Set-Cookie2")) {
@ -1516,6 +1523,7 @@ u8 parse_response(struct http_request* req, struct http_response* res,
strncmp((char*)cval, (char*)orig_val, 3))) { strncmp((char*)cval, (char*)orig_val, 3))) {
res->cookies_set = 1; res->cookies_set = 1;
problem(PROB_NEW_COOKIE, req, res, val, req->pivot, 0); problem(PROB_NEW_COOKIE, req, res, val, req->pivot, 0);
} }
/* Set cookie globally, but ignore obvious attempts to delete /* Set cookie globally, but ignore obvious attempts to delete
@@ -1525,8 +1533,7 @@ u8 parse_response(struct http_request* req, struct http_response* res,
           SET_CK(val, cval, &global_http_par);
       }
-    } else SET_HDR(cur_line, val, &res->hdr);
+    }
 
   /* Content-Type is worth mining for MIME, charset data at this point. */
@@ -2289,9 +2296,22 @@ SSL_read_more:
   struct queue_entry *q = queue;
 
   while (q) {
-    struct queue_entry* next = q->next;
     u32 to_host = 0;
 
+    // enforce the max requests per second limit
+    if (max_requests_sec && req_sec > max_requests_sec) {
+      u32 diff = req_sec - max_requests_sec;
+
+      DEBUG("req_sec=%f max=%f diff=%u\n", req_sec, max_requests_sec, diff);
+
+      if ((iterations_cnt++) % (diff + 1) != 0) {
+        idle = 1;
+        return queue_cur;
+      }
+    }
+    idle = 0;
+
+    struct queue_entry* next = q->next;
+
     if (!q->c) {
       struct conn_entry* c = conn;
@@ -2457,7 +2477,6 @@ struct http_response* res_copy(struct http_response* res) {
 }
 
 /* Dumps HTTP request data, for diagnostic purposes: */
 
 void dump_http_request(struct http_request* r) {
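The -l limiter above throttles without sleeping: each scheduler pass compares the live req_sec estimate against the cap, and when it overshoots by diff, only every (diff + 1)-th pass proceeds while the rest return early with idle set. Because diff is recomputed from the current estimate, the skip ratio adapts as the measured rate falls. A self-contained sketch of the same skip logic, with local stand-ins for skipfish's globals:

/* Stand-alone sketch of the skip-based throttle: when the measured
   rate exceeds the cap, diff of every (diff + 1) passes are skipped. */
#include <stdio.h>

static unsigned long long iterations_cnt;   /* stand-in for the global */

/* Returns 1 when this scheduler pass should idle, 0 to proceed. */
static int should_idle(float req_sec, float max_requests_sec) {
  if (max_requests_sec && req_sec > max_requests_sec) {
    unsigned int diff = (unsigned int)(req_sec - max_requests_sec);
    if ((iterations_cnt++) % (diff + 1) != 0) return 1;
  }
  return 0;
}

int main(void) {
  unsigned int idled = 0, i;
  /* 150 req/s measured against a 100 req/s cap: diff = 50, so only
     every 51st pass proceeds. */
  for (i = 0; i < 102; i++) idled += should_idle(150.0f, 100.0f);
  printf("idled %u of 102 passes\n", idled);   /* prints 100 */
  return 0;
}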

http_client.h
View File

@@ -266,10 +266,13 @@ void set_value(u8 type, u8* name, u8* val, s32 offset, struct param_array* par);
 
 /* Simplified macros for value table access: */
 
-#define GET_HDR(_name, _p) get_value(PARAM_HEADER, _name, 0, _p)
-#define SET_HDR(_name, _val, _p) set_value(PARAM_HEADER, _name, _val, -1, _p)
-#define GET_CK(_name, _p) get_value(PARAM_COOKIE, _name, 0, _p)
-#define SET_CK(_name, _val, _p) set_value(PARAM_COOKIE, _name, _val, 0, _p)
+#define GET_CK(_name, _p) get_value(PARAM_COOKIE, _name, 0, _p)
+#define SET_CK(_name, _val, _p) set_value(PARAM_COOKIE, _name, _val, 0, _p)
+#define GET_PAR(_name, _p) get_value(PARAM_QUERY, _name, 0, _p)
+#define SET_PAR(_name, _val, _p) set_value(PARAM_QUERY, _name, _val, -1, _p)
+#define GET_HDR(_name, _p) get_value(PARAM_HEADER, _name, 0, _p)
+#define SET_HDR(_name, _val, _p) set_value(PARAM_HEADER, _name, _val, -1, _p)
+#define GET_HDR_OFF(_name, _p, _o) get_value(PARAM_HEADER, _name, _o, _p)
 
 void tokenize_path(u8* str, struct http_request* req, u8 add_slash);
@@ -383,12 +386,17 @@ extern u32 max_connections,
            conn_failed,
            queue_cur;
 
+extern float req_sec,
+             max_requests_sec;
+
 extern u64 bytes_sent,
            bytes_recv,
            bytes_deflated,
-           bytes_inflated;
+           bytes_inflated,
+           iterations_cnt;
 
-extern u8 ignore_cookies;
+extern u8 ignore_cookies,
+          idle;
 
 /* Flags for browser type: */

report.c
View File

@@ -421,7 +421,7 @@ static void save_req_res(struct http_request* req, struct http_response* res, u8
     ck_free(rd);
   }
 
-  if (res && res->state == STATE_OK) {
+  if (res && req && res->state == STATE_OK) {
     u32 i;
     f = fopen("response.dat", "w");
     if (!f) PFATAL("Cannot create 'response.dat'");
@@ -430,8 +430,6 @@ static void save_req_res(struct http_request* req, struct http_response* res, u8
     for (i=0;i<res->hdr.c;i++)
       if (res->hdr.t[i] == PARAM_HEADER)
         fprintf(f, "%s: %s\n", res->hdr.n[i], res->hdr.v[i]);
-      else
-        fprintf(f, "Set-Cookie: %s=%s\n", res->hdr.n[i], res->hdr.v[i]);
 
     fprintf(f, "\n");
     fwrite(res->payload, res->pay_len, 1, f);

skipfish.1
View File

@@ -4,7 +4,7 @@
 skipfish \- active web application security reconnaissance tool
 .SH SYNOPSIS
 .B skipfish
-.RI [ options ] " -o output-directory start-url [start-url2 ...]"
+.RI [ options ] " -W wordlist -o output-directory start-url [start-url2 ...]"
 .br
 .SH DESCRIPTION
 .PP
@@ -100,15 +100,15 @@ be quiet, do not display realtime scan statistics
 .SS Dictionary management options:
 .TP
+.B \-S wordlist
+load a specified read-only wordlist for brute-force tests
+.TP
 .B \-W wordlist
-load an alternative wordlist (skipfish.wl)
+load a specified read-write wordlist to which newly learned, site-specific keywords are saved. This option is required, but the file may start out empty; use -W- to discard new words instead.
 .TP
 .B \-L
 do not auto-learn new keywords for the site
 .TP
-.B \-V
-do not update wordlist based on scan results
-.TP
 .B \-Y
 do not fuzz extensions during most directory brute-force steps
 .TP
@@ -123,6 +123,9 @@ maximum number of keyword guesses to keep in the jar (default: 256)
 .SS Performance settings:
 .TP
+.B \-l max_req
+max requests per second (0 = unlimited)
+.TP
 .B \-g max_conn
 maximum simultaneous TCP connections, global (default: 50)
 .TP
@@ -147,11 +150,15 @@ response size limit (default: 200000 B)
 .B \-e
 do not keep binary responses for reporting
 
+.SS Safety settings:
 .TP
-.B \-h, \-\-help
-Show summary of options.
+.B \-k duration
+stop scanning after the given duration (format: h:m:s)
 
 .SH AUTHOR
-skipfish was written by Michal Zalewski <lcamtuf@google.com>.
+skipfish was written by Michal Zalewski <lcamtuf@google.com>,
+with contributions from Niels Heinen <heinenn@google.com>,
+Sebastian Roschke <s.roschke@googlemail.com>, and other parties.
 .PP
 This manual page was written by Thorsten Schifferdecker <tsd@debian.systs.org>,
 for the Debian project (and may be used by others).

skipfish.c
View File

@@ -68,7 +68,7 @@ static void resize_handler(int sig) {
 
 /* Usage info. */
 
 static void usage(char* argv0) {
-  SAY("Usage: %s [ options ... ] -o output_dir start_url [ start_url2 ... ]\n\n"
+  SAY("Usage: %s [ options ... ] -W wordlist -o output_dir start_url [ start_url2 ... ]\n\n"
 
       "Authentication and access options:\n\n"
@@ -110,9 +110,9 @@ static void usage(char* argv0) {
       "Dictionary management options:\n\n"
 
-      " -W wordlist - load an alternative wordlist (%s)\n"
+      " -W wordlist - use a specified read-write wordlist (required)\n"
+      " -S wordlist - load a supplemental read-only wordlist\n"
       " -L - do not auto-learn new keywords for the site\n"
-      " -V - do not update wordlist based on scan results\n"
       " -Y - do not fuzz extensions in directory brute-force\n"
       " -R age - purge words hit more than 'age' scans ago\n"
       " -T name=val - add new form auto-fill rule\n"
@@ -120,6 +120,7 @@ static void usage(char* argv0) {
       "Performance settings:\n\n"
 
+      " -l max_req - max requests per second (%f)\n"
       " -g max_conn - max simultaneous TCP connections, global (%u)\n"
       " -m host_conn - max simultaneous connections, per target IP (%u)\n"
       " -f max_fail - max number of consecutive HTTP errors (%u)\n"
@@ -129,10 +130,14 @@ static void usage(char* argv0) {
       " -s s_limit - response size limit (%u B)\n"
       " -e - do not keep binary responses for reporting\n\n"
 
+      "Safety settings:\n\n"
+
+      " -k duration - stop scanning after the given duration h:m:s\n\n"
+
       "Send comments and complaints to <lcamtuf@google.com>.\n", argv0,
-      max_depth, max_children, max_descendants, max_requests, DEF_WORDLIST,
-      MAX_GUESSES, max_connections, max_conn_host, max_fail, resp_tmout,
-      rw_tmout, idle_tmout, size_limit);
+      max_depth, max_children, max_descendants, max_requests,
+      MAX_GUESSES, max_requests_sec, max_connections, max_conn_host,
+      max_fail, resp_tmout, rw_tmout, idle_tmout, size_limit);
 
   exit(1);
 }
@@ -164,14 +169,14 @@ void splash_screen(void) {
       "More info: " cYEL "http://code.google.com/p/skipfish/wiki/KnownIssues\n\n" cBRI);
 
-  if (!no_fuzz_ext && (keyword_orig_cnt * extension_cnt) > 1000) {
+  if (!no_fuzz_ext && (keyword_orig_cnt * wg_extension_cnt) > 1000) {
     SAY(cLRD
         "NOTE: The scanner is currently configured for directory brute-force attacks,\n"
         "and will make about " cBRI "%u" cLRD " requests per every fuzzable location. If this is\n"
         "not what you wanted, stop now and consult the documentation.\n\n",
-        keyword_orig_cnt * extension_cnt);
+        keyword_orig_cnt * wg_extension_cnt);
   }
@@ -237,9 +242,10 @@ static void read_urls(u8* fn) {
 int main(int argc, char** argv) {
   s32 opt;
   u32 loop_cnt = 0, purge_age = 0, seed;
-  u8 dont_save_words = 0, show_once = 0, be_quiet = 0, display_mode = 0,
-     has_fake = 0;
+  u8 show_once = 0, be_quiet = 0, display_mode = 0, has_fake = 0;
   u8 *wordlist = NULL, *output_dir = NULL;
+  u8* gtimeout_str = NULL;
+  u32 gtimeout = 0;
 
   struct termios term;
   struct timeval tv;
@@ -258,7 +264,8 @@ int main(int argc, char** argv) {
   SAY("skipfish version " VERSION " by <lcamtuf@google.com>\n");
 
   while ((opt = getopt(argc, argv,
-         "+A:F:C:H:b:Nd:c:x:r:p:I:X:D:POYQMZUEK:W:LVT:J:G:R:B:q:g:m:f:t:w:i:s:o:hue")) > 0)
+         "+A:B:C:D:EF:G:H:I:J:K:LMNOPQR:S:T:UW:X:YZ"
+         "b:c:d:ef:g:hi:k:l:m:o:p:q:r:s:t:uw:x:")) > 0)
 
   switch (opt) {
@@ -371,10 +378,6 @@ int main(int argc, char** argv) {
         no_parse = 1;
         break;
 
-      case 'V':
-        dont_save_words = 1;
-        break;
-
       case 'M':
         warn_mixed = 1;
         break;
@@ -421,12 +424,16 @@ int main(int argc, char** argv) {
         break;
 
       case 'W':
-        if (optarg[0] == '+') load_keywords((u8*)optarg + 1, 1, 0);
-        else {
-          if (wordlist)
-            FATAL("Only one -W parameter permitted (unless '+' used).");
-          wordlist = (u8*)optarg;
-        }
+        if (wordlist)
+          FATAL("Only one -W parameter permitted (use -S to load supplemental dictionaries).");
+        if (!strcmp(optarg, "-")) wordlist = (u8*)"/dev/null";
+        else wordlist = (u8*)optarg;
+        break;
+
+      case 'S':
+        load_keywords((u8*)optarg, 1, 0);
         break;
 
       case 'b':
@@ -456,6 +463,11 @@ int main(int argc, char** argv) {
         if (!max_requests) FATAL("Invalid value '%s'.", optarg);
         break;
 
+      case 'l':
+        max_requests_sec = atof(optarg);
+        if (!max_requests_sec) FATAL("Invalid value '%s'.", optarg);
+        break;
+
       case 'f':
         max_fail = atoi(optarg);
         if (!max_fail) FATAL("Invalid value '%s'.", optarg);
@@ -500,6 +512,11 @@ int main(int argc, char** argv) {
         delete_bin = 1;
         break;
 
+      case 'k':
+        if (gtimeout_str) FATAL("Multiple -k options not allowed.");
+        gtimeout_str = (u8*)optarg;
+        break;
+
       case 'Z':
         no_500_dir = 1;
         break;
@@ -531,7 +548,27 @@ int main(int argc, char** argv) {
   if (max_connections < max_conn_host)
       max_connections = max_conn_host;
 
-  if (!wordlist) wordlist = (u8*)DEF_WORDLIST;
+  /* Parse the timeout string - format h:m:s */
+  if (gtimeout_str) {
+    int i = 0;
+    int m[3] = { 1, 60, 3600 };
+    u8* tok = (u8*)strtok((char*)gtimeout_str, ":");
+
+    while (tok && i <= 2) {
+      gtimeout += atoi((char*)tok) * m[i];
+      tok = (u8*)strtok(NULL, ":");
+      i++;
+    }
+
+    if (!gtimeout)
+      FATAL("Wrong timeout format, please use h:m:s (hours, minutes, seconds)");
+
+    DEBUG("* Scan timeout is set to %d seconds\n", gtimeout);
+  }
+
+  if (!wordlist)
+    FATAL("Wordlist not specified (try -h for help; see dictionaries/README-FIRST).");
 
   load_keywords(wordlist, 0, purge_age);
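One caveat worth noting on the committed parser: the multiplier table { 1, 60, 3600 } pairs with tokens left-to-right, so the first field of "h:m:s" is weighted as seconds and the third as hours ("1:30:00" yields 1801 seconds, and "0:0:30" yields 30 hours). A right-weighted parse matches the documented h:m:s reading for both full and partial inputs; a sketch of that alternative follows (illustrative only, not the shipped code):

/* Parses "h:m:s", "m:s" or plain "s" by promoting the running total
   at each colon: "1:30:00" -> 5400, "90" -> 90. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static unsigned int parse_hms(const char* s) {
  char buf[32];
  char* tok;
  unsigned int total = 0;

  strncpy(buf, s, sizeof(buf) - 1);
  buf[sizeof(buf) - 1] = 0;

  for (tok = strtok(buf, ":"); tok; tok = strtok(NULL, ":"))
    total = total * 60 + (unsigned int)atoi(tok);

  return total;
}

int main(void) {
  printf("%u\n", parse_hms("1:30:00"));   /* prints 5400 */
  return 0;
}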
@@ -587,7 +624,23 @@ int main(int argc, char** argv) {
     u8 keybuf[8];
 
-    if (be_quiet || ((loop_cnt++ % 100) && !show_once)) continue;
+    u64 end_time;
+    u64 run_time;
+    struct timeval tv_tmp;
+
+    gettimeofday(&tv_tmp, NULL);
+    end_time = tv_tmp.tv_sec * 1000LL + tv_tmp.tv_usec / 1000;
+    run_time = end_time - st_time;
+
+    if (gtimeout > 0 && run_time && run_time/1000 > gtimeout) {
+      DEBUG("* Stopping scan due to timeout\n");
+      stop_soon = 1;
+    }
+
+    req_sec = (req_count - queue_cur / 1.15) * 1000 / (run_time + 1);
+
+    if (be_quiet || ((loop_cnt++ % 100) && !show_once && idle == 0))
+      continue;
 
     if (clear_screen) {
       SAY("\x1b[H\x1b[2J");
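The status loop now derives run_time from a millisecond wall-clock timestamp, drives the -k timeout from it, and refreshes the global req_sec estimate on every pass; the queue_cur / 1.15 term appears to discount requests still in flight, and the + 1 guards the division on the first pass. A minimal sketch of the millisecond clock in the same style:

/* Millisecond wall clock, as used for st_time/end_time above. */
#include <stdio.h>
#include <sys/time.h>

static unsigned long long now_ms(void) {
  struct timeval tv;
  gettimeofday(&tv, NULL);
  return (unsigned long long)tv.tv_sec * 1000ULL + tv.tv_usec / 1000;
}

int main(void) {
  unsigned long long st_time = now_ms();
  /* ... scan loop would run here ... */
  printf("elapsed: %llu ms\n", now_ms() - st_time);
  return 0;
}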
@@ -629,7 +682,7 @@ int main(int argc, char** argv) {
   tcsetattr(0, TCSANOW, &term);
   fcntl(0, F_SETFL, O_SYNC);
 
-  if (!dont_save_words) save_keywords((u8*)wordlist);
+  save_keywords((u8*)wordlist);
 
   write_report(output_dir, en_time - st_time, seed);