.\" vi:set wm=5
.TH SKIPFISH 1 "March 23, 2010"
.SH NAME
skipfish \- active web application security reconnaissance tool
.SH SYNOPSIS
.B skipfish
.RI [ options ] " -o output-directory start-url [start-url2 ...]"
.br
.SH DESCRIPTION
.PP
\fBskipfish\fP is an active web application security reconnaissance tool.
It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes.
The resulting map is then annotated with the output from a number of active (but hopefully non-disruptive) security checks.
The final report generated by the tool is meant to serve as a foundation for professional web application security assessments.
.SH OPTIONS
.SS Authentication and access options:
.TP
.B \-A user:pass
use specified HTTP authentication credentials
.TP
.B \-F host:IP
pretend that 'host' resolves to 'IP'
.TP
.B \-C name=val
append a custom cookie to all requests
.TP
.B \-H name=val
append a custom HTTP header to all requests
.TP
.B \-b (i|f)
use headers consistent with MSIE / Firefox
.TP
.B \-N
do not accept any new cookies
.SS Crawl scope options:
.TP
.B \-d max_depth
maximum crawl tree depth (default: 16)
.TP
.B \-c max_child
maximum children to index per node (default: 512)
.TP
.B \-x max_desc
maximum descendants to index per crawl tree branch (default: 8192)
.TP
.B \-r r_limit
max total number of requests to send (default: 100000000)
.TP
.B \-p crawl%
node and link crawl probability (default: 100%)
.TP
.B \-q hex
repeat a scan with a particular random seed
.TP
.B \-I string
only follow URLs matching 'string'
.TP
.B \-X string
exclude URLs matching 'string'
.TP
.B \-S string
exclude pages containing 'string'
.TP
.B \-K string
do not fuzz query parameters or form fields named 'string'
.TP
.B \-Z
do not descend into directories that return HTTP 500 code
.TP
.B \-D domain
also crawl cross-site links to a specified domain
.TP
.B \-B domain
trust, but do not crawl, content included from a third-party domain
.TP
.B \-O
do not submit any forms
.TP
.B \-P
do not parse HTML and other documents to find new links
.SS Reporting options:
.TP
.B \-o dir
write output to specified directory (required)
.TP
.B \-J
be less noisy about MIME / charset mismatches on probably-static content
.TP
.B \-M
log warnings about mixed content
.TP
.B \-E
log all HTTP/1.0 / HTTP/1.1 caching intent mismatches
.TP
.B \-U
log all external URLs and e-mails seen
.TP
.B \-Q
completely suppress duplicate nodes in reports
.TP
.B \-u
be quiet, do not display real-time scan statistics
.SS Dictionary management options:
.TP
.B \-W wordlist
load an alternative wordlist (skipfish.wl)
.TP
.B \-L
do not auto-learn new keywords for the site
.TP
.B \-V
do not update wordlist based on scan results
.TP
.B \-Y
do not fuzz extensions during most directory brute-force steps
.TP
.B \-R age
purge words that resulted in a hit more than 'age' scans ago
.TP
.B \-T name=val
add new form auto-fill rule
.TP
.B \-G max_guess
maximum number of keyword guesses to keep in the jar (default: 256)
.SS Performance settings:
.TP
.B \-g max_conn
maximum simultaneous TCP connections, global (default: 50)
.TP
.B \-m host_conn
maximum simultaneous connections, per target IP (default: 10)
.TP
.B \-f max_fail
maximum number of consecutive HTTP errors to accept (default: 100)
.TP
.B \-t req_tmout
total request response timeout (default: 20 s)
.TP
.B \-w rw_tmout
individual network I/O timeout (default: 10 s)
.TP
.B \-i idle_tmout
timeout on idle HTTP connections (default: 10 s)
.TP
.B \-s s_limit
response size limit (default: 200000 B)
.TP
.B \-h, \-\-help
show a summary of options
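.SH EXAMPLES
.PP
A basic scan, writing the report to a new directory (the URL and path
below are illustrative):
.PP
.nf
skipfish \-o /tmp/scan\-results http://www.example.com/
.fi
.PP
The same scan with HTTP authentication credentials, a custom session
cookie, and a reduced per-host connection cap for fragile targets (all
values are placeholders):
.PP
.nf
skipfish \-A admin:secret \-C session=abc123 \-m 5 \e
         \-o /tmp/scan\-results http://www.example.com/
.fi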
.SH AUTHOR
skipfish was written by Michal Zalewski <lcamtuf@google.com>.
.PP
This manual page was written by Thorsten Schifferdecker <tsd@debian.systs.org>,
for the Debian project (and may be used by others).