1.06b - Minor documentation updates

Steve Pinkham 2010-03-21 19:59:55 -04:00
parent 3720b4840a
commit a7f9000161
4 changed files with 68 additions and 41 deletions

View File

@ -1,7 +1,12 @@
Version 1.06b:
--------------
- Minor documentation updates.
Version 1.05b:
--------------
- Final workaround for FORTIFY_SOURCE on MacOS X.
- Another workaround for FORTIFY_SOURCE on MacOS X.
Version 1.04b:
--------------
@ -14,7 +19,7 @@ Version 1.04b:
Version 1.01b:
--------------
- Workaround for a glitch in glibc "fortify".
- Workaround for a glitch in FORTIFY_SOURCE on Linux.
Version 1.00b:
--------------

README
View File

@ -178,16 +178,27 @@ First and foremost, please do not be evil. Use skipfish only against services
you own, or have permission to test.
Keep in mind that all types of security testing can be disruptive. Although the
scanner is designed not to carry out disruptive malicious attacks, it may
accidentally interfere with the operations of the site. You must accept the
risk, and plan accordingly. Run the scanner against test instances where
feasible, and be prepared to deal with the consequences if things go wrong.
scanner is designed not to carry out malicious attacks, it may accidentally
interfere with the operations of the site. You must accept the risk, and plan
accordingly. Run the scanner against test instances where feasible, and be
prepared to deal with the consequences if things go wrong.
Also note that the tool is meant to be used by security professionals, and is
experimental in nature. It may return false positives or miss obvious security
problems - and even when it operates perfectly, it is simply not meant to be a
point-and-click application. Do not take its output at face value.
How to run the scanner?
Running the tool against vendor-supplied demo sites is not a good way to
evaluate it, as they usually approximate vulnerabilities very imperfectly; we
made no effort to accommodate these cases.
Lastly, the scanner is simply not designed for dealing with rogue and
misbehaving HTTP servers - and offers no guarantees of safe (or sane) behavior
there.
--------------------------
4. How to run the scanner?
--------------------------
To compile it, simply unpack the archive and try make. Chances are, you will
need to install libidn first.
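For example, on a Debian-derived system, the whole sequence might look like
this (the package and tarball names are illustrative and vary by distribution
and release):

$ sudo apt-get install libidn11-dev
$ tar xvzf skipfish-1.06b.tgz
$ cd skipfish
$ make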
@ -357,7 +368,7 @@ willing to see before aborting the scan; and -s sets the maximum length of a
response to fetch and parse (longer responses will be truncated).
--------------------------------
4. But seriously, how to run it?
5. But seriously, how to run it?
--------------------------------
A standard, authenticated scan of a well-designed and self-contained site
@ -380,7 +391,7 @@ $ ./skipfish -B .example.com -O -o output_dir -t 5 http://www.example.com/
For a short list of all command-line options, try ./skipfish -h.
----------------------------------------------------
5. How to interpret and address the issues reported?
6. How to interpret and address the issues reported?
----------------------------------------------------
Most of the problems reported by skipfish should be self-explanatory, assuming you
@ -405,7 +416,7 @@ cannot offer any assistance with the inner workings of third-party web
applications.
---------------------------------------
6. Known limitations / feature wishlist
7. Known limitations / feature wishlist
---------------------------------------
Below is a list of features currently missing in skipfish. If you wish to
@ -447,13 +458,17 @@ improve the tool by contributing code in one of these areas, please let me know:
* Config file support.
-------------------------------------
7. Oy! Something went horribly wrong!
8. Oy! Something went horribly wrong!
-------------------------------------
There is no web crawler so good that there wouldn't be a web framework to one
day set it on fire. If you encounter what appears to be bad behavior (e.g., a
scan that takes forever and generates too many requests, completely bogus nodes
in scan output, or outright crashes), please recompile the scanner with:
in scan output, or outright crashes), please first check this page:
http://code.google.com/p/skipfish/wiki/KnownIssues
If you can't find a satisfactory answer there, recompile the scanner with:
$ make clean debug
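If the problem is a crash and no core file appears, you may also need to raise
the core dump limit before reproducing the issue; a minimal sketch (shell
defaults vary, and the URL is a placeholder):

$ ulimit -c unlimited
$ ./skipfish -o output_dir http://www.example.com/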
@ -474,7 +489,7 @@ $ gdb --batch -ex back ./skipfish core
...and be sure to send the author the output of that last command as well.
-----------------------
8. Credits and feedback
9. Credits and feedback
-----------------------
Skipfish is made possible thanks to the contributions of, and valuable feedback

View File

@ -23,7 +23,7 @@
#ifndef _HAVE_CONFIG_H
#define _HAVE_CONFIG_H
#define VERSION "1.05b"
#define VERSION "1.06b"
#define USE_COLOR 1 /* Use terminal colors */

View File

@ -12,8 +12,8 @@ Dictionary management basics:
"regular" keywords. Extensions are considered just a special subset of
the keyword list.
2) You can specify the dictionary to use with a -W option. The file must
conform to the following format:
2) Use -W to specify the dictionary file to use. The dictionary may be
custom, but must conform to the following format:
type hits total_age last_age keyword
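As an illustration, a few entries in this format might look as follows (the
hit and age values are made up):

e 1 1 1 html
e 0 1 1 tgz
w 12 5 1 admin
w 0 3 3 backup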
@ -26,32 +26,32 @@ Dictionary management basics:
Do not duplicate extensions as keywords - if you already have 'html' as
an 'e' entry, there is no need to also create a 'w' one.
There must be no empty or malformed lines, comments, etc, in the wordlist
There must be no empty or malformed lines, or comments, in the wordlist
file. Extension keywords must have no leading dot (e.g., 'exe', not '.exe'),
and all keywords should NOT be url-encoded (e.g., 'Program Files', not
'Program%20Files'). No keyword should exceed 64 characters.
If you omit -W in the command line, 'skipfish.wl' is assumed.
If you omit -W in the command line, 'skipfish.wl' is assumed. This
file does not exist by default; this is by design.
3) When loading a dictionary, you can use -R option to drop any entries
that had no hits for a specified number of scans.
3) The scanner will automatically learn new keywords and extensions based on
any links discovered during the scan; and will also analyze pages and
extract words to use as keyword candidates.
4) Unless -L is specified in the command line, the scanner will also
automatically learn new keywords and extensions based on any links
discovered during the scan.
A capped number of candidates is kept in memory (you can set the jar size
with the -G option) in FIFO mode, and used for brute-force attacks.
When a particular candidate results in a non-404 hit, it is promoted to
the "real" dictionary; other candidates are discarded at the end of the
scan.
5) Unless -L is specified, the scanner will also analyze pages and extract
words that would serve as keyword guesses. A capped number of guesses
is maintained by the scanner, with older entries being removed from the
list as new ones are found (the size of this jar is adjustable with the
-G option).
You can inhibit this auto-learning behavior by specifying -L in the
command line.
These guesses would be tested along with regular keywords during brute-force
steps. If they result in a non-404 hit at some point, they are promoted to
the "proper" keyword list.
4) Keyword hit counts and age information will be updated at the end of the
scan. This can be prevented with -V.
6) Unless -V is specified in the command line, all newly discovered keywords
are saved back to the input wordlist file, along with their hit statistics.
5) Old dictionary entries with no hits for a specified number of scans can
be purged by specifying the -R <cnt> option.
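Putting these options together: a run that loads a private copy of a supplied
wordlist, disables auto-learning, caps the candidate jar, and purges entries
with no hits in the last ten scans might look like this (paths, the -G value,
and the URL are all illustrative, and this assumes the stock wordlists ship
in a dictionaries/ subdirectory):

$ cp dictionaries/default.wl my-site.wl
$ ./skipfish -W my-site.wl -L -G 512 -R 10 -o output_dir http://www.example.com/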
----------------------------------------------
Dictionaries are used for the following tasks:
@ -61,14 +61,17 @@ Dictionaries are used for the following tasks:
the scanner attempts passing all possible <keyword> values to discover new
files, directories, etc.
2) If you did NOT specify -Y in the command line, the scanner also tests all
possible <keyword>.<extension> pairs in these cases. Note that this may
result in several orders of magnitude more requests, but is the only way
to discover files such as 'backup.tar.gz', 'database.csv', etc.
2) The scanner also tests all possible <keyword>.<extension> pairs. Note that
this results in several orders of magnitude more requests, but is the only
way to discover files such as 'backup.tar.gz', 'database.csv', etc.
In some cases, you might want to inhibit this step. This can be achieved
with the -Y switch.
3) For any non-404 file or directory discovered by any other means, the scanner
also attempts all <node_filename>.<extension> combinations, to discover,
for example, entries such as 'index.php.old'.
for example, entries such as 'index.php.old'. This behavior is independent
of the -Y option.
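For example, a scan that tests bare keywords only, skipping the
<keyword>.<extension> pairing step, could be launched like this (the wordlist
name, output directory, and URL are placeholders):

$ ./skipfish -W my-site.wl -Y -o output_dir http://www.example.com/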
----------------------
Supplied dictionaries:
@ -107,10 +110,14 @@ Supplied dictionaries:
This is useful for quick assessments where no obscure technologies are used.
The principal scan cost is about 42,000 requests for each fuzzed directory.
Using it without -L is recommended, as the list of extensions does not
include standard framework-specific cases (.asp, .jsp, .php, etc), and
these are best learned on the fly.
** This dictionary is strongly recommended for your first experiments with
** skipfish, as it's reasonably lightweight.
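A first run with this lightweight list might therefore look like this (again
working on a copy, with placeholder names, and assuming a dictionaries/
subdirectory):

$ cp dictionaries/minimal.wl first-scan.wl
$ ./skipfish -W first-scan.wl -o output_dir http://www.example.com/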
You can also use this dictionary with the -Y option enabled, approximating the
behavior of most other security scanners; in this case, it will send only
about 1,700 requests per directory, and will look for 25 secondary extensions
@ -127,7 +134,7 @@ Supplied dictionaries:
directory.
In -Y mode, it behaves nearly identical to minimal.wl, but will test a
greater set of extensions on otherwise discovered resources, at a relatively
greater set of extensions on otherwise discovered resources at a relatively
minor expense.
4) Complete extensions dictionary (complete.wl).
@ -139,7 +146,7 @@ Supplied dictionaries:
Useful for comprehensive assessments, at over 150,000 requests for each fuzzed
directory.
In -Y mode - see default.wl, offers the best coverage of all three wordlists
In -Y mode, this dictionary offers the best coverage of all three wordlists
at a relatively low cost.
Of course, you can customize these dictionaries as you see fit. It might be, for
@ -148,7 +155,7 @@ the technologies used by your target host to regular 'w' records.
Whichever option you choose, be sure to make a *copy* of this dictionary, and
load that copy, not the original, via -W. The specified file will be overwritten
with site-specific information (unless -V used).
with site-specific information unless -V is used.
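In other words, either operate on a copy, or pass -V so that the original file
is left untouched; a sketch of the latter (the dictionaries/ path is an
assumption):

$ ./skipfish -W dictionaries/default.wl -V -o output_dir http://www.example.com/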
----------------------------------
Bah, these dictionaries are small!