Version 1.00b as released

Commit fcf0650b5e by Steve Pinkham, 2010-03-20 11:46:08 -04:00
49 changed files with 20483 additions and 0 deletions

COPYING (new file, 202 lines)
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Makefile (new file, 54 lines)
#
# skipfish - Makefile
# -------------------
#
# Author: Michal Zalewski <lcamtuf@google.com>
#
# Copyright 2009, 2010 by Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
PROGNAME = skipfish
OBJFILES = http_client.c database.c crawler.c analysis.c report.c
INCFILES = alloc-inl.h string-inl.h debug.h types.h http_client.h \
database.h crawler.h analysis.h config.h report.h
CFLAGS_GEN = -Wall -funsigned-char -g -ggdb
CFLAGS_DBG = $(CFLAGS_GEN) -DLOG_STDERR=1 -DDEBUG_ALLOCATOR=1
CFLAGS_OPT = $(CFLAGS_GEN) -O3 -Wno-format
LDFLAGS = -lcrypto -lssl -lidn -lz
all: $(PROGNAME)

$(PROGNAME): $(PROGNAME).c $(OBJFILES) $(INCFILES)
	$(CC) $(PROGNAME).c -o $(PROGNAME) $(CFLAGS_OPT) $(OBJFILES) $(LDFLAGS)
	@echo
	@echo "NOTE: See dictionaries/README-FIRST to pick a dictionary for the tool."
	@echo

debug: $(PROGNAME).c $(OBJFILES) $(INCFILES)
	$(CC) $(PROGNAME).c -o $(PROGNAME) $(CFLAGS_DBG) $(OBJFILES) $(LDFLAGS)

clean:
	rm -f $(PROGNAME) *.exe *.o *~ a.out core core.[1-9][0-9]* *.stackdump \
	      LOG same_test
	rm -rf tmpdir

same_test: same_test.c $(OBJFILES) $(INCFILES)
	$(CC) same_test.c -o same_test $(CFLAGS_DBG) $(OBJFILES) $(LDFLAGS)

publish: clean
	cd ..; tar cfvz ~/www/skipfish.tgz skipfish
	chmod 644 ~/www/skipfish.tgz

README (new file, 484 lines)
===========================================
skipfish - web application security scanner
===========================================
http://code.google.com/p/skipfish/
* Written and maintained by Michal Zalewski <lcamtuf@google.com>.
* Copyright 2009, 2010 Google Inc, all rights reserved.
* Released under terms and conditions of the Apache License, version 2.0.
--------------------
1. What is skipfish?
--------------------
Skipfish is an active web application security reconnaissance tool. It prepares
an interactive sitemap for the targeted site by carrying out a recursive crawl
and dictionary-based probes. The resulting map is then annotated with the
output from a number of active (but hopefully non-disruptive) security checks.
The final report generated by the tool is meant to serve as a foundation for
professional web application security assessments.
Why should I bother with this particular tool?
A number of commercial and open source tools with analogous functionality are
readily available (e.g., Nikto, Nessus); stick to the one that suits you best.
That said, skipfish tries to address some of the common problems associated
with web security scanners. Specific advantages include:
* High performance: 500+ requests per second against responsive Internet
targets, 2000+ requests per second on LAN / MAN networks, and 7000+ requests
per second against local instances have been observed, with a very modest
CPU, network, and memory footprint. This can be attributed to:
- Multiplexing single-thread, fully asynchronous network I/O and data
processing model that eliminates memory management, scheduling, and IPC
inefficiencies present in some multi-threaded clients.
- Advanced HTTP/1.1 features such as range requests, content
compression, and keep-alive connections, as well as forced response size
limiting, to keep network-level overhead in check.
- Smart response caching and advanced server behavior heuristics are
used to minimize unnecessary traffic.
- Performance-oriented, pure C implementation, including a custom
HTTP stack.
* Ease of use: skipfish is highly adaptive and reliable. The scanner
features:
- Heuristic recognition of obscure path- and query-based parameter
handling schemes.
- Graceful handling of multi-framework sites where certain paths obey
completely different semantics, or are subject to different filtering
rules.
- Automatic wordlist construction based on site content analysis.
- Probabilistic scanning features to allow periodic, time-bound
assessments of arbitrarily complex sites.
* Well-designed security checks: the tool is meant to provide accurate and
meaningful results:
- Three-step differential probes are preferred to signature checks
for detecting vulnerabilities.
- Ratproxy-style logic is used to spot subtle security problems:
cross-site request forgery, cross-site script inclusion, mixed content,
MIME- and charset mismatch issues, incorrect caching directives, etc.
- Bundled security checks are designed to handle tricky scenarios:
stored XSS (path, parameters, headers), blind SQL or XML injection, or
blind shell injection.
- Report post-processing drastically reduces the noise caused by any
remaining false positives or server gimmicks by identifying repetitive
patterns.
That said, skipfish is not a silver bullet, and may be unsuitable for certain
purposes. For example, it does not satisfy most of the requirements outlined
in the WASC Web Application Security Scanner Evaluation Criteria (some of them
on purpose, some out of necessity); and unlike most other projects of this type,
it does not come with an extensive database of known vulnerabilities for
banner-type checks.
-----------------------------------------------------
2. Most curious! What specific tests are implemented?
-----------------------------------------------------
A rough list of the security checks offered by the tool is outlined below.
* High risk flaws (potentially leading to system compromise):
- Server-side SQL injection (including blind vectors, numerical
parameters).
- Explicit SQL-like syntax in GET or POST parameters.
- Server-side shell command injection (including blind vectors).
- Server-side XML / XPath injection (including blind vectors).
- Format string vulnerabilities.
- Integer overflow vulnerabilities.
* Medium risk flaws (potentially leading to data compromise):
- Stored and reflected XSS vectors in document body (minimal JS XSS
support present).
- Stored and reflected XSS vectors via HTTP redirects.
- Stored and reflected XSS vectors via HTTP header splitting.
- Directory traversal (including constrained vectors).
- Assorted file POIs (server-side sources, configs, etc).
- Attacker-supplied script and CSS inclusion vectors (stored and
reflected).
- External untrusted script and CSS inclusion vectors.
- Mixed content problems on script and CSS resources (optional).
- Incorrect or missing MIME types on renderables.
- Generic MIME types on renderables.
- Incorrect or missing charsets on renderables.
- Conflicting MIME / charset info on renderables.
- Bad caching directives on cookie setting responses.
* Low risk issues (limited impact or low specificity):
- Directory listing bypass vectors.
- Redirection to attacker-supplied URLs (stored and reflected).
- Attacker-supplied embedded content (stored and reflected).
- External untrusted embedded content.
- Mixed content on non-scriptable subresources (optional).
- HTTP credentials in URLs.
- Expired or not-yet-valid SSL certificates.
- HTML forms with no XSRF protection.
- Self-signed SSL certificates.
- SSL certificate host name mismatches.
- Bad caching directives on less sensitive content.
* Internal warnings:
- Failed resource fetch attempts.
- Exceeded crawl limits.
- Failed 404 behavior checks.
- IPS filtering detected.
- Unexpected response variations.
- Seemingly misclassified crawl nodes.
* Non-specific informational entries:
- General SSL certificate information.
- Significantly changing HTTP cookies.
- Changing Server, Via, or X-... headers.
- New 404 signatures.
- Resources that cannot be accessed.
- Resources requiring HTTP authentication.
- Broken links.
- Server errors.
- All external links not classified otherwise (optional).
- All external e-mails (optional).
- All external URL redirectors (optional).
- Links to unknown protocols.
- Form fields that could not be autocompleted.
- All HTML forms detected.
- Password entry forms (for external brute-force).
- Numerical file names (for external brute-force).
- User-supplied links otherwise rendered on a page.
- Incorrect or missing MIME type on less significant content.
- Generic MIME type on less significant content.
- Incorrect or missing charset on less significant content.
- Conflicting MIME / charset information on less significant content.
- OGNL-like parameter passing conventions.
Along with a list of identified issues, skipfish also provides summary
overviews of document types and issue types found; and an interactive sitemap,
with nodes discovered through brute-force denoted in a distinctive way.
-----------------------------------------------------------
3. All right, I want to try it out. What do I need to know?
-----------------------------------------------------------
First and foremost, please do not be evil. Use skipfish only against services
you own, or have permission to test.
Keep in mind that all types of security testing can be disruptive. Although the
scanner is designed not to carry out disruptive malicious attacks, it may
accidentally interfere with the operations of the site. You must accept the
risk, and plan accordingly. Run the scanner against test instances where
feasible, and be prepared to deal with the consequences if things go wrong.
Also note that the tool is meant to be used by security professionals, and is
experimental in nature. It may return false positives or miss obvious security
problems - and even when it operates perfectly, it is simply not meant to be a
point-and-click application. Do not rely on its output at face value.
How to run the scanner?
To compile it, simply unpack the archive and try make. Chances are, you will
need to install libidn first.
Next, you need to copy the desired dictionary file from dictionaries/ to
skipfish.wl. Please read dictionaries/README-FIRST carefully to make the right
choice. This step has a profound impact on the quality of scan results later on.
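As a sketch only, the build-and-setup steps above might look like this on a
Debian-style system (the libidn package name and the dictionary file name are
assumptions; check your distribution and dictionaries/README-FIRST):

```shell
# Install the IDN dependency (package name assumed), build the scanner,
# and pick a starting dictionary (file name is an example only).
sudo apt-get install libidn11-dev
make
cp dictionaries/default.wl skipfish.wl
```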
Once you have the dictionary selected, you can try:
$ ./skipfish -o output_dir http://www.example.com/some/starting/path.txt
Note that you can provide more than one starting URL if so desired; all of them
will be crawled.
In the example above, skipfish will scan the entire www.example.com (including
services on other ports, if linked to from the main page), and write a report
to output_dir/index.html. You can then view this report with your favorite
browser (JavaScript must be enabled). The index.html file is static; actual
results are stored as a hierarchy of JSON files, suitable for machine
processing if need be.
Some sites may require authentication; for simple HTTP credentials, you can try:
$ ./skipfish -A user:pass ...other parameters...
Alternatively, if the site relies on HTTP cookies instead, log in with your
browser or a simple curl script, and then provide skipfish with a session
cookie:
$ ./skipfish -C name=val ...other parameters...
Other session cookies may be passed the same way, one for each -C option.
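A minimal sketch of that cookie workflow, assuming a hypothetical login form
and cookie name (adjust the URL, field names, and cookie value to match your
site):

```shell
# Log in once with curl and capture the cookie jar (all names hypothetical).
curl -s -c cookies.txt -d 'user=me&pass=secret' http://www.example.com/login
# Then pass the relevant session cookie to skipfish by hand:
./skipfish -C "session=0123456789abcdef" -o output_dir http://www.example.com/
```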
Certain URLs on the site may log out your session; you can combat this in two
ways: by using the -N option, which causes the scanner to reject attempts to
set or delete cookies; or with the -X parameter, which prevents matching URLs
from being fetched:
$ ./skipfish -X /logout/logout.aspx ...other parameters...
The -X option is also useful for speeding up your scans by excluding /icons/,
/doc/, /manuals/, and other standard, mundane locations along these lines. In
general, you can use -X, plus -I (only spider URLs matching a substring) and -S
(ignore links on pages where a substring appears in response body) to limit the
scope of a scan any way you like - including restricting it only to a specific
protocol and port:
$ ./skipfish -I http://example.com:1234/ ...other parameters...
Another useful scoping option is -D - allowing you to specify additional hosts
or domains to consider in-scope for the test. By default, all hosts appearing
in the command-line URLs are added to the list - but you can use -D to broaden
these rules, for example:
$ ./skipfish -D test2.example.com -o output-dir http://test1.example.com/
...or, for a domain wildcard match, use:
$ ./skipfish -D .example.com -o output-dir http://test1.example.com/
In some cases, you do not want to actually crawl a third-party domain, but you
trust the owner of that domain enough not to worry about cross-domain content
inclusion from that location. To suppress warnings, you can use the -B option,
for example:
$ ./skipfish -B .google-analytics.com -B .googleapis.com ...other parameters...
By default, skipfish sends minimalistic HTTP headers to reduce the amount of
data exchanged over the wire; some sites examine User-Agent strings or header
ordering to reject unsupported clients, however. In such a case, you can use -b
ie or -b ffox to mimic one of the two popular browsers.
When it comes to customizing your HTTP requests, you can also use the -H option
to insert any additional, non-standard headers; or -F to define a custom
mapping between a host and an IP (bypassing the resolver). The latter feature
is particularly useful for not-yet-launched or legacy services.
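For instance, mimicking MSIE while adding a non-standard header and pinning a
host name to an IP might look like this (the header, host, and IP are
hypothetical, and the name=val / host=IP argument forms are assumed from the
descriptions above):

```shell
# Pretend to be MSIE, add a custom header, and bypass the resolver for a
# not-yet-launched host (values are hypothetical).
./skipfish -b ie -H X-Scan-Id=assessment-42 \
  -F staging.example.com=10.0.0.5 \
  -o output_dir http://staging.example.com/
```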
Some sites may be too big to scan in a reasonable timeframe. If the site
features well-defined tarpits - for example, 100,000 nearly identical user
profiles as a part of a social network - these specific locations can be
excluded with -X or -S. In other cases, you may need to resort to other
settings: -d limits crawl depth to a specified number of subdirectories; -c
limits the number of children per directory; and -r limits the total number of
requests to send in a scan.
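These limits can be combined; for example, a very large site might be scanned
with something like the following (all values purely illustrative):

```shell
# Cap crawl depth at 5 subdirectories, 512 children per directory, and
# 200,000 total requests (illustrative values).
./skipfish -d 5 -c 512 -r 200000 -o output_dir http://big.example.com/
```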
An interesting option is available for repeated assessments: -p. By specifying
a percentage between 1 and 100%, it is possible to tell the crawler to follow
fewer than 100% of all links, and try fewer than 100% of all dictionary
entries. This - naturally - limits the completeness of a scan, but unlike most
other settings, it does so in a balanced, non-deterministic manner. It is
extremely useful when you are setting up time-bound, but periodic assessments
of your infrastructure. Another related option is -q, which sets the initial
random seed for the crawler to a specified value. This can be used to exactly
reproduce a previous scan to compare results. Randomness is relied upon most
heavily in the -p mode, but also for making a couple of other scan management
decisions elsewhere.
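A repeatable partial scan might therefore look like this (the exact argument
formats accepted by -p and -q are assumptions here):

```shell
# Probabilistic 10% coverage with a fixed seed, so a later run with the
# same seed can reproduce this scan for comparison (seed is arbitrary).
./skipfish -p 10 -q 0x12345678 -o output_dir http://www.example.com/
```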
Some particularly complex (or broken) services may involve a very high number
of identical or nearly identical pages. Although these occurrences are by
default grayed out in the report, they still use up some screen real estate
and take
a while to process on JavaScript level. In such extreme cases, you may use the
-Q option to suppress reporting of duplicate nodes altogether, before the
report is written. This may give you a less comprehensive understanding of how
the site is organized, but has no impact on test coverage.
In certain quick assessments, you might also have no interest in paying any
particular attention to the desired functionality of the site - hoping to
explore non-linked secrets only. In such a case, you may specify -P to inhibit
all HTML parsing. This limits the coverage and takes away the ability for the
scanner to learn new keywords by looking at the HTML, but speeds up the test
dramatically. Another similarly crippling option that reduces the risk of
persistent effects of a scan is -O, which inhibits all form parsing and
submission steps.
By default, skipfish complains loudly about all MIME or character set
mismatches on renderable documents, and classifies many of them as "medium
risk"; this is because, if any user-controlled content is returned, the
situation could lead to cross-site scripting attacks in certain browsers. On
some poorly designed and maintained sites, this may contribute too much noise;
if so, you may use -J to mark these issues as "low risk" unless the scanner
explicitly sees its own user input being echoed back on the resulting page.
This may miss many subtle attack vectors, though.
Some sites that handle sensitive user data care about SSL - and about getting
it right. Skipfish may optionally assist you in figuring out problematic mixed
content scenarios - use the -M option to enable this. The scanner will complain
about situations such as http:// scripts being loaded on https:// pages - but
will disregard non-risk scenarios such as images.
Likewise, certain pedantic sites may care about cases where caching is
restricted on the HTTP/1.1 level, but no explicit HTTP/1.0 caching directive
is given. Specifying -E on the command line causes skipfish to log all such
cases carefully.
Lastly, in some assessments that involve self-contained sites without extensive
user content, the auditor may care about any external e-mails or HTTP links
seen, even if they have no immediate security impact. Use the -U option to have
these logged.
Dictionary management is a special topic, and - as mentioned - is covered in
more detail in dictionaries/README-FIRST. Please read that file before
proceeding. Some of the relevant options include -W to specify a custom
wordlist, -L to suppress auto-learning, -V to suppress dictionary updates, -G
to limit the keyword guess jar size, -R to drop old dictionary entries, and -Y
to inhibit expensive $keyword.$extension fuzzing.
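As a sketch only (dictionaries/README-FIRST remains the authoritative guide),
a scan with a custom, frozen dictionary might combine these flags as follows
(the wordlist file name is hypothetical):

```shell
# Custom wordlist (-W), no auto-learning (-L), no dictionary updates (-V).
./skipfish -W custom.wl -L -V -o output_dir http://www.example.com/
```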
Skipfish also features a form auto-completion mechanism in order to maximize
scan coverage. The values should be non-malicious, as they are not meant to
implement security checks - but rather, to get past input validation logic. You
can define additional rules, or override existing ones, with the -T option (-T
form_field_name=field_value, e.g. -T login=test123 -T password=test321 -
although note that -C and -A are a much better method of logging in).
There is also a handful of performance-related options. Use -g to set the
maximum number of connections to maintain, globally, to all targets (it is
sensible to keep this under 50 or so to avoid overwhelming the TCP/IP stack on
your system or on the nearby NAT / firewall devices); and -m to set the per-IP
limit (experiment a bit: 2-4 is usually good for localhost, 4-8 for local
networks, 10-20 for external targets, 30+ for really lagged or non-keep-alive
hosts). You can also use -w to set the I/O timeout (i.e., skipfish will wait
only so long for an individual read or write), and -t to set the total request
timeout, to account for really slow or really fast sites.
Lastly, -f controls the maximum number of consecutive HTTP errors you are
willing to see before aborting the scan; and -s sets the maximum length of a
response to fetch and parse (longer responses will be truncated).
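Putting the performance and robustness knobs together, following the rough
guidance above (all values illustrative):

```shell
# 40 connections globally, 10 per IP, 10 s I/O timeout, 30 s total request
# timeout; abort after 20 consecutive errors, truncate responses > 200 kB.
./skipfish -g 40 -m 10 -w 10 -t 30 -f 20 -s 200000 \
  -o output_dir http://www.example.com/
```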
--------------------------------
4. But seriously, how to run it?
--------------------------------
A standard, authenticated scan of a well-designed and self-contained site
(warns about all external links, e-mails, mixed content, and caching header
issues):
$ ./skipfish -MEU -C "AuthCookie=value" -X /logout.aspx -o output_dir \
http://www.example.com/
Five-connection crawl, but no brute-force; pretending to be MSIE and caring
less about ambiguous MIME or character set mismatches:
$ ./skipfish -m 5 -LVJ -W /dev/null -o output_dir -b ie http://www.example.com/
Brute force only (no HTML link extraction), trusting links within example.com
and timing out after 5 seconds:
$ ./skipfish -B .example.com -O -o output_dir -t 5 http://www.example.com/
For a short list of all command-line options, try ./skipfish -h.
----------------------------------------------------
5. How to interpret and address the issues reported?
----------------------------------------------------
Most of the problems reported by skipfish should be self-explanatory, assuming
you have a good grasp of the fundamentals of web security. If you need a quick
refresher on some of the more complicated topics, such as MIME sniffing, you
may enjoy our comprehensive Browser Security Handbook as a starting point:
http://code.google.com/p/browsersec/
If you still need assistance, there are several organizations that put a
considerable effort into documenting and explaining many of the common web
security threats, and advising the public on how to address them. I encourage
you to refer to the materials published by OWASP and Web Application Security
Consortium, amongst others:
* http://www.owasp.org/index.php/Category:Principle
* http://www.owasp.org/index.php/Category:OWASP_Guide_Project
* http://www.webappsec.org/projects/articles/
Although I am happy to diagnose problems with the scanner itself, I regrettably
cannot offer any assistance with the inner workings of third-party web
applications.
---------------------------------------
6. Known limitations / feature wishlist
---------------------------------------
Below is a list of features currently missing in skipfish. If you wish to
improve the tool by contributing code in one of these areas, please let me know:
* Buffer overflow checks: after careful consideration, I suspect there is
no reliable way to test for buffer overflows remotely. Much like the actual
fault condition we are looking for, proper buffer size checks may also
result in uncaught exceptions, 500 messages, etc. I would love to be proved
wrong, though.
* Fully-fledged JavaScript XSS detection: several rudimentary checks are
present in the code, but there is no proper script engine to evaluate
expressions and DOM access built in.
* Variable length encoding character consumption / injection bugs: these
problems seem to be largely addressed on browser level at this point, so
they were much lower priority at the time of this writing.
* Security checks and link extraction for third-party, plugin-based content
(Flash, Java, PDF, etc).
* Password brute-force and numerical filename brute-force probes.
* Search engine integration (vhosts, starting paths).
* VIEWSTATE decoding.
* NTLM and digest authentication.
* Proxy support: somewhat incompatible with performance control features
currently employed by skipfish; but in the long run, should be provided as
a last-resort option.
* Scan resume option.
* Standalone installation (make install) support.
* Config file support.
-------------------------------------
7. Oy! Something went horribly wrong!
-------------------------------------
There is no web crawler so good that there wouldn't be a web framework to one
day set it on fire. If you encounter what appears to be bad behavior (e.g., a
scan that takes forever and generates too many requests, completely bogus nodes
in scan output, or outright crashes), please recompile the scanner with:
$ make clean debug
...and re-run it this way:
$ ./skipfish [...previous options...] 2>logfile.txt
You can then inspect logfile.txt to get an idea of what went wrong; if it looks
like a scanner problem, please scrub any sensitive information from the log
file and send it to the author.
If the scanner crashed, please recompile it as indicated above, and then type:
$ ulimit -c unlimited
$ ./skipfish [...previous options...] 2>logfile.txt
$ gdb --batch -ex back ./skipfish core
...and be sure to send the author the output of that last command as well.
-----------------------
8. Credits and feedback
-----------------------
Skipfish is made possible thanks to the contributions of, and valuable feedback
from, Google's information security engineering team.
If you have any bug reports, questions, suggestions, or concerns regarding the
application, the author can be reached at lcamtuf@google.com.

alloc-inl.h (new file, 294 lines)
/*
skipfish - error-checking, memory-zeroing alloc routines
--------------------------------------------------------
Note: when DEBUG_ALLOCATOR is set, a horribly slow but pedantic
allocation tracker is used. Don't enable this in production.
Author: Michal Zalewski <lcamtuf@google.com>
Copyright 2009, 2010 by Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#ifndef _HAVE_ALLOC_INL_H
#define _HAVE_ALLOC_INL_H
#include <stdlib.h>
#ifdef __APPLE__
#include <malloc/malloc.h>
#else
#include <malloc.h>
#endif /* __APPLE__ */
#include <string.h>
#include "config.h"
#include "types.h"
#include "debug.h"
#define ALLOC_CHECK_SIZE(_s) do { \
if ((_s) > MAX_ALLOC) \
FATAL("bad alloc request: %u bytes", (_s)); \
} while (0)
#define ALLOC_CHECK_RESULT(_r,_s) do { \
if (!(_r)) \
FATAL("out of memory: can't allocate %u bytes", (_s)); \
} while (0)
#ifdef __APPLE__
#define malloc_usable_size malloc_size
#endif /* __APPLE__ */
static inline void* __DFL_ck_alloc(u32 size) {
void* ret;
u32 usable;
if (!size) return NULL;
ALLOC_CHECK_SIZE(size);
ret = malloc(size);
ALLOC_CHECK_RESULT(ret, size);
usable = malloc_usable_size(ret);
memset(ret, 0, usable);
return ret;
}
static inline void* __DFL_ck_realloc(void* orig, u32 size) {
void* ret;
u32 old_usable = 0,
new_usable;
if (!size) {
free(orig);
return NULL;
}
if (orig) old_usable = malloc_usable_size(orig);
ALLOC_CHECK_SIZE(size);
ret = realloc(orig, size);
ALLOC_CHECK_RESULT(ret, size);
new_usable = malloc_usable_size(ret);
if (new_usable > old_usable)
memset(ret + old_usable, 0, new_usable - old_usable);
return ret;
}
static inline void* __DFL_ck_strdup(u8* str) {
void* ret;
u32 size;
u32 usable;
if (!str) return NULL;
size = strlen((char*)str) + 1;
ALLOC_CHECK_SIZE(size);
ret = malloc(size);
ALLOC_CHECK_RESULT(ret, size);
usable = malloc_usable_size(ret);
memcpy(ret, str, size);
if (usable > size)
memset(ret + size, 0, usable - size);
return ret;
}
static inline void* __DFL_ck_memdup(u8* mem, u32 size) {
void* ret;
u32 usable;
if (!mem || !size) return NULL;
ALLOC_CHECK_SIZE(size);
ret = malloc(size);
ALLOC_CHECK_RESULT(ret, size);
usable = malloc_usable_size(ret);
memcpy(ret, mem, size);
if (usable > size)
memset(ret + size, 0, usable - size);
return ret;
}
#ifndef DEBUG_ALLOCATOR
/* Non-debugging mode - straightforward aliasing. */
#define ck_alloc __DFL_ck_alloc
#define ck_realloc __DFL_ck_realloc
#define ck_strdup __DFL_ck_strdup
#define ck_memdup __DFL_ck_memdup
#define ck_free free
#else
/* Debugging mode - include additional structures and support code. */
#define ALLOC_BUCKETS 1024
struct __AD_trk_obj {
void *ptr;
char *file, *func;
u32 line;
};
extern struct __AD_trk_obj* __AD_trk[ALLOC_BUCKETS];
extern u32 __AD_trk_cnt[ALLOC_BUCKETS];
#define __AD_H(_ptr) (((((u32)(long)(_ptr)) >> 16) ^ ((u32)(long)(_ptr))) % \
ALLOC_BUCKETS)
/* Adds a new entry to the list of allocated objects. */
static inline void __AD_alloc_buf(void* ptr, const char* file, const char* func,
u32 line) {
u32 i, b;
if (!ptr) return;
b = __AD_H(ptr);
for (i=0;i<__AD_trk_cnt[b];i++)
if (!__AD_trk[b][i].ptr) {
__AD_trk[b][i].ptr = ptr;
__AD_trk[b][i].file = (char*)file;
__AD_trk[b][i].func = (char*)func;
__AD_trk[b][i].line = line;
return;
}
__AD_trk[b] = __DFL_ck_realloc(__AD_trk[b],
(__AD_trk_cnt[b] + 1) * sizeof(struct __AD_trk_obj));
__AD_trk[b][__AD_trk_cnt[b]].ptr = ptr;
__AD_trk[b][__AD_trk_cnt[b]].file = (char*)file;
__AD_trk[b][__AD_trk_cnt[b]].func = (char*)func;
__AD_trk[b][__AD_trk_cnt[b]].line = line;
__AD_trk_cnt[b]++;
}
/* Removes entry from the list of allocated objects. */
static inline void __AD_free_buf(void* ptr, const char* file, const char* func,
u32 line) {
u32 i, b;
if (!ptr) return;
b = __AD_H(ptr);
for (i=0;i<__AD_trk_cnt[b];i++)
if (__AD_trk[b][i].ptr == ptr) {
__AD_trk[b][i].ptr = 0;
return;
}
WARN("ALLOC: Attempt to free non-allocated memory in %s (%s:%u)",
func, file, line);
}
/* Does a final report on all non-deallocated objects. */
static inline void __AD_report(void) {
u32 i, b;
fflush(0);
for (b=0;b<ALLOC_BUCKETS;b++)
for (i=0;i<__AD_trk_cnt[b];i++)
if (__AD_trk[b][i].ptr)
WARN("ALLOC: Memory never freed, created in %s (%s:%u)",
__AD_trk[b][i].func, __AD_trk[b][i].file, __AD_trk[b][i].line);
}
/* Simple wrappers for non-debugging functions: */
static inline void* __AD_ck_alloc(u32 size, const char* file, const char* func,
u32 line) {
void* ret = __DFL_ck_alloc(size);
__AD_alloc_buf(ret, file, func, line);
return ret;
}
static inline void* __AD_ck_realloc(void* orig, u32 size, const char* file,
const char* func, u32 line) {
void* ret = __DFL_ck_realloc(orig, size);
__AD_free_buf(orig, file, func, line);
__AD_alloc_buf(ret, file, func, line);
return ret;
}
static inline void* __AD_ck_strdup(u8* str, const char* file, const char* func,
u32 line) {
void* ret = __DFL_ck_strdup(str);
__AD_alloc_buf(ret, file, func, line);
return ret;
}
static inline void* __AD_ck_memdup(u8* mem, u32 size, const char* file,
const char* func, u32 line) {
void* ret = __DFL_ck_memdup(mem, size);
__AD_alloc_buf(ret, file, func, line);
return ret;
}
static inline void __AD_ck_free(void* ptr, const char* file,
const char* func, u32 line) {
__AD_free_buf(ptr, file, func, line);
free(ptr);
}
/* Populates file / function / line number data to *_d wrapper calls: */
#define ck_alloc(_p1) \
__AD_ck_alloc(_p1, __FILE__, __FUNCTION__, __LINE__)
#define ck_realloc(_p1, _p2) \
__AD_ck_realloc(_p1, _p2, __FILE__, __FUNCTION__, __LINE__)
#define ck_strdup(_p1) \
__AD_ck_strdup(_p1, __FILE__, __FUNCTION__, __LINE__)
#define ck_memdup(_p1, _p2) \
__AD_ck_memdup(_p1, _p2, __FILE__, __FUNCTION__, __LINE__)
#define ck_free(_p1) \
__AD_ck_free(_p1, __FILE__, __FUNCTION__, __LINE__)
#endif /* ^!DEBUG_ALLOCATOR */
#endif /* ! _HAVE_ALLOC_INL_H */

analysis.c (new file, 2422 lines; diff suppressed because it is too large)

analysis.h (new file, 198 lines)
/*
skipfish - content analysis
---------------------------
Author: Michal Zalewski <lcamtuf@google.com>
Copyright 2009, 2010 by Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#ifndef _HAVE_ANALYSIS_H
#define _HAVE_ANALYSIS_H
#include "types.h"
#include "http_client.h"
#include "database.h"
#include "crawler.h"
extern u8 no_parse, /* Disable HTML link detection */
warn_mixed, /* Warn on mixed content */
log_ext_urls, /* Log all external URLs */
no_forms, /* Do not submit forms */
relaxed_mime, /* Relax about cset / mime */
pedantic_cache; /* Match HTTP/1.0 and HTTP/1.1 */
/* Helper macros to group various useful checks: */
#define PIVOT_CHECKS(_req, _res) do { \
pivot_header_checks(_req, _res); \
content_checks(_req, _res); \
scrape_response(_req, _res); \
} while (0)
#define RESP_CHECKS(_req, _res) do { \
content_checks(_req, _res); \
scrape_response(_req, _res); \
} while (0)
/* Runs some rudimentary checks on top-level pivot HTTP responses. */
void pivot_header_checks(struct http_request* req,
struct http_response* res);
/* Adds a new item to the form hint system. */
void add_form_hint(u8* name, u8* value);
/* Analyzes response headers (Location, etc), body to extract new links,
keyword guesses, examine forms, mixed content issues, etc. */
void scrape_response(struct http_request* req, struct http_response* res);
/* Analyzes response headers and body to detect stored XSS, redirection,
401, 500 codes, exception messages, source code, offensive comments, etc. */
void content_checks(struct http_request* req, struct http_response* res);
/* MIME detector output codes: */
#define MIME_NONE 0 /* Checks missing or failed */
#define MIME_ASC_GENERIC 1 /* Unknown, but mostly 7bit */
#define MIME_ASC_HTML 2 /* Plain, non-XML HTML */
#define MIME_ASC_JAVASCRIPT 3 /* JavaScript or JSON */
#define MIME_ASC_CSS 4 /* Cascading Style Sheets */
#define MIME_ASC_POSTSCRIPT 5 /* PostScript */
#define MIME_ASC_RTF 6 /* Rich Text Format */
#define MIME_XML_GENERIC 7 /* XML not recognized otherwise */
#define MIME_XML_OPENSEARCH 8 /* OpenSearch specification */
#define MIME_XML_RSS 9 /* Really Simple Syndication */
#define MIME_XML_ATOM 10 /* Atom feeds */
#define MIME_XML_WML 11 /* WAP WML */
#define MIME_XML_CROSSDOMAIN 12 /* crossdomain.xml (Flash) */
#define MIME_XML_SVG 13 /* Scalable Vector Graphics */
#define MIME_XML_XHTML 14 /* XML-based XHTML */
#define MIME_IMG_JPEG 15 /* JPEG */
#define MIME_IMG_GIF 16 /* GIF */
#define MIME_IMG_PNG 17 /* PNG */
#define MIME_IMG_BMP 18 /* Windows BMP (including ICO) */
#define MIME_IMG_TIFF 19 /* TIFF */
#define MIME_IMG_ANI 20 /* RIFF: ANI animated cursor */
#define MIME_AV_WAV 21 /* RIFF: WAV sound file */
#define MIME_AV_MP3 22 /* MPEG audio (commonly MP3) */
#define MIME_AV_OGG 23 /* Ogg Vorbis */
#define MIME_AV_RA 24 /* Real audio */
#define MIME_AV_AVI 25 /* RIFF: AVI container */
#define MIME_AV_MPEG 26 /* MPEG video */
#define MIME_AV_QT 27 /* QuickTime */
#define MIME_AV_FLV 28 /* Flash video */
#define MIME_AV_RV 29 /* Real video */
#define MIME_AV_WMEDIA 30 /* Windows Media audio */
#define MIME_EXT_FLASH 31 /* Adobe Flash */
#define MIME_EXT_PDF 32 /* Adobe PDF */
#define MIME_EXT_JAR 33 /* Sun Java archive */
#define MIME_EXT_CLASS 34 /* Sun Java class */
#define MIME_EXT_WORD 35 /* Microsoft Word */
#define MIME_EXT_EXCEL 36 /* Microsoft Excel */
#define MIME_EXT_PPNT 37 /* Microsoft PowerPoint */
#define MIME_BIN_ZIP 38 /* ZIP not recognized otherwise */
#define MIME_BIN_GZIP 39 /* GZIP */
#define MIME_BIN_CAB 40 /* CAB */
#define MIME_BIN_GENERIC 41 /* Binary, unknown type */
#define MIME_COUNT (MIME_BIN_GENERIC + 1)
/* NULL-terminated MIME mapping sets. Canonical name should go first; do not
put misspelled or made up entries here. This is used to match server intent
with the outcome of MIME sniffing. */
#ifdef _VIA_ANALYSIS_C
static char* mime_map[MIME_COUNT][8] = {
/* MIME_NONE */ { 0 },
/* MIME_ASC_GENERIC */ { "text/plain", "?text/x-", "?text/vnd.",
"?application/x-httpd-", "text/csv", 0 },
/* MIME_ASC_HTML */ { "text/html", 0 },
/* MIME_ASC_JAVASCRIPT */ { "application/javascript",
"application/x-javascript",
"application/json", "text/javascript", 0 },
/* MIME_ASC_CSS */ { "text/css", 0 },
/* MIME_ASC_POSTSCRIPT */ { "application/postscript", 0 },
/* MIME_ASC_RTF */ { "text/rtf", "application/rtf", 0 },
/* MIME_XML_GENERIC */ { "text/xml", "application/xml", 0 },
/* MIME_XML_OPENSEARCH */ { "application/opensearchdescription+xml", 0 },
/* MIME_XML_RSS */ { "application/rss+xml", 0 },
/* MIME_XML_ATOM */ { "application/atom+xml", 0 },
/* MIME_XML_WML */ { "text/vnd.wap.wml", 0 },
/* MIME_XML_CROSSDOMAIN */ { "text/x-cross-domain-policy", 0 },
/* MIME_XML_SVG */ { "image/svg+xml", 0 },
/* MIME_XML_XHTML */ { "application/xhtml+xml", 0 },
/* MIME_IMG_JPEG */ { "image/jpeg", 0 },
/* MIME_IMG_GIF */ { "image/gif", 0 },
/* MIME_IMG_PNG */ { "image/png", 0 },
/* MIME_IMG_BMP */ { "image/x-ms-bmp", "image/bmp", "image/x-icon", 0 },
/* MIME_IMG_TIFF */ { "image/tiff", 0 },
/* MIME_IMG_ANI */ { "application/x-navi-animation", 0 },
/* MIME_AV_WAV */ { "audio/x-wav", "audio/wav", 0 },
/* MIME_AV_MP3 */ { "audio/mpeg", 0 },
/* MIME_AV_OGG */ { "application/ogg", 0 },
/* MIME_AV_RA */ { "audio/vnd.rn-realaudio",
"audio/x-pn-realaudio", "audio/x-realaudio", 0 },
/* MIME_AV_AVI */ { "video/avi", 0 },
/* MIME_AV_MPEG */ { "video/mpeg", "video/mp4", 0 },
/* MIME_AV_QT */ { "video/quicktime", 0 },
/* MIME_AV_FLV */ { "video/flv", "video/x-flv", 0 },
/* MIME_AV_RV */ { "video/vnd.rn-realvideo", 0 },
/* MIME_AV_WMEDIA */ { "video/x-ms-wmv", "audio/x-ms-wma",
"video/x-ms-asf", 0 },
/* MIME_EXT_FLASH */ { "application/x-shockwave-flash", 0 },
/* MIME_EXT_PDF */ { "application/pdf", 0 },
/* MIME_EXT_JAR */ { "application/java-archive", 0 },
/* MIME_EXT_CLASS */ { "application/java-vm", 0 },
/* MIME_EXT_WORD */ { "application/msword", 0 },
/* MIME_EXT_EXCEL */ { "application/vnd.ms-excel", 0 },
/* MIME_EXT_PPNT */ { "application/vnd.ms-powerpoint", 0 },
/* MIME_BIN_ZIP */ { "application/zip", "application/x-zip-compressed", 0 },
/* MIME_BIN_GZIP */ { "application/x-gzip", "application/x-gunzip",
"application/x-tar-gz", 0 },
/* MIME_BIN_CAB */ { "application/vnd.ms-cab-compressed", 0 },
/* MIME_BIN_GENERIC */ { "application/binary", "application/octet-stream",
0 }
};
#endif /* _VIA_ANALYSIS_C */
#endif /* !_HAVE_ANALYSIS_H */

assets/COPYING (new file, 679 lines)
Icons used in HTML reports are copyrighted by the Crystal Project, and
distributed under terms and conditions of the GNU Lesser General Public
License. See http://www.everaldo.com/crystal/ for details.
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.

BIN assets/i_high.png Normal file (781 B)
BIN assets/i_low.png Normal file (766 B)
BIN assets/i_medium.png Normal file (770 B)
BIN assets/i_note.png Normal file (3.3 KiB)
BIN assets/i_warn.png Normal file (3.2 KiB)

758
assets/index.html Normal file
View File

@ -0,0 +1,758 @@
<html>
<head>
<!--
skipfish - report renderer
--------------------------
Author: Michal Zalewski <lcamtuf@google.com>
Copyright 2009, 2010 by Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<title>Skipfish - scan results browser</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<style>
body {
font-family: 'Georgia', 'Arial', 'Helvetica';
background-color: white;
}
.hdr_table {
float: right;
border: 1px dotted #C0C000;
background-color: #FFFFF0;
font-size: 80%;
}
.summary1 {
padding: 0 1em 0 1em;
}
.summary2 {
color: teal;
padding: 0 1em 0 0;
}
img {
vertical-align: middle;
padding: 0 .5em 0 0;
}
.i2 {
vertical-align: middle;
padding: 0 0 0 .5em;
}
.i3 {
vertical-align: middle;
padding: 0 .2em 0 .2em;
}
.idupe {
opacity: 0.4;
filter: alpha(opacity=40);
}
.child_ctr, .child_ctr_exp {
padding: 0.2ex 0.5em 0.2ex 0.5em;
border: 1px solid white;
white-space: nowrap;
}
td.child_ctr_exp:hover {
border: 1px solid #C0C0C0;
cursor: pointer;
}
td.child_ctr:hover {
}
.name {
font-weight: bold;
}
span.sum_name {
font-weight: bold;
border: 1px solid white;
}
span.sum_name:hover {
font-weight: bold;
cursor: pointer;
border: 1px solid #C0C0C0;
}
.dupe_name {
color: gray;
}
.fetch_info {
font-size: 70%;
color: gray;
}
.fetch_data {
color: teal;
}
.issue_desc {
font-weight: bold;
}
.comment {
color: crimson;
font-size: 70%;
}
a { text-decoration: none; }
a:hover { text-decoration: underline; }
h2 {
border-width: 0 0 1px 0;
border-style: solid;
border-color: crimson;
}
ol {
margin: 0.5em 0 0 0;
padding: 0 0 0 1.5em;
}
.issue_line {
border-width: 0 0 1px 0;
margin: 0.2em 0 0.2em 0;
border-style: dashed;
border-color: red;
}
.s_cnt {
font-size: 80%;
color: teal;
}
.req_div {
position: absolute;
top: 0;
left: 0;
margin: 5% 0 0 10%;
width: 75%;
height: 80%;
border: 3px outset teal;
display: none;
background-color: white;
z-index: 10;
padding: 10px;
}
.req_hdr {
background-color: #FFFFE0;
border: 1px outset teal;
font-size: 70%;
text-align: center;
padding: 2px;
cursor: pointer;
}
.req_txtarea {
border: 1px inset teal;
padding: 2px;
margin: 1% 0px 0px 0px;
width: 100%;
height: 95%;
}
div.req_hdr:hover {
border: 1px inset teal;
}
.cover {
opacity: 0.7;
filter: alpha(opacity=70);
background-color: #F0F0F0;
position: absolute;
top: 0;
left: 0;
height: 100%;
width: 100%;
display: none;
}
.supp_cat {
color: #606060;
}
</style>
<script src="summary.js"></script>
<script src="samples.js"></script>
<script>
var c_count = 0;
var ignore_click = false;
var max_samples = 100;
/* Descriptions for issues reported by the scanner. */
var issue_desc= {
"10101": "SSL certificate issuer information",
"10201": "New HTTP cookie added",
"10202": "New 'Server' header value seen",
"10203": "New 'Via' header value seen",
"10204": "New 'X-*' header value seen",
"10205": "New 404 signature seen",
"10401": "Resource not directly accessible",
"10402": "HTTP authentication required",
"10403": "Server error triggered",
"10501": "All external links",
"10502": "External URL redirector",
"10503": "All e-mail addresses",
"10504": "Links to unknown protocols",
"10505": "Unknown form field (can't autocomplete)",
"10601": "HTML form found",
"10602": "Password entry form - consider brute-force",
"10701": "User-supplied link rendered on a page",
"10801": "Incorrect or missing MIME type (low risk)",
"10802": "Generic MIME used (low risk)",
"10803": "Incorrect or missing charset (low risk)",
"10804": "Conflicting MIME / charset info (low risk)",
"10901": "Numerical filename - consider enumerating",
"10902": "OGNL-like parameter behavior",
"20101": "Resource fetch failed",
"20102": "Limits exceeded, fetch suppressed",
"20201": "Behavior checks failed",
"20202": "IPS filtering enabled",
"20203": "IPS filtering disabled again",
"20204": "Response varies randomly, skipping injection checks",
"20301": "Node should be a directory, detection error?",
"30101": "HTTP credentials seen in URLs",
"30201": "SSL certificate expired or not yet valid",
"30202": "Self-signed SSL certificate",
"30203": "SSL certificate host name mismatch",
"30204": "No SSL certificate data found",
"30301": "Directory listing restrictions bypassed",
"30401": "Redirection to attacker-supplied URLs",
"30402": "Attacker-supplied URLs in embedded content (lower risk)",
"30501": "External content embedded on a page (lower risk)",
"30502": "Mixed content embedded on a page (lower risk)",
"30601": "HTML form with no apparent XSRF protection",
"30602": "JSON response with no apparent XSSI protection",
"30701": "Incorrect caching directives (lower risk)",
"40101": "XSS vector in document body",
"40102": "XSS vector via arbitrary URLs",
"40103": "HTTP response header splitting",
"40104": "Attacker-supplied URLs in embedded content (higher risk)",
"40201": "External content embedded on a page (higher risk)",
"40202": "Mixed content embedded on a page (higher risk)",
"40301": "Incorrect or missing MIME type (higher risk)",
"40302": "Generic MIME type (higher risk)",
"40304": "Incorrect or missing charset (higher risk)",
"40305": "Conflicting MIME / charset info (higher risk)",
"40401": "Interesting file",
"40402": "Interesting server message",
"40501": "Directory traversal possible",
"40601": "Incorrect caching directives (higher risk)",
"50101": "Server-side XML injection vector",
"50102": "Shell injection vector",
"50103": "SQL injection vector",
"50104": "Format string vector",
"50105": "Integer overflow vector",
"50201": "SQL query or similar syntax in parameters"
};
/* Simple HTML escaping routine. */
function H(str) { return str.replace(/&/g,'&amp;').replace(/</g,'&lt;').replace(/"/g,'&quot;'); }
/* Simple truncation routine. */
function TRUNC(str) { if (str.length > 70) return str.substr(0,69) + "..."; else return str; }
/* Initializes scan information, loads top-level view. */
function initialize() {
document.getElementById('sf_version').innerHTML = sf_version;
document.getElementById('scan_date').innerHTML = scan_date;
document.getElementById('scan_seed').innerHTML = scan_seed;
document.getElementById('scan_time').innerHTML =
Math.floor(scan_ms / 1000 / 60 / 60) + " hr " +
Math.floor((scan_ms / 1000 / 60)) % 60 + " min " +
Math.floor((scan_ms / 1000)) % 60 + " sec " +
(scan_ms % 1000) + " ms";
load_node('./', 'root');
load_mime_summaries();
load_issue_summaries();
}
/* Implements pretty, pointless fades. */
function next_opacity(tid, new_val) {
var t = document.getElementById(tid);
t.style.opacity = new_val;
t.style.filter = "alpha(opacity=" + (new_val * 100) + ")";
if (new_val < 1.0)
setTimeout('next_opacity("' + tid + '", ' + (new_val + 0.1) + ')', 50);
}
/* Loads or toggles visibility of a node. */
function toggle_node(dir, tid) {
var t = document.getElementById('c_' + tid);
if (ignore_click) { ignore_click = false; return; }
if (!t.loaded) {
load_node(dir, tid);
document.getElementById('exp_' + tid).src = 'n_expanded.png';
document.getElementById('exp_' + tid).title = 'Click to collapse';
t.loaded = true;
return;
}
if (t.style.display == 'none') {
document.getElementById('exp_' + tid).src = 'n_expanded.png';
t.style.display = 'block';
document.getElementById('exp_' + tid).title = 'Click to collapse';
next_opacity('c_' + tid, 0);
} else {
document.getElementById('exp_' + tid).src = 'n_collapsed.png';
t.style.display = 'none';
document.getElementById('exp_' + tid).title = 'Click to expand';
}
}
/* Displays request or response dump in a faux window. */
function show_dat(path, ignore) {
var out = document.getElementById('req_txtarea'),
cov = document.getElementById('cover');
document.body.style.overflow = 'hidden';
out.value = '';
var x = new XMLHttpRequest();
var content;
var pX = window.scrollX ? window.scrollX : document.body.scrollLeft;
var pY = window.scrollY ? window.scrollY : document.body.scrollTop;
out.parentNode.style.left = pX + 'px';
out.parentNode.style.top = pY + 'px';
cov.style.left = pX + 'px';
cov.style.top = pY + 'px';
out.parentNode.style.display = 'block';
cov.style.display = 'block';
x.open('GET', path + '/request.dat', false);
x.send(null);
content = '=== REQUEST ===\n\n' + x.responseText;
x.open('GET', path + '/response.dat', false);
x.send(null);
if (x.responseText.substr(0,5) == 'HTTP/')
content += '\n=== RESPONSE ===\n\n' + x.responseText + '\n=== END OF DATA ===\n';
else content += '\n=== RESPONSE NOT AVAILABLE ===\n\n=== END OF DATA ===\n';
out.value = content;
x = null;
out.focus();
if (ignore) ignore_click = true;
return false;
}
/* Displays request or response dump in a proper window. */
function show_win(path, ignore) {
var out = window.open('','_blank','scrollbars=yes,location=no');
var x = new XMLHttpRequest();
var content;
x.open('GET', path + '/request.dat', false);
x.send(null);
content = '=== REQUEST ===\n\n' + x.responseText;
x.open('GET', path + '/response.dat', false);
x.send(null);
if (x.responseText.substr(0,5) == 'HTTP/')
content += '\n=== RESPONSE ===\n\n' + x.responseText + '\n=== END OF DATA ===\n';
else content += '\n=== RESPONSE NOT AVAILABLE ===\n\n=== END OF DATA ===\n';
out.document.body.innerHTML = '<pre></pre>';
out.document.body.firstChild.appendChild(out.document.createTextNode(content));
x = null;
if (ignore) ignore_click = true;
return false;
}
/* Hides request view. */
function hide_dat() {
/* Work around a glitch in WebKit. */
if (navigator.userAgent.indexOf('WebKit') == -1)
document.body.style.overflow = 'auto';
else
document.body.style.overflow = 'scroll';
document.getElementById('req_div').style.display = 'none';
document.getElementById('cover').style.display = 'none';
}
/* Loads issues, children for a node, renders HTML. */
function load_node(dir, tid) {
var x = new XMLHttpRequest();
var t = document.getElementById('c_' + tid);
x.open('GET', dir + 'child_index.js', false);
x.send(null);
eval(x.responseText);
x.open('GET', dir + 'issue_index.js', false);
x.send(null);
eval(x.responseText);
x = null;
next_opacity('c_' + tid, 0);
if (issue.length > 0)
t.innerHTML += '<div class="issue_line"></div>';
for (var cno = 0; cno < issue.length; cno++) {
var i = issue[cno];
var add_html;
add_html = '<table><tr><td valign="top">\n';
switch (i.severity) {
case 0: add_html += '<img src="i_note.png" title="Informational note">'; break;
case 1: add_html += '<img src="i_warn.png" title="Internal warning">'; break;
case 2: add_html += '<img src="i_low.png" title="Low risk or low specificity">'; break;
case 3: add_html += '<img src="i_medium.png" title="Medium risk - data compromise">'; break;
case 4: add_html += '<img src="i_high.png" title="High risk: system compromise">'; break;
}
add_html += '</td>\n<td><div class="issue_desc">' + issue_desc[i.type] + '</div>\n<ol>\n';
for (var cno2 = cno; cno2 < issue.length; cno2++) {
var i2 = issue[cno2];
if (i2.type != i.type) break;
if (i2.fetched) {
add_html += '<li><div class="fetch_info">' +
'Code: <span class="fetch_data">' + i2.code + '</span>, ' +
'length: <span class="fetch_data">' + i2.len + '</span>, ' +
'declared: <span class="fetch_data">' + H(i2.decl_mime) + '</span>, ';
if (i2.sniff_mime != '[none]') add_html +=
'detected: <span class="fetch_data">' + H(i2.sniff_mime) + '</span>, ';
add_html += 'charset: <span class="fetch_data">' + H(i2.cset) + '</span> ' +
'[ <a href="#" onclick="return show_dat(\'' + dir + i2.dir + '\', false)">show trace</a> ' +
'<a href="#" onclick="return show_win(\'' + dir + i2.dir + '\', false)">+</a> ]</div>\n';
} else {
add_html += '<li><div class="fetch_info">' +
'Fetch result: ' + i2.error + '</div>';
}
if (i2.extra.length > 0) add_html += '<div class="comment">Memo: ' + H(i2.extra) + '</div>\n';
}
cno = cno2 - 1;
add_html += '</ol>\n';
add_html += '</td></tr></table>\n';
t.innerHTML += add_html;
}
if (issue.length > 0)
t.innerHTML += '<div class="issue_line"></div>';
for (var cno = 0; cno < child.length; cno++) {
var c = child[cno];
var has_child = false;
var add_html, cstr = '';
add_html = '<table><tr><td valign="top">\n';
if (c.dupe) cstr = 'class="idupe" ';
switch (c.type) {
case 10: add_html += '<img ' + cstr + 'src="p_serv.png" title="Server node">'; break;
case 11: add_html += '<img ' + cstr + 'src="p_dir.png" title="Directory node">'; break;
case 12: add_html += '<img ' + cstr + 'src="p_file.png" title="File node">'; break;
case 13: add_html += '<img ' + cstr + 'src="p_pinfo.png" title="Script-like file">'; break;
case 100: add_html += '<img ' + cstr + 'src="p_param.png" title="GET or POST parameter">'; break;
case 101: add_html += '<img ' + cstr + 'src="p_value.png" title="Alternative parameter value">'; break;
default: add_html += '<img ' + cstr + 'src="p_unknown.png" title="Unknown node">';
}
if (c.child_cnt > 0 || c.issue_cnt[0] + c.issue_cnt[1] + c.issue_cnt[2] +
c.issue_cnt[3] + c.issue_cnt[4] > 0) {
add_html += '</td>\n<td class="child_ctr_exp" onclick="toggle_node(\'' +
dir + c.dir + '/\', ' + c_count + ')"' + '>';
has_child = true;
} else {
add_html += '</td>\n<td class="child_ctr">';
}
if (has_child)
add_html += '<img src="n_collapsed.png" id="exp_' + c_count + '"' +
' title="Click to expand">\n';
if (c.missing) {
if (c.linked == 2)
add_html += '<img src="n_missing.png" title="Resource missing">';
else
add_html += '<img src="n_maybe_missing.png" ' +
'title="Resource missing (guessed link)">';
}
if (!c.fetched)
add_html += '<img src="n_failed.png" title="Fetch failed">';
if (c.dupe) add_html += '<img src="n_clone.png" title="Suspected duplicate">' +
'<span class="dupe_name" title="' + H(c.url) + '">' + H(TRUNC(c.name)) + '</span>\n';
else add_html += '<span class="name" title="' + H(c.url) + '">' + H(TRUNC(c.name)) + '</span>\n';
if (c.linked == 0)
add_html += '<img src="n_unlinked.png" title="Not linked (brute-forced)" class="i2">';
add_html += '<span id="child_info">';
if (c.issue_cnt[4] > 0)
add_html += '<img class="i2" src="i_high.png" title="High risk">' + c.issue_cnt[4];
if (c.issue_cnt[3] > 0)
add_html += '<img class="i2" src="i_medium.png" title="Medium risk">' + c.issue_cnt[3];
if (c.issue_cnt[2] > 0)
add_html += '<img class="i2" src="i_low.png" title="Low risk">' + c.issue_cnt[2];
if (c.issue_cnt[1] > 0)
add_html += '<img class="i2" src="i_warn.png" title="Warnings">' + c.issue_cnt[1];
if (c.issue_cnt[0] > 0)
add_html += '<img class="i2" src="i_note.png" title="Notes">' + c.issue_cnt[0];
if (c.child_cnt > 0)
add_html += '<img class="i2" src="n_children.png" title="Unique children nodes">' + c.child_cnt;
add_html += '</span>\n';
if (c.fetched) {
add_html += '<div class="fetch_info">' +
'Code: <span class="fetch_data">' + c.code + '</span>, ' +
'length: <span class="fetch_data">' + c.len + '</span>, ' +
'declared: <span class="fetch_data">' + H(c.decl_mime) + '</span>, ';
if (c.sniff_mime != '[none]') add_html +=
'detected: <span class="fetch_data">' + H(c.sniff_mime) + '</span>, ';
if (has_child)
add_html += 'charset: <span class="fetch_data">' + H(c.cset) + '</span> ' +
'[ <a href="#" onclick="return show_dat(\'' + dir + c.dir + '\', true)">show trace</a> ' +
'<a href="#" onclick="return show_win(\'' + dir + c.dir + '\', true)">+</a> ]</div>\n';
else
add_html += 'charset: <span class="fetch_data">' + H(c.cset) + '</span> ' +
'[ <a href="#" onclick="return show_dat(\'' + dir + c.dir + '\', false)">show trace</a> ' +
'<a href="#" onclick="return show_win(\'' + dir + c.dir + '\', false)">+</a> ]</div>\n';
} else {
add_html += '<div class="fetch_info">' +
'Fetch result: ' + c.error + '</div>\n';
}
if (has_child) add_html += '</tr><tr>\n<td></td>\n<td id="c_' + c_count + '">';
add_html += '</td></tr></table>\n';
t.innerHTML += add_html;
c_count++;
}
}
/* Picks the lesser of two evils. */
function MIN(a,b) { if (a > b) return b; else return a; }
/* Toggles visibility of a summary view. */
function show_sum(t) {
var target = t.nextSibling.nextSibling.nextSibling.nextSibling;
if (target.style.display == 'block') {
target.style.display = 'none';
} else {
next_opacity(target.id, 0);
target.style.display = 'block';
}
}
/* Loads MIME summaries. */
function load_mime_summaries() {
var t = document.getElementById('doc_types');
for (var cno = 0; cno < mime_samples.length; cno++) {
var m = mime_samples[cno], limit = MIN(max_samples, m.samples.length);
var add_html;
add_html = '<table><tr><td valign="top"><img src="mime_entry.png"></td>\n<td valign="top">';
add_html += '<span class="sum_name" onclick="show_sum(this)">' + H(m.mime) + '</span>\n<span class="s_cnt">(' +
m.samples.length + ')</span>\n<ol id="sum_' + (c_count++) + '" style="display: none">\n';
for (var sno = 0; sno < limit; sno++) {
add_html += '<li><a target="_blank" href="' + H(m.samples[sno].url) + '">' + H(m.samples[sno].url) + '</a> ';
if (m.samples[sno].linked == 0)
add_html += '<img src="n_unlinked.png" title="Not linked (brute-forced)" class="i3"> ';
add_html += '<span class="s_cnt">(' + m.samples[sno].len + ' bytes)</span> <span class="fetch_info">' +
'[ <a href="#" onclick="return show_dat(\'' + m.samples[sno].dir + '\', false)">show trace</a> ' +
'<a href="#" onclick="return show_win(\'' + m.samples[sno].dir + '\', false)">+</a> ]</span>\n';
}
add_html += '</ol></td></tr></table>\n';
t.innerHTML += add_html;
}
}
/* Loads issue summaries. */
function load_issue_summaries() {
var t = document.getElementById('issue_types');
for (var cno = 0; cno < issue_samples.length; cno++) {
var i = issue_samples[cno], limit = MIN(max_samples, i.samples.length);
var add_html;
add_html = '<table><tr><td valign="top">';
switch (i.severity) {
case 0: add_html += '<img src="i_note.png" title="Informational note">'; break;
case 1: add_html += '<img src="i_warn.png" title="Internal warning">'; break;
case 2: add_html += '<img src="i_low.png" title="Low risk or low specificity">'; break;
case 3: add_html += '<img src="i_medium.png" title="Medium risk - data compromise">'; break;
case 4: add_html += '<img src="i_high.png" title="High risk: system compromise">'; break;
}
add_html += '</td>\n<td valign="top"><span class="sum_name" onclick="show_sum(this)">' +
issue_desc[i.type] + '</span>\n<span class="s_cnt">(' + i.samples.length + ')</span>\n' +
'<ol id="sum_' + (c_count++) + '" style="display: none">\n';
for (var sno = 0; sno < limit; sno++) {
add_html += '<li> <a target="_blank" href="' + H(i.samples[sno].url) + '">' + H(i.samples[sno].url) + '</a> <span class="fetch_info">' +
'[ <a href="#" onclick="return show_dat(\'' + i.samples[sno].dir + '\', false)">show trace</a> ' +
'<a href="#" onclick="return show_win(\'' + i.samples[sno].dir + '\', false)">+</a> ]</span>\n';
if (i.samples[sno].extra && i.samples[sno].extra.length > 0)
add_html += '<div class="comment">Memo: ' + H(i.samples[sno].extra) + '</div>\n';
}
add_html += '</ol></td></tr></table>\n';
t.innerHTML += add_html;
}
}
/* Warns about CSS support issues. */
if ('\v' == 'v')
alert('WARNING: This page works better with Firefox, Safari, Chrome, Opera, etc.\n\n' +
'Known problems in Internet Explorer include incorrectly rendered PNG icons, cursors,\n' +
'HTML request dumps, incorrect CSS padding for many elements, and so forth. To the best of my\n' +
'knowledge, these problems trace back to MSIE itself, not to this viewer.');
</script>
</head>
<body onload="initialize()">
<img src="sf_name.png" width="203" height="93" style="float: left">
<div class="req_div" id="req_div">
<div class="req_hdr" id="req_hdr" onclick="hide_dat()">HTTP trace - click this bar or hit ESC to close</div>
<textarea class="req_txtarea" id="req_txtarea" readonly onkeyup="if (event.keyCode == 27) hide_dat();"></textarea>
</div>
<div id="cover" class="cover"></div>
<table class="hdr_table">
<tr><td class="summary1">Scanner version:</td><td class="summary2" id="sf_version"></td>
<td class="summary1">Scan date:</td><td class="summary2" id="scan_date"></td></tr>
<tr><td class="summary1">Random seed:</td><td class="summary2" id="scan_seed"></td>
<td class="summary1">Total time:</td><td class="summary2" id="scan_time"></td></tr>
</table>
<br clear="all">
<h2>Crawl results - click to expand:</h2>
<div id="c_root" class="child_ctr">
</div>
<h2 class="supp_cat">Document type overview - click to expand:</h2>
<div id="doc_types">
</div>
<h2 class="supp_cat">Issue type overview - click to expand:</h2>
<div id="issue_types">
</div>
<p>
<span class="fetch_info">NOTE: 100 samples maximum per issue or document type.</span>

Binary assets added (PNG icons; content not shown in the diff):

  assets/mime_entry.png       1.1 KiB
  assets/n_children.png       861 B
  assets/n_clone.png          882 B
  assets/n_collapsed.png      541 B
  assets/n_expanded.png       329 B
  assets/n_failed.png         677 B
  assets/n_maybe_missing.png  920 B
  assets/n_missing.png        894 B
  assets/n_unlinked.png       724 B
  assets/p_dir.png            1.5 KiB
  assets/p_file.png           1.3 KiB
  assets/p_param.png          1.8 KiB
  assets/p_pinfo.png          2.1 KiB
  assets/p_serv.png           2.4 KiB
  assets/p_unknown.png        1.7 KiB
  assets/p_value.png          2.1 KiB
  assets/sf_name.png          13 KiB
config.h (new file, 242 lines)
/*
skipfish - configurable settings
--------------------------------
Author: Michal Zalewski <lcamtuf@google.com>
Copyright 2009, 2010 by Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#ifndef _HAVE_CONFIG_H
#define _HAVE_CONFIG_H
#define VERSION "1.00b"
#define USE_COLOR 1 /* Use terminal colors */
/* Various default settings for HTTP client (cmdline override): */
#define MAX_CONNECTIONS 50 /* Simultaneous connection cap */
#define MAX_CONN_HOST 10 /* Per-host connection cap */
#define MAX_REQUESTS 1e8 /* Total request count cap */
#define MAX_FAIL 100 /* Max consecutive failed requests */
#define RW_TMOUT 10 /* Individual network R/W timeout */
#define RESP_TMOUT 20 /* Total request time limit */
#define IDLE_TMOUT 10 /* Connection tear down threshold */
#define SIZE_LIMIT 200000 /* Response size cap */
#define MAX_GUESSES 256 /* Guess-based wordlist size limit */
/* HTTP client constants: */
#define MAX_URL_LEN 1024 /* Maximum length of a URL */
#define MAX_DNS_LEN 255 /* Maximum length of a host name */
#define READ_CHUNK 4096 /* Read buffer size */
/* Define this to use FILO, rather than FIFO, scheduling for new requests.
FILO ensures a more uniform distribution of requests when fuzzing multiple
directories at once, but may reduce the odds of spotting some stored
XSSes, and increase memory usage a bit. */
// #define QUEUE_FILO 1
/* Dummy file to upload to the server where possible. */
#define DUMMY_EXT "gif"
#define DUMMY_FILE "GIF89a,\x01<html>"
#define DUMMY_MIME "image/gif"
/* Allocator settings: */
#define MAX_ALLOC 0x50000000 /* Refuse larger allocations. */
/* Configurable settings for crawl database (cmdline override): */
#define MAX_DEPTH 16 /* Maximum crawl tree depth */
#define MAX_CHILDREN 1024 /* Maximum children per tree node */
#define DEF_WORDLIST "skipfish.wl" /* Default wordlist file */
/* Crawl / analysis constants: */
#define MAX_WORD 64 /* Maximum wordlist item length */
#define GUESS_PROB 50 /* Guess word addition probability */
#define WORD_HASH 256 /* Hash table for wordlists */
#define SNIFF_LEN 1024 /* MIME sniffing buffer size */
#define MAX_SAMPLES 1024 /* Max issue / MIME samples */
/* Page fingerprinting constants: */
#define FP_SIZE 10 /* Page fingerprint size */
#define FP_MAX_LEN 15 /* Maximum word length to count */
#define FP_T_REL 5 /* Relative matching tolerance (%) */
#define FP_T_ABS 6 /* Absolute matching tolerance */
#define FP_B_FAIL 3 /* Max number of failed buckets */
#define BH_CHECKS 15 /* Page verification check count */
/* Crawler / probe constants: */
#define BOGUS_FILE "sfi9876" /* Name that should not exist */
#define MAX_404 4 /* Maximum number of 404 sigs */
#define PAR_MAX_DIGITS 6 /* Max digits in a fuzzable int */
#define PAR_INT_FUZZ 100 /* Fuzz by + / - this much */
#ifdef QUEUE_FILO
#define DICT_BATCH 200 /* Brute-force queue block */
#else
#define DICT_BATCH 1000 /* Brute-force queue block */
#endif /* ^QUEUE_FILO */
/* Single query for IPS detection - Evil Query of Doom (tm). */
#define IPS_TEST \
"?_test1=c:\\windows\\system32\\cmd.exe" \
"&_test2=/etc/passwd" \
"&_test3=|/bin/sh" \
"&_test4=(SELECT * FROM nonexistent) --" \
"&_test5=>/no/such/file" \
"&_test6=<script>alert(1)</script>" \
"&_test7=javascript:alert(1)"
/* A benign query with a similar character set to compare with EQoD. */
#define IPS_SAFE \
"?_test1=ccddeeeimmnossstwwxy.:\\\\\\" \
"&_test2=acdepsstw//" \
"&_test3=bhins//" \
"&_test4=CEEFLMORSTeeinnnosttx--*" \
"&_test5=cefhilnosu///" \
"&_test6=acceiilpprrrssttt1)(" \
"&_test7=aaaceijlprrsttv1):("
/* XSRF token detector settings: */
#define XSRF_B16_MIN 8 /* Minimum base10/16 token length */
#define XSRF_B16_MAX 45 /* Maximum base10/16 token length */
#define XSRF_B16_NUM 2 /* ...minimum digit count */
#define XSRF_B64_MIN 6 /* Minimum base32/64 token length */
#define XSRF_B64_MAX 32 /* Maximum base32/64 token length */
#define XSRF_B64_NUM 1 /* ...minimum digit count && */
#define XSRF_B64_CASE 2 /* ...minimum uppercase count */
#define XSRF_B64_NUM2 3 /* ...digit count override */
#define XSRF_B64_SLASH 2 /* ...maximum slash count */
#ifdef _VIA_DATABASE_C
/* Domains we always trust (identical to -B options). These entries do not
generate cross-domain content inclusion warnings. NULL-terminated. */
static const char* always_trust_domains[] = {
".google-analytics.com",
".googleapis.com",
".googleadservices.com",
".googlesyndication.com",
"www.w3.org",
0
};
#endif /* _VIA_DATABASE_C */
#ifdef _VIA_ANALYSIS_C
/* NULL-terminated list of JSON-like response prefixes we consider to
be sufficiently safe against cross-site script inclusion (courtesy
ratproxy). */
static const char* json_safe[] = {
"while(1);", /* Parser looping */
"while (1);", /* ... */
"while(true);", /* ... */
"while (true);", /* ... */
"&&&", /* Parser breaking */
"//OK[", /* Line commenting */
"{\"", /* Serialized object */
"{{\"", /* Serialized object */
"throw 1; <", /* Magical combo */
")]}'", /* Recommended magic */
0
};
/* NULL-terminated list of known valid charsets. Charsets not on the list are
considered dangerous (as they may trigger charset sniffing).
Note that many common misspellings, such as "utf8", are not valid and NOT
RECOGNIZED by browsers, leading to content sniffing. Do not add them here.
Also note that SF does not support encodings not compatible with US ASCII
transport (e.g., UTF-16, UTF-32). Lastly, variable-length encodings
other than utf-8 may have character consumption issues that are not
tested for at this point. */
static const char* valid_charsets[] = {
"utf-8", /* Valid 8-bit safe Unicode */
"iso8859-1", /* Western Europe */
"iso8859-2", /* Central Europe */
"iso8859-15", /* New flavor of ISO8859-1 */
"iso8859-16", /* New flavor of ISO8859-2 */
"iso-8859-1", /* Browser-supported misspellings */
"iso-8859-2", /* - */
"iso-8859-15", /* - */
"iso-8859-16", /* - */
"windows-1252", /* Microsoft's Western Europe */
"windows-1250", /* Microsoft's Central Europe */
"us-ascii", /* Old school but generally safe */
"koi8-r", /* 8-bit and US ASCII compatible */
0
};
/* Default form auto-fill rules - used to pair up form fields with fun
values! Do not attempt security attacks here, though - this is to maximize
crawl coverage, not to exploit anything. The last item must have a name
of NULL, and its value will be used as the default when no other
match is found. */
static const char* form_suggestion[][2] = {
{ "phone" , "6505550100" }, /* Reserved */
{ "zip" , "94043" },
{ "first" , "John" },
{ "last" , "Smith" },
{ "name" , "Smith" },
{ "mail" , "skipfish@example.com" },
{ "street" , "1600 Amphitheatre Pkwy" },
{ "city" , "Mountain View" },
{ "state" , "CA" },
{ "country" , "US" },
{ "language" , "en" },
{ "company" , "ACME" },
{ "search" , "skipfish" },
{ "login" , "skipfish" },
{ "user" , "skipfish" },
{ "pass" , "skipfish" },
{ "year" , "2010" },
{ "card" , "4111111111111111" }, /* Reserved */
{ "code" , "000" },
{ "cvv" , "000" },
{ "expir" , "1212" },
{ "ssn" , "987654320" }, /* Reserved */
{ "url" , "http://example.com/?sfish_form_test" },
{ "site" , "http://example.com/?sfish_form_test" },
{ "domain" , "example.com" },
{ "search" , "a" },
{ NULL , "1" }
};
#endif /* _VIA_ANALYSIS_C */
#endif /* ! _HAVE_CONFIG_H */

crawler.c (new file, 2776 lines) - diff suppressed because it is too large
crawler.h (new file, 96 lines)
/*
skipfish - crawler state machine
--------------------------------
Author: Michal Zalewski <lcamtuf@google.com>
Copyright 2009, 2010 by Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#ifndef _HAVE_CRAWLER_H
#define _HAVE_CRAWLER_H
#include "types.h"
#include "http_client.h"
#include "database.h"
extern u32 crawl_prob; /* Crawl probability (1-100%) */
extern u8 no_parse, /* Disable HTML link detection */
warn_mixed, /* Warn on mixed content? */
no_fuzz_ext, /* Don't fuzz ext in dirs? */
log_ext_urls; /* Log external URLs? */
/* Provisional debugging callback. */
u8 show_response(struct http_request* req, struct http_response* res);
/* Asynchronous request callback for the initial PSTATE_FETCH request of
PIVOT_UNKNOWN resources. */
u8 fetch_unknown_callback(struct http_request* req, struct http_response* res);
/* Asynchronous request callback for the initial PSTATE_FETCH request of
PIVOT_FILE resources. */
u8 fetch_file_callback(struct http_request* req, struct http_response* res);
/* Asynchronous request callback for the initial PSTATE_FETCH request of
PIVOT_DIR resources. */
u8 fetch_dir_callback(struct http_request* req, struct http_response* res);
/* Initializes the crawl of try_list items for a pivot point (if any still
not crawled). */
void crawl_par_trylist_init(struct pivot_desc* pv);
/* Adds new name=value to form hints list. */
void add_form_hint(u8* name, u8* value);
/* Macros to access various useful pivot points: */
#define MREQ(_x) (req->pivot->misc_req[_x])
#define MRES(_x) (req->pivot->misc_res[_x])
#define RPAR(_req) ((_req)->pivot->parent)
#define RPREQ(_req) ((_req)->pivot->req)
#define RPRES(_req) ((_req)->pivot->res)
/* Debugging instrumentation for callbacks and callback helpers: */
#ifdef LOG_STDERR
#define DEBUG_CALLBACK(_req, _res) do { \
u8* _url = serialize_path(_req, 1, 1); \
DEBUG("* %s: URL %s (%u, len %u)\n", __FUNCTION__, _url, (_res) ? \
(_res)->code : 0, (_res) ? (_res)->pay_len : 0); \
ck_free(_url); \
} while (0)
#define DEBUG_HELPER(_pv) do { \
u8* _url = serialize_path((_pv)->req, 1, 1); \
DEBUG("* %s: URL %s (%u, len %u)\n", __FUNCTION__, _url, (_pv)->res ? \
(_pv)->res->code : 0, (_pv)->res ? (_pv)->res->pay_len : 0); \
ck_free(_url); \
} while (0)
#else
#define DEBUG_CALLBACK(_req, _res)
#define DEBUG_HELPER(_pv)
#endif /* ^LOG_STDERR */
#endif /* !_HAVE_CRAWLER_H */

database.c (new file, 1356 lines) - diff suppressed because it is too large

database.h (new file, 406 lines)
/*
skipfish - database & crawl management
--------------------------------------
Author: Michal Zalewski <lcamtuf@google.com>
Copyright 2009, 2010 by Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#ifndef _HAVE_DATABASE_H
#define _HAVE_DATABASE_H
#include "debug.h"
#include "config.h"
#include "types.h"
#include "http_client.h"
/* Testing pivot points - used to organize the scan: */
/* - Pivot types: */
#define PIVOT_NONE 0 /* Invalid */
#define PIVOT_ROOT 1 /* Root pivot */
#define PIVOT_SERV 10 /* Top-level host pivot */
#define PIVOT_DIR 11 /* Directory pivot */
#define PIVOT_FILE 12 /* File pivot */
#define PIVOT_PATHINFO 13 /* PATH_INFO script */
#define PIVOT_UNKNOWN 18 /* (Currently) unknown type */
#define PIVOT_PARAM 100 /* Parameter fuzzing pivot */
#define PIVOT_VALUE 101 /* Parameter value pivot */
/* - Pivot states (initialized to PENDING or FETCH by database.c, then
advanced by crawler.c): */
#define PSTATE_NONE 0 /* Invalid */
#define PSTATE_PENDING 1 /* Pending parent tests */
#define PSTATE_FETCH 10 /* Initial data fetch */
#define PSTATE_TYPE_CHECK 20 /* Type check (unknown only) */
#define PSTATE_404_CHECK 22 /* 404 check (dir only) */
#define PSTATE_IPS_CHECK 25 /* IPS filtering check */
/* For directories only (injecting children nodes): */
#define PSTATE_CHILD_INJECT 50 /* Common security attacks */
#define PSTATE_CHILD_DICT 55 /* Dictionary brute-force */
/* For parametric nodes only (replacing parameter value): */
#define PSTATE_PAR_CHECK 60 /* Parameter works at all? */
#define PSTATE_PAR_INJECT 65 /* Common security attacks */
#define PSTATE_PAR_NUMBER 70 /* Numeric ID traversal */
#define PSTATE_PAR_DICT 75 /* Dictionary brute-force */
#define PSTATE_PAR_TRYLIST 99 /* 'Try list' fetches */
#define PSTATE_DONE 100 /* Analysis done */
/* - Descriptor of a pivot point: */
struct pivot_desc {
u8 type; /* PIVOT_* */
u8 state; /* PSTATE_* */
u8 linked; /* Linked to? (0/1/2) */
u8 missing; /* Determined to be missing? */
u8 csens; /* Case sensitive names? */
u8 c_checked; /* csens check done? */
u8* name; /* Directory / script name */
struct http_request* req; /* Prototype HTTP request */
s32 fuzz_par; /* Fuzz target parameter */
u8** try_list; /* Values to try */
u32 try_cnt; /* Number of values to try */
u32 try_cur; /* Last tested try list offs */
struct pivot_desc* parent; /* Parent pivot, if any */
struct pivot_desc** child; /* List of children */
u32 child_cnt; /* Number of children */
struct issue_desc* issue; /* List of issues found */
u32 issue_cnt; /* Number of issues */
struct http_response* res; /* HTTP response seen */
u8 res_varies; /* Response varies? */
/* Fuzzer and probe state data: */
u8 no_fuzz; /* Do not attempt fuzzing. */
u8 uses_ips; /* Uses IPS filtering? */
u32 cur_key; /* Current keyword */
u32 pdic_cur_key; /* ...for param dict */
u8 guess; /* Guess list keywords? */
u8 pdic_guess; /* ...for param dict */
u32 pending; /* Number of pending reqs */
u32 pdic_pending; /* ...for param dict */
u32 num_pending; /* ...for numerical enum */
u32 try_pending; /* ...for try list */
u32 r404_pending; /* ...for 404 probes */
u32 ck_pending; /* ...for behavior checks */
struct http_sig r404[MAX_404]; /* 404 response signatures */
u32 r404_cnt; /* Number of sigs collected */
struct http_sig unk_sig; /* Original "unknown" sig. */
/* Injection attack logic scratchpad: */
struct http_request* misc_req[10]; /* Saved requests */
struct http_response* misc_res[10]; /* Saved responses */
u8 misc_cnt; /* Request / response count */
u8 i_skip[15]; /* Injection step skip flags */
u8 i_skip_add;
u8 r404_skip;
u8 bogus_par; /* fuzz_par does nothing? */
u8 ognl_check; /* OGNL check flags */
/* Reporting information: */
u32 total_child_cnt; /* All children */
u32 total_issues[6]; /* Issues by severity */
u8 dupe; /* Looks like a duplicate? */
u32 pv_sig; /* Simple pivot signature */
};
extern struct pivot_desc root_pivot;
/* Maps a parsed URL (in req) to the pivot tree, creating or modifying nodes
as necessary, and scheduling them for a crawl. Always makes a copy of req
and res, so they can be destroyed safely. via_link is 0 if the URL was
brute-forced, 1 if it probably came from a valid link or user input, and
2 if we are sure it did. */
void maybe_add_pivot(struct http_request* req, struct http_response* res,
u8 via_link);
/* Creates a working copy of a request for use in db and crawl functions. If all
is 0, does not copy path, query parameters, or POST data (but still
copies headers); and forces GET method. */
struct http_request* req_copy(struct http_request* req,
struct pivot_desc* pv, u8 all);
/* Finds the host-level pivot point for global issues. */
struct pivot_desc* host_pivot(struct pivot_desc* pv);
/* Case sensitivity helper. */
u8 is_c_sens(struct pivot_desc* pv);
/* Recorded security issues: */
/* - Informational data (non-specific security-relevant notes): */
#define PROB_NONE 0 /* Invalid */
#define PROB_SSL_CERT 10101 /* SSL issuer data */
#define PROB_NEW_COOKIE 10201 /* New cookie added */
#define PROB_SERVER_CHANGE 10202 /* New Server: value seen */
#define PROB_VIA_CHANGE 10203 /* New Via: value seen */
#define PROB_X_CHANGE 10204 /* New X-*: value seen */
#define PROB_NEW_404 10205 /* New 404 signatures seen */
#define PROB_NO_ACCESS 10401 /* Resource not accessible */
#define PROB_AUTH_REQ 10402 /* Authentication required */
#define PROB_SERV_ERR 10403 /* Server error */
#define PROB_EXT_LINK 10501 /* External link */
#define PROB_EXT_REDIR 10502 /* External redirector */
#define PROB_MAIL_ADDR 10503 /* E-mail address seen */
#define PROB_UNKNOWN_PROTO 10504 /* Unknown protocol in URL */
#define PROB_UNKNOWN_FIELD 10505 /* Unknown form field */
#define PROB_FORM 10601 /* XSRF-safe form */
#define PROB_PASS_FORM 10602 /* Password form */
#define PROB_USER_LINK 10701 /* User-supplied A link */
#define PROB_BAD_MIME_STAT 10801 /* Bad MIME type, low risk */
#define PROB_GEN_MIME_STAT 10802 /* Generic MIME, low risk */
#define PROB_BAD_CSET_STAT 10803 /* Bad charset, low risk */
#define PROB_CFL_HDRS_STAT 10804 /* Conflicting hdr, low risk */
#define PROB_FUZZ_DIGIT 10901 /* Try fuzzing file name */
#define PROB_OGNL 10902 /* OGNL-like parameter */
/* - Internal warnings (scan failures, etc): */
#define PROB_FETCH_FAIL 20101 /* Fetch failed. */
#define PROB_LIMITS 20102 /* Crawl limits exceeded. */
#define PROB_404_FAIL 20201 /* Behavior probe failed. */
#define PROB_IPS_FILTER 20202 /* IPS behavior detected. */
#define PROB_IPS_FILTER_OFF 20203 /* IPS no longer active. */
#define PROB_VARIES 20204 /* Response varies. */
#define PROB_NOT_DIR 20301 /* Node should be a dir. */
/* - Low severity issues (limited impact or check specificity): */
#define PROB_URL_AUTH 30101 /* HTTP credentials in URL */
#define PROB_SSL_CERT_DATE 30201 /* SSL cert date invalid */
#define PROB_SSL_SELF_CERT 30202 /* Self-signed SSL cert */
#define PROB_SSL_BAD_HOST 30203 /* Certificate host mismatch */
#define PROB_SSL_NO_CERT 30204 /* No certificate data? */
#define PROB_DIR_LIST 30301 /* Dir listing bypass */
#define PROB_URL_REDIR 30401 /* URL redirection */
#define PROB_USER_URL 30402 /* URL content inclusion */
#define PROB_EXT_OBJ 30501 /* External obj standalone */
#define PROB_MIXED_OBJ 30502 /* Mixed content standalone */
#define PROB_VULN_FORM 30601 /* Form w/o anti-XSRF token */
#define PROB_JS_XSSI 30602 /* Script with no XSSI prot */
#define PROB_CACHE_LOW 30701 /* Cache nit-picking */
/* - Moderate severity issues (data compromise): */
#define PROB_BODY_XSS 40101 /* Document body XSS */
#define PROB_URL_XSS 40102 /* URL-based XSS */
#define PROB_HTTP_INJECT 40103 /* Header splitting */
#define PROB_USER_URL_ACT 40104 /* Active user content */
#define PROB_EXT_SUB 40201 /* External subresource */
#define PROB_MIXED_SUB 40202 /* Mixed content subresource */
#define PROB_BAD_MIME_DYN 40301 /* Bad MIME type, hi risk */
#define PROB_GEN_MIME_DYN 40302 /* Generic MIME, hi risk */
#define PROB_BAD_CSET_DYN 40304 /* Bad charset, hi risk */
#define PROB_CFL_HDRS_DYN 40305 /* Conflicting hdr, hi risk */
#define PROB_FILE_POI 40401 /* Interesting file */
#define PROB_ERROR_POI 40402 /* Interesting error message */
#define PROB_DIR_TRAVERSAL 40501 /* Directory traversal */
#define PROB_CACHE_HI 40601 /* Serious caching issues */
/* - High severity issues (system compromise): */
#define PROB_XML_INJECT 50101 /* Backend XML injection */
#define PROB_SH_INJECT 50102 /* Shell cmd injection */
#define PROB_SQL_INJECT 50103 /* SQL injection */
#define PROB_FMT_STRING 50104 /* Format string attack */
#define PROB_INT_OVER 50105 /* Integer overflow attack */
#define PROB_SQL_PARAM 50201 /* SQL-like parameter */
/* - Severity macros: */
#define PSEV(_x) ((_x) / 10000)
#define PSEV_INFO 1
#define PSEV_WARN 2
#define PSEV_LOW 3
#define PSEV_MED 4
#define PSEV_HI 5
/* Issue descriptor: */
struct issue_desc {
u32 type; /* PROB_* */
u8* extra; /* Problem-specific string */
struct http_request* req; /* HTTP request sent */
struct http_response* res; /* HTTP response seen */
};
/* Register a problem, if not duplicate (res, extra may be NULL): */
void problem(u32 type, struct http_request* req, struct http_response* res,
u8* extra, struct pivot_desc* pv, u8 allow_dup);
/* Compare the checksums for two responses: */
u8 same_page(struct http_sig* sig1, struct http_sig* sig2);
/* URL filtering constraints (exported from database.c): */
#define APPEND_FILTER(_ptr, _cnt, _val) do { \
(_ptr) = ck_realloc(_ptr, ((_cnt) + 1) * sizeof(u8*)); \
(_ptr)[_cnt] = (u8*)(_val); \
(_cnt)++; \
} while (0)
extern u8 **deny_urls, **deny_strings, **allow_urls, **allow_domains,
**trust_domains;
extern u32 num_deny_urls,
num_deny_strings,
num_allow_urls,
num_allow_domains,
num_trust_domains;
extern u32 max_depth,
max_children,
max_trylist,
max_guesses;
/* Check if the URL is permitted under current rules (0 = no, 1 = yes): */
u8 url_allowed_host(struct http_request* req);
u8 url_trusted_host(struct http_request* req);
u8 url_allowed(struct http_request* req);
/* Keyword management: */
extern u8 dont_add_words;
/* Adds a new keyword candidate to the "guess" list. */
void wordlist_add_guess(u8* text);
/* Adds non-sanitized keywords to the list. */
void wordlist_confirm_word(u8* text);
/* Returns wordlist item at a specified offset (NULL if no more available). */
u8* wordlist_get_word(u32 offset);
/* Returns keyword candidate at a specified offset (or NULL). */
u8* wordlist_get_guess(u32 offset);
/* Returns extension at a specified offset (or NULL). */
u8* wordlist_get_extension(u32 offset);
/* Loads keywords from file. */
void load_keywords(u8* fname, u32 purge_age);
/* Saves all keywords to a file. */
void save_keywords(u8* fname);
/* Database maintenance: */
/* Dumps pivot database, for debugging purposes. */
void dump_pivots(struct pivot_desc* cur, u8 nest);
/* Deallocates all data, for debugging purposes. */
void destroy_database();
/* Prints DB stats. */
void database_stats();
/* XSS manager: */
/* Creates a new stored XSS id (buffer valid only until next call). */
u8* new_xss_tag(u8* prefix);
/* Registers last XSS tag along with a completed http_request. */
void register_xss_tag(struct http_request* req);
/* Returns request associated with a stored XSS id. */
struct http_request* get_xss_request(u32 xid, u32 sid);
/* Dumps signature data: */
void dump_signature(struct http_sig* sig);
/* Displays debug information for same_page() checks. */
void debug_same_page(struct http_sig* sig1, struct http_sig* sig2);
#endif /* _HAVE_DATABASE_H */

96
debug.h Normal file

@ -0,0 +1,96 @@
/*
skipfish - debugging and messaging macros
-----------------------------------------
Author: Michal Zalewski <lcamtuf@google.com>
Copyright 2009, 2010 by Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#ifndef _HAVE_DEBUG_H
#define _HAVE_DEBUG_H
#include <stdio.h>
#include "config.h"
#ifdef USE_COLOR
# define cBLK "\x1b[0;30m"
# define cRED "\x1b[0;31m"
# define cGRN "\x1b[0;32m"
# define cBRN "\x1b[0;33m"
# define cBLU "\x1b[0;34m"
# define cMGN "\x1b[0;35m"
# define cCYA "\x1b[0;36m"
# define cNOR "\x1b[0;37m"
# define cGRA "\x1b[1;30m"
# define cLRD "\x1b[1;31m"
# define cLGN "\x1b[1;32m"
# define cYEL "\x1b[1;33m"
# define cLBL "\x1b[1;34m"
# define cPIN "\x1b[1;35m"
# define cLCY "\x1b[1;36m"
# define cBRI "\x1b[1;37m"
#else
# define cBLK
# define cRED
# define cGRN
# define cBRN
# define cBLU
# define cMGN
# define cCYA
# define cNOR
# define cGRA
# define cLRD
# define cLGN
# define cYEL
# define cLBL
# define cPIN
# define cLCY
# define cBRI
#endif /* ^USE_COLOR */
#ifdef LOG_STDERR
# define DEBUG(x...) fprintf(stderr,x)
#else
# define DEBUG(x...)
#endif /* ^LOG_STDERR */
#define F_DEBUG(x...) fprintf(stderr,x)
#define SAY(x...) printf(x)
#define WARN(x...) do { \
F_DEBUG(cYEL "[!] WARNING: " cBRI x); \
F_DEBUG(cNOR "\n"); \
} while (0)
#define FATAL(x...) do { \
F_DEBUG(cLRD "[-] PROGRAM ABORT : " cBRI x); \
F_DEBUG(cLRD "\n Stop location : " cNOR "%s(), %s:%u\n", \
__FUNCTION__, __FILE__, __LINE__); \
exit(1); \
} while (0)
#define PFATAL(x...) do { \
F_DEBUG(cLRD "[-] SYSTEM ERROR : " cBRI x); \
F_DEBUG(cLRD "\n Stop location : " cNOR "%s(), %s:%u\n", \
__FUNCTION__, __FILE__, __LINE__); \
perror(cLRD " OS message " cNOR); \
exit(1); \
} while (0)
#endif /* ! _HAVE_DEBUG_H */

186
dictionaries/README-FIRST Normal file
View File

@ -0,0 +1,186 @@
This directory contains four alternative, hand-picked Skipfish dictionaries.
Before you pick one, you should understand several basic concepts related to
dictionary management in this scanner, as this topic is of critical importance
to the quality of your scans.
-----------------------------
Dictionary management basics:
-----------------------------
1) Each dictionary may consist of a number of extensions, and a number of
"regular" keywords. Extensions are considered just a special subset of
the keyword list.
2) You can specify the dictionary to use with a -W option. The file must
conform to the following format:
type hits total_age last_age keyword
...where 'type' is either 'e' or 'w' (extension or wordlist); 'hits'
is the total number of times this keyword resulted in a non-404 hit
in all previous scans; 'total_age' is the number of scan cycles this
word is in the dictionary; 'last_age' is the number of scan cycles
since the last 'hit'; and 'keyword' is the actual keyword.
Do not duplicate extensions as keywords - if you already have 'html' as
an 'e' entry, there is no need to also create a 'w' one.
There must be no empty or malformed lines, comments, etc., in the wordlist
file. Extension keywords must have no leading dot (e.g., 'exe', not '.exe'),
and keywords must not be URL-encoded (e.g., 'Program Files', not
'Program%20Files'). No keyword should exceed 64 characters.
If you omit -W in the command line, 'skipfish.wl' is assumed.
3) When loading a dictionary, you can use -R option to drop any entries
that had no hits for a specified number of scans.
4) Unless -L is specified in the command line, the scanner will also
automatically learn new keywords and extensions based on any links
discovered during the scan.
5) Unless -L is specified, the scanner will also analyze pages and extract
words that would serve as keyword guesses. A capped number of guesses
is maintained by the scanner, with older entries being removed from the
list as new ones are found (the size of this jar is adjustable with the
-G option).
These guesses would be tested along with regular keywords during brute-force
steps. If they result in a non-404 hit at some point, they are promoted to
the "proper" keyword list.
6) Unless -V is specified in the command line, all newly discovered keywords
are saved back to the input wordlist file, along with their hit statistics.
----------------------------------------------
Dictionaries are used for the following tasks:
----------------------------------------------
1) When a new directory, or a file-like query or POST parameter is discovered,
the scanner attempts passing all possible <keyword> values to discover new
files, directories, etc.
2) If you did NOT specify -Y in the command line, the scanner also tests all
possible <keyword>.<extension> pairs in these cases. Note that this may
result in several orders of magnitude more requests, but is the only way
to discover files such as 'backup.tar.gz', 'database.csv', etc.
3) For any non-404 file or directory discovered by any other means, the scanner
also attempts all <node_filename>.<extension> combinations, to discover,
for example, entries such as 'index.php.old'.
----------------------
Supplied dictionaries:
----------------------
1) Empty dictionary (-).
Simply create an empty file, then load it via -W. If you use this option
in conjunction with -L, this essentially inhibits all brute-force testing,
and results in an orderly, link-based crawl.
If -L is not used, the crawler will still attempt brute-force, but only
based on the keywords and extensions discovered when crawling the site.
This means it will likely learn keywords such as 'index' or extensions
such as 'html' - but may never attempt probing for 'log', 'old', 'bak', etc.
Both these variants are very useful for lightweight scans, but are not
particularly exhaustive.
2) Extension-only dictionary (extensions-only.wl).
This dictionary contains about 90 common file extensions, and no other
keywords. It must be used in conjunction with -Y (otherwise, it will not
behave as expected).
This is often a better alternative to a null dictionary: the scanner will
still limit brute-force primarily to file names learned on the site, but
will know about extensions such as 'log' or 'old', and will test for them
accordingly.
3) Basic extensions dictionary (minimal.wl).
This dictionary contains about 25 extensions, focusing on common entries
most likely to spell trouble (.bak, .old, .conf, .zip, etc.), and about 1,700
hand-picked keywords.
This is useful for quick assessments where no obscure technologies are used.
The principal scan cost is about 42,000 requests per each fuzzed directory.
Using it without -L is recommended, as the list of extensions does not
include standard framework-specific cases (.asp, .jsp, .php, etc), and
these are best learned on the fly.
You can also use this dictionary with -Y option enabled, approximating the
behavior of most other security scanners; in this case, it will send only
about 1,700 requests per directory, and will look for 25 secondary extensions
only on otherwise discovered resources.
4) Standard extensions dictionary (default.wl).
This dictionary contains about 60 common extensions, plus the same set of
1,700 keywords. The extensions cover most of the common, interesting web
resources.
This is a good starting point for assessments where scan times are not
a critical factor; the cost is about 100,000 requests per each fuzzed
directory.
In -Y mode, it behaves nearly identically to minimal.wl, but tests a
greater set of extensions on otherwise discovered resources, at a relatively
minor expense.
5) Complete extensions dictionary (complete.wl).
Contains about 90 common extensions and 1,700 keywords. These extensions
cover a broader range of media types, including some less common programming
languages, image and video formats, etc.
Useful for comprehensive assessments, at over 150,000 requests per each
fuzzed directory.
In -Y mode (see default.wl above), it offers the best coverage of the three
keyword wordlists at a relatively low cost.
Of course, you can customize these dictionaries as you see fit. For example,
it might be a good idea to downgrade file extensions unlikely to occur with
the technologies used by your target host to regular 'w' records.
Whichever option you choose, be sure to make a *copy* of this dictionary, and
load that copy, not the original, via -W. The specified file will be overwritten
with site-specific information (unless -V is used).
----------------------------------
Bah, these dictionaries are small!
----------------------------------
Keep in mind that web crawling is not password guessing; it is exceedingly
unlikely for web servers to have directories or files named 'henceforth',
'abating', or 'witlessly'. Because of this, using 200,000+ entry English
wordlists, or similar data sets, is largely pointless.
More importantly, doing so often leads to reduced coverage or unacceptable
scan times; with a 200k wordlist and 80 extensions, trying all combinations
for a single directory would take 30-40 hours against a slow server; and even
with a fast one, at least 5 hours is to be expected.
DirBuster uses an approach that seems promising at first sight: basing its
wordlists on how often a particular keyword appeared in URLs seen on the
Internet. This is interesting, but comes with two gotchas:
- Keywords related to popular websites and brands are heavily
overrepresented; DirBuster wordlists have 'bbc_news_24', 'beebie_bunny',
and 'koalabrothers' near the top of their list, but it is pretty unlikely
these keywords would be of any use in real-world assessments of a typical
site, unless it happens to be BBC.
  - Some of the most interesting security-related keywords are not commonly
    indexed, and may appear on no more than a few dozen or a few thousand
    crawled websites in the Google index. But that does not make 'AggreSpy'
    or '.ssh/authorized_keys' any less interesting.
The bottom line is that poor wordlists are one of the reasons why some other
web security scanners perform worse than expected - so please, be careful.
You will almost always be better off narrowing down or selectively extending
the supplied set (and possibly contributing your changes back upstream!) than
importing a giant wordlist from elsewhere.

1894
dictionaries/complete.wl Normal file

File diff suppressed because it is too large

1893
dictionaries/default.wl Normal file

File diff suppressed because it is too large

dictionaries/extensions-only.wl Normal file

@ -0,0 +1,100 @@
e 1 1 1 asmx
e 1 1 1 asp
e 1 1 1 aspx
e 1 1 1 bak
e 1 1 1 bat
e 1 1 1 bin
e 1 1 1 bz2
e 1 1 1 c
e 1 1 1 cc
e 1 1 1 cfg
e 1 1 1 cgi
e 1 1 1 class
e 1 1 1 conf
e 1 1 1 config
e 1 1 1 cpp
e 1 1 1 cs
e 1 1 1 csv
e 1 1 1 dat
e 1 1 1 db
e 1 1 1 dll
e 1 1 1 do
e 1 1 1 doc
e 1 1 1 dump
e 1 1 1 ep
e 1 1 1 err
e 1 1 1 error
e 1 1 1 exe
e 1 1 1 gif
e 1 1 1 gz
e 1 1 1 htm
e 1 1 1 html
e 1 1 1 inc
e 1 1 1 ini
e 1 1 1 java
e 1 1 1 jhtml
e 1 1 1 jpg
e 1 1 1 js
e 1 1 1 jsf
e 1 1 1 jsp
e 1 1 1 key
e 1 1 1 lib
e 1 1 1 log
e 1 1 1 lst
e 1 1 1 manifest
e 1 1 1 mdb
e 1 1 1 meta
e 1 1 1 msg
e 1 1 1 nsf
e 1 1 1 o
e 1 1 1 old
e 1 1 1 ora
e 1 1 1 orig
e 1 1 1 out
e 1 1 1 part
e 1 1 1 pdf
e 1 1 1 php
e 1 1 1 php3
e 1 1 1 pl
e 1 1 1 pm
e 1 1 1 png
e 1 1 1 ppt
e 1 1 1 properties
e 1 1 1 py
e 1 1 1 rar
e 1 1 1 rss
e 1 1 1 rtf
e 1 1 1 save
e 1 1 1 sh
e 1 1 1 shtml
e 1 1 1 so
e 1 1 1 sql
e 1 1 1 stackdump
e 1 1 1 swf
e 1 1 1 tar
e 1 1 1 tar.bz2
e 1 1 1 tar.gz
e 1 1 1 temp
e 1 1 1 test
e 1 1 1 tgz
e 1 1 1 tmp
e 1 1 1 trace
e 1 1 1 txt
e 1 1 1 vb
e 1 1 1 vbs
e 1 1 1 ws
e 1 1 1 xls
e 1 1 1 xml
e 1 1 1 xsl
e 1 1 1 zip
w 1 1 1 AggreSpy
w 1 1 1 DMSDump
w 1 1 1 dms0
w 1 1 1 dms
e 1 1 1 7z
w 1 1 1 getjobid
w 1 1 1 oprocmgr-status
w 1 1 1 rwservlet
w 1 1 1 showenv
w 1 1 1 showjobs
w 1 1 1 showmap
w 1 1 1 soaprouter

1892
dictionaries/minimal.wl Normal file

File diff suppressed because it is too large

2455
http_client.c Normal file

File diff suppressed because it is too large

418
http_client.h Normal file

@ -0,0 +1,418 @@
/*
skipfish - high-performance, single-process asynchronous HTTP client
--------------------------------------------------------------------
Author: Michal Zalewski <lcamtuf@google.com>
Copyright 2009, 2010 by Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#ifndef _HAVE_HTTP_CLIENT_H
#define _HAVE_HTTP_CLIENT_H
#include <openssl/ssl.h>
#include "config.h"
#include "types.h"
#include "alloc-inl.h"
#include "string-inl.h"
/* Generic type-name-value array, used for HTTP headers, etc: */
struct param_array {
u8* t; /* Type */
u8** n; /* Name */
u8** v; /* Value */
u32 c; /* Count */
};
/* Flags for http_request protocol: */
#define PROTO_NONE 0 /* Illegal value */
#define PROTO_HTTP 1 /* Plain-text HTTP */
#define PROTO_HTTPS 2 /* TLS/SSL wrapper */
/* Flags for http_request parameter list entries: */
#define PARAM_NONE 0 /* Empty parameter slot */
#define PARAM_PATH 10 /* Path or parametrized path */
#define PARAM_PATH_S 11 /* - Semicolon element */
#define PARAM_PATH_C 12 /* - Comma element */
#define PARAM_PATH_E 13 /* - Exclamation mark element */
#define PARAM_PATH_D 14 /* - Dollar sign element */
#define PATH_SUBTYPE(_x) ((_x) >= PARAM_PATH && (_x) < PARAM_QUERY)
#define PARAM_QUERY 20 /* Query parameter */
#define PARAM_QUERY_S 21 /* - Semicolon element */
#define PARAM_QUERY_C 22 /* - Comma element */
#define PARAM_QUERY_E 23 /* - Exclamation mark element */
#define PARAM_QUERY_D 24 /* - Dollar sign element */
#define QUERY_SUBTYPE(_x) ((_x) >= PARAM_QUERY && (_x) < PARAM_POST)
#define PARAM_POST 50 /* Post parameter */
#define PARAM_POST_F 51 /* - File field */
#define PARAM_POST_O 52 /* - Non-standard (e.g., JSON) */
#define POST_SUBTYPE(_x) ((_x) >= PARAM_POST && (_x) < PARAM_HEADER)
#define PARAM_HEADER 100 /* Generic HTTP header */
#define PARAM_COOKIE 101 /* - HTTP cookie */
#define HEADER_SUBTYPE(_x) ((_x) >= PARAM_HEADER)
struct http_response;
struct queue_entry;
/* HTTP response signature. */
struct http_sig {
u32 code; /* HTTP response code */
u32 data[FP_SIZE]; /* Response fingerprint data */
};
/* HTTP request descriptor: */
struct http_request {
u8 proto; /* Protocol (PROTO_*) */
u8* method; /* HTTP method (GET, POST, ...) */
u8* host; /* Host name */
u32 addr; /* Resolved IP address */
u16 port; /* Port number to connect to */
u8* orig_url; /* Copy of the original URL */
struct param_array par; /* Parameters, headers, cookies */
struct pivot_desc *pivot; /* Pivot descriptor */
u32 user_val; /* Can be used freely */
u8 (*callback)(struct http_request*, struct http_response*);
/* Callback to invoke when done */
struct http_sig same_sig; /* Used by secondary ext fuzz. */
};
/* Flags for http_response completion state: */
#define STATE_NOTINIT 0 /* Request not sent */
#define STATE_CONNECT 1 /* Connecting... */
#define STATE_SEND 2 /* Sending request */
#define STATE_RECEIVE 3 /* Waiting for response */
#define STATE_OK 100 /* Proper fetch */
#define STATE_DNSERR 101 /* DNS error */
#define STATE_LOCALERR 102 /* Socket or routing error */
#define STATE_CONNERR 103 /* Connection failed */
#define STATE_RESPERR 104 /* Response not valid */
#define STATE_SUPPRESS 200 /* Dropped (limits / errors) */
/* Flags for http_response warnings: */
#define WARN_NONE 0 /* No warnings */
#define WARN_PARTIAL 1 /* Incomplete read */
#define WARN_TRAIL 2 /* Trailing request garbage */
#define WARN_CFL_HDR 4 /* Conflicting headers */
/* HTTP response descriptor: */
struct http_response {
u32 state; /* HTTP convo state (STATE_*) */
u32 code; /* HTTP response code */
u8* msg; /* HTTP response message */
u32 warn; /* Warning flags */
u8 cookies_set; /* Sets cookies? */
struct param_array hdr; /* Server header, cookie list */
u32 pay_len; /* Response payload length */
u8* payload; /* Response payload data */
struct http_sig sig; /* Response signature data */
/* Various information populated by content checks: */
u8 sniff_mime_id; /* Sniffed MIME (MIME_*) */
u8 decl_mime_id; /* Declared MIME (MIME_*) */
u8* meta_charset; /* META tag charset value */
u8* header_charset; /* Content-Type charset value */
u8* header_mime; /* Content-Type MIME type */
u8* sniffed_mime; /* Detected MIME type (ref) */
/* Everything below is of interest to scrape_response() only: */
u8 doc_type; /* 0 - tbd, 1 - bin, 2 - ascii */
u8 css_type; /* 0 - tbd, 1 - other, 2 - css */
u8 js_type; /* 0 - tbd, 1 - other, 2 - js */
u8 json_safe; /* 0 - no, 1 - yes */
u8 stuff_checked; /* check_stuff() called? */
u8 scraped; /* scrape_response() called? */
};
/* Open keep-alive connection descriptor: */
struct conn_entry {
s32 fd; /* The actual file descriptor */
u8 proto; /* Protocol (PROTO_*) */
u32 addr; /* Destination IP */
u32 port; /* Destination port */
u8 reused; /* Used for earlier requests? */
u32 req_start; /* Unix time: request start */
u32 last_rw; /* Unix time: last read / write */
SSL_CTX *srv_ctx; /* SSL context */
SSL *srv_ssl;
u8 SSL_rd_w_wr; /* SSL_read() wants to write? */
u8 SSL_wr_w_rd; /* SSL_write() wants to read? */
u8 ssl_checked; /* SSL state checked? */
u8* read_buf; /* Current read buffer */
u32 read_len;
u8* write_buf; /* Pending write buffer */
u32 write_off; /* Current write offset */
u32 write_len;
struct queue_entry* q; /* Current queue entry */
struct conn_entry* prev; /* Previous connection entry */
struct conn_entry* next; /* Next connection entry */
};
/* Request queue descriptor: */
struct queue_entry {
struct http_request* req; /* Request descriptor */
struct http_response* res; /* Response descriptor */
struct conn_entry* c; /* Connection currently used */
struct queue_entry* prev; /* Previous queue entry */
struct queue_entry* next; /* Next queue entry */
};
/* DNS cache item: */
struct dns_entry {
u8* name; /* Name requested */
u32 addr; /* IP address (0 = bad host) */
struct dns_entry* next; /* Next cache entry */
};
/* Simplified macros to manipulate param_arrays: */
#define ADD(_ar,_t,_n,_v) do { \
u32 _cur = (_ar)->c++; \
(_ar)->t = ck_realloc((_ar)->t, (_ar)->c); \
(_ar)->n = ck_realloc((_ar)->n, (_ar)->c * sizeof(u8*)); \
(_ar)->v = ck_realloc((_ar)->v, (_ar)->c * sizeof(u8*)); \
(_ar)->t[_cur] = _t; \
(_ar)->n[_cur] = (_n) ? ck_strdup(_n) : 0; \
(_ar)->v[_cur] = (_v) ? ck_strdup(_v) : 0; \
} while (0)
#define FREE(_ar) do { \
while ((_ar)->c--) { \
free((_ar)->n[(_ar)->c]); \
free((_ar)->v[(_ar)->c]); \
} \
free((_ar)->t); \
free((_ar)->n); \
free((_ar)->v); \
} while (0)
/* Extracts parameter value from param_array. Name is matched if
non-NULL. Returns pointer to value data, not a duplicate string;
NULL if no match found. */
u8* get_value(u8 type, u8* name, u32 offset, struct param_array* par);
/* Inserts or overwrites parameter value in param_array. If offset
== -1, will append parameter to list. Duplicates strings,
name and val can be NULL. */
void set_value(u8 type, u8* name, u8* val, s32 offset, struct param_array* par);
/* Simplified macros for value table access: */
#define GET_HDR(_name, _p) get_value(PARAM_HEADER, _name, 0, _p)
#define SET_HDR(_name, _val, _p) set_value(PARAM_HEADER, _name, _val, -1, _p)
#define GET_CK(_name, _p) get_value(PARAM_COOKIE, _name, 0, _p)
#define SET_CK(_name, _val, _p) set_value(PARAM_COOKIE, _name, _val, 0, _p)
void tokenize_path(u8* str, struct http_request* req, u8 add_slash);
/* Convert a fully-qualified or relative URL string to a proper http_request
representation. Returns 0 on success, 1 on format error. */
u8 parse_url(u8* url, struct http_request* req, struct http_request* ref);
/* URL-decodes a string. 'Plus' parameter governs the behavior on +
signs (as they have a special meaning only in query params, not in path). */
u8* url_decode_token(u8* str, u32 len, u8 plus);
/* URL-encodes a string according to custom rules. The assumption here is that
the data is already tokenized as "special" boundaries such as ?, =, &, /,
;, so these characters must always be escaped if present in tokens. We
otherwise let pretty much everything else go through, as it may help with
the exploitation of certain vulnerabilities. */
u8* url_encode_token(u8* str, u32 len);
/* Reconstructs URI from http_request data. Includes protocol and host
if with_host is non-zero. */
u8* serialize_path(struct http_request* req, u8 with_host, u8 with_post);
/* Looks up IP for a particular host, returns data in network order.
Uses standard resolver, so it is slow and blocking, but we only
expect to call it a couple of times. */
u32 maybe_lookup_host(u8* name);
/* Creates an ad hoc DNS cache entry, to override NS lookups. */
void fake_host(u8* name, u32 addr);
/* Schedules a new asynchronous request; req->callback() will be invoked when
the request is completed. */
void async_request(struct http_request* req);
/* Prepares a serialized HTTP buffer to be sent over the network. */
u8* build_request_data(struct http_request* req);
/* Parses a network buffer containing raw HTTP response received over the
network ('more' == the socket is still available for reading). Returns 0
if response parses OK, 1 if more data should be read from the socket,
2 if the response seems invalid. */
u8 parse_response(struct http_request* req, struct http_response* res, u8* data,
u32 data_len, u8 more);
/* Processes the queue. Returns the number of queue entries remaining,
0 if none. Will do a blocking select() to wait for socket state changes
(or timeouts) if no data available to process. This is the main
routine for the scanning loop. */
u32 next_from_queue(void);
/* Dumps HTTP request stats, for debugging purposes: */
void dump_http_request(struct http_request* r);
/* Dumps HTTP response stats, for debugging purposes: */
void dump_http_response(struct http_response* r);
/* Fingerprints a response: */
void fprint_response(struct http_response* res);
/* Performs a deep free() of struct http_request */
void destroy_request(struct http_request* req);
/* Performs a deep free() of struct http_response */
void destroy_response(struct http_response* res);
/* Creates a working copy of a request. If all is 0, does not copy
path, query parameters, or POST data (but still copies headers). */
struct http_request* req_copy(struct http_request* req, struct pivot_desc* pv,
u8 all);
/* Creates a copy of a response. */
struct http_response* res_copy(struct http_response* res);
/* Various settings and counters exported to other modules: */
extern u32 max_connections,
max_conn_host,
max_requests,
max_fail,
idle_tmout,
resp_tmout,
rw_tmout,
size_limit,
req_errors_net,
req_errors_http,
req_errors_cur,
req_count,
req_dropped,
req_retried,
url_scope,
conn_count,
conn_idle_tmout,
conn_busy_tmout,
conn_failed,
queue_cur;
extern u64 bytes_sent,
bytes_recv,
bytes_deflated,
bytes_inflated;
extern u8 ignore_cookies;
/* Flags for browser type: */
#define BROWSER_FAST 0 /* Minimal HTTP headers */
#define BROWSER_MSIE 1 /* Try to mimic MSIE */
#define BROWSER_FFOX 2 /* Try to mimic Firefox */
extern u8 browser_type;
/* Flags for authentication type: */
#define AUTH_NONE 0 /* No authentication */
#define AUTH_BASIC 1 /* 'Basic' HTTP auth */
extern u8 auth_type;
extern u8 *auth_user,
*auth_pass;
/* Global HTTP cookies, extra headers: */
extern struct param_array global_http_par;
/* Destroys http state information, for memory profiling. */
void destroy_http();
/* Shows some pretty statistics. */
void http_stats(u64 st_time);
#endif /* !_HAVE_HTTP_CLIENT_H */

779
report.c Normal file

@ -0,0 +1,779 @@
/*
skipfish - post-processing and reporting
----------------------------------------
Author: Michal Zalewski <lcamtuf@google.com>
Copyright 2009, 2010 by Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <time.h>
#include <dirent.h>
#include <sys/fcntl.h>
#include "debug.h"
#include "config.h"
#include "types.h"
#include "http_client.h"
#include "database.h"
#include "crawler.h"
#include "analysis.h"
/* Pivot and issue signature data. */
struct p_sig_desc {
u8 type; /* Pivot type */
struct http_sig* res_sig; /* Response signature */
u32 issue_sig; /* Issues fingerprint */
u32 child_sig; /* Children fingerprint */
};
static struct p_sig_desc* p_sig;
static u32 p_sig_cnt;
u8 suppress_dupes;
/* Response, issue sample data. */
struct mime_sample_desc {
u8* det_mime;
struct http_request** req;
struct http_response** res;
u32 sample_cnt;
};
struct issue_sample_desc {
u32 type;
struct issue_desc** i;
u32 sample_cnt;
};
static struct mime_sample_desc* m_samp;
static struct issue_sample_desc* i_samp;
static u32 m_samp_cnt, i_samp_cnt;
/* qsort() helper for sort_annotate_pivot(). */
static int pivot_compar(const void* par1, const void* par2) {
const struct pivot_desc *p1 = *(struct pivot_desc**)par1,
*p2 = *(struct pivot_desc**)par2;
return strcasecmp((char*)p1->name, (char*)p2->name);
}
static int issue_compar(const void* par1, const void* par2) {
const struct issue_desc *i1 = par1, *i2 = par2;
return i2->type - i1->type;
}
/* Recursively annotates and sorts pivots. */
static void sort_annotate_pivot(struct pivot_desc* pv) {
u32 i, path_child = 0;
static u32 proc_cnt;
u8 *q1, *q2;
/* Add notes to all non-dir nodes with dir or file children... */
for (i=0;i<pv->child_cnt;i++) {
if (pv->child[i]->type == PIVOT_FILE || pv->child[i]->type == PIVOT_DIR) path_child = 1;
sort_annotate_pivot(pv->child[i]);
}
if (pv->type != PIVOT_DIR && pv->type != PIVOT_SERV &&
pv->type != PIVOT_ROOT && path_child)
problem(PROB_NOT_DIR, pv->req, pv->res, 0, pv, 0);
/* Non-parametric nodes with digits in the name were not brute-forced,
but the user might be interested in doing so. Skip images here. */
if (pv->fuzz_par == -1 && pv->res &&
(pv->res->sniff_mime_id < MIME_IMG_JPEG ||
pv->res->sniff_mime_id > MIME_AV_WMEDIA) &&
(pv->type == PIVOT_DIR || pv->type == PIVOT_FILE ||
pv->type == PIVOT_PATHINFO) && !pv->missing) {
i = strlen((char*)pv->name);
while (i--)
if (isdigit(pv->name[i])) {
problem(PROB_FUZZ_DIGIT, pv->req, pv->res, 0, pv, 0);
break;
}
}
/* Parametric nodes that seem to contain queries in parameters, and are not
marked as bogus_par, should be marked as dangerous. */
if (pv->fuzz_par != -1 && !pv->bogus_par &&
(((q1 = (u8*)strchr((char*)pv->req->par.v[pv->fuzz_par], '(')) &&
(q2 = (u8*)strchr((char*)pv->req->par.v[pv->fuzz_par], ')')) && q1 < q2)
||
((inl_strcasestr(pv->req->par.v[pv->fuzz_par], (u8*)"SELECT ") ||
inl_strcasestr(pv->req->par.v[pv->fuzz_par], (u8*)"DELETE ") ) &&
inl_strcasestr(pv->req->par.v[pv->fuzz_par], (u8*)" FROM ")) ||
(inl_strcasestr(pv->req->par.v[pv->fuzz_par], (u8*)"UPDATE ") ||
inl_strcasestr(pv->req->par.v[pv->fuzz_par], (u8*)" WHERE ")) ||
inl_strcasestr(pv->req->par.v[pv->fuzz_par], (u8*)"DROP TABLE ") ||
inl_strcasestr(pv->req->par.v[pv->fuzz_par], (u8*)" ORDER BY ")))
problem(PROB_SQL_PARAM, pv->req, pv->res, 0, pv, 0);
/* Sort children nodes and issues as appropriate. */
if (pv->child_cnt > 1)
qsort(pv->child, pv->child_cnt, sizeof(struct pivot_desc*), pivot_compar);
if (pv->issue_cnt > 1)
qsort(pv->issue, pv->issue_cnt, sizeof(struct issue_desc), issue_compar);
if ((!(proc_cnt++ % 50)) || pv->type == PIVOT_ROOT) {
SAY(cLGN "\r[+] " cNOR "Sorting and annotating crawl nodes: %u", proc_cnt);
fflush(0);
}
}
/* Issue extra hashing helper. */
static inline u32 hash_extra(u8* str) {
register u32 ret = 0;
register u8 cur;
if (str)
while ((cur=*str)) {
ret = ~ret ^ (cur) ^
(cur << 5) ^ (~cur >> 5) ^
(cur << 10) ^ (~cur << 15) ^
(cur << 20) ^ (~cur << 25) ^
(cur << 30);
str++;
}
return ret;
}
/* Registers a new pivot signature, or updates an existing one. */
static void maybe_add_sig(struct pivot_desc* pv) {
u32 i, issue_sig = ~pv->issue_cnt,
child_sig = ~pv->child_cnt;
if (!pv->res) return;
/* Compute a rough children node signature based on children types. */
for (i=0;i<pv->child_cnt;i++)
child_sig ^= (hash_extra(pv->child[i]->name) ^
pv->child[i]->type) << (i % 16);
/* Do the same for all recorded issues. */
for (i=0;i<pv->issue_cnt;i++)
issue_sig ^= (hash_extra(pv->issue[i].extra) ^
pv->issue[i].type) << (i % 16);
/* Assign a simplified signature to the pivot. */
pv->pv_sig = (pv->type << 16) ^ ~child_sig ^ issue_sig;
/* See if a matching signature already exists. */
for (i=0;i<p_sig_cnt;i++)
if (p_sig[i].type == pv->type && p_sig[i].issue_sig == issue_sig &&
p_sig[i].child_sig == child_sig &&
same_page(p_sig[i].res_sig, &pv->res->sig)) {
pv->dupe = 1;
return;
}
/* No match - create a new one. */
p_sig = ck_realloc(p_sig, (p_sig_cnt + 1) * sizeof(struct p_sig_desc));
p_sig[p_sig_cnt].type = pv->type;
p_sig[p_sig_cnt].res_sig = &pv->res->sig;
p_sig[p_sig_cnt].issue_sig = issue_sig;
p_sig[p_sig_cnt].child_sig = child_sig;
p_sig_cnt++;
}
/* Recursively collects unique signatures for pivots. */
static void collect_signatures(struct pivot_desc* pv) {
u32 i;
static u32 proc_cnt;
maybe_add_sig(pv);
for (i=0;i<pv->child_cnt;i++) collect_signatures(pv->child[i]);
if ((!(proc_cnt++ % 50)) || pv->type == PIVOT_ROOT) {
SAY(cLGN "\r[+] " cNOR "Looking for duplicate entries: %u", proc_cnt);
fflush(0);
}
}
/* Destroys signature data (for memory profiling purposes). */
void destroy_signatures(void) {
u32 i;
ck_free(p_sig);
for (i=0;i<m_samp_cnt;i++) {
ck_free(m_samp[i].req);
ck_free(m_samp[i].res);
}
for (i=0;i<i_samp_cnt;i++)
ck_free(i_samp[i].i);
ck_free(m_samp);
ck_free(i_samp);
}
/* Prepares issue, pivot stats, backtracing through all children.
Do not count nodes that seem duplicate. */
static void compute_counts(struct pivot_desc* pv) {
u32 i;
struct pivot_desc* tmp = pv->parent;
static u32 proc_cnt;
for (i=0;i<pv->child_cnt;i++) compute_counts(pv->child[i]);
if (pv->dupe) return;
while (tmp) {
tmp->total_child_cnt++;
tmp = tmp->parent;
}
for (i=0;i<pv->issue_cnt;i++) {
u8 sev = PSEV(pv->issue[i].type);
tmp = pv;
while (tmp) {
tmp->total_issues[sev]++;
tmp = tmp->parent;
}
}
if ((!(proc_cnt++ % 50)) || pv->type == PIVOT_ROOT) {
SAY(cLGN "\r[+] " cNOR "Counting unique issues: %u", proc_cnt);
fflush(0);
}
}
/* Helper to JS-escape data. Static buffer, will be destroyed on
subsequent calls. */
static inline u8* js_escape(u8* str) {
u32 len;
static u8* ret;
u8* opos;
if (!str) return (u8*)"[none]";
len = strlen((char*)str);
if (ret) free(ret);
opos = ret = __DFL_ck_alloc(len * 4 + 1);
while (len--) {
if (*str > 0x1f && *str < 0x80 && !strchr("<>\\'\"", *str)) {
*(opos++) = *(str++);
} else {
sprintf((char*)opos, "\\x%02x", *(str++));
opos += 4;
}
}
*opos = 0;
return ret;
}
static void output_scan_info(u64 scan_time, u32 seed) {
FILE* f;
time_t t = time(NULL);
u8* ct = (u8*)ctime(&t);
if (isspace(ct[strlen((char*)ct)-1]))
ct[strlen((char*)ct)-1] = 0;
f = fopen("summary.js", "w");
if (!f) PFATAL("Cannot open 'summary.js'");
fprintf(f, "var sf_version = '%s';\n", VERSION);
fprintf(f, "var scan_date = '%s';\n", js_escape(ct));
fprintf(f, "var scan_seed = '0x%08x';\n", seed);
fprintf(f, "var scan_ms = %llu;\n", (unsigned long long)scan_time);
fclose(f);
}
/* Helper to describe response data in JS format. */
static void describe_res(FILE* f, struct http_response* res) {
if (!res) {
fprintf(f, "'fetched': false, 'error': 'Content not fetched'");
return;
}
switch (res->state) {
case 0 ... STATE_OK - 1:
fprintf(f, "'fetched': false, 'error': '(Reported while fetch in progress)'");
break;
case STATE_OK:
fprintf(f, "'fetched': true, 'code': %u, 'len': %u, 'decl_mime': '%s', ",
res->code, res->pay_len,
js_escape(res->header_mime));
fprintf(f, "'sniff_mime': '%s', 'cset': '%s'",
res->sniffed_mime ? res->sniffed_mime : (u8*)"[none]",
js_escape(res->header_charset ? res->header_charset
: res->meta_charset));
break;
case STATE_DNSERR:
fprintf(f, "'fetched': false, 'error': 'DNS error'");
break;
case STATE_LOCALERR:
fprintf(f, "'fetched': false, 'error': 'Local network error'");
break;
case STATE_CONNERR:
fprintf(f, "'fetched': false, 'error': 'Connection error'");
break;
case STATE_RESPERR:
fprintf(f, "'fetched': false, 'error': 'Malformed HTTP response'");
break;
case STATE_SUPPRESS:
fprintf(f, "'fetched': false, 'error': 'Limits exceeded'");
break;
default:
fprintf(f, "'fetched': false, 'error': 'Unknown error'");
}
}
/* Helper to save request, response data. */
static void save_req_res(struct http_request* req, struct http_response* res, u8 sample) {
FILE* f;
if (req) {
u8* rd = build_request_data(req);
f = fopen("request.dat", "w");
if (!f) PFATAL("Cannot create 'request.dat'");
fwrite(rd, strlen((char*)rd), 1, f);
fclose(f);
ck_free(rd);
}
if (res && res->state == STATE_OK) {
u32 i;
f = fopen("response.dat", "w");
if (!f) PFATAL("Cannot create 'response.dat'");
fprintf(f, "HTTP/1.1 %u %s\n", res->code, res->msg);
for (i=0;i<res->hdr.c;i++)
if (res->hdr.t[i] == PARAM_HEADER)
fprintf(f, "%s: %s\n", res->hdr.n[i], res->hdr.v[i]);
else
fprintf(f, "Set-Cookie: %s=%s\n", res->hdr.n[i], res->hdr.v[i]);
fprintf(f, "\n");
fwrite(res->payload, res->pay_len, 1, f);
fclose(f);
/* Also collect MIME samples at this point. */
if (!req->pivot->dupe && res->sniffed_mime && sample) {
for (i=0;i<m_samp_cnt;i++)
if (!strcmp((char*)m_samp[i].det_mime, (char*)res->sniffed_mime)) break;
if (i == m_samp_cnt) {
m_samp = ck_realloc(m_samp, (i + 1) * sizeof(struct mime_sample_desc));
m_samp[i].det_mime = res->sniffed_mime;
m_samp_cnt++;
} else {
u32 c;
/* If we already have something that looks very much the same on the
list, don't bother reporting it again. */
for (c=0;c<m_samp[i].sample_cnt;c++)
if (same_page(&m_samp[i].res[c]->sig, &res->sig)) return;
}
m_samp[i].req = ck_realloc(m_samp[i].req, (m_samp[i].sample_cnt + 1) *
sizeof(struct http_request*));
m_samp[i].res = ck_realloc(m_samp[i].res, (m_samp[i].sample_cnt + 1) *
sizeof(struct http_response*));
m_samp[i].req[m_samp[i].sample_cnt] = req;
m_samp[i].res[m_samp[i].sample_cnt] = res;
m_samp[i].sample_cnt++;
}
}
}
/* Dumps the actual crawl data. */
static void output_crawl_tree(struct pivot_desc* pv) {
u32 i;
FILE* f;
static u32 proc_cnt;
/* Save request, response. */
save_req_res(pv->req, pv->res, 1);
/* Write children information. Don't crawl children just yet,
because we could run out of file descriptors on a particularly
deep tree if we keep one open and recurse. */
f = fopen("child_index.js", "w");
if (!f) PFATAL("Cannot create 'child_index.js'.");
fprintf(f, "var child = [\n");
for (i=0;i<pv->child_cnt;i++) {
u8 tmp[32];
u8* p;
if (suppress_dupes && pv->child[i]->dupe &&
!pv->child[i]->total_child_cnt) continue;
/* Also completely suppress nodes that seem identical to the
previous one, and have a common prefix (as this implies
a mod_rewrite or htaccess filter). */
if (i && pv->child[i-1]->pv_sig == pv->child[i]->pv_sig) {
u8 *pn = pv->child[i-1]->name, *cn = pv->child[i]->name;
u32 pnd = strcspn((char*)pn, ".");
if (!strncasecmp((char*)pn, (char*)cn, pnd)) continue;
}
sprintf((char*)tmp, "c%u", i);
fprintf(f, " { 'dupe': %s, 'type': %u, 'name': '%s%s",
pv->child[i]->dupe ? "true" : "false",
pv->child[i]->type, js_escape(pv->child[i]->name),
(pv->child[i]->fuzz_par == -1 || pv->child[i]->type == PIVOT_VALUE)
? (u8*)"" : (u8*)"=");
fprintf(f, "%s', 'dir': '%s', 'linked': %d, ",
(pv->child[i]->fuzz_par == -1 || pv->child[i]->type == PIVOT_VALUE)
? (u8*)"" :
js_escape(pv->child[i]->req->par.v[pv->child[i]->fuzz_par]),
tmp, pv->child[i]->linked);
p = serialize_path(pv->child[i]->req, 1, 1);
fprintf(f, "'url': '%s', ", js_escape(p));
ck_free(p);
describe_res(f, pv->child[i]->res);
fprintf(f,", 'missing': %s, 'csens': %s, 'child_cnt': %u, "
"'issue_cnt': [ %u, %u, %u, %u, %u ] }%s\n",
pv->child[i]->missing ? "true" : "false",
pv->child[i]->csens ? "true" : "false",
pv->child[i]->total_child_cnt, pv->child[i]->total_issues[1],
pv->child[i]->total_issues[2], pv->child[i]->total_issues[3],
pv->child[i]->total_issues[4], pv->child[i]->total_issues[5],
(i == pv->child_cnt - 1) ? "" : ",");
}
fprintf(f, "];\n");
fclose(f);
/* Write issue index, issue dumps. */
f = fopen("issue_index.js", "w");
if (!f) PFATAL("Cannot create 'issue_index.js'.");
fprintf(f, "var issue = [\n");
for (i=0;i<pv->issue_cnt;i++) {
u8 tmp[32];
sprintf((char*)tmp, "i%u", i);
fprintf(f, " { 'severity': %u, 'type': %u, 'extra': '%s', ",
PSEV(pv->issue[i].type) - 1, pv->issue[i].type,
pv->issue[i].extra ? js_escape(pv->issue[i].extra) : (u8*)"");
describe_res(f, pv->issue[i].res);
fprintf(f, ", 'dir': '%s' }%s\n",
tmp, (i == pv->issue_cnt - 1) ? "" : ",");
if (mkdir((char*)tmp, 0755)) PFATAL("Cannot create '%s'.", tmp);
chdir((char*)tmp);
save_req_res(pv->issue[i].req, pv->issue[i].res, 1);
chdir((char*)"..");
/* Issue samples next! */
if (!pv->dupe) {
u32 c;
for (c=0;c<i_samp_cnt;c++)
if (i_samp[c].type == pv->issue[i].type) break;
if (c == i_samp_cnt) {
i_samp = ck_realloc(i_samp, (c + 1) * sizeof(struct issue_sample_desc));
i_samp_cnt++;
i_samp[c].type = pv->issue[i].type;
}
i_samp[c].i = ck_realloc(i_samp[c].i, (i_samp[c].sample_cnt + 1) *
sizeof(struct issue_desc*));
i_samp[c].i[i_samp[c].sample_cnt] = &pv->issue[i];
i_samp[c].sample_cnt++;
}
}
fprintf(f, "];\n");
fclose(f);
/* Actually crawl children. */
for (i=0;i<pv->child_cnt;i++) {
u8 tmp[32];
sprintf((char*)tmp, "c%u", i);
if (mkdir((char*)tmp, 0755)) PFATAL("Cannot create '%s'.", tmp);
chdir((char*)tmp);
output_crawl_tree(pv->child[i]);
chdir((char*)"..");
}
if ((!(proc_cnt++ % 50)) || pv->type == PIVOT_ROOT) {
SAY(cLGN "\r[+] " cNOR "Writing crawl tree: %u", proc_cnt);
fflush(0);
}
}
/* Writes previews of MIME types, issues. */
static int m_samp_qsort(const void* ptr1, const void* ptr2) {
const struct mime_sample_desc *p1 = ptr1, *p2 = ptr2;
return strcasecmp((char*)p1->det_mime, (char*)p2->det_mime);
}
static int i_samp_qsort(const void* ptr1, const void* ptr2) {
const struct issue_sample_desc *p1 = ptr1, *p2 = ptr2;
return p2->type - p1->type;
}
static void output_summary_views() {
u32 i;
FILE* f;
f = fopen("samples.js", "w");
if (!f) PFATAL("Cannot create 'samples.js'.");
qsort(m_samp, m_samp_cnt, sizeof(struct mime_sample_desc), m_samp_qsort);
qsort(i_samp, i_samp_cnt, sizeof(struct issue_sample_desc), i_samp_qsort);
fprintf(f, "var mime_samples = [\n");
for (i=0;i<m_samp_cnt;i++) {
u32 c;
u8 tmp[32];
u32 use_samp = (m_samp[i].sample_cnt > MAX_SAMPLES ? MAX_SAMPLES :
m_samp[i].sample_cnt);
sprintf((char*)tmp, "_m%u", i);
if (mkdir((char*)tmp, 0755)) PFATAL("Cannot create '%s'.", tmp);
chdir((char*)tmp);
fprintf(f, " { 'mime': '%s', 'samples': [\n", m_samp[i].det_mime);
for (c=0;c<use_samp;c++) {
u8 tmp2[32];
u8* p = serialize_path(m_samp[i].req[c], 1, 0);
sprintf((char*)tmp2, "%u", c);
if (mkdir((char*)tmp2, 0755)) PFATAL("Cannot create '%s'.", tmp2);
chdir((char*)tmp2);
save_req_res(m_samp[i].req[c], m_samp[i].res[c], 0);
chdir("..");
fprintf(f, " { 'url': '%s', 'dir': '%s/%s', 'linked': %d, 'len': %d"
" }%s\n", js_escape(p), tmp, tmp2,
m_samp[i].req[c]->pivot->linked, m_samp[i].res[c]->pay_len,
(c == use_samp - 1) ? " ]" : ",");
ck_free(p);
}
fprintf(f, " }%s\n", (i == m_samp_cnt - 1) ? "" : ",");
chdir("..");
}
fprintf(f, "];\n\n");
fprintf(f, "var issue_samples = [\n");
for (i=0;i<i_samp_cnt;i++) {
u32 c;
u8 tmp[32];
u32 use_samp = (i_samp[i].sample_cnt > MAX_SAMPLES ? MAX_SAMPLES :
i_samp[i].sample_cnt);
sprintf((char*)tmp, "_i%u", i);
if (mkdir((char*)tmp, 0755)) PFATAL("Cannot create '%s'.", tmp);
chdir((char*)tmp);
fprintf(f, " { 'severity': %d, 'type': %d, 'samples': [\n",
PSEV(i_samp[i].type) - 1, i_samp[i].type);
for (c=0;c<use_samp;c++) {
u8 tmp2[32];
u8* p = serialize_path(i_samp[i].i[c]->req, 1, 0);
sprintf((char*)tmp2, "%u", c);
if (mkdir((char*)tmp2, 0755)) PFATAL("Cannot create '%s'.", tmp2);
chdir((char*)tmp2);
save_req_res(i_samp[i].i[c]->req, i_samp[i].i[c]->res, 0);
chdir("..");
fprintf(f, " { 'url': '%s', ", js_escape(p));
fprintf(f, "'extra': '%s', 'dir': '%s/%s' }%s\n",
i_samp[i].i[c]->extra ? js_escape(i_samp[i].i[c]->extra) :
(u8*)"", tmp, tmp2,
(c == use_samp - 1) ? " ]" : ",");
ck_free(p);
}
fprintf(f, " }%s\n", (i == i_samp_cnt - 1) ? "" : ",");
chdir("..");
}
fprintf(f, "];\n\n");
fclose(f);
}
/* Copies over assets/... to target directory. */
static u8* ca_out_dir;
static int copy_asset(const struct dirent* d) {
u8 *itmp, *otmp, buf[1024];
s32 i, o;
if (d->d_name[0] == '.' || !strcmp(d->d_name, "COPYING")) return 0;
itmp = ck_alloc(6 + strlen(d->d_name) + 2);
sprintf((char*)itmp, "assets/%s", d->d_name);
i = open((char*)itmp, O_RDONLY);
otmp = ck_alloc(strlen((char*)ca_out_dir) + strlen(d->d_name) + 2);
sprintf((char*)otmp, "%s/%s", ca_out_dir, d->d_name);
o = open((char*)otmp, O_WRONLY | O_CREAT | O_EXCL, 0644);
if (i >= 0 && o >= 0) {
s32 c;
while ((c = read(i, buf, 1024)) > 0) write(o, buf, c);
}
close(i);
close(o);
ck_free(itmp);
ck_free(otmp);
return 0;
}
static void copy_static_code(u8* out_dir) {
struct dirent** d;
ca_out_dir = out_dir;
scandir("assets", &d, copy_asset, NULL);
}
/* Writes report to index.html in the current directory. Will create
subdirectories, helper files, etc. */
void write_report(u8* out_dir, u64 scan_time, u32 seed) {
SAY(cLGN "[+] " cNOR "Copying static resources...\n");
copy_static_code(out_dir);
if (chdir((char*)out_dir)) PFATAL("Cannot chdir to '%s'", out_dir);
sort_annotate_pivot(&root_pivot);
SAY("\n");
collect_signatures(&root_pivot);
SAY("\n");
compute_counts(&root_pivot);
SAY("\n");
SAY(cLGN "[+] " cNOR "Writing scan description...\n");
output_scan_info(scan_time, seed);
output_crawl_tree(&root_pivot);
SAY("\n");
SAY(cLGN "[+] " cNOR "Generating summary views...\n");
output_summary_views();
SAY(cLGN "[+] " cNOR "Report saved to '" cLBL "%s/index.html" cNOR "' ["
cLBL "0x%08x" cNOR "].\n", out_dir, seed);
}

38
report.h Normal file

@@ -0,0 +1,38 @@
/*
skipfish - post-processing and reporting
----------------------------------------
Author: Michal Zalewski <lcamtuf@google.com>
Copyright 2009, 2010 by Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#ifndef _HAVE_REPORT_H
#define _HAVE_REPORT_H
#include "types.h"
extern u8 suppress_dupes;
/* Writes report to index.html in the current directory. Will create
subdirectories, helper files, etc. */
void write_report(u8* out_dir, u64 scan_time, u32 seed);
/* Destroys all signatures created for pivot and issue clustering purposes. */
void destroy_signatures(void);
#endif /* !_HAVE_REPORT_H */

84
same_test.c Normal file

@@ -0,0 +1,84 @@
/*
skipfish - same_page() test utility
-----------------------------------
Author: Michal Zalewski <lcamtuf@google.com>
Copyright 2009, 2010 by Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
#include <getopt.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/time.h>
#include <time.h>
#include <sys/stat.h>
#include "types.h"
#include "alloc-inl.h"
#include "string-inl.h"
#include "crawler.h"
#include "analysis.h"
#include "database.h"
#include "http_client.h"
#include "report.h"
#ifdef DEBUG_ALLOCATOR
struct __AD_trk_obj* __AD_trk[ALLOC_BUCKETS];
u32 __AD_trk_cnt[ALLOC_BUCKETS];
#endif /* DEBUG_ALLOCATOR */
#define MAX_LEN (1024*1024)
u8 p1[MAX_LEN], p2[MAX_LEN];
int main(int argc, char** argv) {
static struct http_response r1, r2;
s32 l1, l2;
l1 = read(8, p1, MAX_LEN);
l2 = read(9, p2, MAX_LEN);
if (l1 < 0 || l2 < 0)
FATAL("Usage: ./same_test 8<file1 9<file2");
r1.code = 123;
r2.code = 123;
r1.payload = p1;
r2.payload = p2;
r1.pay_len = l1;
r2.pay_len = l2;
fprint_response(&r1);
fprint_response(&r2);
debug_same_page(&r1.sig, &r2.sig);
if (same_page(&r1.sig, &r2.sig))
DEBUG("=== PAGES SEEM THE SAME ===\n");
else
DEBUG("=== PAGES ARE DIFFERENT ===\n");
return 0;
}

457
skipfish.c Normal file

@@ -0,0 +1,457 @@
/*
skipfish - main entry point
---------------------------
Author: Michal Zalewski <lcamtuf@google.com>
Copyright 2009, 2010 by Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
#include <getopt.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/time.h>
#include <time.h>
#include <sys/stat.h>
#include "types.h"
#include "alloc-inl.h"
#include "string-inl.h"
#include "crawler.h"
#include "analysis.h"
#include "database.h"
#include "http_client.h"
#include "report.h"
#ifdef DEBUG_ALLOCATOR
struct __AD_trk_obj* __AD_trk[ALLOC_BUCKETS];
u32 __AD_trk_cnt[ALLOC_BUCKETS];
#endif /* DEBUG_ALLOCATOR */
void usage(char* argv0) {
SAY("Usage: %s [ options ... ] -o output_dir start_url [ start_url2 ... ]\n\n"
"Authentication and access options:\n\n"
" -A user:pass - use specified HTTP authentication credentials\n"
" -F host=IP - pretend that 'host' resolves to 'IP'\n"
" -C name=val - append a custom cookie to all requests\n"
" -H name=val - append a custom HTTP header to all requests\n"
" -b (i|f) - use headers consistent with MSIE / Firefox\n"
" -N - do not accept any new cookies\n\n"
"Crawl scope options:\n\n"
" -d max_depth - maximum crawl tree depth (%u)\n"
" -c max_child - maximum children to index per node (%u)\n"
" -r r_limit - max total number of requests to send (%u)\n"
" -p crawl%% - node and link crawl probability (100%%)\n"
" -q hex - repeat probabilistic scan with given seed\n"
" -I string - only follow URLs matching 'string'\n"
" -X string - exclude URLs matching 'string'\n"
" -S string - exclude pages containing 'string'\n"
" -D domain - crawl cross-site links to another domain\n"
" -B domain - trust, but do not crawl, another domain\n"
" -O - do not submit any forms\n"
" -P - do not parse HTML, etc, to find new links\n\n"
"Reporting options:\n\n"
" -o dir - write output to specified directory (required)\n"
" -J - be less noisy about MIME / charset mismatches\n"
" -M - log warnings about mixed content\n"
" -E - log all HTTP/1.0 / HTTP/1.1 caching intent mismatches\n"
" -U - log all external URLs and e-mails seen\n"
" -Q - completely suppress duplicate nodes in reports\n\n"
"Dictionary management options:\n\n"
" -W wordlist - load an alternative wordlist (%s)\n"
" -L - do not auto-learn new keywords for the site\n"
" -V - do not update wordlist based on scan results\n"
" -Y - do not fuzz extensions in directory brute-force\n"
" -R age - purge words hit more than 'age' scans ago\n"
" -T name=val - add new form auto-fill rule\n"
" -G max_guess - maximum number of keyword guesses to keep (%d)\n\n"
"Performance settings:\n\n"
" -g max_conn - max simultaneous TCP connections, global (%u)\n"
" -m host_conn - max simultaneous connections, per target IP (%u)\n"
" -f max_fail - max number of consecutive HTTP errors (%u)\n"
" -t req_tmout - total request response timeout (%u s)\n"
" -w rw_tmout - individual network I/O timeout (%u s)\n"
" -i idle_tmout - timeout on idle HTTP connections (%u s)\n"
" -s s_limit - response size limit (%u B)\n\n"
"Send comments and complaints to <lcamtuf@google.com>.\n", argv0,
max_depth, max_children, max_requests, DEF_WORDLIST, MAX_GUESSES,
max_connections, max_conn_host, max_fail, resp_tmout, rw_tmout,
idle_tmout, size_limit);
exit(1);
}
/* Ctrl-C handler... */
static u8 stop_soon;
static void ctrlc_handler(int sig) {
stop_soon = 1;
}
/* Main entry point */
int main(int argc, char** argv) {
s32 opt;
u32 loop_cnt = 0, purge_age = 0, seed;
u8 dont_save_words = 0, show_once = 0;
u8 *wordlist = (u8*)DEF_WORDLIST, *output_dir = NULL;
struct timeval tv;
u64 st_time, en_time;
signal(SIGINT, ctrlc_handler);
signal(SIGPIPE, SIG_IGN);
SSL_library_init();
/* Come up with a quasi-decent random seed. */
gettimeofday(&tv, NULL);
seed = tv.tv_usec ^ (tv.tv_sec << 16) ^ getpid();
SAY("skipfish version " VERSION " by <lcamtuf@google.com>\n");
while ((opt = getopt(argc, argv,
"+A:F:C:H:b:Nd:c:r:p:I:X:S:D:PJOYQMUEW:LVT:G:R:B:q:g:m:f:t:w:i:s:o:")) > 0)
switch (opt) {
case 'A': {
u8* x = (u8*)strchr(optarg, ':');
if (!x) FATAL("Credentials must be in 'user:pass' form.");
*(x++) = 0;
auth_user = (u8*)optarg;
auth_pass = x;
auth_type = AUTH_BASIC;
break;
}
case 'F': {
u8* x = (u8*)strchr(optarg, '=');
u32 fake_addr;
if (!x) FATAL("Fake mappings must be in 'host=IP' form.");
*x = 0;
fake_addr = inet_addr((char*)x + 1);
if (fake_addr == (u32)-1)
FATAL("Could not parse IP address '%s'.", x + 1);
fake_host((u8*)optarg, fake_addr);
break;
}
case 'H': {
u8* x = (u8*)strchr(optarg, '=');
if (!x) FATAL("Extra headers must be in 'name=value' form.");
*x = 0;
if (!strcasecmp(optarg, "Cookie"))
FATAL("Do not use -H to set cookies (try -C instead).");
SET_HDR((u8*)optarg, x + 1, &global_http_par);
break;
}
case 'C': {
u8* x = (u8*)strchr(optarg, '=');
if (!x) FATAL("Cookies must be in 'name=value' form.");
if (strchr(optarg, ';'))
FATAL("Split multiple cookies into separate -C options.");
*x = 0;
SET_CK((u8*)optarg, x + 1, &global_http_par);
break;
}
case 'D':
if (*optarg == '*') optarg++;
APPEND_FILTER(allow_domains, num_allow_domains, optarg);
break;
case 'B':
if (*optarg == '*') optarg++;
APPEND_FILTER(trust_domains, num_trust_domains, optarg);
break;
case 'I':
if (*optarg == '*') optarg++;
APPEND_FILTER(allow_urls, num_allow_urls, optarg);
break;
case 'X':
if (*optarg == '*') optarg++;
APPEND_FILTER(deny_urls, num_deny_urls, optarg);
break;
case 'J':
relaxed_mime = 1;
break;
case 'S':
if (*optarg == '*') optarg++;
APPEND_FILTER(deny_strings, num_deny_strings, optarg);
break;
case 'T': {
u8* x = (u8*)strchr(optarg, '=');
if (!x) FATAL("Rules must be in 'name=value' form.");
*x = 0;
add_form_hint((u8*)optarg, x + 1);
break;
}
case 'N':
ignore_cookies = 1;
break;
case 'Y':
no_fuzz_ext = 1;
break;
case 'q':
if (sscanf(optarg, "0x%08x", &seed) != 1)
FATAL("Invalid seed format.");
srandom(seed);
break;
case 'Q':
suppress_dupes = 1;
break;
case 'P':
no_parse = 1;
break;
case 'V':
dont_save_words = 1;
break;
case 'M':
warn_mixed = 1;
break;
case 'U':
log_ext_urls = 1;
break;
case 'L':
dont_add_words = 1;
break;
case 'E':
pedantic_cache = 1;
break;
case 'O':
no_forms = 1;
break;
case 'R':
purge_age = atoi(optarg);
if (purge_age < 3) FATAL("Purge age invalid or too low (min 3).");
break;
case 'd':
max_depth = atoi(optarg);
if (max_depth < 2) FATAL("Invalid value '%s'.", optarg);
break;
case 'c':
max_children = atoi(optarg);
if (!max_children) FATAL("Invalid value '%s'.", optarg);
break;
case 'p':
crawl_prob = atoi(optarg);
if (!crawl_prob) FATAL("Invalid value '%s'.", optarg);
break;
case 'W':
wordlist = (u8*)optarg;
break;
case 'b':
if (optarg[0] == 'i') browser_type = BROWSER_MSIE; else
if (optarg[0] == 'f') browser_type = BROWSER_FFOX; else
usage(argv[0]);
break;
case 'g':
max_connections = atoi(optarg);
if (!max_connections) FATAL("Invalid value '%s'.", optarg);
break;
case 'm':
max_conn_host = atoi(optarg);
if (!max_conn_host) FATAL("Invalid value '%s'.", optarg);
break;
case 'G':
max_guesses = atoi(optarg);
if (!max_guesses) FATAL("Invalid value '%s'.", optarg);
break;
case 'r':
max_requests = atoi(optarg);
if (!max_requests) FATAL("Invalid value '%s'.", optarg);
break;
case 'f':
max_fail = atoi(optarg);
if (!max_fail) FATAL("Invalid value '%s'.", optarg);
break;
case 't':
resp_tmout = atoi(optarg);
if (!resp_tmout) FATAL("Invalid value '%s'.", optarg);
break;
case 'w':
rw_tmout = atoi(optarg);
if (!rw_tmout) FATAL("Invalid value '%s'.", optarg);
break;
case 'i':
idle_tmout = atoi(optarg);
if (!idle_tmout) FATAL("Invalid value '%s'.", optarg);
break;
case 's':
size_limit = atoi(optarg);
if (!size_limit) FATAL("Invalid value '%s'.", optarg);
break;
case 'o':
if (output_dir) FATAL("Multiple -o options not allowed.");
output_dir = (u8*)optarg;
rmdir(optarg);
if (mkdir(optarg, 0755))
PFATAL("Unable to create '%s'.", output_dir);
break;
default:
usage(argv[0]);
}
if (access("assets/index.html", R_OK))
PFATAL("Unable to access 'assets/index.html' - wrong directory?");
srandom(seed);
if (optind == argc)
FATAL("Scan target not specified (try -h for help).");
if (!output_dir)
FATAL("Output directory not specified (try -h for help).");
if (resp_tmout < rw_tmout)
resp_tmout = rw_tmout;
if (max_connections < max_conn_host)
max_connections = max_conn_host;
load_keywords((u8*)wordlist, purge_age);
/* Schedule all URLs in the command line for scanning */
while (optind < argc) {
struct http_request *req = ck_alloc(sizeof(struct http_request));
if (parse_url((u8*)argv[optind], req, NULL))
FATAL("One of specified scan targets is not a valid absolute URL.");
if (!url_allowed_host(req))
APPEND_FILTER(allow_domains, num_allow_domains,
__DFL_ck_strdup(req->host));
if (!url_allowed(req))
FATAL("URL '%s' explicitly excluded by -I / -X rules.", argv[optind]);
maybe_add_pivot(req, NULL, 2);
destroy_request(req);
optind++;
}
gettimeofday(&tv, NULL);
st_time = tv.tv_sec * 1000 + tv.tv_usec / 1000;
SAY("\x1b[H\x1b[J");
while ((next_from_queue() && !stop_soon) || (!show_once++)) {
if ((loop_cnt++ % 20) && !show_once) continue;
SAY(cYEL "\x1b[H"
"skipfish version " VERSION " by <lcamtuf@google.com>\n\n" cNOR);
http_stats(st_time);
SAY("\n");
database_stats();
SAY("\n \r");
}
gettimeofday(&tv, NULL);
en_time = tv.tv_sec * 1000 + tv.tv_usec / 1000;
if (stop_soon)
SAY(cYEL "[!] " cBRI "Scan aborted by user, bailing out!" cNOR "\n");
if (!dont_save_words) save_keywords((u8*)wordlist);
write_report(output_dir, en_time - st_time, seed);
#ifdef LOG_STDERR
SAY("\n== PIVOT DEBUG ==\n");
dump_pivots(0, 0);
SAY("\n== END OF DUMP ==\n\n");
#endif /* LOG_STDERR */
SAY(cLGN "[+] " cBRI "This was a great day for science!" cNOR "\n\n");
#ifdef DEBUG_ALLOCATOR
if (!stop_soon) {
destroy_database();
destroy_http();
destroy_signatures();
__AD_report();
}
#endif /* DEBUG_ALLOCATOR */
return 0;
}

182
string-inl.h Normal file

@@ -0,0 +1,182 @@
/*
skipfish - various string manipulation helpers
----------------------------------------------
Some modern operating systems still ship with no strcasestr() or memmem()
implementations in place, for reasons beyond comprehension. This file
includes a simplified version of these routines, copied from NetBSD, plus
several minor, custom string manipulation macros and inline functions.
The original NetBSD code is licensed under a BSD license, as follows:
Copyright (c) 1990, 1993
The Regents of the University of California. All rights reserved.
This code is derived from software contributed to Berkeley by
Chris Torek.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of the University nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
*/
#ifndef _HAVE_STRING_INL_H
#define _HAVE_STRING_INL_H
#include <ctype.h>
#include <string.h>
#include "types.h"
/* Modified NetBSD strcasestr() implementation (rolling strncasecmp). */
static inline u8* inl_strcasestr(const u8* haystack, const u8* needle) {
register u8 c, sc;
register u32 len;
if (!haystack || !needle) return 0;
if ((c = *needle++)) {
c = tolower(c);
len = strlen((char*)needle);
do {
do {
if (!(sc = *haystack++)) return 0;
} while (tolower(sc) != c);
} while (strncasecmp((char*)haystack, (char*)needle, len));
haystack--;
}
return (u8*)haystack;
}
/* Modified NetBSD memmem() implementation (rolling memcmp). */
static inline void* inl_memmem(const void* haystack, u32 h_len,
const void* needle, u32 n_len) {
register u8* sp = (u8*)haystack;
register u8* pp = (u8*)needle;
register u8* eos = sp + h_len - n_len;
if (!(haystack && needle && h_len && n_len)) return 0;
while (sp <= eos) {
if (*sp == *pp)
if (memcmp(sp, pp, n_len) == 0) return sp;
sp++;
}
return 0;
}
/* String manipulation macros for operating on a dynamic buffer. */
#define NEW_STR(_buf_ptr, _buf_len) do { \
(_buf_ptr) = ck_alloc(1024); \
(_buf_len) = 0; \
} while (0)
#define ADD_STR_DATA(_buf_ptr, _buf_len, _str) do { \
u32 _sl = strlen((char*)_str); \
if ((_buf_len) + (_sl) + 1 > malloc_usable_size(_buf_ptr)) { \
u32 _nsiz = ((_buf_len) + _sl + 1024) >> 10 << 10; \
(_buf_ptr) = ck_realloc(_buf_ptr, _nsiz); \
} \
memcpy((_buf_ptr) + (_buf_len), _str, _sl + 1); \
(_buf_len) += _sl; \
} while (0)
#define TRIM_STR(_buf_ptr, _buf_len) do { \
(_buf_ptr) = ck_realloc(_buf_ptr, _buf_len + 1); \
(_buf_ptr)[_buf_len] = 0; \
} while (0)
/* Simple base64 encoder */
static inline u8* b64_encode(u8* str, u32 len) {
const u8 b64[64] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
"abcdefghijklmnopqrstuvwxyz"
"0123456789+/";
u8 *ret, *cur;
ret = cur = ck_alloc((len + 3) * 4 / 3 + 1);
while (len > 0) {
if (len >= 3) {
u32 comp = (str[0] << 16) | (str[1] << 8) | str[2];
*(cur++) = b64[comp >> 18];
*(cur++) = b64[(comp >> 12) & 0x3F];
*(cur++) = b64[(comp >> 6) & 0x3F];
*(cur++) = b64[comp & 0x3F];
len -= 3;
str += 3;
} else if (len == 2) {
u32 comp = (str[0] << 16) | (str[1] << 8);
*(cur++) = b64[comp >> 18];
*(cur++) = b64[(comp >> 12) & 0x3F];
*(cur++) = b64[(comp >> 6) & 0x3F];
*(cur++) = '=';
len -= 2;
str += 2;
} else {
u32 comp = (str[0] << 16);
*(cur++) = b64[comp >> 18];
*(cur++) = b64[(comp >> 12) & 0x3F];
*(cur++) = '=';
*(cur++) = '=';
len--;
str++;
}
}
*cur = 0;
return ret;
}
#endif /* !_HAVE_STRING_INL_H */

42
types.h Normal file

@@ -0,0 +1,42 @@
/*
skipfish - type definitions
---------------------------
Author: Michal Zalewski <lcamtuf@google.com>
Copyright 2009, 2010 by Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#ifndef _HAVE_TYPES_H
#define _HAVE_TYPES_H
#include <stdint.h>
typedef uint8_t u8;
typedef uint16_t u16;
typedef uint32_t u32;
typedef uint64_t u64;
typedef int8_t s8;
typedef int16_t s16;
typedef int32_t s32;
typedef int64_t s64;
/* PRNG wrapper, for lack of a better place to put it. */
#define R(_ceil) ((u32)(random() % (_ceil)))
#endif /* ! _HAVE_TYPES_H */