ooniprobe-2.2.0/ 0000755 0001750 0001750 00000000000 13071152230 011637 5 ustar irl irl ooniprobe-2.2.0/LICENSE 0000644 0001750 0001750 00000002770 12733731376 012674 0 ustar irl irl Copyright (c) 2012-2016, Jacob Appelbaum, Aaron Gibson, Arturo Filastò,
Isis Lovecruft, The Tor Project.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
This product includes GeoLite data created by MaxMind, available from
http://www.maxmind.com.
ooniprobe-2.2.0/PKG-INFO 0000644 0001750 0001750 00000012041 13071152230 012732 0 ustar irl irl Metadata-Version: 1.1
Name: ooniprobe
Version: 2.2.0
Summary: Network measurement tool for identifying traffic manipulation and blocking.
Home-page: https://ooni.torproject.org/
Author: Open Observatory of Network Interference
Author-email: contact@openobservatory.org
License: BSD 2 clause
Description:
ooniprobe: a network interference detection tool
================================================
.. image:: https://travis-ci.org/TheTorProject/ooni-probe.png?branch=master
:target: https://travis-ci.org/TheTorProject/ooni-probe
.. image:: https://coveralls.io/repos/TheTorProject/ooni-probe/badge.png
:target: https://coveralls.io/r/TheTorProject/ooni-probe
___________________________________________________________________________
.. image:: https://ooni.torproject.org/images/ooni-header-mascot.png
:target: https://ooni.torproject.org/
OONI, the Open Observatory of Network Interference, is a global observation
network that aims to collect high-quality data using open methodologies,
using Free and Open Source Software (FL/OSS) to share observations and data
about the various types, methods, and amounts of network tampering in the
world.
Read this before running ooniprobe!
-----------------------------------
Running ooniprobe is a potentially risky activity. How risky depends on the
jurisdiction you are in and which test you are running. It is technically
possible for a person observing your internet connection to detect that you
are running ooniprobe. This means that if running network measurement tests
is considered illegal in your country, you could be spotted.
Furthermore, ooniprobe takes no precautions to protect the machine it is
installed on from forensic analysis. If the fact that you have installed or
used ooniprobe is a liability for you, please be aware of this risk.
Setup ooniprobe
-------------------
To install ooniprobe you will need the following dependencies:
* python
* python-dev
* python-setuptools
* build-essential
* libdumbnet1
* python-dumbnet
* python-libpcap
* tor
* libgeoip-dev
* libpcap0.8-dev
* libssl-dev
* libffi-dev
* libdumbnet-dev
On Debian-based systems this can generally be done by running:
.. code:: bash
sudo apt-get install -y build-essential libdumbnet-dev libpcap-dev libgeoip-dev libffi-dev python-dev python-pip
Once you have installed them, run:
.. code:: bash
sudo pip install ooniprobe
Using ooniprobe
---------------
It is recommended that you start the ooniprobe-agent system daemon, which will
expose a localhost-only web UI and automatically run tests for you.
This can be done with:
.. code:: bash
ooniprobe-agent start
Then connect to the local web interface on http://127.0.0.1:8842/
Have fun!
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Console
Classifier: Framework :: Twisted
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: End Users/Desktop
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Telecommunications Industry
Classifier: License :: OSI Approved :: BSD License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2 :: Only
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: POSIX
Classifier: Operating System :: POSIX :: BSD
Classifier: Operating System :: POSIX :: BSD :: BSD/OS
Classifier: Operating System :: POSIX :: BSD :: FreeBSD
Classifier: Operating System :: POSIX :: BSD :: NetBSD
Classifier: Operating System :: POSIX :: BSD :: OpenBSD
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: Unix
Classifier: Topic :: Scientific/Engineering :: Information Analysis
Classifier: Topic :: Security
Classifier: Topic :: Security :: Cryptography
Classifier: Topic :: Software Development :: Libraries :: Application Frameworks
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Software Development :: Testing :: Traffic Generation
Classifier: Topic :: System :: Networking :: Monitoring
ooniprobe-2.2.0/README.rst 0000644 0001750 0001750 00000024073 13061505267 013347 0 ustar irl irl ooniprobe: a network interference detection tool
================================================
.. image:: https://travis-ci.org/TheTorProject/ooni-probe.png?branch=master
:target: https://travis-ci.org/TheTorProject/ooni-probe
.. image:: https://coveralls.io/repos/TheTorProject/ooni-probe/badge.png
:target: https://coveralls.io/r/TheTorProject/ooni-probe
.. image:: https://slack.openobservatory.org/badge.svg
:target: https://slack.openobservatory.org/
___________________________________________________________________________
.. image:: https://ooni.torproject.org/images/ooni-header-mascot.png
:target: https://ooni.torproject.org/
OONI, the Open Observatory of Network Interference, is a global observation
network that aims to collect high-quality data using open methodologies,
using Free and Open Source Software (FL/OSS) to share observations and data
about the various types, methods, and amounts of network tampering in the
world.
"The Net interprets censorship as damage and routes around it."
- John Gilmore; TIME magazine (6 December 1993)
ooniprobe is the first program that users run to probe their network and to
collect data for the OONI project. Are you interested in testing your network
for signs of surveillance and censorship? Do you want to collect data to share
with others, so that you and others may better understand your network? If so,
please read this document and we hope ooniprobe will help you to gather
network data that will assist you with your endeavors!
Read this before running ooniprobe!
-----------------------------------
Running ooniprobe is a potentially risky activity. How risky depends on the
jurisdiction you are in and which test you are running. It is technically
possible for a person observing your internet connection to detect that you
are running ooniprobe. This means that if running network measurement tests
is considered illegal in your country, you could be spotted.
Furthermore, ooniprobe takes no precautions to protect the machine it is
installed on from forensic analysis. If the fact that you have installed or
used ooniprobe is a liability for you, please be aware of this risk.
OONI in 5 minutes
=================
The latest ooniprobe version for Debian and Ubuntu releases can be found in the
deb.torproject.org package repository.
On Debian stable (jessie)::
gpg --keyserver keys.gnupg.net --recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -
echo 'deb http://deb.torproject.org/torproject.org jessie main' | sudo tee /etc/apt/sources.list.d/ooniprobe.list
sudo apt-get update
sudo apt-get install ooniprobe deb.torproject.org-keyring
On Debian testing::
gpg --keyserver keys.gnupg.net --recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -
echo 'deb http://deb.torproject.org/torproject.org testing main' | sudo tee /etc/apt/sources.list.d/ooniprobe.list
sudo apt-get update
sudo apt-get install ooniprobe deb.torproject.org-keyring
On Debian unstable::
gpg --keyserver keys.gnupg.net --recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -
echo 'deb http://deb.torproject.org/torproject.org unstable main' | sudo tee /etc/apt/sources.list.d/ooniprobe.list
sudo apt-get update
sudo apt-get install ooniprobe deb.torproject.org-keyring
On Ubuntu 16.10 (yakkety), 16.04 (xenial) or 14.04 (trusty)::
gpg --keyserver keys.gnupg.net --recv A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89
gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -
echo 'deb http://deb.torproject.org/torproject.org $RELEASE main' | sudo tee /etc/apt/sources.list.d/ooniprobe.list
sudo apt-get update
sudo apt-get install ooniprobe deb.torproject.org-keyring
Note: You'll need to swap out ``$RELEASE`` for either ``yakkety``, ``xenial`` or
``trusty``. This will not happen automatically. You will also need to ensure
that you have the ``universe`` repository enabled. The ``universe`` repository
is enabled by default in a standard Ubuntu installation, but may not be on
some minimal or non-standard installations.
Installation
============
macOS
-----
You can install ooniprobe on macOS if you have installed homebrew (http://brew.sh/) with::
brew install ooniprobe
Unix systems (with pip)
-----------------------
Make sure you have installed the following dependencies:
* build-essential
* python (>=2.7)
* python-dev
* pip
* libgeoip-dev
* libdumbnet-dev
* libpcap-dev
* libssl-dev
* libffi-dev
* tor (>=0.2.5.1 to run all the tor related tests)
Optional dependencies:
* obfs4proxy
On Debian-based systems this can generally be done by running::
sudo apt-get install -y build-essential libdumbnet-dev libpcap-dev libgeoip-dev libffi-dev python-dev python-pip tor libssl-dev obfs4proxy
Then you should be able to install ooniprobe by running::
sudo pip install ooniprobe
or install ooniprobe as a user::
pip install ooniprobe
Using ooniprobe
===============
**Net test** is a set of measurements to assess what kind of internet censorship is occurring.
**Decks** are collections of ooniprobe nettests with some associated inputs.
**Collector** is a service used to report the results of measurements.
**Test helper** is a service used by a probe for successfully performing its measurements.
**Bouncer** is a service used to discover the addresses of test helpers and collectors.
Configuring ooniprobe
---------------------
After successfully installing ooniprobe you should be able to access the web UI
on your host machine at http://127.0.0.1:8842/.
You should now be presented with the web UI setup wizard, where you can read
about the risks involved in running ooniprobe. Upon answering the quiz
correctly you can enable or disable ooniprobe tests, set how to connect to
the measurement collector, and finally configure your privacy settings.
By default ooniprobe will not include personally identifying information in
the test results, nor create a pcap file. This behavior can be customized.
Run ooniprobe as a service (systemd)
------------------------------------
As of ooniprobe version 2.0.0 there is no need for cron jobs, as
ooniprobe-agent is responsible for task scheduling.
You can ensure that ooniprobe-agent is always running by installing and enabling
the systemd unit `ooniprobe.service`::
wget https://raw.githubusercontent.com/TheTorProject/ooni-probe/master/scripts/systemd/ooniprobe.service --directory-prefix=/etc/systemd/system
systemctl enable ooniprobe
systemctl start ooniprobe
Running `systemctl status ooniprobe` should show output similar to the
following if the ooniprobe (systemd) service is active and loaded::
● ooniprobe.service - ooniprobe.service, network interference detection tool
Loaded: loaded (/etc/systemd/system/ooniprobe.service; enabled)
Active: active (running) since Thu 2016-10-20 09:17:42 UTC; 16s ago
Process: 311 ExecStart=/usr/local/bin/ooniprobe-agent start (code=exited, status=0/SUCCESS)
Main PID: 390 (ooniprobe-agent)
CGroup: /system.slice/ooniprobe.service
└─390 /usr/bin/python /usr/local/bin/ooniprobe-agent start
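For reference, the unit file fetched above is roughly of the following shape. This is a sketch only: the ``ExecStart`` path matches the status output shown, while the stop command and remaining directives are assumptions; prefer the official file from the repository.

```ini
[Unit]
Description=ooniprobe network interference detection tool
After=network.target

[Service]
; the agent forks into the background, so systemd tracks the main PID
Type=forking
ExecStart=/usr/local/bin/ooniprobe-agent start
; assumed stop command, mirroring the "start" subcommand above
ExecStop=/usr/local/bin/ooniprobe-agent stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
```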
Setting capabilities on your virtualenv python binary
=====================================================
If your distribution supports capabilities, you can avoid running OONI as root::
setcap cap_net_admin,cap_net_raw+eip /path/to/your/virtualenv's/python2
Reporting bugs
==============
You can report bugs and issues you find with ooni-probe on The Tor Project issue
tracker filing them under the "Ooni" component: https://trac.torproject.org/projects/tor/newticket?component=Ooni.
You can either register an account or use the group account "cypherpunks" with
password "writecode".
Contributing
============
You can download the code for ooniprobe from the following git repository::
git clone https://github.com/TheTorProject/ooni-probe.git
You should then submit patches for review as pull requests to this github repository:
https://github.com/TheTorProject/ooni-probe
Read this article to learn how to create a pull request on github (https://help.github.com/articles/creating-a-pull-request).
If you prefer not to use github (or don't have an account), you may also submit
patches as attachments to tickets.
Be sure to format the patch (assuming you are working on a feature branch
that diverges from master) with::
git format-patch master --stdout > my_first_ooniprobe.patch
Setting up development environment
----------------------------------
On Debian-based systems a development environment can be set up as follows (prerequisites include build essentials, python-dev, and tor; for tor see https://www.torproject.org/docs/debian.html.en)::
sudo apt-get install python-pip python-virtualenv virtualenv
sudo apt-get install libgeoip-dev libffi-dev libdumbnet-dev libssl-dev libpcap-dev
git clone https://github.com/TheTorProject/ooni-probe
cd ooni-probe
virtualenv venv
`virtualenv venv` will create a folder in the current directory which will
contain the Python executable files, and a copy of the pip library which you can
use to install other packages. To begin using the virtual environment, it needs
to be activated::
source venv/bin/activate
pip install -r requirements.txt
pip install -r requirements-dev.txt
python setup.py install
Then, you can check whether the installation went well with::
ooniprobe -s
This will explain the risks of running ooniprobe and make sure you have
understood them; afterwards it shows you the available tests.
To run the ooniprobe agent instead, type::
ooniprobe-agent run
To execute the unit tests for ooniprobe, type::
coverage run $(which trial) ooni
Donate
-------
Send bitcoins to
.. image:: http://i.imgur.com/CIWHb5R.png
:target: http://www.coindesk.com/information/how-can-i-buy-bitcoins/
1Ai9d4dhDBjxYVkKKf1pFXptEGfM1vxFBf
ooniprobe-2.2.0/ooniprobe.egg-info/ 0000755 0001750 0001750 00000000000 13071152230 015325 5 ustar irl irl ooniprobe-2.2.0/ooniprobe.egg-info/requires.txt 0000644 0001750 0001750 00000000322 13071152230 017722 0 ustar irl irl pyasn1>=0.1.8
setuptools>=11.3
PyYAML>=3.10
Twisted>=13.2.0
ipaddr>=2.1.10
pyOpenSSL>=0.15.1
geoip
txtorcon>=0.7
txsocksx>=0.0.2
scapy>=2.2.0
pypcap>=1.1
service-identity
pydumbnet
zope.interface
certifi
klein
ooniprobe-2.2.0/ooniprobe.egg-info/PKG-INFO 0000644 0001750 0001750 00000012041 13071152230 016420 0 ustar irl irl Metadata-Version: 1.1
Name: ooniprobe
Version: 2.2.0
Summary: Network measurement tool for identifying traffic manipulation and blocking.
Home-page: https://ooni.torproject.org/
Author: Open Observatory of Network Interference
Author-email: contact@openobservatory.org
License: BSD 2 clause
Description:
ooniprobe: a network interference detection tool
================================================
.. image:: https://travis-ci.org/TheTorProject/ooni-probe.png?branch=master
:target: https://travis-ci.org/TheTorProject/ooni-probe
.. image:: https://coveralls.io/repos/TheTorProject/ooni-probe/badge.png
:target: https://coveralls.io/r/TheTorProject/ooni-probe
___________________________________________________________________________
.. image:: https://ooni.torproject.org/images/ooni-header-mascot.png
:target: https://ooni.torproject.org/
OONI, the Open Observatory of Network Interference, is a global observation
network that aims to collect high-quality data using open methodologies,
using Free and Open Source Software (FL/OSS) to share observations and data
about the various types, methods, and amounts of network tampering in the
world.
Read this before running ooniprobe!
-----------------------------------
Running ooniprobe is a potentially risky activity. How risky depends on the
jurisdiction you are in and which test you are running. It is technically
possible for a person observing your internet connection to detect that you
are running ooniprobe. This means that if running network measurement tests
is considered illegal in your country, you could be spotted.
Furthermore, ooniprobe takes no precautions to protect the machine it is
installed on from forensic analysis. If the fact that you have installed or
used ooniprobe is a liability for you, please be aware of this risk.
Setup ooniprobe
-------------------
To install ooniprobe you will need the following dependencies:
* python
* python-dev
* python-setuptools
* build-essential
* libdumbnet1
* python-dumbnet
* python-libpcap
* tor
* libgeoip-dev
* libpcap0.8-dev
* libssl-dev
* libffi-dev
* libdumbnet-dev
On Debian-based systems this can generally be done by running:
.. code:: bash
sudo apt-get install -y build-essential libdumbnet-dev libpcap-dev libgeoip-dev libffi-dev python-dev python-pip
Once you have installed them, run:
.. code:: bash
sudo pip install ooniprobe
Using ooniprobe
---------------
It is recommended that you start the ooniprobe-agent system daemon, which will
expose a localhost-only web UI and automatically run tests for you.
This can be done with:
.. code:: bash
ooniprobe-agent start
Then connect to the local web interface on http://127.0.0.1:8842/
Have fun!
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Console
Classifier: Framework :: Twisted
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: End Users/Desktop
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Telecommunications Industry
Classifier: License :: OSI Approved :: BSD License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2 :: Only
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: POSIX
Classifier: Operating System :: POSIX :: BSD
Classifier: Operating System :: POSIX :: BSD :: BSD/OS
Classifier: Operating System :: POSIX :: BSD :: FreeBSD
Classifier: Operating System :: POSIX :: BSD :: NetBSD
Classifier: Operating System :: POSIX :: BSD :: OpenBSD
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: Unix
Classifier: Topic :: Scientific/Engineering :: Information Analysis
Classifier: Topic :: Security
Classifier: Topic :: Security :: Cryptography
Classifier: Topic :: Software Development :: Libraries :: Application Frameworks
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Software Development :: Testing :: Traffic Generation
Classifier: Topic :: System :: Networking :: Monitoring
ooniprobe-2.2.0/ooniprobe.egg-info/top_level.txt 0000644 0001750 0001750 00000000005 13071152230 020052 0 ustar irl irl ooni
ooniprobe-2.2.0/ooniprobe.egg-info/dependency_links.txt 0000644 0001750 0001750 00000000001 13071152230 021373 0 ustar irl irl
ooniprobe-2.2.0/ooniprobe.egg-info/not-zip-safe 0000644 0001750 0001750 00000000001 12762034611 017563 0 ustar irl irl
ooniprobe-2.2.0/ooniprobe.egg-info/entry_points.txt 0000644 0001750 0001750 00000000360 13071152230 020622 0 ustar irl irl [console_scripts]
oonideckgen = ooni.scripts.oonideckgen:run
ooniprobe = ooni.scripts.ooniprobe:run
ooniprobe-agent = ooni.scripts.ooniprobe_agent:run
oonireport = ooni.scripts.oonireport:run
ooniresources = ooni.scripts.ooniresources:run
ooniprobe-2.2.0/ooniprobe.egg-info/SOURCES.txt 0000644 0001750 0001750 00000022622 13071152230 017215 0 ustar irl irl ChangeLog.rst
LICENSE
MANIFEST.in
README.rst
requirements.txt
setup.cfg
setup.py
data/lepidopter-update.py
data/oonideckgen.1
data/ooniprobe-agent.1
data/ooniprobe.1
data/ooniprobe.conf.sample
data/oonireport.1
data/ooniresources.1
data/decks/http-invalid.yaml
data/decks/im.yaml
data/decks/tor.yaml
data/decks/web.yaml
ooni/__init__.py
ooni/backend_client.py
ooni/constants.py
ooni/director.py
ooni/errors.py
ooni/geoip.py
ooni/managers.py
ooni/measurements.py
ooni/nettest.py
ooni/otime.py
ooni/reporter.py
ooni/resources.py
ooni/settings.ini
ooni/settings.py
ooni/tasks.py
ooni/agent/__init__.py
ooni/agent/agent.py
ooni/agent/scheduler.py
ooni/common/__init__.py
ooni/common/http_utils.py
ooni/common/ip_utils.py
ooni/common/tcp_utils.py
ooni/common/txextra.py
ooni/contrib/__init__.py
ooni/contrib/croniter.py
ooni/contrib/dateutil/__init__.py
ooni/contrib/dateutil/relativedelta.py
ooni/contrib/dateutil/tz/__init__.py
ooni/contrib/dateutil/tz/_common.py
ooni/contrib/dateutil/tz/tz.py
ooni/contrib/dateutil/tz/win.py
ooni/deck/__init__.py
ooni/deck/backend.py
ooni/deck/deck.py
ooni/deck/legacy.py
ooni/deck/store.py
ooni/kit/__init__.py
ooni/kit/daphn3.py
ooni/kit/domclass.py
ooni/nettests/__init__.py
ooni/nettests/__init__.pyc
ooni/nettests/blocking/__init__.py
ooni/nettests/blocking/__init__.pyc
ooni/nettests/blocking/bridge_reachability.py
ooni/nettests/blocking/bridge_reachability.pyc
ooni/nettests/blocking/dns_consistency.py
ooni/nettests/blocking/dns_consistency.pyc
ooni/nettests/blocking/dns_n_http.pyc
ooni/nettests/blocking/facebook_messenger.py
ooni/nettests/blocking/facebook_messenger.pyc
ooni/nettests/blocking/http_requests.py
ooni/nettests/blocking/http_requests.pyc
ooni/nettests/blocking/meek_fronted_requests.py
ooni/nettests/blocking/meek_fronted_requests.pyc
ooni/nettests/blocking/tcp_connect.py
ooni/nettests/blocking/tcp_connect.pyc
ooni/nettests/blocking/telegram.py
ooni/nettests/blocking/vanilla_tor.py
ooni/nettests/blocking/vanilla_tor.pyc
ooni/nettests/blocking/web_connectivity.py
ooni/nettests/blocking/web_connectivity.pyc
ooni/nettests/blocking/whatsapp.py
ooni/nettests/blocking/whatsapp.pyc
ooni/nettests/examples/__init__.py
ooni/nettests/examples/__init__.pyc
ooni/nettests/examples/example_dns_http.py
ooni/nettests/examples/example_dnst.py
ooni/nettests/examples/example_dnst.pyc
ooni/nettests/examples/example_http_checksum.py
ooni/nettests/examples/example_http_checksum.pyc
ooni/nettests/examples/example_httpt.py
ooni/nettests/examples/example_httpt.pyc
ooni/nettests/examples/example_myip.py
ooni/nettests/examples/example_myip.pyc
ooni/nettests/examples/example_postprocessor.py
ooni/nettests/examples/example_process.py
ooni/nettests/examples/example_process.pyc
ooni/nettests/examples/example_scapyt.py
ooni/nettests/examples/example_scapyt.pyc
ooni/nettests/examples/example_scapyt_yield.py
ooni/nettests/examples/example_scapyt_yield.pyc
ooni/nettests/examples/example_simple.py
ooni/nettests/examples/example_simple_post.py
ooni/nettests/examples/example_tcpt.py
ooni/nettests/examples/example_tcpt.pyc
ooni/nettests/experimental/__init__.py
ooni/nettests/experimental/__init__.pyc
ooni/nettests/experimental/chinatrigger.py
ooni/nettests/experimental/chinatrigger.pyc
ooni/nettests/experimental/daphne.py
ooni/nettests/experimental/dns_injection.py
ooni/nettests/experimental/dns_injection.pyc
ooni/nettests/experimental/domclass_collector.py
ooni/nettests/experimental/domclass_collector.pyc
ooni/nettests/experimental/dynamic_inputs.pyc
ooni/nettests/experimental/http_filtering_bypassing.py
ooni/nettests/experimental/http_keyword_filtering.py
ooni/nettests/experimental/http_keyword_filtering.pyc
ooni/nettests/experimental/http_trix.py
ooni/nettests/experimental/http_uk_mobile_networks.py
ooni/nettests/experimental/http_uk_mobile_networks.pyc
ooni/nettests/experimental/keyword_filtering.py
ooni/nettests/experimental/keyword_filtering.pyc
ooni/nettests/experimental/mk_http_invalid_request_line.pyc
ooni/nettests/experimental/parasitictraceroute.py
ooni/nettests/experimental/parasitictraceroute.pyc
ooni/nettests/experimental/script.py
ooni/nettests/experimental/squid.py
ooni/nettests/experimental/squid.pyc
ooni/nettests/experimental/tls_handshake.py
ooni/nettests/experimental/bridge_reachability/bridget.py
ooni/nettests/experimental/bridge_reachability/echo.py
ooni/nettests/manipulation/__init__.py
ooni/nettests/manipulation/__init__.pyc
ooni/nettests/manipulation/captiveportal.py
ooni/nettests/manipulation/captiveportal.pyc
ooni/nettests/manipulation/daphne.pyc
ooni/nettests/manipulation/dns_spoof.py
ooni/nettests/manipulation/dns_spoof.pyc
ooni/nettests/manipulation/http_header_field_manipulation.py
ooni/nettests/manipulation/http_header_field_manipulation.pyc
ooni/nettests/manipulation/http_host.py
ooni/nettests/manipulation/http_host.pyc
ooni/nettests/manipulation/http_invalid_request_line.py
ooni/nettests/manipulation/http_invalid_request_line.pyc
ooni/nettests/manipulation/traceroute.py
ooni/nettests/manipulation/traceroute.pyc
ooni/nettests/scanning/__init__.py
ooni/nettests/scanning/__init__.pyc
ooni/nettests/scanning/http_url_list.py
ooni/nettests/scanning/http_url_list.pyc
ooni/nettests/third_party/Makefile
ooni/nettests/third_party/README
ooni/nettests/third_party/__init__.py
ooni/nettests/third_party/__init__.pyc
ooni/nettests/third_party/lantern.py
ooni/nettests/third_party/lantern.pyc
ooni/nettests/third_party/netalyzr.py
ooni/nettests/third_party/netalyzr.pyc
ooni/nettests/third_party/openvpn.py
ooni/nettests/third_party/openvpn.pyc
ooni/nettests/third_party/psiphon.py
ooni/nettests/third_party/psiphon.pyc
ooni/scripts/__init__.py
ooni/scripts/oonideckgen.py
ooni/scripts/ooniprobe.py
ooni/scripts/ooniprobe_agent.py
ooni/scripts/oonireport.py
ooni/scripts/ooniresources.py
ooni/templates/__init__.py
ooni/templates/dnst.py
ooni/templates/httpt.py
ooni/templates/process.py
ooni/templates/scapyt.py
ooni/templates/tcpt.py
ooni/tests/__init__.py
ooni/tests/bases.py
ooni/tests/disable_test_dns.py
ooni/tests/mocks.py
ooni/tests/test_backend_client.py
ooni/tests/test_common.py
ooni/tests/test_deck.py
ooni/tests/test_director.py
ooni/tests/test_errors.py
ooni/tests/test_geoip.py
ooni/tests/test_managers.py
ooni/tests/test_mutate.py
ooni/tests/test_nettest.py
ooni/tests/test_onion.py
ooni/tests/test_oonicli.py
ooni/tests/test_oonideckgen.py
ooni/tests/test_oonireport.py
ooni/tests/test_reporter.py
ooni/tests/test_resources.py
ooni/tests/test_safe_represent.py
ooni/tests/test_scheduler.py
ooni/tests/test_settings.py
ooni/tests/test_socks.py
ooni/tests/test_templates.py
ooni/tests/test_trueheaders.py
ooni/tests/test_txscapy.py
ooni/tests/test_utils.py
ooni/tests/test_wui_server.py
ooni/ui/__init__.py
ooni/ui/cli.py
ooni/ui/consent-form.md
ooni/ui/web/__init__.py
ooni/ui/web/server.py
ooni/ui/web/web.py
ooni/ui/web/client/0.measurements.abbefd5cfbd0c09ba163.js
ooni/ui/web/client/1.dashboard.8a8441e69ec6ad3f4623.js
ooni/ui/web/client/3.onboard.d447ccf49a17f1bcf076.js
ooni/ui/web/client/4.4.98946e4733f3cb74e9a8.js
ooni/ui/web/client/5.settings.c6df80ccc6ab26c17688.js
ooni/ui/web/client/6.logs.2037a11d271e08733f99.js
ooni/ui/web/client/app.18387b22880f2afc1f16828000464498.css
ooni/ui/web/client/app.8b2cc273c7c7f67623f2.js
ooni/ui/web/client/favicon.ico
ooni/ui/web/client/humans.txt
ooni/ui/web/client/index.html
ooni/ui/web/client/robots.txt
ooni/ui/web/client/vendor.c8637e95835a4a051245.js
ooni/ui/web/client/favicons/android-icon-192x192.png
ooni/ui/web/client/favicons/apple-icon-114x114.png
ooni/ui/web/client/favicons/apple-icon-120x120.png
ooni/ui/web/client/favicons/apple-icon-144x144.png
ooni/ui/web/client/favicons/apple-icon-152x152.png
ooni/ui/web/client/favicons/apple-icon-180x180.png
ooni/ui/web/client/favicons/apple-icon-57x57.png
ooni/ui/web/client/favicons/apple-icon-60x60.png
ooni/ui/web/client/favicons/apple-icon-72x72.png
ooni/ui/web/client/favicons/apple-icon-76x76.png
ooni/ui/web/client/favicons/favicon-16x16.png
ooni/ui/web/client/favicons/favicon-32x32.png
ooni/ui/web/client/favicons/favicon-96x96.png
ooni/ui/web/client/favicons/ms-icon-144x144.png
ooni/ui/web/client/fonts/charter-bold-italic.e5c78e2789ec748d8c7f5adccad90e0b.woff
ooni/ui/web/client/fonts/charter-bold.78342dfad83c591ee5e926f2ffbd0671.woff
ooni/ui/web/client/fonts/charter-italic.a043b97f0bac1546f96bc31abd6956bb.woff
ooni/ui/web/client/fonts/charter-regular.0c4500a9d203a33bd879a9a0bee1190d.woff
ooni/ui/web/client/fonts/fira-sans-bold.5310ca5fb41a915987df5663660da770.otf
ooni/ui/web/client/fonts/fira-sans-light.7dd0ad25580893d980bbf0475f88aead.otf
ooni/ui/web/client/fonts/fira-sans-semi-bold.3de79d2eb33e18bba8f5f5834a3d9d05.otf
ooni/ui/web/client/fonts/fontawesome-webfont.674f50d287a8c48dc19ba404d20fe713.eot
ooni/ui/web/client/fonts/fontawesome-webfont.af7ae505a9eed503f8b8e6982036873e.woff2
ooni/ui/web/client/fonts/fontawesome-webfont.b06871f281fee6b241d60582ae9369b9.ttf
ooni/ui/web/client/fonts/fontawesome-webfont.fee66e712a8a08eef5805a46892932ad.woff
ooni/ui/web/client/fonts/ooni-icons.7f721a571194837f629b6dd86a703ca5.eot
ooni/ui/web/client/fonts/source-code-pro-bold.b78a2d32658068a52eab4b7f8f7d366e.woff
ooni/ui/web/client/fonts/source-code-pro-regular.7e5b1b977ba8a582d81367d2940e8150.woff
ooni/utils/__init__.py
ooni/utils/files.py
ooni/utils/log.py
ooni/utils/net.py
ooni/utils/onion.py
ooni/utils/socks.py
ooni/utils/txscapy.py
ooniprobe.egg-info/PKG-INFO
ooniprobe.egg-info/SOURCES.txt
ooniprobe.egg-info/dependency_links.txt
ooniprobe.egg-info/entry_points.txt
ooniprobe.egg-info/not-zip-safe
ooniprobe.egg-info/requires.txt
ooniprobe.egg-info/top_level.txt ooniprobe-2.2.0/ooni/ 0000755 0001750 0001750 00000000000 13071152230 012603 5 ustar irl irl ooniprobe-2.2.0/ooni/agent/ 0000755 0001750 0001750 00000000000 13071152230 013701 5 ustar irl irl ooniprobe-2.2.0/ooni/agent/scheduler.py 0000644 0001750 0001750 00000041262 13070703575 016253 0 ustar irl irl import os
import errno
import random
from hashlib import md5
from datetime import datetime, timedelta
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.internet.task import LoopingCall
from twisted.python.filepath import FilePath
from ooni.scripts import oonireport
from ooni import resources
from ooni.utils import log, SHORT_DATE
from ooni.utils.files import human_size_to_bytes, directory_usage
from ooni.deck.store import input_store, deck_store, DEFAULT_DECKS
from ooni.settings import config
from ooni.contrib import croniter
from ooni.contrib.dateutil.tz import tz
from ooni.geoip import probe_ip
from ooni.measurements import list_measurements
class FileSystemlockAndMutex(object):
"""
This is a lock that is both a mutex lock and also on filesystem.
When you acquire it, it will first acquire the mutex lock and then
acquire the filesystem lock. The release order is inverted.
This is to avoid concurrent usage within the same process. When using it
concurrently the mutex lock will block before the filesystem lock is
acquired.
It's a way to support concurrent usage of the DeferredFilesystemLock from
different stacks (threads/fibers) within the same process without races.
"""
def __init__(self, file_path):
"""
Args:
file_path: is the location of where the filesystem based lockfile should be written to.
"""
self._fs_lock = defer.DeferredFilesystemLock(file_path)
self._mutex = defer.DeferredLock()
@defer.inlineCallbacks
def acquire(self):
yield self._mutex.acquire()
yield self._fs_lock.deferUntilLocked()
def release(self):
"""Release the filesystem based and in memory locks."""
self._fs_lock.unlock()
self._mutex.release()
@property
def locked(self):
return self._mutex.locked or self._fs_lock.locked
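The acquire/release ordering above (mutex first, filesystem lock second; release inverted) can be sketched with a plain `threading.Lock` standing in for the Twisted primitives. `TwoPhaseLock` and its `_fs_locked` flag are hypothetical stand-ins for illustration, not part of ooniprobe:

```python
import threading

class TwoPhaseLock(object):
    """Sketch of FileSystemlockAndMutex's ordering: take the
    in-process mutex first, then the cross-process lock; release
    in the inverse order."""
    def __init__(self):
        self._mutex = threading.Lock()  # stands in for DeferredLock
        self._fs_locked = False         # stands in for the file lock

    def acquire(self):
        self._mutex.acquire()
        self._fs_locked = True

    def release(self):
        self._fs_locked = False
        self._mutex.release()

    @property
    def locked(self):
        return self._mutex.locked() or self._fs_locked
```

Because the mutex is taken first, a second caller in the same process queues on the in-memory lock and never races on the filesystem lock.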
# We use this date to indicate that the scheduled task has never run.
# Easter egg: try to see what is special about this date :)
CANARY_DATE = datetime(1957, 10, 4, tzinfo=tz.tzutc())
class DidNotRun(Exception):
pass
class ScheduledTask(object):
"""
    Two ScheduledTask instances with the same identifier are not permitted
    to run concurrently. There should be no ScheduledTask queue waiting
    for the lock, as SchedulerService ticks quite often.
"""
_time_format = "%Y-%m-%dT%H:%M:%SZ"
schedule = None
identifier = None
def __init__(self, schedule=None, identifier=None,
scheduler_directory=None):
if scheduler_directory is None:
scheduler_directory = config.scheduler_directory
if schedule is not None:
self.schedule = schedule
if identifier is not None:
self.identifier = identifier
assert self.identifier is not None, "self.identifier must be set"
assert self.schedule is not None, "self.schedule must be set"
# XXX: both _last_run_lock and _smear_coef require that there is single
# instance of the ScheduledTask of each type identified by `identifier`.
self._last_run = FilePath(scheduler_directory).child(self.identifier)
self._last_run_lock = FileSystemlockAndMutex(
FilePath(scheduler_directory).child(self.identifier + ".lock").path
)
self._smear_coef = random.random()
def cancel(self):
"""
Cancel a currently running task.
If it is locked, then release the lock.
"""
if not self._last_run_lock.locked:
# _last_run_lock.release() will throw if we try to release it
log.err('BUG: cancelling non-locked task {} without holding lock'.format(self.identifier))
return
        # probably, cancelling the task that has taken the lock is even worse :-)
self._last_run_lock.release()
@property
def should_run(self):
current_time = datetime.utcnow().replace(tzinfo=tz.tzutc())
next_cycle = croniter(self.schedule, self.last_run).get_next(datetime)
delta = (croniter(self.schedule, next_cycle).get_next(datetime) - next_cycle).total_seconds()
next_cycle = next_cycle + timedelta(seconds=delta * 0.1 * self._smear_coef)
if next_cycle <= current_time:
return True
return False
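The smearing above delays the nominal next run by up to 10% of the cycle length, scaled by a per-instance random coefficient, so that many probes sharing a schedule do not all fire at the same instant. A minimal sketch of that arithmetic — `smeared_next_cycle` is a hypothetical helper, not ooniprobe API:

```python
from datetime import datetime, timedelta

def smeared_next_cycle(next_cycle, cycle_seconds, smear_coef):
    """Shift the nominal next run forward by up to 10% of the cycle
    length; smear_coef is a value in [0, 1) fixed per task instance."""
    return next_cycle + timedelta(seconds=cycle_seconds * 0.1 * smear_coef)
```

For a daily schedule (an 86400-second cycle) and a coefficient of 0.5, the run is delayed by 4320 seconds.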
@property
def last_run(self):
self._last_run.restat(False)
if not self._last_run.exists():
return CANARY_DATE
with self._last_run.open('r') as in_file:
date_str = in_file.read()
return datetime.strptime(date_str, self._time_format).replace(
tzinfo=tz.tzutc())
def _update_last_run(self, last_run_time):
"""
Update the time at which this task ran successfully last, by running
to a file.
"""
with self._last_run.open('w') as out_file:
out_file.write(last_run_time.strftime(self._time_format))
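The `last_run` timestamp round-trips through the `_time_format` string shown above. A quick sketch of that serialization (the helper names are illustrative):

```python
from datetime import datetime

# Same format string as ScheduledTask._time_format
TIME_FORMAT = "%Y-%m-%dT%H:%M:%SZ"

def serialize_last_run(dt):
    """Render a datetime the way _update_last_run writes it."""
    return dt.strftime(TIME_FORMAT)

def parse_last_run(text):
    """Parse the stored string back, the way last_run reads it."""
    return datetime.strptime(text, TIME_FORMAT)
```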
def task(self):
raise NotImplementedError
def first_run(self):
"""
This hook is called if it's the first time a particular scheduled
operation is run.
"""
pass
@defer.inlineCallbacks
def run(self):
if self._last_run_lock.locked:
# do not allow the queue to grow forever
raise DidNotRun
yield self._last_run_lock.acquire()
if not self.should_run:
self._last_run_lock.release()
raise DidNotRun
try:
if self.last_run == CANARY_DATE:
log.debug("Detected first run")
yield defer.maybeDeferred(self.first_run)
last_run_time = datetime.utcnow()
yield self.task()
self._update_last_run(last_run_time)
finally:
self._last_run_lock.release()
class UpdateInputsAndResources(ScheduledTask):
identifier = "update-inputs"
schedule = "@daily"
@defer.inlineCallbacks
def first_run(self):
"""
On first run we update the resources that are common to every country.
"""
log.debug("Updating the global inputs and resources")
yield resources.check_for_update("ZZ")
@defer.inlineCallbacks
def task(self):
log.debug("Updating the inputs")
yield probe_ip.lookup()
log.debug("Updating the inputs for country %s" %
probe_ip.geodata['countrycode'])
yield resources.check_for_update(probe_ip.geodata['countrycode'])
yield input_store.update(probe_ip.geodata['countrycode'])
yield probe_ip.resolveGeodata()
class UploadReports(ScheduledTask):
"""
This task is used to submit to the collector reports that have not been
submitted and those that have been partially uploaded.
"""
identifier = 'upload-reports'
schedule = '@hourly'
@defer.inlineCallbacks
def task(self):
yield oonireport.upload_all(upload_incomplete=True)
class DeleteOldReports(ScheduledTask):
"""
This task is used to delete reports that are older than a week.
"""
identifier = 'delete-old-reports'
schedule = '@daily'
def task(self):
measurement_path = FilePath(config.measurements_directory)
for measurement in list_measurements():
if measurement['keep'] is True:
continue
delta = datetime.utcnow() - \
datetime.strptime(measurement['test_start_time'],
SHORT_DATE)
if delta.days >= 7:
log.debug("Deleting old report {0}".format(measurement["id"]))
measurement_path.child(measurement['id']).remove()
class CheckMeasurementQuota(ScheduledTask):
"""
This task is run to ensure we don't run out of disk space and deletes
older reports to avoid filling the quota.
"""
identifier = 'check-measurement-quota'
schedule = '@hourly'
_warn_when = 0.8
def task(self):
if config.basic.measurement_quota is None:
return
maximum_bytes = human_size_to_bytes(config.basic.measurement_quota)
used_bytes = directory_usage(config.measurements_directory)
warning_path = os.path.join(config.running_path, 'quota_warning')
if (float(used_bytes) / float(maximum_bytes)) >= self._warn_when:
log.warn("You are about to reach the maximum allowed quota. Be careful")
with open(warning_path, "w") as out_file:
out_file.write("{0} {1}".format(used_bytes,
maximum_bytes))
else:
try:
os.remove(warning_path)
except OSError as ose:
if ose.errno != errno.ENOENT:
raise
if float(used_bytes) < float(maximum_bytes):
            # We are within the allowed quota; exit.
return
# We should begin to delete old reports
amount_to_delete = float(used_bytes) - float(maximum_bytes)
amount_deleted = 0
measurement_path = FilePath(config.measurements_directory)
kept_measurements = []
stale_measurements = []
remaining_measurements = []
measurements_by_date = sorted(list_measurements(compute_size=True),
key=lambda k: k['test_start_time'])
for measurement in measurements_by_date:
if measurement['keep'] is True:
kept_measurements.append(measurement)
elif measurement['stale'] is True:
stale_measurements.append(measurement)
else:
remaining_measurements.append(measurement)
# This is the order in which we should begin deleting measurements.
ordered_measurements = (stale_measurements +
remaining_measurements +
kept_measurements)
while amount_deleted < amount_to_delete:
measurement = ordered_measurements.pop(0)
log.warn("Deleting report {0}".format(measurement["id"]))
measurement_path.child(measurement['id']).remove()
amount_deleted += measurement['size']
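The deletion policy above can be condensed into a pure function: sort by start time, then delete stale measurements first, unflagged ones next, and 'keep'-flagged ones only as a last resort, stopping once enough space has been freed. `select_for_deletion` is a hypothetical distillation for illustration, not ooniprobe API:

```python
def select_for_deletion(measurements, bytes_to_free):
    """Return the measurement ids to delete, oldest first, in the
    order stale -> unflagged -> keep-flagged, until bytes_to_free
    has been reached."""
    by_date = sorted(measurements, key=lambda m: m['test_start_time'])
    kept = [m for m in by_date if m['keep']]
    stale = [m for m in by_date if m['stale'] and not m['keep']]
    rest = [m for m in by_date if not m['keep'] and not m['stale']]
    doomed, freed = [], 0
    for m in stale + rest + kept:
        if freed >= bytes_to_free:
            break
        doomed.append(m['id'])
        freed += m['size']
    return doomed
```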
class RunDeck(ScheduledTask):
"""
    Runs a deck that has been configured on the system as one of the
    decks to run by default.
"""
def __init__(self, director, deck_id, schedule):
self.deck_id = deck_id
self.director = director
# We use as identifier also the schedule time
identifier = 'run-deck-' + deck_id + '-' + md5(schedule).hexdigest()
super(RunDeck, self).__init__(schedule, identifier)
@defer.inlineCallbacks
def task(self):
deck = deck_store.get(self.deck_id)
yield deck.setup()
yield deck.run(self.director, from_schedule=True)
class RefreshDeckList(ScheduledTask):
"""
This task is configured to refresh the list of decks that are enabled.
"""
identifier = 'refresh-deck-list'
schedule = '@hourly'
def __init__(self, scheduler, schedule=None, identifier=None):
self.scheduler = scheduler
super(RefreshDeckList, self).__init__(schedule, identifier)
def first_run(self):
"""
On first run we enable the default decks.
"""
for deck_id in DEFAULT_DECKS:
deck_store.enable(deck_id)
def task(self):
self.scheduler.refresh_deck_list()
class SendHeartBeat(ScheduledTask):
"""
This task is used to send a heartbeat that the probe is still alive and
well.
"""
identifier = 'send-heartbeat'
schedule = '@hourly'
def task(self):
# XXX implement this
pass
# Order matters
SYSTEM_TASKS = [
UpdateInputsAndResources
]
@defer.inlineCallbacks
def run_system_tasks(no_input_store=False):
task_classes = SYSTEM_TASKS[:]
if no_input_store:
log.debug("Not updating the inputs")
try:
task_classes.remove(UpdateInputsAndResources)
except ValueError:
pass
for task_class in task_classes:
task = task_class()
log.debug("Running task {0}".format(task.identifier))
try:
yield task.run()
except DidNotRun:
log.debug("Did not run {0}".format(task.identifier))
except Exception as exc:
log.err("Failed to run task {0}".format(task.identifier))
log.exception(exc)
class SchedulerService(service.MultiService):
"""
This service is responsible for running the periodic tasks.
"""
def __init__(self, director, interval=30, _reactor=reactor):
service.MultiService.__init__(self)
self.director = director
self.interval = interval
self._looping_call = LoopingCall(self._should_run)
self._looping_call.clock = _reactor
self._scheduled_tasks = []
def schedule(self, task):
self._scheduled_tasks.append(task)
def unschedule(self, task):
# We first cancel the task so the run lock is deleted
task.cancel()
self._scheduled_tasks.remove(task)
def refresh_deck_list(self):
"""
        Check whether any enabled decks should be scheduled as periodic
        tasks on the next scheduler cycle, and whether any previously
        scheduled decks have been disabled and should stop running.
        It does so by listing the enabled decks and checking whether each
        is already scheduled, and whether any scheduled deck is no longer
        amongst the enabled ones.
"""
to_enable = []
for deck_id, deck in deck_store.list_enabled():
if deck.schedule is None:
continue
to_enable.append((deck_id, deck.schedule))
# If we are not initialized we should not enable anything
if not config.is_initialized():
log.msg("We are not initialized skipping setup of decks")
to_enable = []
for scheduled_task in self._scheduled_tasks[:]:
if not isinstance(scheduled_task, RunDeck):
continue
info = (scheduled_task.deck_id, scheduled_task.schedule)
if info in to_enable:
# If the task is already scheduled there is no need to
# enable it.
log.debug("The deck {0} is already scheduled".format(deck_id))
to_enable.remove(info)
else:
# If one of the tasks that is scheduled is no longer in the
# scheduled tasks. We should disable it.
log.debug("The deck task {0} should be disabled".format(deck_id))
self.unschedule(scheduled_task)
for deck_id, schedule in to_enable:
log.debug("Scheduling to run {0}".format(deck_id))
self.schedule(RunDeck(self.director, deck_id, schedule))
def _task_did_not_run(self, failure, task):
"""
        Fired when a task did not run. This is not an error.
"""
failure.trap(DidNotRun)
log.debug("Did not run {0}".format(task.identifier))
def _task_failed(self, failure, task):
"""
Fired when a task failed to run due to an error.
"""
log.err("Failed to run {0}".format(task.identifier))
log.exception(failure)
def _task_success(self, result, task):
"""
Fired when a task has successfully run.
"""
log.debug("Ran {0}".format(task.identifier))
def _should_run(self):
"""
This function is called every self.interval seconds to check
which periodic tasks should be run.
Note: the task will wait on the lock if there is already a task of
that type running. This means that if a task is very long running
there can potentially be a pretty large backlog of accumulated
periodic tasks waiting to know if they should run.
        XXX
        We may want to avoid waiting on the lock when the queue is already
        larger than a certain size, or do something smarter, if this
        starts to become a memory usage concern.
"""
for task in self._scheduled_tasks:
log.debug("Running task {0}".format(task.identifier))
d = task.run()
d.addErrback(self._task_did_not_run, task)
d.addCallback(self._task_success, task)
d.addErrback(self._task_failed, task)
def startService(self):
service.MultiService.startService(self)
self.refresh_deck_list()
self.schedule(UpdateInputsAndResources())
self.schedule(UploadReports())
self.schedule(DeleteOldReports())
self.schedule(CheckMeasurementQuota())
self.schedule(RefreshDeckList(self))
self._looping_call.start(self.interval)
def stopService(self):
service.MultiService.stopService(self)
self._looping_call.stop()
ooniprobe-2.2.0/ooni/agent/agent.py 0000644 0001750 0001750 00000002111 13024243330 015344 0 ustar irl irl from twisted.application import service
from ooni.director import Director
from ooni.settings import config
from ooni.ui.web.web import WebUIService
from ooni.agent.scheduler import SchedulerService
class AgentService(service.MultiService):
"""Manage all services related to the ooniprobe-agent daemon."""
def __init__(self, web_ui_port):
"""
        If advanced->disabled_webui is set to true, the WebUI service
        will not be started.
"""
service.MultiService.__init__(self)
director = Director()
self.scheduler_service = SchedulerService(director)
self.scheduler_service.setServiceParent(self)
if not config.advanced.disabled_webui:
self.web_ui_service = WebUIService(director,
self.scheduler_service,
web_ui_port)
self.web_ui_service.setServiceParent(self)
def startService(self):
service.MultiService.startService(self)
def stopService(self):
service.MultiService.stopService(self)
ooniprobe-2.2.0/ooni/agent/__init__.py 0000644 0001750 0001750 00000000000 12767752452 016027 0 ustar irl irl ooniprobe-2.2.0/ooni/utils/ 0000755 0001750 0001750 00000000000 13071152230 013743 5 ustar irl irl ooniprobe-2.2.0/ooni/utils/files.py 0000644 0001750 0001750 00000001563 12767752460 015452 0 ustar irl irl import os
import re
HUMAN_SIZE = re.compile(r"(\d+\.?\d*G)|(\d+\.?\d*M)|(\d+\.?\d*K)|(\d+\.?\d*)")
class InvalidFormat(Exception):
pass
def human_size_to_bytes(human_size):
"""
Converts a size specified in a human friendly way (for example 1G, 10M,
30K) into bytes.
"""
    match = HUMAN_SIZE.match(human_size)
    if match is None:
        raise InvalidFormat
    gb, mb, kb, b = match.groups()
if gb is not None:
b = float(gb[:-1]) * (1024 ** 3)
elif mb is not None:
b = float(mb[:-1]) * (1024 ** 2)
elif kb is not None:
b = float(kb[:-1]) * 1024
elif b is not None:
b = float(b)
else:
raise InvalidFormat
return b
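The same conversion can be sketched more strictly with an anchored pattern and a unit table (assuming, as above, 1024-based units). `to_bytes` is illustrative, not the ooniprobe helper:

```python
import re

_SIZE_RE = re.compile(r"^(\d+(?:\.\d+)?)([GMK]?)$")
_MULTIPLIER = {"": 1, "K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}

def to_bytes(human_size):
    """Convert sizes like '1G', '10M', '30K' or '512' to bytes."""
    match = _SIZE_RE.match(human_size)
    if match is None:
        raise ValueError("unrecognized size: %r" % (human_size,))
    value, unit = match.groups()
    return float(value) * _MULTIPLIER[unit]
```

Anchoring with `^...$` rejects trailing garbage such as `"10MB"` outright instead of silently matching a prefix.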
def directory_usage(path):
total_usage = 0
for root, dirs, filenames in os.walk(path):
for filename in filenames:
fp = os.path.join(root, filename)
total_usage += os.path.getsize(fp)
return total_usage
ooniprobe-2.2.0/ooni/utils/onion.py 0000644 0001750 0001750 00000030532 13017627406 015456 0 ustar irl irl import os
import re
import pwd
import fcntl
import errno
import string
import StringIO
import subprocess
from distutils.spawn import find_executable
from distutils.version import LooseVersion
from twisted.internet import reactor, defer
from twisted.internet.endpoints import TCP4ClientEndpoint
from txtorcon import TorConfig, TorState, launch_tor, build_tor_connection
from txtorcon.util import find_tor_binary as tx_find_tor_binary
from ooni.utils import mkdir_p
from ooni.utils.net import randomFreePort
from ooni import constants
from ooni import errors
from ooni.utils import log
from ooni.settings import config
ONION_ADDRESS_REGEXP = re.compile("^((httpo|http|https)://)?"
"[a-z0-9]{16}\.onion")
TBB_PT_PATHS = ("/Applications/TorBrowser.app/Contents/MacOS/Tor"
"/PluggableTransports/",)
class TorVersion(LooseVersion):
pass
class OBFSProxyVersion(LooseVersion):
pass
def find_tor_binary():
if config.advanced.tor_binary:
return config.advanced.tor_binary
return tx_find_tor_binary()
def executable_version(binary, strip=lambda x: x):
if not binary:
return None
try:
proc = subprocess.Popen((binary, '--version'),
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError:
pass
else:
stdout, _ = proc.communicate()
if proc.poll() == 0 and stdout != '':
version = stdout.strip()
return LooseVersion(strip(version))
return None
def tor_version():
version = executable_version(find_tor_binary(),
lambda x: x.split(' ')[2])
return TorVersion(str(version))
def obfsproxy_version():
version = executable_version(find_executable('obfsproxy'))
return OBFSProxyVersion(str(version))
def transport_name(address):
"""
If the address of the bridge starts with a valid c identifier then
we consider it to be a bridge.
Returns:
The transport_name if it's a transport.
None if it's not a obfsproxy bridge.
"""
transport_name = address.split(' ')[0]
transport_name_chars = string.ascii_letters + string.digits
if all(c in transport_name_chars for c in transport_name):
return transport_name
return None
def is_onion_address(address):
    return ONION_ADDRESS_REGEXP.match(address) is not None
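The pattern above matches version-2 onion addresses (16 base32 characters) with an optional httpo/http/https scheme. A quick check of its behaviour, with the regexp copied from the module:

```python
import re

# Same pattern as ONION_ADDRESS_REGEXP above
ONION_ADDRESS_REGEXP = re.compile(r"^((httpo|http|https)://)?"
                                  r"[a-z0-9]{16}\.onion")

def is_onion_address(address):
    return ONION_ADDRESS_REGEXP.match(address) is not None
```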
def find_pt_executable(name):
bin_loc = find_executable(name)
if bin_loc:
return bin_loc
for path in TBB_PT_PATHS:
bin_loc = os.path.join(path, name)
if os.path.isfile(bin_loc):
return bin_loc
return None
tor_details = {
'binary': find_tor_binary(),
'version': tor_version()
}
obfsproxy_details = {
'binary': find_executable('obfsproxy'),
'version': obfsproxy_version()
}
transport_bin_name = { 'fte': 'fteproxy',
'scramblesuit': 'obfsproxy',
'obfs2': 'obfsproxy',
'obfs3': 'obfsproxy',
'obfs4': 'obfs4proxy' }
_pyobfsproxy_line = lambda transport, bin_loc, log_file: \
"%s exec %s --log-min-severity info --log-file %s managed" % \
(transport, bin_loc, log_file)
_transport_line_templates = {
'fte': lambda bin_loc, log_file : \
"fte exec %s --managed" % bin_loc,
'scramblesuit': lambda bin_loc, log_file: \
_pyobfsproxy_line('scramblesuit', bin_loc, log_file),
'obfs2': lambda bin_loc, log_file: \
_pyobfsproxy_line('obfs2', bin_loc, log_file),
'obfs3': lambda bin_loc, log_file: \
_pyobfsproxy_line('obfs3', bin_loc, log_file),
'obfs4': lambda bin_loc, log_file: \
"obfs4 exec %s --enableLogging=true --logLevel=INFO" % bin_loc,
}
pt_names = _transport_line_templates.keys()
class UnrecognizedTransport(Exception):
pass
class UninstalledTransport(Exception):
pass
class OutdatedObfsproxy(Exception):
pass
class OutdatedTor(Exception):
pass
def bridge_line(transport, log_file):
bin_name = transport_bin_name.get(transport)
if not bin_name:
raise UnrecognizedTransport
bin_loc = find_executable(bin_name)
if not bin_loc:
raise UninstalledTransport
if OBFSProxyVersion('0.2') > obfsproxy_details['version']:
raise OutdatedObfsproxy
if (transport == 'scramblesuit' or \
bin_name == 'obfs4proxy') and \
TorVersion('0.2.5.1') > tor_details['version']:
raise OutdatedTor
if TorVersion('0.2.4.1') > tor_details['version']:
raise OutdatedTor
return _transport_line_templates[transport](bin_loc, log_file)
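The version gates above rely on LooseVersion ordering (for example, a Tor older than 0.2.5.1 cannot use scramblesuit or obfs4proxy). A rough sketch of that ordering using plain integer tuples — `version_tuple` is illustrative and handles only dotted-numeric versions, unlike LooseVersion:

```python
def version_tuple(version):
    """Parse a dotted numeric version string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def tor_supports_obfs4(tor_version):
    # Minimum version mirrored from the checks in bridge_line above
    return version_tuple(tor_version) >= version_tuple("0.2.5.1")
```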
pt_config = {
'meek': [
{
'executable': 'obfs4proxy',
'minimum_version': '0.0.6',
'version_parse': lambda x: x.split('-')[1],
'client_transport_line': 'meek exec {bin_loc}'
},
{
'executable': 'meek-client',
'minimum_version': None,
'client_transport_line': 'meek exec {bin_loc}'
}
],
'obfs4': [
{
'executable': 'obfs4proxy',
'minimum_version': None,
'client_transport_line': 'obfs4 exec {bin_loc}'
}
]
}
def get_client_transport(transport):
"""
:param transport:
:return: client_transport_line
"""
try:
pts = pt_config[transport]
except KeyError:
raise UnrecognizedTransport
for pt in pts:
bin_loc = find_pt_executable(pt['executable'])
if bin_loc is None:
continue
if pt['minimum_version'] is not None:
pt_version = executable_version(bin_loc, pt['version_parse'])
if (pt_version is None or
pt_version < LooseVersion(pt['minimum_version'])):
continue
return pt['client_transport_line'].format(bin_loc=bin_loc)
raise UninstalledTransport
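get_client_transport walks the candidate list in `pt_config` order and returns the first transport line whose executable is installed (and recent enough). A simplified sketch with the executable lookup injected, so it can run without real binaries — `pick_transport_line` and the `installed` mapping are hypothetical:

```python
def pick_transport_line(candidates, installed):
    """Return the first client transport line whose executable is
    available; `installed` maps executable name -> absolute path."""
    for pt in candidates:
        bin_loc = installed.get(pt['executable'])
        if bin_loc is None:
            continue
        return pt['client_transport_line'].format(bin_loc=bin_loc)
    raise LookupError("no candidate transport is installed")
```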
def is_tor_data_dir_usable(tor_data_dir):
"""
Checks if the Tor data dir specified is usable. This means that
it is not being locked and we have permissions to write to it.
"""
if not os.path.exists(tor_data_dir):
return True
try:
fcntl.flock(open(os.path.join(tor_data_dir, 'lock'), 'w'),
fcntl.LOCK_EX | fcntl.LOCK_NB)
return True
except (IOError, OSError) as err:
if err.errno == errno.EACCES:
# Permission error
return False
        elif err.errno == errno.EAGAIN:
            # File locked
            return False
        # Any other error also means we cannot use the directory
        return False
def get_tor_config():
tor_config = TorConfig()
if config.tor.control_port is None:
config.tor.control_port = int(randomFreePort())
if config.tor.socks_port is None:
config.tor.socks_port = int(randomFreePort())
tor_config.ControlPort = config.tor.control_port
tor_config.SocksPort = config.tor.socks_port
if config.tor.data_dir:
data_dir = os.path.expanduser(config.tor.data_dir)
# We only use the Tor data dir specified in the config file if
# 1. It is not locked (i.e. another process is using it)
# 2. We have write permissions to it
data_dir_usable = is_tor_data_dir_usable(data_dir)
try:
mkdir_p(data_dir)
except OSError as ose:
            if ose.errno == errno.EACCES:
data_dir_usable = False
else:
raise
if data_dir_usable:
tor_config.DataDirectory = data_dir
if config.tor.bridges:
tor_config.UseBridges = 1
if config.advanced.obfsproxy_binary:
tor_config.ClientTransportPlugin = (
'obfs2,obfs3 exec %s managed' %
config.advanced.obfsproxy_binary
)
bridges = []
with open(config.tor.bridges) as f:
for bridge in f:
if 'obfs' in bridge:
if config.advanced.obfsproxy_binary:
bridges.append(bridge.strip())
else:
bridges.append(bridge.strip())
tor_config.Bridge = bridges
if config.tor.torrc:
for i in config.tor.torrc.keys():
setattr(tor_config, i, config.tor.torrc[i])
if os.geteuid() == 0:
tor_config.User = pwd.getpwuid(os.geteuid()).pw_name
tor_config.save()
log.debug("Setting control port as %s" % tor_config.ControlPort)
log.debug("Setting SOCKS port as %s" % tor_config.SocksPort)
return tor_config
class TorLauncherWithRetries(object):
def __init__(self, tor_config, timeout=config.tor.timeout):
self.retry_with = ["obfs4", "meek"]
self.started = defer.Deferred()
self.tor_output = StringIO.StringIO()
self.tor_config = tor_config
if timeout is None:
# XXX we will want to move setting the default inside of the
# config object.
timeout = 200
self.timeout = timeout
def _reset_tor_config(self):
"""
This is used to reset the Tor configuration to before launch_tor
modified it. This is in particular used to force the regeneration of the
DataDirectory.
"""
new_tor_config = TorConfig()
for key in self.tor_config:
if config.tor.data_dir is None and key == "DataDirectory":
continue
setattr(new_tor_config, key, getattr(self.tor_config, key))
self.tor_config = new_tor_config
def _progress_updates(self, prog, tag, summary):
log.msg("%d%%: %s" % (prog, summary))
@defer.inlineCallbacks
def _state_complete(self, state):
config.tor_state = state
log.debug("We now have the following circuits: ")
for circuit in state.circuits.values():
log.debug(" * %s" % circuit)
socks_port = yield state.protocol.get_conf("SocksPort")
control_port = yield state.protocol.get_conf("ControlPort")
config.tor.socks_port = int(socks_port.values()[0])
config.tor.control_port = int(control_port.values()[0])
self.started.callback(state)
def _setup_failed(self, failure):
self.tor_output.seek(0)
map(log.debug, self.tor_output.readlines())
self.tor_output.seek(0)
if len(self.retry_with) == 0:
self.started.errback(errors.UnableToStartTor())
return
while len(self.retry_with) > 0:
self._reset_tor_config()
self.tor_config.UseBridges = 1
transport = self.retry_with.pop(0)
log.msg("Failed to start Tor. Retrying with {0}".format(transport))
try:
bridge_lines = getattr(constants,
'{0}_BRIDGES'.format(transport).upper())
except AttributeError:
continue
try:
self.tor_config.ClientTransportPlugin = get_client_transport(transport)
except UninstalledTransport:
log.err("Pluggable transport {0} is not installed".format(
transport))
continue
except UnrecognizedTransport:
log.err("Unrecognized transport type")
continue
self.tor_config.Bridge = bridge_lines
self.launch()
break
def _setup_complete(self, proto):
"""
Called when we read from stdout that Tor has reached 100%.
"""
log.debug("Building a TorState")
config.tor.protocol = proto
state = TorState(proto.tor_protocol)
state.post_bootstrap.addCallbacks(self._state_complete,
self._setup_failed)
def _launch_tor(self):
return launch_tor(self.tor_config, reactor,
tor_binary=config.advanced.tor_binary,
progress_updates=self._progress_updates,
stdout=self.tor_output,
timeout=self.timeout,
stderr=self.tor_output)
def launch(self):
self._launched = self._launch_tor()
self._launched.addCallbacks(self._setup_complete, self._setup_failed)
return self.started
def start_tor(tor_config):
tor_launcher = TorLauncherWithRetries(tor_config)
return tor_launcher.launch()
@defer.inlineCallbacks
def connect_to_control_port():
connection = TCP4ClientEndpoint(reactor, '127.0.0.1',
config.tor.control_port)
config.tor_state = yield build_tor_connection(connection)
ooniprobe-2.2.0/ooni/utils/net.py 0000644 0001750 0001750 00000013760 13004657346 015130 0 ustar irl irl import sys
import socket
from random import randint
from zope.interface import implements
from twisted.internet import defer
from twisted.internet.protocol import Factory, Protocol
from twisted.web.iweb import IBodyProducer
from scapy.config import conf
from ooni.errors import IfaceError
# This is our own connectProtocol to avoid noisy twisted cluttering our logs
def connectProtocol(endpoint, protocol):
class OneShotFactory(Factory):
noisy = False
def buildProtocol(self, addr):
return protocol
return endpoint.connect(OneShotFactory())
# if sys.platform.system() == 'Windows':
# import _winreg as winreg
# These user agents are taken from the "How Unique Is Your Web Browser?"
# (https://panopticlick.eff.org/browser-uniqueness.pdf) paper as the browser user
# agents with largest anonymity set.
userAgents = ("Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7",
"Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6",
"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6",
"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6",
"Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.2) Gecko/20100115 Firefox/3.6",
"Mozilla/5.0 (Windows; U; Windows NT 6.1; de; rv:1.9.2) Gecko/20100115 Firefox/3.6",
"Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.2) Gecko/20100115 Firefox/3.6",
"Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7",
"Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.7) "
"Gecko/20091221 Firefox/3.5.7 (.NET CLR 3.5.30729)")
PLATFORMS = {'LINUX': sys.platform.startswith("linux"),
'OPENBSD': sys.platform.startswith("openbsd"),
'FREEBSD': sys.platform.startswith("freebsd"),
'NETBSD': sys.platform.startswith("netbsd"),
'DARWIN': sys.platform.startswith("darwin"),
'SOLARIS': sys.platform.startswith("sunos"),
'WINDOWS': sys.platform.startswith("win32")}
# These are the 25 most common server headers for the sites in the
# citizenlab global testing list.
COMMON_SERVER_HEADERS = (
"date",
"content-type",
"server",
"cache-control",
"vary",
"set-cookie",
"location",
"expires",
"x-powered-by",
"content-encoding",
"last-modified",
"accept-ranges",
"pragma",
"x-frame-options",
"etag",
"x-content-type-options",
"age",
"via",
"p3p",
"x-xss-protection",
"content-language",
"cf-ray",
"strict-transport-security",
"link",
"x-varnish"
)
# This is used as a default for checking if we get the expected result when
# fetching URLs over some proxy.
GOOGLE_HUMANS = ('http://www.google.com/humans.txt', 'Google is built by a large')
class StringProducer(object):
implements(IBodyProducer)
def __init__(self, body):
self.body = body
self.length = len(body)
def startProducing(self, consumer):
consumer.write(self.body)
return defer.succeed(None)
def pauseProducing(self):
pass
def stopProducing(self):
pass
class BodyReceiver(Protocol):
def __init__(self, finished, content_length=None, body_processor=None):
self.finished = finished
self.data = ""
self.bytes_remaining = content_length
self.body_processor = body_processor
def dataReceived(self, b):
self.data += b
if self.bytes_remaining:
if self.bytes_remaining == 0:
self.connectionLost(None)
else:
self.bytes_remaining -= len(b)
def connectionLost(self, reason):
try:
if self.body_processor:
self.data = self.body_processor(self.data)
self.finished.callback(self.data)
except Exception as exc:
self.finished.errback(exc)
class Downloader(Protocol):
def __init__(self, download_path,
finished, content_length=None):
self.finished = finished
self.bytes_remaining = content_length
self.fp = open(download_path, 'w+')
def dataReceived(self, b):
self.fp.write(b)
if self.bytes_remaining:
if self.bytes_remaining == 0:
self.connectionLost(None)
else:
self.bytes_remaining -= len(b)
def connectionLost(self, reason):
self.fp.flush()
self.fp.close()
self.finished.callback(None)
class ConnectAndCloseProtocol(Protocol):
def connectionMade(self):
self.transport.loseConnection()
def randomFreePort(addr="127.0.0.1"):
"""
Args:
addr (str): the IP address to attempt to bind to.
    Returns an int representing the free port number at the moment of
    calling.
    Note: there is no guarantee that some other application will not bind
    to this port once this function has been called.
"""
free = False
while not free:
port = randint(1024, 65535)
s = socket.socket()
try:
s.bind((addr, port))
free = True
        except socket.error:
pass
s.close()
return port
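An alternative to probing random ports, shown here for contrast: bind to port 0 and let the OS pick a free one. The same caveat applies — the port may be taken again after the socket is closed. `os_assigned_port` is illustrative, not ooniprobe API:

```python
import socket

def os_assigned_port(addr="127.0.0.1"):
    """Ask the kernel for a currently free TCP port on addr."""
    s = socket.socket()
    try:
        s.bind((addr, 0))  # port 0 means "any free port"
        return s.getsockname()[1]
    finally:
        s.close()
```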
def getDefaultIface():
""" Return the default interface or raise IfaceError """
iface = conf.route.route('0.0.0.0', verbose=0)[0]
if len(iface) > 0:
return iface
raise IfaceError
def getAddresses():
from scapy.all import get_if_addr, get_if_list
from ipaddr import IPAddress
addresses = set()
for i in get_if_list():
try:
addresses.add(get_if_addr(i))
except:
pass
if '0.0.0.0' in addresses:
addresses.remove('0.0.0.0')
return [IPAddress(addr) for addr in addresses]
def hasRawSocketPermission():
try:
socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
return True
except socket.error:
return False
ooniprobe-2.2.0/ooni/utils/__init__.py 0000644 0001750 0001750 00000012376 13046133036 016072 0 ustar irl irl import shutil
import string
import random
import errno
import gzip
import os
from datetime import datetime, timedelta
from zipfile import ZipFile
from twisted.python.filepath import FilePath
from twisted.python.runtime import platform
class Storage(dict):
"""
A Storage object is like a dictionary except `obj.foo` can be used
in addition to `obj['foo']`.
>>> o = Storage(a=1)
>>> o.a
1
>>> o['a']
1
>>> o.a = 2
>>> o['a']
2
>>> del o.a
    >>> o.a is None
    True
"""
def __getattr__(self, key):
try:
return self[key]
except KeyError:
return None
def __setattr__(self, key, value):
self[key] = value
def __delattr__(self, key):
try:
del self[key]
except KeyError, k:
raise AttributeError(k)
    def __repr__(self):
        return '<Storage ' + dict.__repr__(self) + '>'
def __getstate__(self):
return dict(self)
def __setstate__(self, value):
for (k, v) in value.items():
self[k] = v
def checkForRoot():
from ooni import errors
if os.getuid() != 0:
raise errors.InsufficientPrivileges
def randomSTR(length, num=True):
"""
Returns a random string of uppercase letters (and digits, if num is True) of the given length.
"""
chars = string.ascii_uppercase
if num:
chars += string.digits
return ''.join(random.choice(chars) for x in range(length))
def randomstr(length, num=True):
"""
Returns a random string of lowercase letters (and digits, if num is True) of the given length.
"""
chars = string.ascii_lowercase
if num:
chars += string.digits
return ''.join(random.choice(chars) for x in range(length))
def randomStr(length, num=True):
"""
Returns a random string of mixed-case letters (and digits, if num is True)
of the given length.
"""
chars = string.ascii_lowercase + string.ascii_uppercase
if num:
chars += string.digits
return ''.join(random.choice(chars) for x in range(length))
def randomDate(start, end):
"""
From: http://stackoverflow.com/a/553448
"""
delta = end - start
int_delta = (delta.days * 24 * 60 * 60)
random_second = random.randrange(int_delta)
return start + timedelta(seconds=random_second)
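randomDate can be exercised standalone (a Python 3 sketch; like the original, it counts only whole days of the delta when computing the range of random seconds):

```python
import random
from datetime import datetime, timedelta

def random_date(start, end):
    # Pick a uniformly random second offset within the whole days
    # separating start and end.
    int_delta = (end - start).days * 24 * 60 * 60
    return start + timedelta(seconds=random.randrange(int_delta))

start = datetime(2020, 1, 1)
end = datetime(2020, 12, 31)
d = random_date(start, end)
```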
LONG_DATE = "%Y-%m-%d %H:%M:%S"
SHORT_DATE = "%Y%m%dT%H%M%SZ"
def generate_filename(test_details, prefix=None, extension=None, deck_id=None):
"""
Returns a filename for every test execution.
It's used to ensure that all files belonging to a test run share a
common basename and differ only in extension.
"""
kwargs = {}
filename_format = ""
if prefix is not None:
kwargs["prefix"] = prefix
filename_format += "{prefix}-"
filename_format += "{timestamp}-{probe_cc}-{probe_asn}-{test_name}"
if deck_id is not None:
kwargs["deck_id"] = deck_id
filename_format += "-{deck_id}"
if extension is not None:
kwargs["extension"] = extension
filename_format += ".{extension}"
kwargs['test_name'] = test_details['test_name']
kwargs['probe_cc'] = test_details.get('probe_cc', 'ZZ')
kwargs['probe_asn'] = test_details.get('probe_asn', 'AS0')
kwargs['timestamp'] = datetime.strptime(test_details['test_start_time'],
LONG_DATE).strftime(SHORT_DATE)
return filename_format.format(**kwargs)
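The filename assembly above can be sketched standalone to make the resulting shape concrete (Python 3 syntax; `make_filename` is a hypothetical name for this condensed mirror of the logic — optional prefix, then timestamp-cc-asn-testname, then optional extension):

```python
from datetime import datetime

LONG_DATE = "%Y-%m-%d %H:%M:%S"
SHORT_DATE = "%Y%m%dT%H%M%SZ"

def make_filename(test_details, prefix=None, extension=None):
    fmt = "{timestamp}-{probe_cc}-{probe_asn}-{test_name}"
    if prefix is not None:
        fmt = "{prefix}-" + fmt
    if extension is not None:
        fmt += ".{extension}"
    ts = datetime.strptime(test_details['test_start_time'],
                           LONG_DATE).strftime(SHORT_DATE)
    return fmt.format(prefix=prefix, extension=extension,
                      timestamp=ts,
                      test_name=test_details['test_name'],
                      probe_cc=test_details.get('probe_cc', 'ZZ'),
                      probe_asn=test_details.get('probe_asn', 'AS0'))

name = make_filename({'test_name': 'web_connectivity',
                      'test_start_time': '2017-04-01 12:30:00'},
                     extension='json')
```

Note the fallbacks: a probe with unknown geolocation reports country `ZZ` and `AS0`.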
def sanitize_options(options):
"""
Strips all possible user identifying information from the ooniprobe test
options.
Currently only strips leading directories from filepaths.
"""
sanitized_options = []
for option in options:
if isinstance(option, str):
option = os.path.basename(option)
sanitized_options.append(option)
return sanitized_options
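The sanitization step is simply basename-stripping, shown here as a standalone sketch (Python 3 syntax); non-string options pass through untouched so that home directories and usernames embedded in paths never reach the report:

```python
import os.path

def sanitize_options(options):
    # Keep only the basename of any string option; leave other
    # option types (ints, bools, ...) unchanged.
    return [os.path.basename(o) if isinstance(o, str) else o
            for o in options]

clean = sanitize_options(['/home/alice/inputs/urls.txt', 42])
```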
def rename(src, dst):
# Best effort atomic renaming
if platform.isWindows() and os.path.exists(dst):
os.unlink(dst)
os.rename(src, dst)
def unzip(filename, dst):
assert filename.endswith('.zip')
dst_path = os.path.join(
dst,
os.path.basename(filename).replace(".zip", "")
)
with open(filename, 'rb') as zfp:
zip_file = ZipFile(zfp)
zip_file.extractall(dst_path)
return dst_path
def gunzip(file_path):
"""
gunzip a file in place.
"""
tmp_location = FilePath(file_path).temporarySibling()
in_file = gzip.open(file_path)
with tmp_location.open('w') as out_file:
shutil.copyfileobj(in_file, out_file)
in_file.close()
rename(tmp_location.path, file_path)
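The gunzip-in-place pattern can be sketched without Twisted (a Python 3 sketch under the assumption of a POSIX rename; `gunzip_in_place` is a hypothetical name). Decompressing into a sibling temp file and renaming over the original means readers never observe a half-written file:

```python
import gzip
import os
import shutil
import tempfile

def gunzip_in_place(file_path):
    # Decompress next to the target, then atomically replace it.
    tmp_path = file_path + '.tmp'
    with gzip.open(file_path) as in_file, open(tmp_path, 'wb') as out_file:
        shutil.copyfileobj(in_file, out_file)
    os.rename(tmp_path, file_path)

workdir = tempfile.mkdtemp()
path = os.path.join(workdir, 'report.json')
with gzip.open(path, 'wb') as f:
    f.write(b'{"ok": true}')
gunzip_in_place(path)
with open(path, 'rb') as f:
    content = f.read()
```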
def get_ooni_root():
script = os.path.join(__file__, '..')
return os.path.dirname(os.path.realpath(script))
def is_process_running(pid):
try:
os.kill(pid, 0)
running = True
except OSError as ose:
if ose.errno == errno.EPERM:
running = True
elif ose.errno == errno.ESRCH:
running = False
else:
raise
return running
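The signal-0 probe above works because `kill(pid, 0)` performs the existence and permission checks without delivering any signal; EPERM therefore proves the process exists. A standalone sketch (Python 3 syntax, POSIX semantics assumed):

```python
import errno
import os

def is_process_running(pid):
    try:
        os.kill(pid, 0)   # signal 0: check only, deliver nothing
        return True
    except OSError as ose:
        if ose.errno == errno.EPERM:   # exists, owned by someone else
            return True
        if ose.errno == errno.ESRCH:   # no such process
            return False
        raise

alive = is_process_running(os.getpid())
```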
def mkdir_p(path):
"""
Like os.makedirs, but ignores EEXIST errors unless the path exists
and is not a directory.
"""
try:
os.makedirs(path)
except OSError as ose:
if ose.errno != errno.EEXIST:
raise
if not os.path.isdir(path):
raise
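The idempotence of mkdir_p can be checked standalone (Python 3 sketch of the same logic): a second call on an existing directory is a no-op, while an EEXIST that refers to a non-directory still raises.

```python
import errno
import os
import tempfile

def mkdir_p(path):
    # Swallow EEXIST only when the existing path is a directory.
    try:
        os.makedirs(path)
    except OSError as ose:
        if ose.errno != errno.EEXIST:
            raise
        if not os.path.isdir(path):
            raise

base = tempfile.mkdtemp()
target = os.path.join(base, 'a', 'b')
mkdir_p(target)
mkdir_p(target)  # second call succeeds silently
exists = os.path.isdir(target)
```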
ooniprobe-2.2.0/ooni/utils/txscapy.py 0000644 0001750 0001750 00000036452 12767752461 016051 0 ustar irl irl import sys
import time
import random
from twisted.internet import fdesc
from twisted.internet import reactor
from twisted.internet import defer, abstract
from scapy.config import conf
from scapy.all import RandShort, IP, IPerror, ICMP, ICMPerror, TCP, TCPerror, UDP, UDPerror
from ooni.errors import ProtocolNotRegistered, ProtocolAlreadyRegistered, LibraryNotInstalledError
from ooni.utils import log
from ooni.utils.net import getDefaultIface, getAddresses
from ooni.settings import config
# Check to see if libdnet or libpcap are installed and set the according
# variables.
# In debian libdnet is called dumbnet instead of dnet, but scapy is
# expecting "dnet" so we try and import it under such name.
try:
import dumbnet
sys.modules['dnet'] = dumbnet
except ImportError:
pass
try:
conf.use_pcap = True
conf.use_dnet = True
from scapy.arch import pcapdnet
config.pcap_dnet = True
except ImportError as e:
log.err(e.message + ". Pypcap or dnet are not properly installed. Certain tests may not work.")
config.pcap_dnet = False
conf.use_pcap = False
conf.use_dnet = False
# This is required for unix systems that are different than linux (OSX for
# example) since scapy explicitly wants pcap and libdnet installed for it
# to work.
try:
from scapy.arch import pcapdnet
except ImportError:
log.err("Your platform requires having libdnet and libpcap installed.")
raise LibraryNotInstalledError
_PCAP_DNET_INSTALLED = config.pcap_dnet
if _PCAP_DNET_INSTALLED:
from scapy.all import PcapWriter
else:
class DummyPcapWriter:
def __init__(self, pcap_filename, *arg, **kw):
log.err("Initializing DummyPcapWriter. We will not actually write to a pcapfile")
def write(self, *arg, **kw):
pass
def close(self):
pass
PcapWriter = DummyPcapWriter
from scapy.all import Gen, SetGen, MTU
class ScapyFactory(abstract.FileDescriptor):
"""
Inspired by muxTCP scapyLink:
https://github.com/enki/muXTCP/blob/master/scapyLink.py
"""
def __init__(self, interface, super_socket=None, timeout=5):
abstract.FileDescriptor.__init__(self, reactor)
if interface == 'auto':
interface = getDefaultIface()
if not super_socket and sys.platform == 'darwin':
super_socket = conf.L3socket(iface=interface, promisc=True, filter='')
elif not super_socket:
super_socket = conf.L3socket(iface=interface)
self.protocols = []
fdesc._setCloseOnExec(super_socket.ins.fileno())
self.super_socket = super_socket
def writeSomeData(self, data):
"""
XXX we actually want to use this, but this requires overriding doWrite
or writeSequence.
"""
pass
def send(self, packet):
"""
Write a scapy packet to the wire.
"""
return self.super_socket.send(packet)
def fileno(self):
return self.super_socket.ins.fileno()
def doRead(self):
packet = self.super_socket.recv(MTU)
if packet:
for protocol in self.protocols:
protocol.packetReceived(packet)
def registerProtocol(self, protocol):
if not self.connected:
self.startReading()
if protocol not in self.protocols:
protocol.factory = self
self.protocols.append(protocol)
else:
raise ProtocolAlreadyRegistered
def unRegisterProtocol(self, protocol):
if protocol in self.protocols:
self.protocols.remove(protocol)
if len(self.protocols) == 0:
self.loseConnection()
else:
raise ProtocolNotRegistered
class ScapyProtocol(object):
factory = None
def packetReceived(self, packet):
"""
When a protocol is registered, this method is called with the
received packet as its argument.
Every registered protocol has its packetReceived method called.
"""
raise NotImplementedError
class ScapySender(ScapyProtocol):
timeout = 5
# This deferred will fire when we have finished sending and receiving packets.
# Should we look for multiple answers for the same sent packet?
multi = False
# When 0 we stop when all the packets we have sent have received an
# answer
expected_answers = 0
def processPacket(self, packet):
"""
Hook useful for processing packets as they come in.
"""
def processAnswer(self, packet, answer_hr):
log.debug("Got a packet from %s" % packet.src)
log.debug("%s" % self.__hash__)
for i in range(len(answer_hr)):
if packet.answers(answer_hr[i]):
self.answered_packets.append((answer_hr[i], packet))
if not self.multi:
del (answer_hr[i])
break
if len(self.answered_packets) == len(self.sent_packets):
log.debug("All of our questions have been answered.")
self.stopSending()
return
if self.expected_answers and self.expected_answers == len(self.answered_packets):
log.debug("Got the number of expected answers")
self.stopSending()
def packetReceived(self, packet):
if self.timeout and time.time() - self._start_time > self.timeout:
self.stopSending()
if packet:
self.processPacket(packet)
# A string that has the same value for the request as for the
# response.
hr = packet.hashret()
if hr in self.hr_sent_packets:
answer_hr = self.hr_sent_packets[hr]
self.processAnswer(packet, answer_hr)
def stopSending(self):
result = (self.answered_packets, self.sent_packets)
self.d.callback(result)
self.factory.unRegisterProtocol(self)
def sendPackets(self, packets):
if not isinstance(packets, Gen):
packets = SetGen(packets)
for packet in packets:
hashret = packet.hashret()
if hashret in self.hr_sent_packets:
self.hr_sent_packets[hashret].append(packet)
else:
self.hr_sent_packets[hashret] = [packet]
self.sent_packets.append(packet)
self.factory.send(packet)
def startSending(self, packets):
# This dict is used to store the unique hashes that allow scapy to
# match up request with answer
self.hr_sent_packets = {}
# These are the packets we have received as answer to the ones we sent
self.answered_packets = []
# These are the packets we send
self.sent_packets = []
self._start_time = time.time()
self.d = defer.Deferred()
self.sendPackets(packets)
return self.d
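startSending buckets outgoing packets by scapy's `packet.hashret()` so that a reply can be matched against its candidates in constant time. The same bucketing pattern, with hypothetical hash values standing in for real `hashret()` results (a standalone Python 3 sketch):

```python
# ('hash', packet) pairs, as if hashret() had been computed.
sent = [('h1', 'syn-to-a'), ('h2', 'syn-to-b'), ('h1', 'retry-to-a')]

buckets = {}
for hr, pkt in sent:
    # Append to an existing bucket or start a new one, exactly as
    # startSending() does with hr_sent_packets.
    buckets.setdefault(hr, []).append(pkt)

# An incoming packet whose hashret is 'h1' is a candidate answer to
# every packet in that bucket; packet.answers() then disambiguates.
candidates = buckets.get('h1', [])
```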
class ScapySniffer(ScapyProtocol):
def __init__(self, pcap_filename, *arg, **kw):
self.pcapwriter = PcapWriter(pcap_filename, *arg, **kw)
def packetReceived(self, packet):
self.pcapwriter.write(packet)
def close(self):
self.pcapwriter.close()
class ParasiticTraceroute(ScapyProtocol):
def __init__(self):
self.numHosts = 7
self.rate = 15
self.hosts = {}
self.ttl_max = 15
self.ttl_min = 1
self.sent_packets = []
self.received_packets = []
self.matched_packets = {}
self.addresses = [str(x) for x in getAddresses()]
def sendPacket(self, packet):
self.factory.send(packet)
self.sent_packets.append(packet)
log.debug("Sent packet to %s with ttl %d" % (packet.dst, packet.ttl))
def packetReceived(self, packet):
try:
packet[IP]
except IndexError:
return
# Add TTL Expired responses.
if isinstance(packet.getlayer(3), TCPerror):
self.received_packets.append(packet)
# Live traceroute?
log.debug("%s replied with icmp-ttl-exceeded for %s" % (packet.src, packet[IPerror].dst))
return
elif packet.dst in self.hosts:
if random.randint(1, 100) > self.rate:
# Don't send a packet this time
return
try:
packet[IP].ttl = self.hosts[packet.dst]['ttl'].pop()
del packet.chksum # XXX Why is this incorrect?
self.sendPacket(packet)
k = (packet.id, packet[TCP].sport, packet[TCP].dport, packet[TCP].seq)
self.matched_packets[k] = {'ttl': packet.ttl}
return
except IndexError:
return
def maxttl(packet=None):
if packet:
return min(self.ttl_max, *map(lambda x: x - packet.ttl, [64, 128, 256])) - 1
else:
return self.ttl_max
def genttl(packet=None):
ttl = range(self.ttl_min, maxttl(packet))
random.shuffle(ttl)
return ttl
if len(self.hosts) < self.numHosts:
if packet.dst not in self.hosts \
and packet.dst not in self.addresses \
and isinstance(packet.getlayer(1), TCP):
self.hosts[packet.dst] = {'ttl': genttl()}
log.debug("Tracing to %s" % packet.dst)
return
if packet.src not in self.hosts \
and packet.src not in self.addresses \
and isinstance(packet.getlayer(1), TCP):
self.hosts[packet.src] = {'ttl': genttl(packet),
'ttl_max': maxttl(packet)}
log.debug("Tracing to %s" % packet.src)
return
if packet.src in self.hosts and not 'ttl_max' in self.hosts[packet.src]:
self.hosts[packet.src]['ttl_max'] = ttl_max = maxttl(packet)
log.debug("set ttl_max to %d for host %s" % (ttl_max, packet.src))
ttl = []
for t in self.hosts[packet.src]['ttl']:
if t < ttl_max:
ttl.append(t)
self.hosts[packet.src]['ttl'] = ttl
return
def stopListening(self):
self.factory.unRegisterProtocol(self)
class MPTraceroute(ScapyProtocol):
dst_ports = [0, 22, 23, 53, 80, 123, 443, 8080, 65535]
ttl_min = 1
ttl_max = 30
def __init__(self):
self.sent_packets = []
self._recvbuf = []
self.received_packets = {}
self.matched_packets = {}
self.hosts = []
self.interval = 0.2
self.timeout = ((self.ttl_max - self.ttl_min) * len(self.dst_ports) * self.interval) + 5
self.numPackets = 1
def ICMPTraceroute(self, host):
if host not in self.hosts:
self.hosts.append(host)
d = defer.Deferred()
reactor.callLater(self.timeout, d.callback, self)
self.sendPackets(IP(dst=host, ttl=(self.ttl_min, self.ttl_max), id=RandShort()) / ICMP(id=RandShort()))
return d
def UDPTraceroute(self, host):
if host not in self.hosts:
self.hosts.append(host)
d = defer.Deferred()
reactor.callLater(self.timeout, d.callback, self)
for dst_port in self.dst_ports:
self.sendPackets(
IP(dst=host, ttl=(self.ttl_min, self.ttl_max), id=RandShort()) / UDP(dport=dst_port, sport=RandShort()))
return d
def TCPTraceroute(self, host):
if host not in self.hosts:
self.hosts.append(host)
d = defer.Deferred()
reactor.callLater(self.timeout, d.callback, self)
for dst_port in self.dst_ports:
self.sendPackets(
IP(dst=host, ttl=(self.ttl_min, self.ttl_max), id=RandShort()) / TCP(flags=2L, dport=dst_port,
sport=RandShort(),
seq=RandShort()))
return d
@defer.inlineCallbacks
def sendPackets(self, packets):
def sleep(seconds):
d = defer.Deferred()
reactor.callLater(seconds, d.callback, seconds)
return d
if not isinstance(packets, Gen):
packets = SetGen(packets)
for packet in packets:
for i in xrange(self.numPackets):
self.sent_packets.append(packet)
self.factory.super_socket.send(packet)
yield sleep(self.interval)
def matchResponses(self):
def addToReceivedPackets(key, packet):
"""
Add a packet into the received packets dictionary,
typically the key is a tuple of packet fields used
to correlate sent packets with received packets.
"""
# Initialize or append to the lists of packets
# with the same key
if key in self.received_packets:
self.received_packets[key].append(packet)
else:
self.received_packets[key] = [packet]
def matchResponse(k, p):
if k in self.received_packets:
if p in self.matched_packets:
log.debug("Matched sent packet to more than one response!")
self.matched_packets[p].extend(self.received_packets[k])
else:
self.matched_packets[p] = self.received_packets[k]
log.debug("Packet %s matched %s" % ([p], self.received_packets[k]))
return 1
return 0
for p in self._recvbuf:
l = p.getlayer(2)
if isinstance(l, IPerror):
l = p.getlayer(3)
if isinstance(l, ICMPerror):
addToReceivedPackets(('icmp', l.id), p)
elif isinstance(l, TCPerror):
addToReceivedPackets(('tcp', l.dport, l.sport), p)
elif isinstance(l, UDPerror):
addToReceivedPackets(('udp', l.dport, l.sport), p)
elif hasattr(p, 'src') and p.src in self.hosts:
l = p.getlayer(1)
if isinstance(l, ICMP):
addToReceivedPackets(('icmp', l.id), p)
elif isinstance(l, TCP):
addToReceivedPackets(('tcp', l.ack - 1, l.dport, l.sport), p)
elif isinstance(l, UDP):
addToReceivedPackets(('udp', l.dport, l.sport), p)
for p in self.sent_packets:
# for each sent packet, find corresponding
# received packets
l = p.getlayer(1)
i = 0
if isinstance(l, ICMP):
i += matchResponse(('icmp', p.id), p) # match by ipid
i += matchResponse(('icmp', l.id), p) # match by icmpid
if isinstance(l, TCP):
i += matchResponse(('tcp', l.dport, l.sport), p) # match by s|dport
i += matchResponse(('tcp', l.seq, l.sport, l.dport), p)
if isinstance(l, UDP):
i += matchResponse(('udp', l.dport, l.sport), p)
i += matchResponse(('udp', l.sport, l.dport), p)
if i == 0:
log.debug("No response for packet %s" % [p])
del self._recvbuf
def packetReceived(self, packet):
l = packet.getlayer(1)
if not l:
return
elif isinstance(l, ICMP) or isinstance(l, UDP) or isinstance(l, TCP):
self._recvbuf.append(packet)
def stopListening(self):
self.factory.unRegisterProtocol(self)
ooniprobe-2.2.0/ooni/utils/log.py 0000644 0001750 0001750 00000017051 13061505273 015112 0 ustar irl irl import os
import sys
import errno
import codecs
import logging
from datetime import datetime
from twisted.python import log as tw_log
from twisted.python.logfile import DailyLogFile, LogFile
from ooni.utils import mkdir_p
from ooni.utils.files import human_size_to_bytes
from ooni import otime
# Get rid of the annoying "No route found for
# IPv6 destination warnings":
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
class MyDailyLogFile(DailyLogFile):
""" Override the default behavior of the Twisted class so that the
suffix always uses two digits for months and days, making the
rotated log files lexicographically sortable """
def suffix(self, tupledate):
if len(tupledate) < 3: # just in case
return DailyLogFile.suffix(self, tupledate)
return "{:04d}_{:02d}_{:02d}".format(*tupledate[:3])
def log_encode(logmsg):
"""
Encode logmsg (a str or unicode) as printable ASCII. Each case
gets a distinct escape style, so a reader can tell a unicode
string apart from a utf-8-encoded byte string or binary gunk that
would otherwise produce the same final output.
"""
if isinstance(logmsg, unicode):
return codecs.encode(logmsg, 'unicode_escape')
elif isinstance(logmsg, str):
try:
unicodelogmsg = logmsg.decode('utf-8')
except UnicodeDecodeError:
return codecs.encode(logmsg, 'string_escape')
else:
return codecs.encode(unicodelogmsg, 'unicode_escape')
else:
raise Exception("I accept only a unicode object or a string, "
"not a %s object like %r" % (type(logmsg),
repr(logmsg)))
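The escaping scheme can be rendered in Python 3 terms, where `str`/`bytes` replace the Python 2 `unicode`/`str` pair (a sketch, not the module's actual code; the hex fallback for undecodable bytes is this sketch's substitute for Python 2's `string_escape` codec, which does not exist in Python 3):

```python
import codecs

def log_encode(logmsg):
    # Text and raw bytes are escaped differently, so the two remain
    # distinguishable in the final log line.
    if isinstance(logmsg, str):
        return codecs.encode(logmsg, 'unicode_escape')
    if isinstance(logmsg, bytes):
        try:
            return codecs.encode(logmsg.decode('utf-8'), 'unicode_escape')
        except UnicodeDecodeError:
            return codecs.encode(logmsg, 'hex')
    raise TypeError("expected str or bytes, got %r" % type(logmsg))

escaped = log_encode('caf\u00e9\n')   # text: backslash-escaped
binary = log_encode(b'\xff\xfe')      # undecodable bytes: hex dump
```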
levels = {
'NONE': 9999,
'CRITICAL': 50,
'ERROR': 40,
'WARNING': 30,
# This is the name twisted gives it
'WARN': 30,
'NOTICE': 25,
'INFO': 20,
'DEBUG': 10,
}
class LogLevelObserver(tw_log.FileLogObserver):
def __init__(self, f, log_level=levels['INFO']):
tw_log.FileLogObserver.__init__(self, f)
self.log_level = log_level
def should_emit(self, eventDict):
if eventDict['isError']:
level = levels['ERROR']
elif 'log_level' in eventDict:
level = eventDict['log_level']
else:
level = levels['INFO']
# To support twisted > 15.2 log_level argument
if hasattr(level, 'name'):
level = levels[level.name.upper()]
source = 'unknown'
if 'source' in eventDict:
source = eventDict['source']
# Don't log messages not coming from OONI unless the configured log
# level is debug and unless they are really important.
if (source != 'ooni' and
level <= levels['WARN'] and
self.log_level > levels['DEBUG']):
return False
if level >= self.log_level:
return True
return False
def emit(self, eventDict):
if not self.should_emit(eventDict):
return
tw_log.FileLogObserver.emit(self, eventDict)
class StdoutStderrObserver(LogLevelObserver):
stderr = sys.stderr
def emit(self, eventDict):
if not self.should_emit(eventDict):
return
text = tw_log.textFromEventDict(eventDict)
if eventDict['isError']:
self.stderr.write(text + "\n")
self.stderr.flush()
else:
self.write(text + "\n")
self.flush()
class MsecLogObserver(LogLevelObserver):
def formatTime(self, when):
"""
Code from Twisted==16.4.1 modified to log microseconds. Although this
logging subsystem is legacy: http://twistedmatrix.com/trac/ticket/7596
Also, `timeFormat` is not used as `%z` is broken.
"""
tzOffset = -self.getTimezoneOffset(when)
when = datetime.utcfromtimestamp(when + tzOffset)
tzHour = abs(int(tzOffset / 60 / 60))
tzMin = abs(int(tzOffset / 60 % 60))
if tzOffset < 0:
tzSign = '-'
else:
tzSign = '+'
return '%d-%02d-%02d %02d:%02d:%02d,%06d%s%02d%02d' % (
when.year, when.month, when.day,
when.hour, when.minute, when.second,
when.microsecond,
tzSign, tzHour, tzMin)
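The timestamp layout produced by formatTime can be reproduced without the Twisted observer machinery (a standalone Python 3 sketch; `format_time` is a hypothetical name, and the offset arithmetic mirrors the method above):

```python
from datetime import datetime

def format_time(when, tz_offset_seconds):
    # Render a UNIX timestamp with microseconds and a fixed-width
    # numeric timezone suffix, e.g. "1970-01-01 02:00:00,250000+0200".
    when = datetime.utcfromtimestamp(when + tz_offset_seconds)
    sign = '-' if tz_offset_seconds < 0 else '+'
    tz_hour = abs(int(tz_offset_seconds / 60 / 60))
    tz_min = abs(int(tz_offset_seconds / 60 % 60))
    return '%d-%02d-%02d %02d:%02d:%02d,%06d%s%02d%02d' % (
        when.year, when.month, when.day,
        when.hour, when.minute, when.second, when.microsecond,
        sign, tz_hour, tz_min)

stamp = format_time(0.25, 2 * 60 * 60)  # epoch + 250 ms, at UTC+02:00
```

The `%06d` on microseconds keeps the field fixed-width, which matters for lexicographic sorting of log lines.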
class OONILogger(object):
def msg(self, msg, *arg, **kw):
text = log_encode(msg)
tw_log.msg(text, log_level=levels['INFO'], source="ooni")
def debug(self, msg, *arg, **kw):
text = log_encode(msg)
tw_log.msg(text, log_level=levels['DEBUG'], source="ooni")
def err(self, msg, *arg, **kw):
if isinstance(msg, str) or isinstance(msg, unicode):
text = "[!] " + log_encode(msg)
tw_log.msg(text, log_level=levels['ERROR'], source="ooni")
else:
tw_log.err(msg, source="ooni")
def warn(self, msg, *arg, **kw):
text = log_encode(msg)
tw_log.msg(text, log_level=levels['WARNING'], source="ooni")
def exception(self, error):
"""
Error can either be an error message to print to stdout and to the logfile
or it can be a twisted.python.failure.Failure instance.
"""
tw_log.err(error, source="ooni")
def start(self, logfile=None, application_name="ooniprobe"):
from ooni.settings import config
if not logfile:
logfile = os.path.expanduser(config.basic.logfile)
log_folder = os.path.dirname(logfile)
if (not os.access(log_folder, os.W_OK) or
(os.path.exists(logfile) and not os.access(logfile, os.W_OK))):
# If we don't have permissions to write to the log_folder or
# logfile.
log_folder = config.running_path
logfile = os.path.join(log_folder, "ooniprobe.log")
self.log_filepath = logfile
mkdir_p(log_folder)
log_filename = os.path.basename(logfile)
file_log_level = levels.get(config.basic.loglevel,
levels['INFO'])
stdout_log_level = levels['INFO']
if config.advanced.debug:
stdout_log_level = levels['DEBUG']
if config.basic.rotate == 'daily':
logfile = MyDailyLogFile(log_filename, log_folder)
elif config.basic.rotate == 'length':
logfile = LogFile(log_filename, log_folder,
rotateLength=int(human_size_to_bytes(
config.basic.rotate_length
)),
maxRotatedFiles=config.basic.max_rotated_files)
else:
logfile = open(os.path.join(log_folder, log_filename), 'a')
self.fileObserver = MsecLogObserver(logfile, log_level=file_log_level)
self.stdoutObserver = StdoutStderrObserver(sys.stdout,
log_level=stdout_log_level)
tw_log.startLoggingWithObserver(self.fileObserver.emit)
tw_log.addObserver(self.stdoutObserver.emit)
tw_log.msg("Starting %s on %s (%s UTC)" % (application_name,
otime.prettyDateNow(),
otime.prettyDateNowUTC()))
def stop(self):
self.stdoutObserver.stop()
self.fileObserver.stop()
oonilogger = OONILogger()
# This is a mock of a LoggerObserverFactory to be supplied to twistd.
ooniloggerNull = lambda: lambda eventDict: None
start = oonilogger.start
stop = oonilogger.stop
msg = oonilogger.msg
debug = oonilogger.debug
err = oonilogger.err
warn = oonilogger.warn
exception = oonilogger.exception
ooniprobe-2.2.0/ooni/utils/socks.py 0000644 0001750 0001750 00000001603 12733731377 015462 0 ustar irl irl from twisted.internet import reactor
from ooni.common.txextra import HTTPConnectionPool
from twisted import version as twisted_version
from twisted.python.versions import Version
_twisted_15_0 = Version('twisted', 15, 0, 0)
from txsocksx.http import SOCKS5Agent
from txsocksx.client import SOCKS5ClientFactory
SOCKS5ClientFactory.noisy = False
class TrueHeadersSOCKS5Agent(SOCKS5Agent):
def __init__(self, *args, **kw):
super(TrueHeadersSOCKS5Agent, self).__init__(*args, **kw)
pool = HTTPConnectionPool(reactor, False)
#
# With Twisted > 15.0 txsocksx wraps the twisted agent using a
# wrapper class, hence we must set the _pool attribute in the
# inner class rather than into its external wrapper.
#
if twisted_version >= _twisted_15_0:
self._wrappedAgent._pool = pool
else:
self._pool = pool
ooniprobe-2.2.0/ooni/templates/ 0000755 0001750 0001750 00000000000 13071152230 014601 5 ustar irl irl ooniprobe-2.2.0/ooni/templates/__init__.py 0000644 0001750 0001750 00000000000 12373757552 016726 0 ustar irl irl ooniprobe-2.2.0/ooni/templates/tcpt.py 0000644 0001750 0001750 00000006072 12767752456 016165 0 ustar irl irl from twisted.internet import protocol, defer, reactor
from twisted.internet.endpoints import TCP4ClientEndpoint
from ooni.nettest import NetTestCase
from ooni.errors import failureToString
from ooni.utils import log
class TCPSender(protocol.Protocol):
def __init__(self):
self.received_data = ''
self.sent_data = ''
# Set before any payload is sent; dataReceived may fire early.
self.payload_len = 0
def dataReceived(self, data):
"""
We receive data until the total amount of data received reaches that
which we have sent. At that point we append the received data to the
report and we fire the callback of the test template sendPayload
function.
This is used together with a TCP echo server.
The reason why we put the received data inside of an array is that
in the future we may want to expand this to support state and do
something similar to what daphne does, but without the mutation.
XXX Actually daphne will probably be refactored to be a subclass of the
TCP Test Template.
"""
if self.payload_len:
self.received_data += data
def sendPayload(self, payload):
"""
Write the payload to the wire and set the expected size of the payload
we are to receive.
Args:
payload: the data to be sent on the wire.
"""
self.payload_len = len(payload)
self.sent_data = payload
self.transport.write(payload)
class TCPSenderFactory(protocol.Factory):
noisy = False
def buildProtocol(self, addr):
return TCPSender()
class TCPTest(NetTestCase):
name = "Base TCP Test"
version = "0.1"
requiresRoot = False
timeout = 5
address = None
port = None
def _setUp(self):
super(TCPTest, self)._setUp()
self.report['sent'] = []
self.report['received'] = []
def sendPayload(self, payload):
d1 = defer.Deferred()
def closeConnection(proto):
self.report['sent'].append(proto.sent_data)
self.report['received'].append(proto.received_data)
proto.transport.loseConnection()
log.debug("Closing connection")
d1.callback(proto.received_data)
def timedOut(proto):
self.report['failure'] = 'tcp_timed_out_error'
proto.transport.loseConnection()
def errback(failure):
self.report['failure'] = failureToString(failure)
d1.errback(failure)
def connected(proto):
log.debug("Connected to %s:%s" % (self.address, self.port))
proto.report = self.report
proto.deferred = d1
proto.sendPayload(payload)
if self.timeout:
# XXX-Twisted this logic should probably go inside of the protocol
reactor.callLater(self.timeout, closeConnection, proto)
point = TCP4ClientEndpoint(reactor, self.address, self.port)
log.debug("Connecting to %s:%s" % (self.address, self.port))
d2 = point.connect(TCPSenderFactory())
d2.addCallback(connected)
d2.addErrback(errback)
return d1
ooniprobe-2.2.0/ooni/templates/scapyt.py 0000644 0001750 0001750 00000011636 13004657346 016503 0 ustar irl irl from base64 import b64encode
from ooni.nettest import NetTestCase
from ooni.utils import log
from ooni.settings import config
from ooni.utils.net import hasRawSocketPermission
from ooni.utils.txscapy import ScapySender, ScapyFactory
def representPacket(packet):
return {
"raw_packet": {
'data': b64encode(str(packet)),
'format': 'base64'
},
"summary": str(repr(packet))
}
class BaseScapyTest(NetTestCase):
"""
The report of a test run with scapy looks like this:
report:
sent_packets: [
{
'raw_packet': BASE64Encoding of packet,
'summary': 'IP / TCP 192.168.2.66:ftp_data > 8.8.8.8:http S'
}
]
answered_packets: []
"""
name = "Base Scapy Test"
version = 0.1
requiresRoot = not hasRawSocketPermission()
baseFlags = [
['ipsrc', 's',
'Does *not* check if IP src and ICMP IP citation '
'matches when processing answers'],
['seqack', 'k',
'Check if TCP sequence number and ACK match in the '
'ICMP citation when processing answers'],
['ipid', 'i', 'Check if the IPID matches when processing answers']]
def _setUp(self):
super(BaseScapyTest, self)._setUp()
if config.scapyFactory is None:
log.debug("Scapy factory not set, registering it.")
config.scapyFactory = ScapyFactory(config.advanced.interface)
self.report['answer_flags'] = []
if self.localOptions['ipsrc']:
config.checkIPsrc = 0
else:
self.report['answer_flags'].append('ipsrc')
config.checkIPsrc = 1
if self.localOptions['ipid']:
self.report['answer_flags'].append('ipid')
config.checkIPID = 1
else:
config.checkIPID = 0
# XXX we don't support strict matching
# since (from scapy's documentation), some stacks have a bug for which
# the bytes in the IPID are swapped.
# Perhaps in the future we will want to have more fine grained control
# over this.
if self.localOptions['seqack']:
self.report['answer_flags'].append('seqack')
config.check_TCPerror_seqack = 1
else:
config.check_TCPerror_seqack = 0
self.report['sent_packets'] = []
self.report['answered_packets'] = []
def finishedSendReceive(self, packets):
"""
This gets called when all packets have been sent and received.
"""
answered, unanswered = packets
for snd, rcv in answered:
log.debug("Writing report for scapy test")
sent_packet = snd
received_packet = rcv
if not config.privacy.includeip:
log.debug("Detected that you do not want to "
"include your IP in the report")
log.debug(
"Stripping source and destination IPs from the reports")
sent_packet.src = '127.0.0.1'
received_packet.dst = '127.0.0.1'
self.report['sent_packets'].append(representPacket(sent_packet))
self.report['answered_packets'].append(representPacket(received_packet))
return packets
def sr(self, packets, timeout=None, *arg, **kw):
"""
Wrapper around scapy.sendrecv.sr for sending and receiving of packets
at layer 3.
"""
scapySender = ScapySender(timeout=timeout)
config.scapyFactory.registerProtocol(scapySender)
log.debug("Sending with ScapySender hash %s" % scapySender.__hash__)
d = scapySender.startSending(packets)
d.addCallback(self.finishedSendReceive)
return d
def sr1(self, packets, *arg, **kw):
def done(packets):
"""
We do this so that the returned value is only the one packet that
we expected a response for, identical to the scapy implementation
of sr1.
"""
try:
return packets[0][0][1]
except IndexError:
log.err("Got no response...")
return packets
scapySender = ScapySender()
scapySender.expected_answers = 1
config.scapyFactory.registerProtocol(scapySender)
log.debug("Running sr1")
d = scapySender.startSending(packets)
log.debug("Started to send")
d.addCallback(self.finishedSendReceive)
d.addCallback(done)
return d
def send(self, packets, *arg, **kw):
"""
Wrapper around scapy.sendrecv.send for sending of packets at layer 3
"""
scapySender = ScapySender()
config.scapyFactory.registerProtocol(scapySender)
scapySender.startSending(packets)
scapySender.stopSending()
for sent_packet in packets:
self.report['sent_packets'].append(representPacket(sent_packet))
ScapyTest = BaseScapyTest
ooniprobe-2.2.0/ooni/templates/process.py 0000644 0001750 0001750 00000010156 12767752456 016667 0 ustar irl irl from twisted.internet import protocol, defer, reactor
from ooni.settings import config
from ooni.nettest import NetTestCase
from ooni.utils import log
from ooni.geoip import probe_ip
class ProcessDirector(protocol.ProcessProtocol):
def __init__(self, d, finished=None, timeout=None, stdin=None):
self.d = d
self.stderr = ""
self.stdout = ""
self.finished = finished
self.timeout = timeout
self.stdin = stdin
self.timer = None
self.exit_reason = None
def cancelTimer(self):
if self.timeout and self.timer:
self.timer.cancel()
self.timer = None
def close(self, reason=None):
self.reason = reason
self.transport.loseConnection()
def resetTimer(self):
if self.timeout is not None:
if self.timer is not None and self.timer.active():
self.timer.cancel()
self.timer = reactor.callLater(self.timeout,
self.close,
"timeout_reached")
def finish(self, exit_reason=None):
if not self.exit_reason:
self.exit_reason = exit_reason
data = {
"stderr": self.stderr,
"stdout": self.stdout,
"exit_reason": self.exit_reason
}
self.d.callback(data)
def shouldClose(self):
if self.finished is None:
return False
return self.finished(self.stdout, self.stderr)
def connectionMade(self):
self.resetTimer()
if self.stdin is not None:
self.transport.write(self.stdin)
self.transport.closeStdin()
def outReceived(self, data):
log.debug("STDOUT: %s" % data)
self.stdout += data
if self.shouldClose():
self.close("condition_met")
self.handleRead(data, None)
def errReceived(self, data):
log.debug("STDERR: %s" % data)
self.stderr += data
if self.shouldClose():
self.close("condition_met")
self.handleRead(None, data)
def inConnectionLost(self):
log.debug("inConnectionLost")
# self.d.callback(self.data())
def outConnectionLost(self):
log.debug("outConnectionLost")
def errConnectionLost(self):
log.debug("errConnectionLost")
def processExited(self, reason):
log.debug("Exited %s" % reason)
def processEnded(self, reason):
log.debug("Ended %s" % reason)
self.finish("process_done")
def handleRead(self, stdout, stderr=None):
pass
class ProcessTest(NetTestCase):
name = "Base Process Test"
version = "0.1"
requiresRoot = False
timeout = 5
processDirector = None
def _setUp(self):
super(ProcessTest, self)._setUp()
def processEnded(self, result, command):
log.debug("Finished %s: %s" % (command, result))
if not isinstance(self.report.get('commands'), list):
self.report['commands'] = []
# Attempt to redact the IP address of the probe from the standard output
if config.privacy.includeip is False and probe_ip.address is not None:
result['stdout'] = result['stdout'].replace(probe_ip.address, "[REDACTED]")
result['stderr'] = result['stderr'].replace(probe_ip.address, "[REDACTED]")
self.report['commands'].append({
'command_name': ' '.join(command),
'command_stdout': result['stdout'],
'command_stderr': result['stderr'],
'command_exit_reason': result['exit_reason'],
})
return result
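The redaction step in processEnded above can be sketched in isolation. This is a standalone illustration only; the probe address below is a hypothetical placeholder (TEST-NET-3 range), not a real value:

```python
# Standalone sketch of the stdout/stderr redaction performed in
# processEnded: every occurrence of the probe's IP address is masked
# before the command output is attached to the report.
probe_address = "203.0.113.7"  # hypothetical probe IP (TEST-NET-3)
result = {
    "stdout": "connected from 203.0.113.7\n",
    "stderr": "",
}
for stream in ("stdout", "stderr"):
    result[stream] = result[stream].replace(probe_address, "[REDACTED]")
```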
def run(self, command, finished=None, env={}, path=None, usePTY=0):
d = defer.Deferred()
d.addCallback(self.processEnded, command)
self.processDirector = ProcessDirector(d, finished, self.timeout)
self.processDirector.handleRead = self.handleRead
reactor.spawnProcess(self.processDirector, command[0], command, env=env, path=path, usePTY=usePTY)
return d
# handleRead is not an abstract method, for backwards compatibility
def handleRead(self, stdout, stderr=None):
pass
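A blocking stdlib analogue of what ProcessDirector accumulates (stdout, stderr and an exit reason) may help clarify the shape of the result dict. This is a sketch only, without the Twisted event loop, timeouts, or the early-close condition:

```python
import subprocess
import sys

# Run a short child process and collect the same three fields that
# ProcessDirector.finish() delivers through its Deferred.
proc = subprocess.run(
    [sys.executable, "-c", "import sys; sys.stdout.write('hi')"],
    capture_output=True, text=True,
)
data = {
    "stdout": proc.stdout,
    "stderr": proc.stderr,
    "exit_reason": "process_done" if proc.returncode == 0 else "process_failed",
}
```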
# ooniprobe-2.2.0/ooni/templates/httpt.py
import random
from txtorcon.interface import StreamListenerMixin
from twisted.web.client import readBody, PartialDownloadError
from twisted.web.client import ContentDecoderAgent
from twisted.internet import reactor
from twisted.internet.endpoints import TCP4ClientEndpoint
from ooni.utils.socks import TrueHeadersSOCKS5Agent
from ooni.nettest import NetTestCase
from ooni.utils import log
from ooni.settings import config
from ooni.utils.net import StringProducer, userAgents
from ooni.common.txextra import TrueHeaders
from ooni.common.txextra import FixedRedirectAgent, TrueHeadersAgent
from ooni.common.http_utils import representBody
from ooni.errors import handleAllFailures
from ooni.geoip import probe_ip
class InvalidSocksProxyOption(Exception):
pass
class StreamListener(StreamListenerMixin):
def __init__(self, request):
self.request = request
def stream_succeeded(self, stream):
host=self.request['url'].split('/')[2]
try:
if stream.target_host == host and self.request['tor']['exit_ip'] is None:
self.request['tor']['exit_ip'] = stream.circuit.path[-1].ip
self.request['tor']['exit_name'] = stream.circuit.path[-1].name
config.tor_state.stream_listeners.remove(self)
except Exception:
log.err("Tor Exit ip detection failed")
def _representHeaders(headers):
represented_headers = {}
for name, value in headers.getAllRawHeaders():
represented_headers[name] = unicode(value[0], errors='ignore')
return represented_headers
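_representHeaders above relies on Python 2's `unicode(value, errors='ignore')`. The Python 3 spelling of the same lossy decode, shown here as an illustration of the behaviour rather than code from this module:

```python
# Undecodable bytes are silently dropped rather than raising, mirroring
# unicode(value[0], errors='ignore') in _representHeaders.
raw_value = b"text/html; charset=utf-8\xff"
decoded = raw_value.decode("ascii", errors="ignore")
```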
class HTTPTest(NetTestCase):
"""
A utility class for dealing with HTTP based testing. It provides methods to
be overriden for dealing with HTTP based testing.
The main functions to look at are processResponseBody and
processResponseHeader that are invoked once the headers have been received
and once the request body has been received.
To perform requests over Tor you will have to use the special URL schema
"shttp". For example to request / on example.com you will have to do
specify as URL "shttp://example.com/".
XXX all of this requires some refactoring.
"""
name = "HTTP Test"
version = "0.1.1"
randomizeUA = False
followRedirects = False
# When this is set to False we will follow redirects pointing to IPs in
# rfc1918
ignorePrivateRedirects = False
# You can specify a list of tuples in the format of (CONTENT_TYPE,
# DECODER)
# For example to support Gzip decoding you should specify
# contentDecoders = [('gzip', GzipDecoder)]
contentDecoders = []
baseParameters = [['socksproxy', 's', None,
'Specify a socks proxy to use for requests (ip:port)']]
def _setUp(self):
super(HTTPTest, self)._setUp()
try:
import OpenSSL
except ImportError:
log.err("Warning! pyOpenSSL is not installed. https websites will "
"not work")
self.control_agent = TrueHeadersSOCKS5Agent(reactor,
proxyEndpoint=TCP4ClientEndpoint(reactor, '127.0.0.1',
config.tor.socks_port))
self.report['socksproxy'] = None
if self.localOptions['socksproxy']:
try:
sockshost, socksport = self.localOptions['socksproxy'].split(':')
self.report['socksproxy'] = self.localOptions['socksproxy']
except ValueError:
raise InvalidSocksProxyOption
socksport = int(socksport)
self.agent = TrueHeadersSOCKS5Agent(reactor,
proxyEndpoint=TCP4ClientEndpoint(reactor, sockshost,
socksport))
else:
self.agent = TrueHeadersAgent(reactor)
self.report['agent'] = 'agent'
if self.followRedirects:
try:
self.control_agent = FixedRedirectAgent(self.control_agent)
self.agent = FixedRedirectAgent(
self.agent,
ignorePrivateRedirects=self.ignorePrivateRedirects
)
self.report['agent'] = 'redirect'
except:
log.err("Warning! You are running an old version of twisted "
"(<= 10.1). I will not be able to follow redirects."
"This may make the testing less precise.")
if len(self.contentDecoders) > 0:
self.control_agent = ContentDecoderAgent(self.control_agent,
self.contentDecoders)
self.agent = ContentDecoderAgent(self.agent,
self.contentDecoders)
self.processInputs()
log.debug("Finished test setup")
def randomize_useragent(self, request):
user_agent = random.choice(userAgents)
request['headers']['User-Agent'] = [user_agent]
def processInputs(self):
pass
def addToReport(self, request, response=None, response_body=None,
failure_string=None, previous_response=None):
"""
Adds to the report the specified request and response.
Args:
request (dict): A dict describing the request that was made
response (instance): An instance of
:class:`twisted.web.client.Response`.
Note: headers is our modified TrueHeaders version.
failure_string (str): a string describing the failure, if any
"""
log.debug("Adding %s to report" % request)
request_headers = TrueHeaders(request['headers'])
session = {
'request': {
'headers': _representHeaders(request_headers),
'body': request['body'],
'url': request['url'],
'method': request['method'],
'tor': request['tor']
},
'response': None
}
if response:
if (getattr(response, 'request', None) and
getattr(response.request, 'absoluteURI', None)):
session['request']['url'] = response.request.absoluteURI
response_headers = {}
for name, value in response.headers.getAllRawHeaders():
response_headers[name] = value[0]
# Attempt to redact the IP address of the probe from the responses
if config.privacy.includeip is False and \
probe_ip.address is not None:
if isinstance(response_body, (str, unicode)):
response_body = response_body.replace(probe_ip.address, "[REDACTED]")
for key, value in response_headers.items():
response_headers[key] = value.replace(probe_ip.address,
"[REDACTED]")
for key, value in response_headers.items():
response_headers[key] = representBody(value)
if self.localOptions.get('withoutbody', 0) == 0:
response_body = representBody(response_body)
else:
response_body = ''
session['response'] = {
'headers': response_headers,
'body': response_body,
'code': response.code
}
session['failure'] = None
if failure_string:
session['failure'] = failure_string
self.report['requests'].append(session)
if response and response.previousResponse:
previous_response = response.previousResponse
if previous_response:
self.addToReport(request, previous_response,
response_body=None,
failure_string=None)
def _processResponseBody(self, response_body, request, response, body_processor):
log.debug("Processing response body")
HTTPTest.addToReport(self, request, response, response_body)
if body_processor:
body_processor(response_body)
else:
self.processResponseBody(response_body)
response.body = response_body
return response
def _processResponseBodyFail(self, failure, request, response):
if failure.check(PartialDownloadError):
return failure.value.response
failure_string = handleAllFailures(failure)
HTTPTest.addToReport(self, request, response,
failure_string=failure_string)
return response
def processResponseBody(self, body):
"""
Overwrite this method if you wish to interact with the response body of
every request that is made.
Args:
body (str): The body of the HTTP response
"""
pass
def processResponseHeaders(self, headers):
"""
This should take care of dealing with the returned HTTP headers.
Args:
headers (dict): The returned header fields.
"""
pass
def processRedirect(self, location):
"""
Handle a redirection via a 3XX HTTP status code.
Here you may place logic that evaluates the destination that you are
being redirected to. Matches against known censor redirects, etc.
Note: if self.followRedirects is set to True, then this method will
never be called.
XXX perhaps we may want to hook _handleResponse in RedirectAgent to
call processRedirect every time we get redirected.
Args:
location (str): the url that we are being redirected to.
"""
pass
def _cbResponse(self, response, request,
headers_processor, body_processor):
"""
This callback is fired once we have gotten a response for our request.
If we are using a RedirectAgent then this will fire once we have
reached the end of the redirect chain.
Args:
response (:twisted.web.iweb.IResponse:): a provider for getting our response
request (dict): the dict containing our response (XXX this should be dropped)
headers_processor (func): a function to be called with a dict of the
response headers as its argument. When given, it is invoked
instead of self.processResponseHeaders.
body_processor (func): a function to be called with the body of the
response as its argument. When given, it is invoked instead of
self.processResponseBody.
"""
if not response:
log.err("Got no response for request %s" % request)
HTTPTest.addToReport(self, request, response)
return
else:
log.debug("Got response")
log.debug("code: %d" % response.code)
log.debug("headers: %s" % response.headers.getAllRawHeaders())
if str(response.code).startswith('3'):
self.processRedirect(response.headers.getRawHeaders('Location')[0])
# [!] We are passing to the headers_processor the headers dict and
# not the Headers() object
response_headers_dict = list(response.headers.getAllRawHeaders())
if headers_processor:
headers_processor(response_headers_dict)
else:
self.processResponseHeaders(response_headers_dict)
finished = readBody(response)
finished.addErrback(self._processResponseBodyFail, request,
response)
finished.addCallback(self._processResponseBody, request,
response, body_processor)
return finished
def doRequest(self, url, method="GET",
headers={}, body=None, headers_processor=None,
body_processor=None, use_tor=False):
"""
Perform an HTTP request with the specified method and headers.
Args:
url (str): the full URL of the request. The scheme may be either
http, https, or httpo for http over Tor Hidden Service.
Kwargs:
method (str): the HTTP method name to use for the request
headers (dict): the request headers to send
body (str): the request body
headers_processor: a function to be used for processing the HTTP
header responses (defaults to self.processResponseHeaders).
This function takes as argument the HTTP headers as a dict.
body_processor: a function to be used for processing the HTTP
response body (defaults to self.processResponseBody). This
function takes the response body as an argument.
use_tor (bool): specify if the HTTP request should be done over Tor
or not.
"""
# We prefix the URL with 's' to make the connection go over the
# configured socks proxy
if use_tor:
log.debug("Using Tor for the request to %s" % url)
agent = self.control_agent
else:
agent = self.agent
if self.localOptions['socksproxy']:
log.debug("Using SOCKS proxy %s for request" % (self.localOptions['socksproxy']))
log.debug("Performing request %s %s %s" % (url, method, headers))
request = {}
request['method'] = method
request['url'] = url
request['headers'] = headers
request['body'] = body
request['tor'] = {
'exit_ip': None,
'exit_name': None
}
if use_tor:
request['tor']['is_tor'] = True
else:
request['tor']['is_tor'] = False
if self.randomizeUA:
log.debug("Randomizing user agent")
self.randomize_useragent(request)
self.report['requests'] = self.report.get('requests', [])
# If we have a request body payload, set the request body to such
# content
if body:
body_producer = StringProducer(request['body'])
else:
body_producer = None
headers = TrueHeaders(request['headers'])
def errback(failure, request):
if request['tor']['is_tor']:
log.msg("Error performing torified HTTP request: %s" % request['url'])
else:
log.msg("Error performing HTTP request: %s" % request['url'])
failure_string = handleAllFailures(failure)
previous_response = None
if getattr(failure, "previousResponse", None):
previous_response = failure.previousResponse
if getattr(failure, "requestLocation", None):
request['url'] = failure.requestLocation
self.addToReport(request, failure_string=failure_string,
previous_response=previous_response)
return failure
if use_tor:
state = config.tor_state
if state:
state.add_stream_listener(StreamListener(request))
d = agent.request(request['method'], request['url'], headers,
body_producer)
d.addErrback(errback, request)
d.addCallback(self._cbResponse, request, headers_processor,
body_processor)
return d
# ooniprobe-2.2.0/ooni/templates/dnst.py
# -*- encoding: utf-8 -*-
#
# :authors: Arturo Filastò
# :licence: see LICENSE
from twisted.internet import udp, error, base
from twisted.internet.defer import TimeoutError
from twisted.names import client, dns
from twisted.names.client import Resolver
from ooni.utils import log
from ooni.nettest import NetTestCase
from ooni.errors import failureToString
import socket
from socket import gaierror
dns.DNSDatagramProtocol.noisy = False
def _bindSocket(self):
"""
_bindSocket taken from Twisted 13.1.0 to suppress logging.
"""
try:
skt = self.createInternetSocket()
skt.bind((self.interface, self.port))
except socket.error as le:
raise error.CannotListenError(self.interface, self.port, le)
# Make sure that if we listened on port 0, we update that to
# reflect what the OS actually assigned us.
self._realPortNumber = skt.getsockname()[1]
# Here we remove the logging.
# log.msg("%s starting on %s" % (
# self._getLogPrefix(self.protocol), self._realPortNumber))
self.connected = 1
self.socket = skt
self.fileno = self.socket.fileno
udp.Port._bindSocket = _bindSocket
def connectionLost(self, reason=None):
"""
Taken from Twisted 13.1.0 to suppress log.msg printing.
"""
# Here we remove the logging.
# log.msg('(UDP Port %s Closed)' % self._realPortNumber)
self._realPortNumber = None
base.BasePort.connectionLost(self, reason)
self.protocol.doStop()
self.socket.close()
del self.socket
del self.fileno
if hasattr(self, "d"):
self.d.callback(None)
del self.d
udp.Port.connectionLost = connectionLost
def representAnswer(answer):
answer_types = {
dns.SOA: 'SOA',
dns.NS: 'NS',
dns.PTR: 'PTR',
dns.A: 'A',
dns.CNAME: 'CNAME',
dns.MX: 'MX'
}
answer_type = answer_types.get(answer.type, 'unknown')
represented_answer = {
"answer_type": answer_type
}
if answer_type == 'SOA':
represented_answer['ttl'] = answer.payload.ttl
represented_answer['hostname'] = answer.payload.mname.name
represented_answer['responsible_name'] = answer.payload.rname.name
represented_answer['serial_number'] = answer.payload.serial
represented_answer['refresh_interval'] = answer.payload.refresh
represented_answer['retry_interval'] = answer.payload.retry
represented_answer['minimum_ttl'] = answer.payload.minimum
represented_answer['expiration_limit'] = answer.payload.expire
elif answer_type in ['NS', 'PTR', 'CNAME']:
represented_answer['hostname'] = answer.payload.name.name
elif answer_type == 'A':
represented_answer['ipv4'] = answer.payload.dottedQuad()
return represented_answer
class DNSTest(NetTestCase):
name = "Base DNS Test"
version = "0.2.0"
requiresRoot = False
queryTimeout = [1]
def _setUp(self):
super(DNSTest, self)._setUp()
self.report['queries'] = []
def performPTRLookup(self, address, dns_server = None):
"""
Does a reverse DNS lookup on the input ip address
:address: the IP Address as a dotted quad to do a reverse lookup on.
:dns_server: is the dns_server that should be used for the lookup as a
tuple of ip port (ex. ("127.0.0.1", 53))
if None, system dns settings will be used
"""
ptr = '.'.join(address.split('.')[::-1]) + '.in-addr.arpa'
return self.dnsLookup(ptr, 'PTR', dns_server)
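The reverse-name construction used by performPTRLookup can be exercised on its own. A minimal standalone sketch of that one step:

```python
def ptr_name(address):
    # Reverse the dotted-quad octets and append the IPv4 reverse-lookup
    # zone, exactly as performPTRLookup does before dispatching the query.
    return '.'.join(address.split('.')[::-1]) + '.in-addr.arpa'
```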
def performALookup(self, hostname, dns_server = None):
"""
Performs an A lookup and returns an array containing all the dotted quad
IP addresses in the response.
:hostname: is the hostname to perform the A lookup on
:dns_server: is the dns_server that should be used for the lookup as a
tuple of ip port (ex. ("127.0.0.1", 53))
if None, system dns settings will be used
"""
return self.dnsLookup(hostname, 'A', dns_server)
def performNSLookup(self, hostname, dns_server = None):
"""
Performs a NS lookup and returns an array containing all nameservers in
the response.
:hostname: is the hostname to perform the NS lookup on
:dns_server: is the dns_server that should be used for the lookup as a
tuple of ip port (ex. ("127.0.0.1", 53))
if None, system dns settings will be used
"""
return self.dnsLookup(hostname, 'NS', dns_server)
def performSOALookup(self, hostname, dns_server = None):
"""
Performs a SOA lookup and returns the response (name,serial).
:hostname: is the hostname to perform the SOA lookup on
:dns_server: is the dns_server that should be used for the lookup as a
tuple of ip port (ex. ("127.0.0.1", 53))
if None, system dns settings will be used
"""
return self.dnsLookup(hostname,'SOA',dns_server)
def dnsLookup(self, hostname, dns_type, dns_server = None):
"""
Performs a DNS lookup and returns the response.
:hostname: is the hostname to perform the DNS lookup on
:dns_type: type of lookup 'NS'/'A'/'SOA'/'PTR'
:dns_server: is the dns_server that should be used for the lookup as a
tuple of ip port (ex. ("127.0.0.1", 53))
"""
types = {
'NS': dns.NS,
'A': dns.A,
'SOA': dns.SOA,
'PTR': dns.PTR
}
dnsType = types[dns_type]
query = [dns.Query(hostname, dnsType, dns.IN)]
def gotResponse(message):
log.debug(dns_type + " Lookup successful")
log.debug(str(message))
if dns_server:
msg = message.answers
else:
msg = message[0]
answers = []
addrs = []
for answer in msg:
addr = None
if answer.type == dns.SOA:
addr = (answer.name.name,answer.payload.serial)
elif answer.type in [dns.NS, dns.PTR, dns.CNAME]:
addr = answer.payload.name.name
elif answer.type == dns.A:
addr = answer.payload.dottedQuad()
else:
log.debug("Unidentified answer %s" % answer)
addrs.append(addr)
answers.append(representAnswer(answer))
if dns_type == 'SOA':
for authority in message.authority:
answers.append(representAnswer(authority))
DNSTest.addToReport(self, query, resolver=dns_server,
query_type=dns_type, answers=answers)
return addrs
def gotError(failure):
failure.trap(gaierror, TimeoutError)
DNSTest.addToReport(self, query, resolver=dns_server,
query_type=dns_type, failure=failure)
return failure
if dns_server:
resolver = Resolver(servers=[dns_server])
d = resolver.queryUDP(query, timeout=self.queryTimeout)
else:
lookupFunction = {
'NS': client.lookupNameservers,
'SOA': client.lookupAuthority,
'A': client.lookupAddress,
'PTR': client.lookupPointer
}
d = lookupFunction[dns_type](hostname)
d.addCallback(gotResponse)
d.addErrback(gotError)
return d
def addToReport(self, query, resolver=None, query_type=None,
answers=None, failure=None):
log.debug("Adding %s to report)" % query)
result = {
'resolver_hostname': None,
'resolver_port': None
}
if resolver is not None and len(resolver) == 2:
result['resolver_hostname'] = resolver[0]
result['resolver_port'] = resolver[1]
result['query_type'] = query_type
result['hostname'] = str(query[0].name)
result['failure'] = None
if failure:
result['failure'] = failureToString(failure)
result['answers'] = []
if answers:
result['answers'] = answers
self.report['queries'].append(result)
# ooniprobe-2.2.0/ooni/backend_client.py
import os
import json
from urlparse import urljoin, urlparse
from twisted.web.error import Error
from twisted.web.client import Agent, Headers
from twisted.internet import defer, reactor
from twisted.internet.endpoints import TCP4ClientEndpoint
from twisted.python.versions import Version
from twisted import version as _twisted_version
_twisted_14_0_2_version = Version('twisted', 14, 0, 2)
from ooni import errors as e, constants
from ooni.settings import config
from ooni.utils import log, onion
from ooni.utils.net import BodyReceiver, StringProducer, Downloader
from ooni.utils.socks import TrueHeadersSOCKS5Agent
def guess_backend_type(address):
if address is None:
raise e.InvalidAddress
if onion.is_onion_address(address):
return 'onion'
elif address.startswith('https://'):
return 'https'
elif address.startswith('http://'):
return 'http'
else:
raise e.InvalidAddress
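A self-contained sketch of the scheme dispatch performed by guess_backend_type above; `ValueError` stands in for `e.InvalidAddress`, and a simple suffix check stands in for `onion.is_onion_address`:

```python
def guess_backend(address):
    # Order matters: onion addresses are recognized before the plain
    # http/https scheme checks, as in guess_backend_type above.
    if address is None:
        raise ValueError("invalid address")
    if address.startswith("httpo://") or ".onion" in address:
        return "onion"  # stand-in for onion.is_onion_address()
    if address.startswith("https://"):
        return "https"
    if address.startswith("http://"):
        return "http"
    raise ValueError("invalid address")
```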
class OONIBClient(object):
def __init__(self, address=None, settings={}):
self.base_headers = {}
self.backend_type = settings.get('type', None)
self.base_address = settings.get('address', address)
self.front = settings.get('front', '').encode('ascii')
if self.backend_type is None:
self.backend_type = guess_backend_type(self.base_address)
self.backend_type = self.backend_type.encode('ascii')
self.settings = {
'type': self.backend_type,
'address': self.base_address,
'front': self.front
}
self._setupBaseAddress()
def _setupBaseAddress(self):
parsed_address = urlparse(self.base_address)
if self.backend_type == 'onion':
if not onion.is_onion_address(self.base_address):
log.err("Invalid onion address.")
raise e.InvalidAddress(self.base_address)
if parsed_address.scheme in ('http', 'httpo'):
self.base_address = ("http://%s" % parsed_address.netloc)
else:
self.base_address = ("%s://%s" % (parsed_address.scheme,
parsed_address.netloc))
elif self.backend_type == 'http':
self.base_address = ("http://%s" % parsed_address.netloc)
elif self.backend_type == 'https':
self.base_address = ("https://%s" % parsed_address.netloc)
elif self.backend_type == 'cloudfront':
self.base_headers['Host'] = [parsed_address.netloc]
self.base_address = ("https://%s" % self.front)
self.base_address = self.base_address.encode('ascii')
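The scheme rewriting in _setupBaseAddress can be illustrated with the stdlib URL parser. The Python 3 spelling is shown (the module itself imports Python 2's `urlparse`); the onion hostname is a made-up example:

```python
from urllib.parse import urlparse

# "httpo" is OONI's legacy Tor hidden-service scheme; for onion backends
# it is normalized to plain http, keeping only the netloc.
parsed = urlparse("httpo://abcdefghijklmnop.onion/some/path")
base_address = "http://%s" % parsed.netloc
```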
def isSupported(self):
if self.backend_type in ("https", "cloudfront"):
if _twisted_version < _twisted_14_0_2_version:
log.err("HTTPS and cloudfronted backends require "
"twisted > 14.0.2.")
return False
elif self.backend_type == "http":
if config.advanced.insecure_backend is not True:
log.err("Plaintext backends are not supported. To "
"enable at your own risk set "
"advanced->insecure_backend to true")
return False
elif self.backend_type == "onion":
# XXX add an extra check to ensure tor is running
if not config.tor_state and config.tor.socks_port is None:
return False
return True
def isReachable(self):
raise NotImplementedError
def _request(self, method, urn, genReceiver, bodyProducer=None, retries=3):
if self.backend_type == 'onion':
agent = TrueHeadersSOCKS5Agent(reactor,
proxyEndpoint=TCP4ClientEndpoint(reactor,
'127.0.0.1',
config.tor.socks_port))
else:
agent = Agent(reactor)
attempts = 0
finished = defer.Deferred()
def perform_request(attempts):
uri = urljoin(self.base_address, urn)
d = agent.request(method, uri, bodyProducer=bodyProducer,
headers=Headers(self.base_headers))
@d.addCallback
def callback(response):
try:
content_length = int(response.headers.getRawHeaders('content-length')[0])
except:
content_length = None
response.deliverBody(genReceiver(finished, content_length))
def errback(err, attempts):
# We will recursively keep trying to perform the request until we
# have reached the retry count.
if attempts < retries:
log.err("Lookup {} failed. Retrying.".format(uri))
attempts += 1
perform_request(attempts)
else:
log.err("Failed. Giving up.")
finished.errback(err)
d.addErrback(errback, attempts)
perform_request(attempts)
return finished
def queryBackend(self, method, urn, query=None, retries=3):
log.debug("Querying backend {0}{1} with {2}".format(self.base_address,
urn, query))
bodyProducer = None
if query:
bodyProducer = StringProducer(json.dumps(query))
def genReceiver(finished, content_length):
def process_response(s):
# If empty string then don't parse it.
if not s:
return
try:
response = json.loads(s)
except ValueError:
raise e.get_error(None)
if 'error' in response:
log.debug("Got this backend error message %s" % response)
raise e.get_error(response['error'])
return response
return BodyReceiver(finished, content_length, process_response)
return self._request(method, urn, genReceiver, bodyProducer, retries)
def download(self, urn, download_path):
def genReceiver(finished, content_length):
return Downloader(download_path, finished, content_length)
return self._request('GET', urn, genReceiver)
class BouncerClient(OONIBClient):
def isReachable(self):
return defer.succeed(True)
@defer.inlineCallbacks
def lookupTestCollector(self, net_tests):
try:
test_collector = yield self.queryBackend('POST', '/bouncer/net-tests',
query={'net-tests': net_tests})
except Exception as exc:
log.exception(exc)
raise e.CouldNotFindTestCollector
defer.returnValue(test_collector)
@defer.inlineCallbacks
def lookupTestHelpers(self, test_helper_names):
try:
test_helper = yield self.queryBackend('POST', '/bouncer/test-helpers',
query={'test-helpers': test_helper_names})
except Exception as exc:
log.exception(exc)
raise e.CouldNotFindTestHelper
if not test_helper:
raise e.CouldNotFindTestHelper
defer.returnValue(test_helper)
class CollectorClient(OONIBClient):
def isReachable(self):
# XXX maybe in the future we can have a dedicated API endpoint to
# test the reachability of the collector.
d = self.queryBackend('GET', '/invalidpath')
@d.addCallback
def cb(_):
# We should never be getting an acceptable response for a
# request to an invalid path.
return False
@d.addErrback
def err(failure):
failure.trap(Error)
return failure.value.status == '404'
return d
def getInputPolicy(self):
return self.queryBackend('GET', '/policy/input')
def getNettestPolicy(self):
return self.queryBackend('GET', '/policy/nettest')
def createReport(self, test_details):
request = {
'software_name': test_details['software_name'],
'software_version': test_details['software_version'],
'probe_asn': test_details['probe_asn'],
'probe_cc': test_details['probe_cc'],
'test_name': test_details['test_name'],
'test_version': test_details['test_version'],
'test_start_time': test_details['test_start_time'],
'input_hashes': test_details['input_hashes'],
'data_format_version': test_details['data_format_version'],
'format': 'json'
}
# import values from the environment
request.update([(k.lower(),v) for (k,v) in os.environ.iteritems()
if k.startswith('PROBE_')])
return self.queryBackend('POST', '/report', query=request)
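The environment merge in createReport lowercases any `PROBE_`-prefixed variable into the request dict. A standalone sketch with a hypothetical environment dict in place of `os.environ`:

```python
environ = {"PROBE_CC": "IT", "PATH": "/usr/bin"}  # hypothetical os.environ
request = {"format": "json"}
# Only PROBE_-prefixed keys are folded in, with lowercased names.
request.update((k.lower(), v) for k, v in environ.items()
               if k.startswith("PROBE_"))
```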
def updateReport(self, report_id, serialization_format, entry_content):
request = {
'format': serialization_format,
'content': entry_content
}
return self.queryBackend('POST', '/report/%s' % report_id,
query=request)
def closeReport(self, report_id):
return self.queryBackend('POST', '/report/' + report_id + '/close')
class WebConnectivityClient(OONIBClient):
def isReachable(self):
d = self.queryBackend('GET', '/status')
@d.addCallback
def cb(result):
if result.get("status", None) != "ok":
return False
return True
@d.addErrback
def err(_):
return False
return d
def control(self, http_request, tcp_connect,
http_request_headers=None,
include_http_responses=False):
if http_request_headers is None:
http_request_headers = {}
request = {
'http_request': http_request,
'tcp_connect': tcp_connect,
'http_request_headers': http_request_headers,
'include_http_responses': include_http_responses
}
return self.queryBackend('POST', '/', query=request)
def get_preferred_bouncer():
preferred_backend = config.advanced.get(
"preferred_backend", "onion"
)
bouncer_address = getattr(
constants, "CANONICAL_BOUNCER_{0}".format(
preferred_backend.upper()
)
)
if preferred_backend == "cloudfront":
return BouncerClient(
settings={
'address': bouncer_address[0],
'front': bouncer_address[1],
'type': 'cloudfront'
})
else:
return BouncerClient(bouncer_address)
# ooniprobe-2.2.0/ooni/deck/deck.py
import os
import json
import uuid
import errno
import hashlib
from copy import deepcopy
from string import Template
import yaml
from twisted.internet import defer
from twisted.python.filepath import FilePath
from ooni import errors as e
from ooni.backend_client import BouncerClient, CollectorClient
from ooni.backend_client import get_preferred_bouncer
from ooni.deck.backend import lookup_collector_and_test_helpers
from ooni.deck.legacy import convert_legacy_deck
from ooni.geoip import probe_ip
from ooni.nettest import NetTestLoader, nettest_to_path
from ooni.measurements import generate_summary
from ooni.settings import config
from ooni.utils import log, generate_filename
def resolve_file_path(v, prepath=None):
from ooni.deck.store import input_store
if v.startswith("$"):
# This raises InputNotFound and we let it carry onto the caller
return input_store.get(v[1:])["filepath"]
if prepath is not None and (not os.path.isabs(v)):
return FilePath(prepath).preauthChild(v).path
return v
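A simplified stand-in for resolve_file_path using `os.path` instead of Twisted's FilePath; the `$name` input-store branch is omitted, and unlike `FilePath.preauthChild` this sketch does not guard against paths escaping prepath:

```python
import os.path

def resolve_path(v, prepath=None):
    # Relative paths are anchored at the deck directory; absolute
    # paths pass through untouched.
    if prepath is not None and not os.path.isabs(v):
        return os.path.join(prepath, v)
    return v
```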
def options_to_args(options):
args = []
for k, v in options.items():
if v is None:
continue
if v is False:
continue
if (len(k)) == 1:
args.append('-'+k)
else:
args.append('--'+k)
if v is True:
continue
args.append(v)
return args
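The option-to-flag mapping of options_to_args, exercised standalone (keys are sorted here for a deterministic order; the original preserves dict iteration order):

```python
def to_args(options):
    args = []
    for k, v in sorted(options.items()):
        if v is None or v is False:
            continue  # unset and disabled options emit nothing
        # Single-letter keys become short flags, others long flags.
        args.append(('-' if len(k) == 1 else '--') + k)
        if v is not True:  # boolean flags carry no value
            args.append(v)
    return args
```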
def normalize_options(options):
"""
Takes some options that have a mixture of - and _ and returns the
equivalent options with only '_'.
"""
normalized_opts = {}
for k, v in options.items():
normalized_key = k.replace('-', '_')
assert normalized_key not in normalized_opts, "The key {0} cannot be normalized".format(k)
normalized_opts[normalized_key] = v
return normalized_opts
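normalize_options collapses the `-` and `_` spellings of an option into one key space; the assertion guards against two distinct inputs mapping to the same key. A standalone sketch of the same logic:

```python
def normalize(options):
    normalized = {}
    for k, v in options.items():
        key = k.replace('-', '_')
        # Two spellings of the same option would silently collide here,
        # hence the assertion (as in the original implementation).
        assert key not in normalized, key
        normalized[key] = v
    return normalized
```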
class UnknownTaskKey(Exception):
pass
class MissingTaskDataKey(Exception):
pass
class NGDeck(object):
def __init__(self,
deck_data=None,
deck_path=None,
global_options={},
no_collector=False,
arbitrary_paths=False):
# Used to resolve relative paths inside of decks.
self.deck_directory = os.getcwd()
self.requires_tor = False
self.no_collector = no_collector
self.name = ""
self.description = ""
self.icon = ""
self.id = None
self.schedule = None
self.metadata = {}
self.global_options = normalize_options(global_options)
self.bouncer = None
self._arbitrary_paths = arbitrary_paths
self._is_setup = False
self._measurement_path = FilePath(config.measurements_directory)
self._tasks = []
if deck_path is not None:
self.open(deck_path)
elif deck_data is not None:
self.load(deck_data)
def open(self, deck_path, global_options=None):
with open(deck_path) as fh:
deck_data = yaml.safe_load(fh)
self.id = os.path.basename(deck_path[:-1*len('.yaml')])
self.deck_directory = os.path.abspath(os.path.dirname(deck_path))
self.load(deck_data, global_options)
def load(self, deck_data, global_options=None):
if self.id is None:
# This happens when you load a deck not from a filepath so we
# use the first 16 characters of the SHA256 hexdigest as an ID
self.id = hashlib.sha256(json.dumps(deck_data)).hexdigest()[:16]
if global_options is not None:
self.global_options = normalize_options(global_options)
if isinstance(deck_data, list):
deck_data = convert_legacy_deck(deck_data)
self.name = deck_data.pop("name", "Un-named Deck")
self.description = deck_data.pop("description", "No description")
self.icon = deck_data.pop("icon", "fa-gears")
bouncer_address = self.global_options.get('bouncer',
deck_data.pop("bouncer", None))
if bouncer_address is None:
self.bouncer = get_preferred_bouncer()
elif isinstance(bouncer_address, dict):
self.bouncer = BouncerClient(settings=bouncer_address)
else:
self.bouncer = BouncerClient(bouncer_address)
self.schedule = deck_data.pop("schedule", None)
tasks_data = deck_data.pop("tasks", [])
for key, metadata in deck_data.items():
self.metadata[key] = metadata
# We override the task metadata with the global options if present
self.metadata.update(self.global_options)
for task_data in tasks_data:
deck_task = DeckTask(
data=task_data,
parent_metadata=self.metadata,
global_options=self.global_options,
cwd=self.deck_directory,
arbitrary_paths=self._arbitrary_paths
)
if deck_task.requires_tor:
self.requires_tor = True
if (deck_task.requires_bouncer and
self.bouncer.backend_type == "onion"):
self.requires_tor = True
self._tasks.append(deck_task)
if self.metadata.get('no_collector', False):
self.no_collector = True
if (self.no_collector is False and
self.bouncer.backend_type == "onion"):
self.requires_tor = True
@property
def tasks(self):
return self._tasks
def write(self, fh):
"""
Writes a properly formatted deck to the supplied file handle.
:param fh: an open file handle
:return:
"""
deck_data = {
"name": self.name,
"description": self.description,
"tasks": [task.data for task in self._tasks]
}
if self.schedule is not None:
deck_data["schedule"] = self.schedule
for key, value in self.metadata.items():
deck_data[key] = value
fh.write("---\n")
yaml.safe_dump(deck_data, fh, default_flow_style=False)
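For reference, the mapping that `write` dumps to YAML has the following shape. This is a standalone sketch, not part of ooniprobe, and every value in it is hypothetical:

```python
# Sketch of the document NGDeck.write() serializes: name/description,
# the task list, any extra metadata keys, and an optional "schedule".
def build_deck_document(name, description, tasks, schedule=None, **metadata):
    deck_data = {"name": name, "description": description, "tasks": tasks}
    if schedule is not None:
        deck_data["schedule"] = schedule
    deck_data.update(metadata)
    return deck_data

doc = build_deck_document(
    "Example deck",                                   # hypothetical values
    "Runs web_connectivity over the global URL list",
    [{"ooni": {"test_name": "web_connectivity"}}],
    schedule="@daily",
)
```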
@defer.inlineCallbacks
def query_bouncer(self):
preferred_backend = config.advanced.get(
"preferred_backend", "onion"
)
log.msg("Looking up collector and test helpers with {0}".format(
self.bouncer.base_address)
)
net_test_loaders = []
for task in self._tasks:
if task.type == "ooni":
net_test_loaders.append(task.ooni["net_test_loader"])
yield lookup_collector_and_test_helpers(
net_test_loaders,
self.bouncer,
preferred_backend,
self.no_collector
)
defer.returnValue(net_test_loaders)
def _measurement_completed(self, result, task):
if not task.output_path:
measurement_id = task.id
measurement_dir = self._measurement_path.child(measurement_id)
measurement_dir.child("measurements.njson.progress").moveTo(
measurement_dir.child("measurements.njson")
)
generate_summary(
measurement_dir.child("measurements.njson").path,
measurement_dir.child("summary.json").path,
measurement_dir.child("anomaly").path,
deck_id=self.id
)
measurement_dir.child("running.pid").remove()
def _measurement_failed(self, failure, task):
if not task.output_path:
# XXX do we also want to delete measurements.njson.progress?
measurement_id = task.id
measurement_dir = self._measurement_path.child(measurement_id)
measurement_dir.child("running.pid").remove()
return failure
def _run_ooni_task(self, task, director):
net_test_loader = task.ooni["net_test_loader"]
        # XXX-REFACTOR we do this so late to avoid two tasks ending up with
        # the same id, and hence generating the same filename.
test_details = net_test_loader.getTestDetails()
task.id = generate_filename(test_details, deck_id=self.id)
measurement_id = None
report_filename = task.output_path
if not task.output_path:
measurement_id = task.id
measurement_dir = self._measurement_path.child(measurement_id)
try:
measurement_dir.createDirectory()
except OSError as ose:
if ose.errno == errno.EEXIST:
raise Exception("Directory already exists, there is a "
"collision")
report_filename = measurement_dir.child("measurements.njson.progress").path
pid_file = measurement_dir.child("running.pid")
with pid_file.open('w') as out_file:
out_file.write("{0}".format(os.getpid()))
d = director.start_net_test_loader(
net_test_loader,
report_filename,
collector_client=net_test_loader.collector,
test_details=test_details,
measurement_id=measurement_id
)
d.addCallback(self._measurement_completed, task)
d.addErrback(self._measurement_failed, task)
return d
@defer.inlineCallbacks
def setup(self):
"""
This method needs to be called before you are able to run a deck.
"""
from ooni.deck.store import InputNotFound
for task in self._tasks:
try:
yield task.setup()
except InputNotFound:
log.msg("Skipping the task {0} because the input cannot be "
"found".format(task.id))
task.skip = True
self._is_setup = True
@defer.inlineCallbacks
def run(self, director, from_schedule=False):
assert self._is_setup, "You must call setup() before you can run a " \
"deck"
if self.requires_tor:
yield director.start_tor()
yield self.query_bouncer()
director.deckStarted(self.id, from_schedule)
for task in self._tasks:
if task.skip is True:
log.debug("Skipping running {0}".format(task.id))
continue
if task.type == "ooni":
yield self._run_ooni_task(task, director)
director.deckFinished(self.id, from_schedule)
self._is_setup = False
class DeckTask(object):
_metadata_keys = ["name"]
_supported_tasks = ["ooni"]
def __init__(self, data,
parent_metadata={},
global_options={},
cwd=None,
arbitrary_paths=False):
self.parent_metadata = normalize_options(parent_metadata)
self.global_options = global_options
self.cwd = cwd
self.data = deepcopy(data)
self.skip = False
self.id = "invalid"
self.type = None
self.metadata = {}
self.requires_tor = False
self.requires_bouncer = False
        # If this is set to true a deck can specify any path. It should only
        # be enabled for trusted decks, or when a deck is created
        # programmatically to run a test specified from the command line.
self._arbitrary_paths = arbitrary_paths
self.ooni = {
'bouncer_client': None,
'test_details': {},
'test_name': None
}
self.output_path = None
self._load(data)
def _pop_option(self, name, task_data, default=None):
try:
value = self.global_options[name]
if value in [None, 0]:
raise KeyError
except KeyError:
value = task_data.pop(name,
self.parent_metadata.get(name, default))
task_data.pop(name, None)
return value
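The lookup order `_pop_option` implements can be sketched standalone like this (hypothetical helper name, not the ooniprobe API): a truthy global option wins, then the task's own data, then the parent metadata, then the default; the key is always popped from `task_data` so it is not parsed again later as a test argument.

```python
# Precedence: global_options (if truthy) > task_data > parent_metadata
# > default. The key is removed from task_data in every case.
def pop_option(name, task_data, global_options, parent_metadata, default=None):
    value = global_options.get(name)
    if value in (None, 0):
        value = task_data.get(name, parent_metadata.get(name, default))
    task_data.pop(name, None)
    return value
```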
def _load_ooni(self, task_data):
required_keys = ["test_name"]
for required_key in required_keys:
if required_key not in task_data:
raise MissingTaskDataKey(required_key)
self.ooni['test_name'] = task_data.pop('test_name')
# This raises e.NetTestNotFound, we let it go onto the caller
nettest_path = nettest_to_path(self.ooni['test_name'],
self._arbitrary_paths)
annotations = self._pop_option('annotations', task_data, {})
collector_address = self._pop_option('collector', task_data, None)
try:
self.output_path = self.global_options['reportfile']
except KeyError:
self.output_path = task_data.pop('reportfile', None)
if task_data.get('no-collector', False):
collector_address = None
elif config.reports.upload is False:
collector_address = None
net_test_loader = NetTestLoader(
options_to_args(task_data),
annotations=annotations,
test_file=nettest_path
)
if isinstance(collector_address, dict):
net_test_loader.collector = CollectorClient(
settings=collector_address
)
elif collector_address is not None:
net_test_loader.collector = CollectorClient(
collector_address
)
if (net_test_loader.collector is not None and
net_test_loader.collector.backend_type == "onion"):
self.requires_tor = True
try:
net_test_loader.checkOptions()
if net_test_loader.requiresTor:
self.requires_tor = True
except e.MissingTestHelper:
self.requires_bouncer = True
self.ooni['net_test_loader'] = net_test_loader
@defer.inlineCallbacks
def _setup_ooni(self):
yield probe_ip.lookup()
for input_file in self.ooni['net_test_loader'].inputFiles:
filename = Template(input_file['filename']).safe_substitute(
probe_cc=probe_ip.geodata['countrycode'].lower()
)
file_path = resolve_file_path(filename, self.cwd)
input_file['test_options'][input_file['key']] = file_path
def setup(self):
self.id = str(uuid.uuid4())
return getattr(self, "_setup_"+self.type)()
def _load(self, data):
for key in self._metadata_keys:
try:
self.metadata[key] = data.pop(key)
except KeyError:
continue
task_type, task_data = data.popitem()
if task_type not in self._supported_tasks:
raise UnknownTaskKey(task_type)
self.type = task_type
getattr(self, "_load_"+task_type)(task_data)
assert len(data) == 0, "Got an unidentified key"
# ooniprobe-2.2.0/ooni/deck/store.py
import csv
import json
import errno
from copy import deepcopy
from twisted.internet import defer
from twisted.python.filepath import FilePath
from ooni.utils import mkdir_p, log
from ooni.deck.deck import NGDeck
from ooni.otime import timestampNowISO8601UTC
from ooni.resources import check_for_update
from ooni.settings import config
# These are the decks to be run by default.
DEFAULT_DECKS = ['web', 'tor', 'im', 'http-invalid']
class InputNotFound(Exception):
pass
class DeckNotFound(Exception):
pass
def write_txt_from_csv(in_file, out_file, func, skip_header=True):
with in_file.open('r') as in_fh, out_file.open('w') as out_fh:
csvreader = csv.reader(in_fh)
if skip_header:
csvreader.next()
for row in csvreader:
out_fh.write(func(row))
def write_descriptor(out_file, name, desc_id, filepath, file_type):
with out_file.open('w') as out_fh:
json.dump({
"name": name,
"filepath": filepath,
"last_updated": timestampNowISO8601UTC(),
"id": desc_id,
"type": file_type
}, out_fh)
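The descriptor document that `write_descriptor` stores next to each input file looks like the following sketch; the filepath and timestamp shown here are hypothetical:

```python
# Sketch of a descriptor JSON document as written by write_descriptor();
# _update_cache() later reads these back and indexes them by "id".
import json

descriptor = {
    "name": "List of globally accessed websites",
    "filepath": "/home/user/.ooni/inputs/data/citizenlab-test-lists_global.txt",
    "last_updated": "2017-04-06T12:00:00Z",
    "id": "citizenlab_global_urls",
    "type": "file/url",
}
serialized = json.dumps(descriptor)
```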
class InputStore(object):
def __init__(self):
self.path = FilePath(config.inputs_directory)
self.resources = FilePath(config.resources_directory)
self._cache_stale = True
self._cache = {}
@defer.inlineCallbacks
def update_url_lists(self, country_code):
countries = ["global"]
if country_code != "ZZ":
countries.append(country_code)
for cc in countries:
cc = cc.lower()
in_file = self.resources.child("citizenlab-test-lists").child("{0}.csv".format(cc))
if not in_file.exists():
yield check_for_update(country_code)
if not in_file.exists():
log.msg("Could not find input for country "
"{0} in {1}".format(cc, in_file.path))
continue
# XXX maybe move this to some utility function.
# It's duplicated in oonideckgen.
data_fname = "citizenlab-test-lists_{0}.txt".format(cc)
desc_fname = "citizenlab-test-lists_{0}.desc".format(cc)
out_file = self.path.child("data").child(data_fname)
write_txt_from_csv(in_file, out_file,
lambda row: "{}\n".format(row[0])
)
desc_file = self.path.child("descriptors").child(desc_fname)
if cc == "global":
name = "List of globally accessed websites"
else:
# XXX resolve this to a human readable country name
country_name = cc
name = "List of websites for {0}".format(country_name)
write_descriptor(desc_file, name,
"citizenlab_{0}_urls".format(cc),
out_file.path,
"file/url")
self._cache_stale = True
yield defer.succeed(None)
@defer.inlineCallbacks
def update_tor_bridge_lines(self, country_code):
from ooni.utils import onion
in_file = self.resources.child("tor-bridges").child(
"tor-bridges-ip-port.csv"
)
if not in_file.exists():
yield check_for_update(country_code)
data_fname = "tor-bridge-lines.txt"
desc_fname = "tor-bridge-lines.desc"
out_file = self.path.child("data").child(data_fname)
def format_row(row):
host, port, nickname, protocol = row
if protocol.lower() not in onion.pt_names:
return "{}:{}\n".format(host, port)
return "{} {}:{}\n".format(protocol, host, port)
write_txt_from_csv(in_file, out_file, format_row)
desc_file = self.path.child("descriptors").child(desc_fname)
write_descriptor(
desc_file, "Tor bridge lines",
"tor_bridge_lines", out_file.path,
"file/ip-port"
)
self._cache_stale = True
        # Yield an empty deferred to fit inside the event loop clock
yield defer.succeed(None)
@defer.inlineCallbacks
def create(self, country_code=None):
        # XXX This is a hack to avoid race conditions in testing: this
        # object is a singleton, and config can receive a custom home
        # directory at runtime.
self.path = FilePath(config.inputs_directory)
self.resources = FilePath(config.resources_directory)
mkdir_p(self.path.child("descriptors").path)
mkdir_p(self.path.child("data").path)
yield self.update_url_lists(country_code)
yield self.update_tor_bridge_lines(country_code)
@defer.inlineCallbacks
def update(self, country_code=None):
# XXX why do we make a difference between create and update?
yield self.create(country_code)
def _update_cache(self):
new_cache = {}
descs = self.path.child("descriptors")
if not descs.exists():
self._cache = new_cache
return
for fn in descs.listdir():
with descs.child(fn).open("r") as in_fh:
input_desc = json.load(in_fh)
new_cache[input_desc.pop("id")] = input_desc
self._cache = new_cache
self._cache_stale = False
return
def list(self):
if self._cache_stale:
self._update_cache()
return deepcopy(self._cache)
def get(self, input_id):
if self._cache_stale:
self._update_cache()
try:
input_desc = deepcopy(self._cache[input_id])
except KeyError:
raise InputNotFound(input_id)
return input_desc
def getContent(self, input_id):
input_desc = self.get(input_id)
with open(input_desc["filepath"]) as fh:
return fh.read()
class DeckStore(object):
def __init__(self, enabled_directory=config.decks_enabled_directory,
available_directory=config.decks_available_directory):
self.enabled_directory = FilePath(enabled_directory)
self.available_directory = FilePath(available_directory)
self._cache = {}
self._cache_stale = True
def _list(self):
if self._cache_stale:
self._update_cache()
for deck_id, deck in self._cache.iteritems():
yield (deck_id, deck)
def list(self):
decks = []
for deck_id, deck in self._list():
decks.append((deck_id, deck))
return decks
def list_enabled(self):
decks = []
for deck_id, deck in self._list():
if not self.is_enabled(deck_id):
continue
decks.append((deck_id, deck))
return decks
def is_enabled(self, deck_id):
return self.enabled_directory.child(deck_id + '.yaml').exists()
def enable(self, deck_id):
deck_path = self.available_directory.child(deck_id + '.yaml')
if not deck_path.exists():
raise DeckNotFound(deck_id)
deck_enabled_path = self.enabled_directory.child(deck_id + '.yaml')
try:
deck_path.linkTo(deck_enabled_path)
except OSError as ose:
if ose.errno != errno.EEXIST:
raise
def disable(self, deck_id):
deck_enabled_path = self.enabled_directory.child(deck_id + '.yaml')
if not deck_enabled_path.exists():
raise DeckNotFound(deck_id)
deck_enabled_path.remove()
def _update_cache(self):
new_cache = {}
for deck_path in self.available_directory.listdir():
if not deck_path.endswith('.yaml'):
continue
deck = NGDeck(
deck_path=self.available_directory.child(deck_path).path
)
new_cache[deck.id] = deck
self._cache = new_cache
self._cache_stale = False
def get(self, deck_id):
if self._cache_stale:
self._update_cache()
try:
return deepcopy(self._cache[deck_id])
except KeyError:
raise DeckNotFound(deck_id)
deck_store = DeckStore()
input_store = InputStore()
# ooniprobe-2.2.0/ooni/deck/__init__.py
from .deck import NGDeck
# ooniprobe-2.2.0/ooni/deck/legacy.py
class NotAnOption(Exception):
pass
def subargs_to_options(subargs):
options = {}
def parse_option_name(arg):
if arg.startswith("--"):
return arg[2:]
elif arg.startswith("-"):
return arg[1:]
raise NotAnOption
subargs = iter(reversed(subargs))
for subarg in subargs:
try:
value = subarg
name = parse_option_name(subarg)
options[name] = True
except NotAnOption:
try:
name = parse_option_name(subargs.next())
options[name] = value
except StopIteration:
break
return options
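The parsing above can be re-sketched in Python 3 (the original is Python 2) to show the idea: the argv-style list is walked in reverse so a value is always seen before its option name, and options that appear without a value become `True`.

```python
# Python 3 sketch of subargs_to_options; behaviour matches the original.
class NotAnOption(Exception):
    pass

def parse_option_name(arg):
    if arg.startswith("--"):
        return arg[2:]
    if arg.startswith("-"):
        return arg[1:]
    raise NotAnOption

def subargs_to_options(subargs):
    options = {}
    value = None
    args = iter(reversed(subargs))
    for subarg in args:
        try:
            value = subarg
            options[parse_option_name(subarg)] = True
        except NotAnOption:
            # subarg was a value; the next (reversed) item is its option name.
            try:
                options[parse_option_name(next(args))] = value
            except StopIteration:
                break
    return options
```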
boolean_options = [
'no-collector',
'no-geoip',
'no-yamloo',
'verbose',
'help',
'no-default-reporter',
'resume'
]
def convert_legacy_deck(deck_data):
"""
I take a legacy deck list and convert it to the new deck format.
:param deck_data: in the legacy format
:return: deck_data in the new format
"""
assert isinstance(deck_data, list), "Legacy decks are lists"
new_deck_data = {}
new_deck_data["name"] = "Legacy deck"
new_deck_data["description"] = "This is a legacy deck converted to the " \
"new format"
new_deck_data["bouncer"] = None
new_deck_data["tasks"] = []
for deck_item in deck_data:
deck_task = {"ooni": {}}
options = deck_item["options"]
deck_task["ooni"]["test_name"] = options.pop("test_file")
deck_task["ooni"]["annotations"] = options.pop("annotations", {})
deck_task["ooni"]["collector"] = options.pop("collector", None)
        # XXX here we end up picking only the last non-None bouncer_address
bouncer_address = options.pop("bouncer", None)
if bouncer_address is not None:
new_deck_data["bouncer"] = bouncer_address
subargs = options.pop("subargs", [])
for name, value in subargs_to_options(subargs).items():
deck_task["ooni"][name] = value
for name, value in options.items():
if name in boolean_options:
value = False if value == 0 else True
deck_task["ooni"][name] = value
new_deck_data["tasks"].append(deck_task)
return new_deck_data
# ooniprobe-2.2.0/ooni/deck/backend.py
from twisted.internet import defer
from ooni import errors as e
from ooni.backend_client import guess_backend_type, WebConnectivityClient, \
CollectorClient
from ooni.utils import log
def sort_addresses_by_priority(priority_address,
alternate_addresses,
preferred_backend):
prioritised_addresses = []
backend_type = guess_backend_type(priority_address)
priority_address = {
'address': priority_address,
'type': backend_type
}
# We prefer an onion collector to an https collector to a cloudfront
# collector to a plaintext collector
address_priority = ['onion', 'https', 'cloudfront', 'http']
address_priority.remove(preferred_backend)
address_priority.insert(0, preferred_backend)
def filter_by_type(collectors, collector_type):
return filter(lambda x: x['type'] == collector_type, collectors)
if (priority_address['type'] != preferred_backend):
valid_alternatives = filter_by_type(alternate_addresses,
preferred_backend)
if len(valid_alternatives) > 0:
alternate_addresses += [priority_address]
priority_address = valid_alternatives[0]
alternate_addresses.remove(priority_address)
prioritised_addresses += [priority_address]
for address_type in address_priority:
prioritised_addresses += filter_by_type(alternate_addresses,
address_type)
return prioritised_addresses
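The reordering performed above can be sketched standalone (Python 3, not the ooniprobe module itself): the preferred backend type is moved to the front of the priority list, and an alternate of the preferred type may displace the primary address.

```python
# Sketch of sort_addresses_by_priority's reordering logic.
def sort_by_priority(primary, alternates, preferred):
    priority = ['onion', 'https', 'cloudfront', 'http']
    priority.remove(preferred)
    priority.insert(0, preferred)
    alternates = list(alternates)
    # If the primary is not of the preferred type but a preferred-type
    # alternate exists, the alternate becomes the primary.
    if primary['type'] != preferred:
        preferred_alts = [a for a in alternates if a['type'] == preferred]
        if preferred_alts:
            alternates.append(primary)
            primary = preferred_alts[0]
            alternates.remove(primary)
    # The primary always comes first; alternates follow grouped by type.
    result = [primary]
    for backend_type in priority:
        result += [a for a in alternates if a['type'] == backend_type]
    return result
```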
@defer.inlineCallbacks
def get_reachable_test_helper(test_helper_name, test_helper_address,
test_helper_alternate, preferred_backend):
# For the moment we look for alternate addresses only of
# web_connectivity test helpers.
if test_helper_name == 'web-connectivity':
for web_connectivity_settings in sort_addresses_by_priority(
test_helper_address, test_helper_alternate,
preferred_backend):
web_connectivity_test_helper = WebConnectivityClient(
settings=web_connectivity_settings)
if not web_connectivity_test_helper.isSupported():
log.err("Unsupported %s web_connectivity test_helper "
"%s" % (
web_connectivity_settings['type'],
web_connectivity_settings['address']
))
continue
reachable = yield web_connectivity_test_helper.isReachable()
if not reachable:
log.err("Unreachable %s web_connectivity test helper %s" % (
web_connectivity_settings['type'],
web_connectivity_settings['address']
))
continue
defer.returnValue(web_connectivity_settings)
raise e.NoReachableTestHelpers
else:
defer.returnValue(test_helper_address.encode('ascii'))
@defer.inlineCallbacks
def get_reachable_collector(collector_address, collector_alternate,
preferred_backend):
for collector_settings in sort_addresses_by_priority(
collector_address,
collector_alternate,
preferred_backend):
collector = CollectorClient(settings=collector_settings)
if not collector.isSupported():
log.err("Unsupported %s collector %s" % (
collector_settings['type'],
collector_settings['address']))
continue
reachable = yield collector.isReachable()
if not reachable:
log.err("Unreachable %s collector %s" % (
collector_settings['type'],
collector_settings['address']))
continue
defer.returnValue(collector)
raise e.NoReachableCollectors
@defer.inlineCallbacks
def get_reachable_test_helpers_and_collectors(net_tests, preferred_backend):
for net_test in net_tests:
primary_address = net_test['collector']
alternate_addresses = net_test.get('collector-alternate', [])
net_test['collector'] = yield get_reachable_collector(
primary_address, alternate_addresses, preferred_backend)
for test_helper_name, test_helper_address in net_test['test-helpers'].items():
test_helper_alternate = \
net_test.get('test-helpers-alternate', {}).get(test_helper_name, [])
net_test['test-helpers'][test_helper_name] = \
yield get_reachable_test_helper(test_helper_name,
test_helper_address,
test_helper_alternate,
preferred_backend)
defer.returnValue(net_tests)
@defer.inlineCallbacks
def lookup_collector_and_test_helpers(net_test_loaders,
bouncer,
preferred_backend,
no_collector=False):
required_nettests = []
requires_test_helpers = False
requires_collector = False
for net_test_loader in net_test_loaders:
nettest = {
'name': net_test_loader.testName,
'version': net_test_loader.testVersion,
'test-helpers': [],
# XXX deprecate this very soon
'input-hashes': []
}
if not net_test_loader.collector and not no_collector:
requires_collector = True
if len(net_test_loader.missingTestHelpers) > 0:
requires_test_helpers = True
nettest['test-helpers'] += map(lambda x: x[1],
net_test_loader.missingTestHelpers)
required_nettests.append(nettest)
if not requires_test_helpers and not requires_collector:
defer.returnValue(None)
print("Using bouncer %s" % bouncer)
response = yield bouncer.lookupTestCollector(required_nettests)
try:
provided_net_tests = yield get_reachable_test_helpers_and_collectors(
response['net-tests'], preferred_backend)
except e.NoReachableCollectors:
log.err("Could not find any reachable collector")
raise
except e.NoReachableTestHelpers:
log.err("Could not find any reachable test helpers")
raise
def find_collector_and_test_helpers(test_name, test_version):
# input_files = [u""+x['hash'] for x in input_files]
for net_test in provided_net_tests:
if net_test['name'] != test_name:
continue
if net_test['version'] != test_version:
continue
# XXX remove the notion of policies based on input file hashes
# if set(net_test['input-hashes']) != set(input_files):
# continue
return net_test['collector'], net_test['test-helpers']
for net_test_loader in net_test_loaders:
log.msg("Setting collector and test helpers for %s" %
net_test_loader.testName)
collector, test_helpers = \
find_collector_and_test_helpers(
test_name=net_test_loader.testName,
test_version=net_test_loader.testVersion
# input_files=net_test_loader.inputFiles
)
for option, name in net_test_loader.missingTestHelpers:
test_helper_address_or_settings = test_helpers[name]
net_test_loader.localOptions[option] = test_helper_address_or_settings
net_test_loader.testHelpers[option] = test_helper_address_or_settings
if not net_test_loader.collector and not no_collector:
log.debug("Using collector {0}".format(collector))
net_test_loader.collector = collector
# ooniprobe-2.2.0/ooni/common/ip_utils.py
from ipaddr import IPv4Address, IPv6Address
from ipaddr import AddressValueError
def in_private_ip_space(address):
ip_address = IPv4Address(address)
return any(
[ip_address.is_private, ip_address.is_loopback]
)
def is_public_ipv4_address(address):
try:
return not in_private_ip_space(address)
except AddressValueError:
return False
def is_private_ipv4_address(address):
try:
return in_private_ip_space(address)
except AddressValueError:
return False
def is_private_address(address, only_loopback=False):
"""
    Checks whether an IP address is in private IP space, or whether the
    hostname is either "localhost" or ends in ".local".
    :param address: an IP address or a hostname
    :param only_loopback: only check whether the address is loopback
        (127.0.0.0/8, or ::1 for IPv6)
    :return: True if the IP address or host is in private space
"""
try:
ip_address = IPv4Address(address)
except AddressValueError:
try:
ip_address = IPv6Address(address)
except AddressValueError:
if address == "localhost":
return True
elif address.endswith(".local"):
return True
return False
candidates = [ip_address.is_loopback]
if not only_loopback:
candidates.append(ip_address.is_private)
return any(candidates)
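The same check can be sketched with the standard-library `ipaddress` module (the original uses the older third-party `ipaddr` package and keeps IPv4 and IPv6 parsing separate):

```python
# Python 3 sketch of is_private_address using the stdlib ipaddress module.
import ipaddress

def is_private_address(address, only_loopback=False):
    try:
        ip = ipaddress.ip_address(address)
    except ValueError:
        # Not an IP literal: fall back to the hostname checks.
        return address == "localhost" or address.endswith(".local")
    if only_loopback:
        return ip.is_loopback
    return ip.is_loopback or ip.is_private
```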
# ooniprobe-2.2.0/ooni/common/txextra.py
import itertools
from copy import copy
from twisted.web.http_headers import Headers
from twisted.web import error
from twisted.web.client import BrowserLikeRedirectAgent
from twisted.web._newclient import ResponseFailed
from twisted.web._newclient import HTTPClientParser, ParseError
from twisted.python.failure import Failure
from twisted.web import client, _newclient
from twisted.web._newclient import RequestNotSent, RequestGenerationFailed
from twisted.web._newclient import TransportProxyProducer, STATUS
from twisted.internet import reactor
from twisted.internet.defer import Deferred, fail, maybeDeferred, failure
from twisted.python import log
from .ip_utils import is_private_address
class TrueHeaders(Headers):
def __init__(self, rawHeaders=None):
self._rawHeaders = dict()
if rawHeaders is not None:
for name, values in rawHeaders.iteritems():
if type(values) is list:
self.setRawHeaders(name, values[:])
elif type(values) is str:
self.setRawHeaders(name, values)
def setRawHeaders(self, name, values):
if name.lower() not in self._rawHeaders:
self._rawHeaders[name.lower()] = dict()
self._rawHeaders[name.lower()]['name'] = name
self._rawHeaders[name.lower()]['values'] = values
def copy(self):
rawHeaders = {}
for k, v in self.getAllRawHeaders():
rawHeaders[k] = v
return self.__class__(rawHeaders)
def getAllRawHeaders(self):
for _, v in self._rawHeaders.iteritems():
yield v['name'], v['values']
def getRawHeaders(self, name, default=None):
if name.lower() in self._rawHeaders:
return self._rawHeaders[name.lower()]['values']
return default
def getDiff(self, headers, ignore=[]):
"""
Args:
headers: a TrueHeaders object
ignore: specify a list of header fields to ignore
Returns:
a set containing the header names that are not present in
header_dict or not present in self.
"""
diff = set()
field_names = []
headers_a = copy(self)
headers_b = copy(headers)
for name in ignore:
try:
del headers_a._rawHeaders[name.lower()]
except KeyError:
pass
try:
del headers_b._rawHeaders[name.lower()]
except KeyError:
pass
for k, v in itertools.chain(headers_a.getAllRawHeaders(),
headers_b.getAllRawHeaders()):
field_names.append(k)
for name in field_names:
if self.getRawHeaders(name) and headers.getRawHeaders(name):
pass
else:
diff.add(name)
return list(diff)
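What `getDiff` computes can be sketched with plain dicts (hypothetical helper, independent of Twisted's `Headers`): the case-insensitive header names present in only one of the two header sets, optionally ignoring some fields.

```python
# Minimal dict-based sketch of TrueHeaders.getDiff.
def header_diff(headers_a, headers_b, ignore=()):
    ignored = {name.lower() for name in ignore}
    names_a = {name.lower() for name in headers_a} - ignored
    names_b = {name.lower() for name in headers_b} - ignored
    return names_a ^ names_b  # symmetric difference
```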
class HTTPClientParser(_newclient.HTTPClientParser):
def logPrefix(self):
return 'HTTPClientParser'
def connectionMade(self):
self.headers = TrueHeaders()
self.connHeaders = TrueHeaders()
self.state = STATUS
self._partialHeader = None
def headerReceived(self, name, value):
if self.isConnectionControlHeader(name.lower()):
headers = self.connHeaders
else:
headers = self.headers
headers.addRawHeader(name, value)
def statusReceived(self, status):
# This is a fix for invalid number of parts
try:
return _newclient.HTTPClientParser.statusReceived(self, status)
except ParseError as exc:
if exc.args[0] == 'wrong number of parts':
return _newclient.HTTPClientParser.statusReceived(self,
status + " XXX")
raise
class HTTP11ClientProtocol(_newclient.HTTP11ClientProtocol):
def request(self, request):
if self._state != 'QUIESCENT':
return fail(RequestNotSent())
self._state = 'TRANSMITTING'
_requestDeferred = maybeDeferred(request.writeTo, self.transport)
self._finishedRequest = Deferred()
self._currentRequest = request
self._transportProxy = TransportProxyProducer(self.transport)
self._parser = HTTPClientParser(request, self._finishResponse)
self._parser.makeConnection(self._transportProxy)
self._responseDeferred = self._parser._responseDeferred
def cbRequestWrotten(ignored):
if self._state == 'TRANSMITTING':
self._state = 'WAITING'
self._responseDeferred.chainDeferred(self._finishedRequest)
def ebRequestWriting(err):
if self._state == 'TRANSMITTING':
self._state = 'GENERATION_FAILED'
self.transport.loseConnection()
self._finishedRequest.errback(
failure.Failure(RequestGenerationFailed([err])))
else:
log.err(err, 'Error writing request, but not in valid state '
'to finalize request: %s' % self._state)
_requestDeferred.addCallbacks(cbRequestWrotten, ebRequestWriting)
return self._finishedRequest
class _HTTP11ClientFactory(client._HTTP11ClientFactory):
noisy = False
def buildProtocol(self, addr):
return HTTP11ClientProtocol(self._quiescentCallback)
class HTTPConnectionPool(client.HTTPConnectionPool):
_factory = _HTTP11ClientFactory
class TrueHeadersAgent(client.Agent):
def __init__(self, *args, **kw):
super(TrueHeadersAgent, self).__init__(*args, **kw)
self._pool = HTTPConnectionPool(reactor, False)
class FixedRedirectAgent(BrowserLikeRedirectAgent):
"""
This is a redirect agent with this patch manually applied:
https://twistedmatrix.com/trac/ticket/8265
"""
def __init__(self, agent, redirectLimit=20, ignorePrivateRedirects=False):
self.ignorePrivateRedirects = ignorePrivateRedirects
BrowserLikeRedirectAgent.__init__(self, agent, redirectLimit)
def _handleRedirect(self, response, method, uri, headers, redirectCount):
"""
Handle a redirect response, checking the number of redirects already
followed, and extracting the location header fields.
        This is patched to fix a bug that could cause an infinite redirect loop.
"""
if redirectCount >= self._redirectLimit:
err = error.InfiniteRedirection(
response.code,
b'Infinite redirection detected',
location=uri)
raise ResponseFailed([Failure(err)], response)
locationHeaders = response.headers.getRawHeaders(b'location', [])
if not locationHeaders:
err = error.RedirectWithNoLocation(
response.code, b'No location header field', uri)
raise ResponseFailed([Failure(err)], response)
location = self._resolveLocation(
# This is the fix to properly handle redirects
response.request.absoluteURI,
locationHeaders[0]
)
if getattr(client, 'URI', None):
uri = client.URI.fromBytes(location)
else:
# Backward compatibility with twisted 14.0.2
uri = client._URI.fromBytes(location)
if self.ignorePrivateRedirects and is_private_address(uri.host,
only_loopback=True):
return response
deferred = self._agent.request(method, location, headers)
def _chainResponse(newResponse):
if isinstance(newResponse, Failure):
# This is needed to write the response even in case of failure
newResponse.previousResponse = response
newResponse.requestLocation = location
return newResponse
newResponse.setPreviousResponse(response)
return newResponse
deferred.addBoth(_chainResponse)
return deferred.addCallback(
self._handleResponse, method, uri, headers, redirectCount + 1)
# ooniprobe-2.2.0/ooni/common/__init__.py
"""
This module contains functionality that is shared amongst ooni-probe and
ooni-backend. If the code in here starts growing too much I think it would
make sense to either:
* Make the code in here into its own package that is imported by
ooni-probe and ooni-backend.
* Merge ooniprobe with oonibackend.
"""
# ooniprobe-2.2.0/ooni/common/tcp_utils.py
from twisted.internet.protocol import Factory, Protocol
class TCPConnectProtocol(Protocol):
def connectionMade(self):
self.transport.loseConnection()
class TCPConnectFactory(Factory):
noisy = False
def buildProtocol(self, addr):
return TCPConnectProtocol()
# ooniprobe-2.2.0/ooni/common/http_utils.py
import re
from base64 import b64encode
def representBody(body):
if not body:
return body
try:
body = unicode(body, 'utf-8')
except UnicodeDecodeError:
body = {
'data': b64encode(body),
'format': 'base64'
}
return body
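The encoding rule above can be re-sketched in Python 3 (the original is Python 2): bodies that decode as UTF-8 are reported as text, anything else is wrapped in a base64 `data`/`format` mapping so it survives JSON serialization.

```python
# Python 3 sketch of representBody.
from base64 import b64encode

def represent_body(body):
    if not body:
        return body
    try:
        return body.decode('utf-8')
    except UnicodeDecodeError:
        return {'data': b64encode(body).decode('ascii'), 'format': 'base64'}
```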
TITLE_REGEXP = re.compile("<title>(.*?)</title>", re.IGNORECASE | re.DOTALL)
def extractTitle(body):
    m = TITLE_REGEXP.search(body)
    if m:
        return unicode(m.group(1), errors='ignore')
    return ''
REQUEST_HEADERS = {
'User-Agent': ['Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, '
'like Gecko) Chrome/47.0.2526.106 Safari/537.36'],
'Accept-Language': ['en-US;q=0.8,en;q=0.5'],
'Accept': ['text/html,application/xhtml+xml,application/xml;q=0.9,'
'*/*;q=0.8']
}
# ooniprobe-2.2.0/ooni/errors.py
from twisted.internet.defer import CancelledError
from twisted.internet.defer import TimeoutError as DeferTimeoutError
from twisted.web._newclient import ResponseNeverReceived
from twisted.web.error import Error
from twisted.internet.error import ConnectionRefusedError, TCPTimedOutError
from twisted.internet.error import DNSLookupError, ConnectError, ConnectionLost
from twisted.names.error import DNSNameError, DNSServerError
from twisted.internet.error import TimeoutError as GenericTimeoutError
from twisted.internet.error import ProcessDone, ConnectionDone
from twisted.python import usage
from txsocksx.errors import SOCKSError
from txsocksx.errors import MethodsNotAcceptedError, AddressNotSupported
from txsocksx.errors import ConnectionError, NetworkUnreachable
from txsocksx.errors import ConnectionLostEarly, ConnectionNotAllowed
from txsocksx.errors import NoAcceptableMethods, ServerFailure
from txsocksx.errors import HostUnreachable, ConnectionRefused
from txsocksx.errors import TTLExpired, CommandNotSupported
from socket import gaierror
known_failures = [
(ConnectionRefusedError, 'connection_refused_error'),
(ConnectionLost, 'connection_lost_error'),
(CancelledError, 'task_timed_out'),
(gaierror, 'address_family_not_supported_error'),
(DNSLookupError, 'dns_lookup_error'),
(DNSNameError, 'dns_name_error'),
(DNSServerError, 'dns_server_failure'),
(TCPTimedOutError, 'tcp_timed_out_error'),
(ResponseNeverReceived, 'response_never_received'),
(DeferTimeoutError, 'deferred_timeout_error'),
(GenericTimeoutError, 'generic_timeout_error'),
(MethodsNotAcceptedError, 'socks_methods_not_supported'),
(AddressNotSupported, 'socks_address_not_supported'),
(NetworkUnreachable, 'socks_network_unreachable'),
(ConnectionError, 'socks_connect_error'),
(ConnectionLostEarly, 'socks_connection_lost_early'),
(ConnectionNotAllowed, 'socks_connection_not_allowed'),
(NoAcceptableMethods, 'socks_no_acceptable_methods'),
(ServerFailure, 'socks_server_failure'),
(HostUnreachable, 'socks_host_unreachable'),
(ConnectionRefused, 'socks_connection_refused'),
(TTLExpired, 'socks_ttl_expired'),
(CommandNotSupported, 'socks_command_not_supported'),
(SOCKSError, 'socks_error'),
(ProcessDone, 'process_done'),
(ConnectionDone, 'connection_done'),
(ConnectError, 'connect_error'),
]
def handleAllFailures(failure):
"""
    Trap all the known Failures and return a string that represents the
    failure. Any unknown Failure will be re-raised by failure.trap().
"""
failure.trap(*[failure_type for failure_type, _ in known_failures])
return failureToString(failure)
def failureToString(failure):
"""
Given a failure instance return a string representing the kind of error
that occurred.
Args:
failure: a :class:twisted.internet.error instance
Returns:
    A string representing the kind of error that occurred.
"""
for failure_type, failure_string in known_failures:
if isinstance(failure.value, failure_type):
return failure_string
# Failure without a corresponding failure message
return 'unknown_failure %s' % str(failure.value)
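The known_failures table is an ordered isinstance() lookup. A self-contained sketch of the same mapping, using Python 3 builtin exceptions in place of the Twisted/txsocksx types (the entries here are an illustrative subset):

```python
# Illustrative subset of the table; the real one maps Twisted and
# txsocksx exception classes to OONI failure strings.
known_failure_names = [
    (ConnectionRefusedError, 'connection_refused_error'),
    (TimeoutError, 'generic_timeout_error'),
]

def failure_to_string(exc):
    # First matching isinstance() wins; unknown exceptions fall through
    # to an 'unknown_failure' string, as in failureToString above.
    for failure_type, failure_string in known_failure_names:
        if isinstance(exc, failure_type):
            return failure_string
    return 'unknown_failure %s' % exc

print(failure_to_string(ConnectionRefusedError()))  # connection_refused_error
print(failure_to_string(ValueError('boom')))        # unknown_failure boom
```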
class DirectorException(Exception):
pass
class UnableToStartTor(DirectorException):
pass
class InvalidAddress(Exception):
pass
class InvalidOONIBCollectorAddress(InvalidAddress):
pass
class InvalidOONIBBouncerAddress(InvalidAddress):
pass
class AllReportersFailed(Exception):
pass
class GeoIPDataFilesNotFound(Exception):
pass
class ReportNotCreated(Exception):
pass
class ReportAlreadyClosed(Exception):
pass
class TorStateNotFound(Exception):
pass
class TorControlPortNotFound(Exception):
pass
class InsufficientPrivileges(Exception):
pass
class ProbeIPUnknown(Exception):
pass
class NoMoreReporters(Exception):
pass
class TorNotRunning(Exception):
pass
class OONIBError(Exception):
pass
class OONIBInvalidRequest(OONIBError):
pass
class OONIBReportError(OONIBError):
pass
class OONIBReportUpdateError(OONIBReportError):
pass
class OONIBReportCreationError(OONIBReportError):
pass
class OONIBTestDetailsLookupError(OONIBReportError):
pass
class OONIBInputError(OONIBError):
pass
class OONIBInputDescriptorNotFound(OONIBInputError):
pass
class OONIBInvalidInputHash(OONIBError):
pass
class OONIBInvalidNettestName(OONIBError):
pass
class UnableToLoadDeckInput(Exception):
pass
class CouldNotFindTestHelper(Exception):
pass
class CouldNotFindTestCollector(Exception):
pass
class NetTestNotFound(Exception):
pass
class MissingRequiredOption(Exception):
def __init__(self, message, net_test_loader):
super(MissingRequiredOption, self).__init__()
self.net_test_loader = net_test_loader
self.message = message
def __str__(self):
return ','.join(self.message)
class MissingTestHelper(MissingRequiredOption):
pass
class OONIUsageError(usage.UsageError):
def __init__(self, net_test_loader):
super(OONIUsageError, self).__init__()
self.net_test_loader = net_test_loader
class FailureToLoadNetTest(Exception):
pass
class NoPostProcessor(Exception):
pass
class InvalidOption(Exception):
pass
class IncoherentOptions(Exception):
def __init__(self, first_options, second_options):
super(IncoherentOptions, self).__init__()
self.message = "%s is different to %s" % (first_options, second_options)
def __str__(self):
return self.message
class TaskTimedOut(Exception):
pass
class InvalidInputFile(Exception):
pass
class ReporterException(Exception):
pass
class InvalidDestination(ReporterException):
pass
class ReportLogExists(Exception):
pass
class InvalidConfigFile(Exception):
pass
class ConfigFileIncoherent(Exception):
pass
def get_error(error_key):
if error_key == 'test-helpers-key-missing':
return CouldNotFindTestHelper
elif error_key == 'input-descriptor-not-found':
return OONIBInputDescriptorNotFound
elif error_key == 'invalid-request':
return OONIBInvalidRequest
elif error_key == 'invalid-input-hash':
return OONIBInvalidInputHash
elif error_key == 'invalid-nettest-name':
return OONIBInvalidNettestName
elif isinstance(error_key, int):
return Error("%d" % error_key)
else:
return OONIBError
class IfaceError(Exception):
pass
class ProtocolNotRegistered(Exception):
pass
class ProtocolAlreadyRegistered(Exception):
pass
class LibraryNotInstalledError(Exception):
pass
class InsecureBackend(Exception):
pass
class CollectorUnsupported(Exception):
pass
class HTTPSCollectorUnsupported(CollectorUnsupported):
pass
class BackendNotSupported(Exception):
pass
class NoReachableCollectors(Exception):
pass
class NoReachableTestHelpers(Exception):
pass
class InvalidPreferredBackend(Exception):
pass
ooniprobe-2.2.0/ooni/nettest.py
import os
import re
import sys
import copy
import time
from twisted.internet import defer
from twisted.python.filepath import FilePath
from twisted.trial.runner import filenameToModule
from twisted.python import failure, usage, reflect
from ooni import __version__ as ooniprobe_version, errors
from ooni import otime
from ooni.tasks import Measurement
from ooni.utils import log, sanitize_options, randomStr
from ooni.utils.net import hasRawSocketPermission
from ooni.settings import config
from ooni.geoip import probe_ip
from ooni import errors as e
from inspect import getmembers
from StringIO import StringIO
class NoTestCasesFound(Exception):
pass
def getTestClassFromFile(net_test_file):
"""
Will return the first class that is an instance of NetTestCase.
    XXX this means that if a test file contains more than one test case,
    only the first one will be run.
"""
module = filenameToModule(net_test_file)
for __, item in getmembers(module):
try:
assert issubclass(item, NetTestCase)
return item
except (TypeError, AssertionError):
pass
def getOption(opt_parameter, required_options, type='text'):
"""
    Arguments:
    opt_parameter: a list formatted like an entry of the optParameters of a
    UsageOptions class: [name, shorthand, default, description].
    required_options: a list containing the names of the options that are
    required.
    type: a string containing the type of the option ('text' or 'file').
    Returns:
    a dict containing
    {
    'description': the description of the option,
    'value': the default value of the option,
    'required': True|False if the option is required or not,
    'type': the type of the option ('text' or 'file')
    }
"""
option_name, _, default, description = opt_parameter
if option_name in required_options:
required = True
else:
required = False
return {
'description': description,
'value': default,
'required': required,
'type': type
}
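Feeding getOption an optParameters entry plus the required-options list yields the descriptor dict documented above; a standalone sketch (the option name is made up):

```python
def get_option(opt_parameter, required_options, type='text'):
    # Same shape as getOption above: unpack [name, shorthand, default,
    # description] into a descriptor dict.
    option_name, _, default, description = opt_parameter
    return {
        'description': description,
        'value': default,
        'required': option_name in required_options,
        'type': type,
    }

opt = get_option(['url-list', 'f', None, 'File with URLs to test'],
                 ['url-list'], type='file')
print(opt['required'], opt['type'])  # True file
```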
def getArguments(test_class):
arguments = {}
if test_class.inputFile:
option_name = test_class.inputFile[0]
arguments[option_name] = getOption(
test_class.inputFile,
test_class.requiredOptions,
type='file')
try:
list(test_class.usageOptions.optParameters)
except AttributeError:
return arguments
for opt_parameter in test_class.usageOptions.optParameters:
option_name = opt_parameter[0]
opt_type = "text"
if opt_parameter[3].lower().startswith("file"):
opt_type = "file"
arguments[option_name] = getOption(
opt_parameter,
test_class.requiredOptions,
type=opt_type)
return arguments
def normalizeTestName(test_class_name):
return test_class_name.lower().replace(' ', '_')
def getNetTestInformation(net_test_file):
"""
Returns a dict containing:
{
'id': the test filename excluding the .py extension,
'name': the full name of the test,
'description': the description of the test,
'version': version number of this test,
'arguments': a dict containing as keys the supported arguments and as
values the argument description.
}
"""
test_class = getTestClassFromFile(net_test_file)
test_id = os.path.basename(net_test_file).replace('.py', '')
information = {
'id': test_id,
'name': test_class.name,
'description': test_class.description,
'version': test_class.version,
'arguments': getArguments(test_class),
'simple_options': test_class.simpleOptions,
'path': net_test_file
}
return information
def usageOptionsFactory(test_name, test_version):
class UsageOptions(usage.Options):
optParameters = []
optFlags = []
synopsis = "{} {} [options]".format(
os.path.basename(sys.argv[0]),
test_name
)
def opt_help(self):
map(log.msg, self.__str__().split("\n"))
sys.exit(0)
def opt_version(self):
"""
Display the net_test version and exit.
"""
log.msg("{} version: {}".format(test_name, test_version))
sys.exit(0)
return UsageOptions
def netTestCaseFactory(test_class, local_options):
class NetTestCaseWithLocalOptions(test_class):
localOptions = local_options
return NetTestCaseWithLocalOptions
ONION_INPUT_REGEXP = re.compile(
    r"(httpo://[a-z0-9]{16}\.onion)/input/([a-z0-9]{64})$")
class NetTestLoader(object):
method_prefix = 'test'
collector = None
yamloo = True
requiresTor = False
def __init__(self, options, test_file=None, test_string=None,
annotations=None):
self.options = options
if annotations is None:
annotations = {}
if not isinstance(annotations, dict):
log.warn("BUG: Annotations is not a dictionary. Resetting it.")
annotations = {}
self.annotations = annotations
self.annotations['platform'] = self.annotations.get('platform',
config.platform)
self.requiresTor = False
self.testName = ""
self.testVersion = ""
self.reportId = None
self.testHelpers = {}
self.missingTestHelpers = []
self.usageOptions = None
self.inputFiles = []
self._testCases = []
self.localOptions = None
if test_file:
self.loadNetTestFile(test_file)
elif test_string:
self.loadNetTestString(test_string)
def getTestDetails(self):
return {
'probe_asn': probe_ip.geodata['asn'],
'probe_cc': probe_ip.geodata['countrycode'],
'probe_ip': probe_ip.geodata['ip'],
'probe_city': probe_ip.geodata['city'],
'software_name': 'ooniprobe',
'software_version': ooniprobe_version,
# XXX only sanitize the input files
'options': sanitize_options(self.options),
'annotations': self.annotations,
'data_format_version': '0.2.0',
'test_name': self.testName,
'test_version': self.testVersion,
'test_helpers': self.testHelpers,
'test_start_time': otime.timestampNowLongUTC(),
# XXX We should deprecate this key very soon
'input_hashes': [],
'report_id': self.reportId
}
def getTestCases(self):
"""
Specialises the test_classes to include the local options.
:return:
"""
test_cases = []
for test_class, test_method in self._testCases:
test_cases.append((netTestCaseFactory(test_class,
self.localOptions),
test_method))
return test_cases
def _accumulateInputFiles(self, test_class):
if not test_class.inputFile:
return
key = test_class.inputFile[0]
filename = self.localOptions[key]
if not filename:
return
input_file = {
'key': key,
'test_options': self.localOptions,
'filename': None
}
m = ONION_INPUT_REGEXP.match(filename)
if m:
raise e.InvalidInputFile("Input files hosted on hidden services "
"are no longer supported")
else:
input_file['filename'] = filename
self.inputFiles.append(input_file)
def _accumulateTestOptions(self, test_class):
"""
Accumulate the optParameters and optFlags for the NetTestCase class
into the usageOptions of the NetTestLoader.
"""
if getattr(test_class.usageOptions, 'optParameters', None):
for parameter in test_class.usageOptions.optParameters:
# XXX should look into if this is still necessary, seems like
# something left over from a bug in some nettest.
# In theory optParameters should always have a length of 4.
if len(parameter) == 5:
parameter.pop()
self.usageOptions.optParameters.append(parameter)
if getattr(test_class.usageOptions, 'optFlags', None):
for parameter in test_class.usageOptions.optFlags:
self.usageOptions.optFlags.append(parameter)
if getattr(test_class, 'inputFile', None):
self.usageOptions.optParameters.append(test_class.inputFile)
if getattr(test_class, 'baseParameters', None):
for parameter in test_class.baseParameters:
self.usageOptions.optParameters.append(parameter)
if getattr(test_class, 'baseFlags', None):
for flag in test_class.baseFlags:
self.usageOptions.optFlags.append(flag)
def parseLocalOptions(self):
"""
Parses the localOptions for the NetTestLoader.
"""
self.localOptions = self.usageOptions()
try:
self.localOptions.parseOptions(self.options)
except usage.UsageError:
tb = sys.exc_info()[2]
raise e.OONIUsageError(self), None, tb
def _checkTestClassOptions(self, test_class):
if test_class.requiresRoot and not hasRawSocketPermission():
raise e.InsufficientPrivileges
if test_class.requiresTor:
self.requiresTor = True
self._checkRequiredOptions(test_class)
self._setTestHelpers(test_class)
test_instance = netTestCaseFactory(test_class, self.localOptions)()
test_instance.requirements()
def _setTestHelpers(self, test_class):
for option, name in test_class.requiredTestHelpers.items():
if self.localOptions.get(option, None):
self.testHelpers[option] = self.localOptions[option]
def _checkRequiredOptions(self, test_class):
missing_options = []
for required_option in test_class.requiredOptions:
log.debug("Checking if %s is present" % required_option)
if required_option not in self.localOptions or \
self.localOptions[required_option] is None:
missing_options.append(required_option)
missing_test_helpers = [opt in test_class.requiredTestHelpers.keys()
for opt in missing_options]
if len(missing_test_helpers) and all(missing_test_helpers):
self.missingTestHelpers = map(lambda x:
(x, test_class.requiredTestHelpers[x]),
missing_options)
raise e.MissingTestHelper(missing_options, test_class)
elif missing_options:
raise e.MissingRequiredOption(missing_options, test_class)
def loadNetTestString(self, net_test_string):
"""
Load NetTest from a string.
        WARNING: the input to this function *MUST* be sanitized and must
        *NEVER* come from an untrusted source.
        Failure to do so will result in arbitrary code execution.
net_test_string:
a string that contains the net test to be run.
"""
net_test_file_object = StringIO(net_test_string)
ns = {}
test_cases = []
exec net_test_file_object.read() in ns
for item in ns.itervalues():
test_cases.extend(self._getTestMethods(item))
if not test_cases:
raise e.NoTestCasesFound
self._setupTestCases(test_cases)
def loadNetTestFile(self, net_test_file):
"""
Load NetTest from a file.
"""
test_cases = []
module = filenameToModule(net_test_file)
for __, item in getmembers(module):
test_cases.extend(self._getTestMethods(item))
if not test_cases:
raise e.NoTestCasesFound
self._setupTestCases(test_cases)
def _setupTestCases(self, test_cases):
"""
Creates all the necessary test_cases (a list of tuples containing the
NetTestCase (test_class, test_method))
example:
[(test_classA, [test_method1,
test_method2,
test_method3,
test_method4,
test_method5]),
(test_classB, [test_method1,
test_method2])]
Note: the inputs must be valid for test_classA and test_classB.
net_test_file:
is either a file path or a file like object that will be used to
generate the test_cases.
"""
test_class, _ = test_cases[0]
self.testName = normalizeTestName(test_class.name)
self.testVersion = test_class.version
self._testCases = test_cases
self.usageOptions = usageOptionsFactory(self.testName,
self.testVersion)
if config.reports.unique_id is True:
self.reportId = randomStr(64)
for test_class, test_methods in self._testCases:
self._accumulateTestOptions(test_class)
def checkOptions(self):
self.parseLocalOptions()
test_options_exc = None
usage_options = self._testCases[0][0].usageOptions
for test_class, test_methods in self._testCases:
try:
self._accumulateInputFiles(test_class)
self._checkTestClassOptions(test_class)
if usage_options != test_class.usageOptions:
raise e.IncoherentOptions(usage_options.__name__,
test_class.usageOptions.__name__)
except Exception as exc:
test_options_exc = exc
if test_options_exc is not None:
raise test_options_exc
def _getTestMethods(self, item):
"""
Look for test_ methods in subclasses of NetTestCase
"""
test_cases = []
try:
assert issubclass(item, NetTestCase)
methods = reflect.prefixedMethodNames(item, self.method_prefix)
test_methods = []
for method in methods:
test_methods.append(self.method_prefix + method)
if test_methods:
test_cases.append((item, test_methods))
except (TypeError, AssertionError):
pass
return test_cases
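_getTestMethods relies on twisted.python.reflect.prefixedMethodNames to find test_-prefixed methods; the discovery itself amounts to a prefix scan over the class attributes, sketched here without the Twisted helper (SampleTest is a made-up class):

```python
class SampleTest(object):
    # Hypothetical test case: two test methods and one helper.
    def test_a(self):
        pass
    def test_b(self):
        pass
    def helper(self):
        pass

# Prefix scan, the role played by reflect.prefixedMethodNames above.
methods = sorted(name for name in dir(SampleTest)
                 if name.startswith('test_'))
print(methods)  # ['test_a', 'test_b']
```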
class NetTestState(object):
def __init__(self, allTasksDone):
"""
This keeps track of the state of a running NetTests case.
Args:
allTasksDone is a deferred that will get fired once all the NetTest
cases have reached a final done state.
"""
self.doneTasks = 0
self.tasks = 0
self.completedScheduling = False
self.allTasksDone = allTasksDone
def taskCreated(self):
self.tasks += 1
def checkAllTasksDone(self):
log.debug("Checking all tasks for completion %s == %s" %
(self.doneTasks, self.tasks))
if self.completedScheduling and \
self.doneTasks == self.tasks:
if self.allTasksDone.called:
log.err("allTasksDone was already called. This is probably a bug.")
else:
self.allTasksDone.callback(self.doneTasks)
def taskDone(self):
"""
This is called every time a task has finished running.
"""
self.doneTasks += 1
self.checkAllTasksDone()
def allTasksScheduled(self):
"""
This should be called once all the tasks that need to run have been
scheduled.
        XXX this is a hack.
        We call checkAllTasksDone here because allTasksScheduled may be
        called after all tasks have already completed. In that case
        taskDone will never fire again, so without this check we would
        run into a race condition and never notice that all the tasks
        are complete.
"""
self.completedScheduling = True
self.checkAllTasksDone()
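The bookkeeping above only fires completion once scheduling is marked finished AND every created task has reported done, so the deferred cannot fire early while tasks are still being scheduled. A condensed sketch of that state machine:

```python
class State(object):
    # Condensed NetTestState: completion fires only when scheduling is
    # finished AND the done count matches the created count.
    def __init__(self):
        self.tasks = 0
        self.done = 0
        self.scheduled = False
        self.fired = False
    def task_created(self):
        self.tasks += 1
    def task_done(self):
        self.done += 1
        self._check()
    def all_scheduled(self):
        self.scheduled = True
        self._check()
    def _check(self):
        if self.scheduled and self.done == self.tasks:
            self.fired = True

s = State()
s.task_created(); s.task_created()
s.task_done(); s.task_done()
print(s.fired)   # False: not yet marked as fully scheduled
s.all_scheduled()
print(s.fired)   # True
```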
class NetTest(object):
director = None
def __init__(self, test_cases, test_details, report):
"""
net_test_loader:
an instance of :class:ooni.nettest.NetTestLoader containing
the test to be run.
report:
an instance of :class:ooni.reporter.Reporter
"""
self.report = report
self.testDetails = test_details
self.testCases = test_cases
self._startTime = 0
self._totalInputs = 0
self._completedInputs = 0
self.summary = {}
# This will fire when all the measurements have been completed and
# all the reports are done. Done means that they have either completed
# successfully or all the possible retries have been reached.
self.done = defer.Deferred()
self.done.addCallback(self.doneNetTest)
self.state = NetTestState(self.done)
def __str__(self):
return ' '.join(tc.name for tc, _ in self.testCases)
def uniqueClasses(self):
classes = []
for test_class, test_method in self.testCases:
if test_class not in classes:
classes.append(test_class)
return classes
def doneNetTest(self, result):
if self.summary:
log.msg("Summary for %s" % self.testDetails['test_name'])
log.msg("------------" + "-"*len(self.testDetails['test_name']))
for test_class in self.uniqueClasses():
test_instance = test_class()
test_instance.displaySummary(self.summary)
if self.testDetails["report_id"]:
log.msg("Report ID: %s" % self.testDetails["report_id"])
@property
def completionRate(self):
return float(self._completedInputs) / (time.time() - self._startTime)
@property
def completionPercentage(self):
if self._totalInputs == 0:
return 0.0
# Never return 100%
if self._completedInputs >= self._totalInputs:
return 0.99
return float(self._completedInputs) / float(self._totalInputs)
@property
def completionEta(self):
remaining_inputs = self._totalInputs - self._completedInputs
# We adjust for negative values
if remaining_inputs <= 0:
return 1
        return (remaining_inputs / self.completionRate) * 1.5  # fudge factor
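The intent of these three properties: the rate is completed inputs per elapsed second, and the ETA is the remaining inputs divided by that rate, padded by the 1.5 fudge factor. A quick arithmetic check (the numbers are made up):

```python
# 40 of 100 inputs done after 20 seconds -> 2 inputs/s, so the 60
# remaining inputs take ~30s, padded to 45s by the fudge factor.
completed, total, elapsed = 40, 100, 20.0
rate = completed / elapsed
remaining = total - completed
eta = (remaining / rate) * 1.5
print(eta)  # 45.0
```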
def doneReport(self, report_results):
"""
This will get called every time a report is done and therefore a
measurement is done.
The state for the NetTest is informed of the fact that another task has
reached the done state.
"""
self._completedInputs += 1
log.msg("")
log.msg("Status")
log.msg("------")
log.msg("%d completed %d remaining" % (self._completedInputs,
self._totalInputs))
log.msg("%0.1f%% (ETA: %ds)" % (self.completionPercentage * 100,
self.completionEta))
self.state.taskDone()
return report_results
def makeMeasurement(self, test_instance, test_method, test_input=None):
"""
        Creates a new instance of :class:ooni.tasks.Measurement and adds its
callbacks and errbacks.
Args:
test_class:
a subclass of :class:ooni.nettest.NetTestCase
test_method:
a string that represents the method to be called on test_class
test_input:
optional argument that represents the input to be passed to the
NetTestCase
"""
measurement = Measurement(test_instance, test_method, test_input)
measurement.netTest = self
if self.director:
measurement.done.addCallback(self.director.measurementSucceeded,
measurement)
measurement.done.addErrback(self.director.measurementFailed,
measurement)
return measurement
@defer.inlineCallbacks
def initialize(self):
for test_class, test_cases in self.testCases:
# Initialize Input Processor
test_instance = test_class()
test_class.inputs = yield defer.maybeDeferred(
test_instance.getInputProcessor
)
for _ in test_cases:
                if test_instance._totalInputs is not None:
self._totalInputs += test_instance._totalInputs
else:
self._totalInputs += 1
# Run the setupClass method
yield defer.maybeDeferred(
test_class.setUpClass
)
def generateMeasurements(self):
"""
This is a generator that yields measurements and registers the
callbacks for when a measurement is successful or has failed.
        FIXME: if this generator throws an exception, the TaskManager
        scheduler is irreversibly damaged.
"""
self._startTime = time.time()
for test_class, test_methods in self.testCases:
# load a singular input processor for all instances
all_inputs = test_class.inputs
for test_input in all_inputs:
measurements = []
test_instance = test_class()
            # Set each instance's inputs to a singular input processor
test_instance.inputs = all_inputs
test_instance._setUp()
test_instance.summary = self.summary
for method in test_methods:
try:
measurement = self.makeMeasurement(
test_instance,
method,
test_input)
except Exception:
log.exception(failure.Failure())
log.err('Failed to run %s %s %s' % (test_instance, method, test_input))
continue # it's better to skip single measurement...
log.debug("Running %s %s" % (test_instance, method))
measurements.append(measurement.done)
self.state.taskCreated()
yield measurement
# This is to skip setting callbacks on measurements that
# cannot be run.
if len(measurements) == 0:
continue
# When the measurement.done callbacks have all fired
# call the postProcessor before writing the report
if self.report:
post = defer.DeferredList(measurements)
@post.addBoth
def set_runtime(results):
runtime = time.time() - test_instance._start_time
for _, m in results:
m.testInstance.report['test_runtime'] = runtime
test_instance.report['test_runtime'] = runtime
return results
# Call the postProcessor, which must return a single report
# or a deferred
post.addCallback(test_instance.postProcessor)
def noPostProcessor(failure, report):
failure.trap(e.NoPostProcessor)
return report
post.addErrback(noPostProcessor, test_instance.report)
post.addCallback(self.report.write)
if self.report and self.director:
                # hack to keep the NetTestState counts accurate
[post.addBoth(self.doneReport) for _ in measurements]
self.state.allTasksScheduled()
class NetTestCase(object):
"""
This is the base of the OONI nettest universe. When you write a nettest
you will subclass this object.
* inputs: can be set to a static set of inputs. All the tests (the methods
starting with the "test" prefix) will be run once per input. At every
run the _input_ attribute of the TestCase instance will be set to the
value of the current iteration over inputs. Any python iterable object
can be set to inputs.
* inputFile: attribute should be set to an array containing the command
line argument that should be used as the input file. Such array looks
like this:
    ``["commandlinearg", "c", "default value", "The description"]``
    The second value of such array is the shorthand for the command line
arg. The user will then be able to specify inputs to the test via:
``ooniprobe mytest.py --commandlinearg path/to/file.txt``
or
``ooniprobe mytest.py -c path/to/file.txt``
* inputProcessor: should be set to a function that takes as argument a
filename and it will return the input to be passed to the test
instance.
* name: should be set to the name of the test.
* author: should contain the name and contact details for the test author.
The format for such string is as follows:
    ``The Name <name@example.com>``
* version: is the version string of the test.
* requiresRoot: set to True if the test must be run as root.
* usageOptions: a subclass of twisted.python.usage.Options for processing
of command line arguments
* localOptions: contains the parsed command line arguments.
Quirks:
    Every method that is prefixed with test *must* return a
twisted.internet.defer.Deferred.
"""
name = "This test is nameless"
    author = "Jane Doe <foo@example.com>"
version = "0.0.0"
description = "Sorry, this test has no description :("
inputs = None
inputFile = None
inputFilename = None
usageOptions = usage.Options
optParameters = None
baseParameters = None
baseFlags = None
requiredTestHelpers = {}
requiredOptions = []
requiresRoot = False
requiresTor = False
simpleOptions = {}
localOptions = {}
_totalInputs = None
@classmethod
def setUpClass(cls):
"""
You can override this hook with logic that should be run once before
any test method in the NetTestCase is run.
        This can be useful to populate class attributes that should remain
        valid for the whole runtime of the NetTest.
"""
pass
def _setUp(self):
"""
This is the internal setup method to be overwritten by templates.
It gets called once for every input.
"""
self.report = {}
def requirements(self):
"""
Place in here logic that will be executed before the test is to be run.
If some condition is not met then you should raise an exception.
"""
pass
def setUp(self):
"""
Place here your logic to be executed when the test is being setup.
        It gets called once for every test method + input pair.
"""
pass
def postProcessor(self, measurements):
"""
Subclass this to do post processing tasks that are to occur once all
the test methods have been called once per input.
postProcessing works exactly like test methods, in the sense that
anything that gets written to the object self.report[] will be added to
the final test report.
You should also place in this method any logic that is required for
generating the summary.
"""
raise e.NoPostProcessor
def displaySummary(self, summary):
"""
This gets called after the test has run to allow printing out of a
summary of the test run.
"""
pass
def inputProcessor(self, filename):
"""
        You may replace this with your own custom input processor. An
        inputProcessor is a generator that takes a filename as argument
        and yields one item from the file at a time.
This can be useful when you have some input data that is in a certain
format and you want to set the input attribute of the test to something
that you will be able to properly process.
For example you may wish to have an input processor that will allow you
to ignore comments in files. This can be easily achieved like so::
fp = open(filename)
for x in fp.xreadlines():
if x.startswith("#"):
continue
yield x.strip()
fp.close()
Other fun stuff is also possible.
"""
log.debug("Running default input processor")
with open(filename) as f:
for line in f:
l = line.strip()
# Skip empty lines
if not l:
continue
# Skip comment lines
elif l.startswith('#'):
continue
yield l
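The default processor's filtering is easy to exercise in isolation; a sketch operating on an in-memory iterable instead of an open file:

```python
def input_lines(lines):
    # Mirrors the default inputProcessor: strip each line, skip blanks
    # and '#' comments, yield the rest.
    for line in lines:
        stripped = line.strip()
        if not stripped or stripped.startswith('#'):
            continue
        yield stripped

print(list(input_lines(['# comment\n', '\n', 'example.com\n'])))
# ['example.com']
```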
@property
def inputFileSpecified(self):
"""
Returns:
True
when inputFile is supported and is specified
False
        when input is either not supported or not specified
"""
if not self.inputFile:
return False
k = self.inputFile[0]
if self.localOptions.get(k):
return True
else:
return False
def getInputProcessor(self):
"""
This method must be called after all options are validated by
_checkValidOptions and _checkRequiredOptions, which ensure that
if the inputFile is a required option it will be present.
We check to see if it's possible to have an input file and if the user
has specified such file.
If the operations to be done here are network related or blocking, they
should be wrapped in a deferred. That is the return value of this
method should be a :class:`twisted.internet.defer.Deferred`.
Returns:
a generator that will yield one item from the file based on the
inputProcessor.
"""
if self.inputFileSpecified:
if self._totalInputs is None:
self._totalInputs = 0
self.inputFilename = self.localOptions[self.inputFile[0]]
for _ in self.inputProcessor(self.inputFilename):
self._totalInputs += 1
return self.inputProcessor(self.inputFilename)
if isinstance(self.inputs, list):
self._totalInputs = len(self.inputs)
if self.inputs:
return self.inputs
return [None]
def __repr__(self):
return "<%s inputs=%s>" % (self.__class__, self.inputs)
def nettest_to_path(path, allow_arbitrary_paths=False):
"""
Takes as input either a path or a nettest name.
The nettest name may either be prefixed by the category of the nettest (
blocking, experimental, manipulation or third_party) or not.
Args:
allow_arbitrary_paths:
allow also paths that are not relative to the nettest_directory.
Returns:
full path to the nettest file.
"""
if allow_arbitrary_paths and os.path.exists(path):
return path
test_name = path.rsplit("/", 1)[-1]
test_categories = [
"blocking",
"experimental",
"manipulation",
"third_party"
]
nettest_dir = FilePath(config.nettest_directory)
found_path = None
for category in test_categories:
p = nettest_dir.preauthChild(os.path.join(category, test_name) + '.py')
if p.exists():
if found_path is not None:
raise Exception("Found two tests named %s" % test_name)
found_path = p.path
if not found_path:
raise e.NetTestNotFound(path)
return found_path
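nettest_to_path scans each category directory for the named test and refuses ambiguous names. The resolution logic, sketched against an in-memory mapping instead of config.nettest_directory (function and test names here are illustrative):

```python
import os

def resolve_nettest(name, available):
    # available: {category: set of test module names}, standing in for
    # the on-disk category directories scanned by nettest_to_path.
    test_name = name.rsplit('/', 1)[-1]
    matches = [os.path.join(category, test_name + '.py')
               for category, tests in sorted(available.items())
               if test_name in tests]
    if len(matches) > 1:
        raise Exception('Found two tests named %s' % test_name)
    if not matches:
        raise LookupError(name)
    return matches[0]

print(resolve_nettest('web_connectivity',
                      {'blocking': {'web_connectivity'},
                       'manipulation': set()}))
```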
ooniprobe-2.2.0/ooni/__init__.py
# -*- encoding: utf-8 -*-
__author__ = "Open Observatory of Network Interference"
__version__ = "2.2.0"
__all__ = [
'agent',
'common',
'nettests',
'scripts',
'templates',
'ui',
'utils'
]
ooniprobe-2.2.0/ooni/tasks.py
import time
from twisted.internet import defer, reactor
from ooni import errors as e
from ooni.settings import config
from ooni import otime
class BaseTask(object):
_timer = None
_running = None
def __init__(self):
"""
If you want to schedule a task multiple times, remember to create fresh
instances of it.
"""
self.failures = 0
self.startTime = time.time()
self.runtime = 0
        # This is a deferred that gets called when a test has reached its
# final status, this means: all retries have been attempted or the test
# has successfully executed.
# Such deferred will be called on completion by the TaskManager.
self.done = defer.Deferred()
def _failed(self, failure):
self.failures += 1
self.failed(failure)
return failure
def _succeeded(self, result):
self.runtime = time.time() - self.startTime
self.succeeded(result)
return result
def start(self):
self._running = defer.maybeDeferred(self.run)
self._running.addErrback(self._failed)
self._running.addCallback(self._succeeded)
return self._running
def succeeded(self, result):
"""
Place here the logic to handle a successful execution of the task.
"""
pass
def failed(self, failure):
"""
Place in here logic to handle failure.
"""
pass
def run(self):
"""
Override this with the logic of your task.
Must return a deferred.
"""
pass
class TaskWithTimeout(BaseTask):
timeout = 30
# So that we can test the callLater calls
clock = reactor
def _timedOut(self):
"""Internal method for handling timeout failure"""
if self._running:
self._failed(e.TaskTimedOut)
self._running.cancel()
def _cancelTimer(self):
if self._timer.active():
self._timer.cancel()
def _succeeded(self, result):
self._cancelTimer()
return BaseTask._succeeded(self, result)
def _failed(self, failure):
self._cancelTimer()
return BaseTask._failed(self, failure)
def start(self):
self._timer = self.clock.callLater(self.timeout, self._timedOut)
return BaseTask.start(self)
class Measurement(TaskWithTimeout):
def __init__(self, test_instance, test_method, test_input):
"""
test_class:
is the class, subclass of NetTestCase, of the test to be run
test_method:
is a string representing the test method to be called to perform
this measurement
test_input:
is the input to the test
net_test:
a reference to the net_test object such measurement belongs to.
"""
self.testInstance = test_instance
self.testInstance.input = test_input
self.testInstance.setUp()
if 'input' not in self.testInstance.report.keys():
self.testInstance.report['input'] = self.testInstance.input
self.netTestMethod = getattr(self.testInstance, test_method)
if 'timeout' in dir(test_instance):
if isinstance(test_instance.timeout, int) or isinstance(test_instance.timeout, float):
# If the test has a timeout option set we set the measurement
# timeout to that value + 8 seconds to give it enough time to
# trigger its internal timeout before we trigger the
# measurement timeout.
self.timeout = test_instance.timeout + 8
elif config.advanced.measurement_timeout:
self.timeout = config.advanced.measurement_timeout
TaskWithTimeout.__init__(self)
def succeeded(self, result):
pass
def failed(self, failure):
pass
def run(self):
if 'measurement_start_time' not in self.testInstance.report.keys():
self.testInstance.report['measurement_start_time'] = otime.timestampNowLongUTC()
if not hasattr(self.testInstance, '_start_time'):
self.testInstance._start_time = time.time()
return self.netTestMethod()
class ReportEntry(TaskWithTimeout):
def __init__(self, reporter, entry):
self.reporter = reporter
self.entry = entry
if config.advanced.reporting_timeout:
self.timeout = config.advanced.reporting_timeout
TaskWithTimeout.__init__(self)
def run(self):
return self.reporter.writeReportEntry(self.entry)
ooniprobe-2.2.0/ooni/resources.py
import json
import errno
from twisted.python.filepath import FilePath
from twisted.internet import defer
from twisted.web.client import downloadPage, getPage, HTTPClientFactory
from ooni.utils import log, gunzip, rename, mkdir_p
from ooni.settings import config
# Disable logs of HTTPClientFactory
HTTPClientFactory.noisy = False
class UpdateFailure(Exception):
pass
def get_download_url(tag_name, filename):
return ("https://github.com/OpenObservatory/ooni-resources/releases"
"/download/{0}/{1}".format(tag_name, filename))
def get_current_version():
manifest = FilePath(config.resources_directory).child("manifest.json")
if not manifest.exists():
return 0
with manifest.open("r") as f:
manifest = json.load(f)
return int(manifest["version"])
@defer.inlineCallbacks
def get_latest_version():
"""
Fetches the latest version of the resources package.
:return: (int) the latest version number
"""
version = yield getPage(get_download_url("latest", "version"))
defer.returnValue(int(version.strip()))
def get_out_of_date_resources(current_manifest, new_manifest,
country_code=None,
resources_directory=config.resources_directory):
current_res = {}
new_res = {}
for r in current_manifest["resources"]:
current_res[r["path"]] = r
for r in new_manifest["resources"]:
new_res[r["path"]] = r
paths_to_delete = [
current_res[path] for path in list(
set(current_res.keys()) -
set(new_res.keys())
)
]
paths_to_update = []
_resources = FilePath(resources_directory)
for path, info in new_res.items():
if (country_code is not None and
info["country_code"] != "ALL" and
info["country_code"] != country_code):
continue
if current_res.get(path, None) is None:
paths_to_update.append(info)
elif current_res[path]["version"] < info["version"]:
paths_to_update.append(info)
else:
pre_path, filename = info["path"].split("/")
# Also perform an update when the file does not exist on disk, even
# though the manifest claims we already have an up-to-date version.
# This happens when a previous update was restricted by country_code
# and a new country code is now required.
if filename.endswith(".gz"):
filename = filename[:-3]
if not _resources.child(pre_path).child(filename).exists():
paths_to_update.append(info)
return paths_to_update, paths_to_delete
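Stripped of the filesystem and country-code checks, the function above is a dictionary diff keyed on resource path: delete what vanished from the new manifest, update what is new or carries a higher version number. A condensed sketch of just that core (the function name is illustrative):

```python
def diff_manifests(current, new):
    # Map each resource list to {path: entry} for O(1) lookups.
    cur = {r["path"]: r for r in current["resources"]}
    fresh = {r["path"]: r for r in new["resources"]}
    # Entries that disappeared from the new manifest should be deleted.
    to_delete = [cur[p] for p in set(cur) - set(fresh)]
    # Entries that are new, or whose version increased, need updating.
    to_update = [info for p, info in fresh.items()
                 if p not in cur or cur[p]["version"] < info["version"]]
    return to_update, to_delete
```

The real code additionally re-downloads a resource whose file is missing on disk and skips resources for other countries.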
@defer.inlineCallbacks
def check_for_update(country_code=None):
"""
Checks if we need to update the resources.
If the country_code is specified then only the resources for that
country will be updated/downloaded.
XXX we currently don't check the shasum of resources although this is
included inside of the manifest.
This should probably be done once we have signing of resources.
:return: the latest version.
"""
temporary_files = []
def cleanup():
# If we fail we need to delete all the temporary files
for _, src_file_path in temporary_files:
src_file_path.remove()
current_version = get_current_version()
latest_version = yield get_latest_version()
resources_dir = FilePath(config.resources_directory)
mkdir_p(resources_dir.path)
current_manifest = resources_dir.child("manifest.json")
if current_manifest.exists():
with current_manifest.open("r") as f:
current_manifest_data = json.load(f)
else:
current_manifest_data = {
"resources": []
}
# We should download a newer manifest
if current_version < latest_version:
new_manifest = current_manifest.temporarySibling()
new_manifest.alwaysCreate = 0
temporary_files.append((current_manifest, new_manifest))
try:
yield downloadPage(
get_download_url(latest_version, "manifest.json"),
new_manifest.path
)
except Exception:
cleanup()
raise UpdateFailure("Failed to download manifest")
new_manifest_data = json.loads(new_manifest.getContent())
else:
new_manifest_data = current_manifest_data
to_update, to_delete = get_out_of_date_resources(
current_manifest_data, new_manifest_data, country_code)
try:
for resource in to_update:
gzipped = False
pre_path, filename = resource["path"].split("/")
if filename.endswith(".gz"):
filename = filename[:-3]
gzipped = True
dst_file = resources_dir.child(pre_path).child(filename)
mkdir_p(dst_file.parent().path)
src_file = dst_file.temporarySibling()
src_file.alwaysCreate = 0
temporary_files.append((dst_file, src_file))
# The paths for the download require replacing "/" with "."
download_url = get_download_url(latest_version,
resource["path"].replace("/", "."))
yield downloadPage(download_url, src_file.path)
if gzipped:
gunzip(src_file.path)
except Exception as exc:
cleanup()
log.exception(exc)
raise UpdateFailure("Failed to download resource {0}".format(resource["path"]))
for dst_file, src_file in temporary_files:
log.msg("Moving {0} to {1}".format(src_file.path,
dst_file.path))
rename(src_file.path, dst_file.path)
for resource in to_delete:
log.msg("Deleting old resources")
pre_path, filename = resource["path"].split("/")
resources_dir.child(pre_path).child(filename).remove()
ooniprobe-2.2.0/ooni/kit/
ooniprobe-2.2.0/ooni/kit/daphn3.py
import yaml
from twisted.internet import protocol, defer
from ooni.utils import log
def read_pcap(filename):
"""
@param filename: Filesystem path to the pcap.
Returns:
[{"client": "\x17\x52\x15"}, {"server": "\x17\x15\x13"}]
"""
from scapy.all import IP, Raw, rdpcap
packets = rdpcap(filename)
checking_first_packet = True
client_ip_addr = None
server_ip_addr = None
ssl_packets = []
messages = []
"""
pcap assumptions:
pcap only contains packets exchanged between a Tor client and a Tor
server. (This assumption makes sure that there are only two IP addresses
in the pcap file)
The first packet of the pcap is sent from the client to the server. (This
assumption is used to get the IP address of the client.)
All captured packets are TLS packets: that is TCP session
establishment/teardown packets should be filtered out (no SYN/SYN+ACK)
"""
"""
Minimally validate the pcap and also find out what's the client
and server IP addresses.
"""
for packet in packets:
if checking_first_packet:
client_ip_addr = packet[IP].src
checking_first_packet = False
else:
if packet[IP].src != client_ip_addr:
server_ip_addr = packet[IP].src
try:
if (packet[Raw]):
ssl_packets.append(packet)
except IndexError:
pass
"""Form our list."""
for packet in ssl_packets:
if packet[IP].src == client_ip_addr:
messages.append({"client": str(packet[Raw])})
elif packet[IP].src == server_ip_addr:
messages.append({"server": str(packet[Raw])})
else:
raise("Detected third IP address! pcap is corrupted.")
return messages
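Stripped of scapy, the classification step above reduces to: remember the source of the first packet as the client, then label each payload-bearing packet by direction. A Python 3 sketch over plain (src_ip, payload) tuples, omitting the third-IP sanity check:

```python
def classify_messages(packets):
    # packets: list of (src_ip, payload) tuples; by assumption the first
    # packet travels client -> server, so its source is the client.
    if not packets:
        return []
    client = packets[0][0]
    messages = []
    for src, payload in packets:
        if not payload:  # skip packets carrying no application data
            continue
        role = "client" if src == client else "server"
        messages.append({role: payload})
    return messages
```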
def read_yaml(filename):
f = open(filename)
obj = yaml.safe_load(f)
f.close()
return obj
class NoInputSpecified(Exception):
pass
class StepError(Exception):
pass
def daphn3MutateString(string, i):
"""
Takes a string and mutates the ith bytes of it.
"""
mutated = ""
for y in range(len(string)):
if y == i:
mutated += chr(ord(string[i]) + 1)
else:
mutated += string[y]
return mutated
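The same mutation can be written with slicing instead of a character-by-character loop; this Python 3 sketch is equivalent for ASCII input (the name `mutate_string` is illustrative):

```python
def mutate_string(s, i):
    # Bump the i-th character by one code point; everything else unchanged.
    return s[:i] + chr(ord(s[i]) + 1) + s[i + 1:]
```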
def daphn3Mutate(steps, step_idx, mutation_idx):
"""
Take a set of steps and a step index and mutates the step of that
index at the mutation_idx'th byte.
"""
mutated_steps = []
for idx, step in enumerate(steps):
if idx == step_idx:
step_string = step.values()[0]
step_key = step.keys()[0]
mutated_string = daphn3MutateString(step_string,
mutation_idx)
mutated_steps.append({step_key: mutated_string})
else:
mutated_steps.append(step)
return mutated_steps
class Daphn3Protocol(protocol.Protocol):
steps = None
role = "client"
report = None
# We use this index to keep track of where we are in the state machine
current_step = 0
current_data_received = 0
# We use this to keep track of the mutated steps
mutated_steps = None
d = defer.Deferred()
def _current_step_role(self):
return self.steps[self.current_step].keys()[0]
def _current_step_data(self):
step_idx, mutation_idx = self.factory.mutation
log.debug("Mutating %s %s" % (step_idx, mutation_idx))
mutated_step = daphn3Mutate(self.steps,
step_idx, mutation_idx)
log.debug("Mutated packet into %s" % mutated_step)
return mutated_step[self.current_step].values()[0]
def sendPayload(self):
self.debug("Sending payload")
current_step_role = self._current_step_role()
current_step_data = self._current_step_data()
if current_step_role == self.role:
print "In a state to do shit %s" % current_step_data
self.transport.write(current_step_data)
self.nextStep()
else:
print "Not in a state to do anything"
def connectionMade(self):
print "Got connection"
def debug(self, msg):
log.debug(msg)
log.debug("Current step %s" % self.current_step)
log.debug("Current data received %s" % self.current_data_received)
log.debug("Current role %s" % self.role)
log.debug("Current steps %s" % self.steps)
log.debug("Current step data %s" % self._current_step_data())
def nextStep(self):
"""
XXX this method is overwritten individually by client and server transport.
There is probably a smarter way to do this and refactor the common
code into one place, but for the moment like this is good.
"""
pass
def dataReceived(self, data):
current_step_role = self.steps[self.current_step].keys()[0]
log.debug("Current step role %s" % current_step_role)
if current_step_role == self.role:
log.debug("Got a state error!")
raise StepError("I should not have received data, but I did; \
perhaps something is wrong with the state machine?")
self.current_data_received += len(data)
expected_data_in_this_state = len(self.steps[self.current_step].values()[0])
log.debug("Current data received %s" % self.current_data_received)
if self.current_data_received >= expected_data_in_this_state:
self.nextStep()
def nextMutation(self):
log.debug("Moving onto next mutation")
# [step_idx, mutation_idx]
c_step_idx, c_mutation_idx = self.factory.mutation
log.debug("[%s]: c_step_idx: %s | c_mutation_idx: %s" % (self.role,
c_step_idx, c_mutation_idx))
if c_step_idx >= (len(self.steps) - 1):
log.err("No censorship fingerprint bisected.")
log.err("Givinig up.")
self.transport.loseConnection()
return
# This means we have mutated all bytes in the step
# we should proceed to mutating the next step.
log.debug("steps: %s | %s" % (self.steps, self.steps[c_step_idx]))
if c_mutation_idx >= (len(self.steps[c_step_idx].values()[0]) - 1):
log.debug("Finished mutating step")
# increase step
self.factory.mutation[0] += 1
# reset mutation idx
self.factory.mutation[1] = 0
else:
log.debug("Mutating next byte in step")
# increase mutation index
self.factory.mutation[1] += 1
def connectionLost(self, reason):
self.debug("--- Lost the connection ---")
self.nextMutation()
ooniprobe-2.2.0/ooni/kit/__init__.py
#__all__ = ['domclass']
#from . import domclass
ooniprobe-2.2.0/ooni/kit/domclass.py
"""
how this works
--------------
This classifier uses the DOM structure of a website to determine how similar
the two sites are.
The procedure we use is the following:
* First we parse the whole DOM tree of the web page and build a list of
TAG parent child relationships (ex. <html><a><b></b></a><c></c></html> =>
(html, a), (a, b), (html, c)).
* We then use this information to build a matrix (M) where m[i][j] = P(of
transitioning from tag[i] to tag[j]). If tag[i] does not exist, P() = 0.
Note: M is a square matrix that is number_of_tags wide.
* We then calculate the eigenvectors (v_i) and eigenvalues (e) of M.
* The correlation between page A and B is given via this formula:
correlation = dot_product(e_A, e_B), where e_A and e_B are
respectively the eigenvalues for the probability matrix A and the
probability matrix B.
import numpy
import time
from ooni import log
# All HTML4 tags
# XXX add link to W3C page where these came from
alltags = ['A', 'ABBR', 'ACRONYM', 'ADDRESS', 'APPLET', 'AREA', 'B', 'BASE',
'BASEFONT', 'BD', 'BIG', 'BLOCKQUOTE', 'BODY', 'BR', 'BUTTON', 'CAPTION',
'CENTER', 'CITE', 'CODE', 'COL', 'COLGROUP', 'DD', 'DEL', 'DFN', 'DIR', 'DIV',
'DL', 'DT', 'EM', 'FIELDSET', 'FONT', 'FORM', 'FRAME', 'FRAMESET', 'H1', 'H2',
'H3', 'H4', 'H5', 'H6', 'HEAD', 'HR', 'HTML', 'I', 'IFRAME', 'IMG',
'INPUT', 'INS', 'ISINDEX', 'KBD', 'LABEL', 'LEGEND', 'LI', 'LINK', 'MAP',
'MENU', 'META', 'NOFRAMES', 'NOSCRIPT', 'OBJECT', 'OL', 'OPTGROUP', 'OPTION',
'P', 'PARAM', 'PRE', 'Q', 'S', 'SAMP', 'SCRIPT', 'SELECT', 'SMALL', 'SPAN',
'STRIKE', 'STRONG', 'STYLE', 'SUB', 'SUP', 'TABLE', 'TBODY', 'TD',
'TEXTAREA', 'TFOOT', 'TH', 'THEAD', 'TITLE', 'TR', 'TT', 'U', 'UL', 'VAR']
# Reduced subset of only the most common tags
commontags = ['A', 'B', 'BLOCKQUOTE', 'BODY', 'BR', 'BUTTON', 'CAPTION',
'CENTER', 'CITE', 'CODE', 'COL', 'DD', 'DIV',
'DL', 'DT', 'EM', 'FIELDSET', 'FONT', 'FORM', 'FRAME', 'FRAMESET', 'H1', 'H2',
'H3', 'H4', 'H5', 'H6', 'HEAD', 'HR', 'HTML', 'IFRAME', 'IMG',
'INPUT', 'INS', 'LABEL', 'LEGEND', 'LI', 'LINK', 'MAP',
'MENU', 'META', 'NOFRAMES', 'NOSCRIPT', 'OBJECT', 'OL', 'OPTION',
'P', 'PRE', 'SCRIPT', 'SELECT', 'SMALL', 'SPAN',
'STRIKE', 'STRONG', 'STYLE', 'SUB', 'SUP', 'TABLE', 'TBODY', 'TD',
'TEXTAREA', 'TFOOT', 'TH', 'THEAD', 'TITLE', 'TR', 'TT', 'U', 'UL']
# The tags we are interested in using for our analysis
thetags = ['A', 'DIV', 'FRAME', 'H1', 'H2',
'H3', 'H4', 'IFRAME', 'INPUT',
'LABEL','LI', 'P', 'SCRIPT', 'SPAN',
'STYLE', 'TR']
def compute_probability_matrix(dataset):
"""
Compute the probability matrix based on the input dataset.
:dataset: an array of pairs representing the parent child relationships.
"""
matrix = numpy.zeros((len(thetags) + 1, len(thetags) + 1))
for data in dataset:
x = data[0].upper()
y = data[1].upper()
try:
x = thetags.index(x)
except ValueError:
x = len(thetags)
try:
y = thetags.index(y)
except ValueError:
y = len(thetags)
matrix[x,y] += 1
for x in xrange(len(thetags) + 1):
possibilities = 0
for y in matrix[x]:
possibilities += y
for i in xrange(len(matrix[x])):
if possibilities != 0:
matrix[x][i] = matrix[x][i]/possibilities
return matrix
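The construction above is a count-then-normalize pass: tally (parent, child) transitions into a matrix, then divide each row by its sum so every row becomes a probability distribution. A pure-Python sketch of the same logic (no numpy; names here are illustrative), with one extra row/column as a catch-all bucket for tags outside the chosen set:

```python
def probability_matrix(pairs, tags):
    # One extra row/column acts as a catch-all for unknown tags.
    n = len(tags) + 1
    idx = {t: i for i, t in enumerate(tags)}
    m = [[0.0] * n for _ in range(n)]
    for parent, child in pairs:
        m[idx.get(parent.upper(), n - 1)][idx.get(child.upper(), n - 1)] += 1
    # Row-normalize so each row sums to 1 (empty rows stay all-zero).
    for row in m:
        total = sum(row)
        if total:
            for j in range(n):
                row[j] /= total
    return m
```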
def compute_eigenvalues(matrix):
"""
Returns the eigenvalues of the supplied square matrix.
:matrix: must be a square matrix and diagonalizable.
"""
return numpy.linalg.eigvals(matrix)
def readDOM(content=None, filename=None, debug=False):
"""
Parses the DOM of the HTML page and returns an array of parent, child
pairs.
:content: the content of the HTML page to be read.
:filename: the filename to be read from for getting the content of the
page.
"""
try:
from bs4 import BeautifulSoup
except ImportError:
log.err("BeautifulSoup is not installed. This test canno run")
raise Exception
if filename:
f = open(filename)
content = ''.join(f.readlines())
f.close()
if debug:
start = time.time()
print "Running BeautifulSoup on content"
dom = BeautifulSoup(content)
if debug:
print "done in %s" % (time.time() - start)
if debug:
start = time.time()
print "Creating couples matrix"
couples = []
for x in dom.findAll():
couples.append((str(x.parent.name), str(x.name)))
if debug:
print "done in %s" % (time.time() - start)
return couples
def compute_eigenvalues_from_DOM(*arg,**kw):
dom = readDOM(*arg, **kw)
probability_matrix = compute_probability_matrix(dom)
eigenvalues = compute_eigenvalues(probability_matrix)
return eigenvalues
def compute_correlation(matrix_a, matrix_b):
correlation = numpy.vdot(matrix_a, matrix_b)
correlation /= numpy.linalg.norm(matrix_a)*numpy.linalg.norm(matrix_b)
correlation = (correlation + 1)/2
return correlation
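compute_correlation is a cosine similarity rescaled from [-1, 1] into [0, 1]: identical eigenvalue vectors score 1, opposite ones score 0, orthogonal ones 0.5. A standalone real-valued sketch (the real code applies this to possibly complex eigenvalues via numpy.vdot):

```python
import math

def correlation(a, b):
    # Cosine of the angle between the vectors, rescaled into [0, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return (dot / norm + 1) / 2
```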
def benchmark():
"""
Running some very basic benchmarks on this input data:
Data files:
683 filea.txt
678 fileb.txt
diff file* | wc -l
283
We get such results:
Read file B
Running BeautifulSoup on content
done in 0.768223047256
Creating couples matrix
done in 0.023903131485
--------
total done in 0.796372890472
Read file A
Running BeautifulSoup on content
done in 0.752885818481
Creating couples matrix
done in 0.0163578987122
--------
total done in 0.770951986313
Computing prob matrix
done in 0.0475239753723
Computing eigenvalues
done in 0.00161099433899
Computing prob matrix B
done in 0.0408289432526
Computing eigen B
done in 0.000268936157227
Computing correlation
done in 0.00016713142395
Correlation: 0.999999079331
What this means is that the bottleneck is not in the maths, but is rather
in the computation of the DOM tree matrix.
XXX We should focus on optimizing the parsing of the HTML (this depends on
beautiful soup). Perhaps we can find an alternative to it that is
sufficient for us.
"""
start = time.time()
print "Read file B"
site_a = readDOM(filename='filea.txt', debug=True)
print "--------"
print "total done in %s" % (time.time() - start)
start = time.time()
print "Read file A"
site_b = readDOM(filename='fileb.txt', debug=True)
print "--------"
print "total done in %s" % (time.time() - start)
a = {}
b = {}
start = time.time()
print "Computing prob matrix"
a['matrix'] = compute_probability_matrix(site_a)
print "done in %s" % (time.time() - start)
start = time.time()
print "Computing eigenvalues"
a['eigen'] = compute_eigenvalues(a['matrix'])
print "done in %s" % (time.time() - start)
start = time.time()
print "Computing prob matrix B"
b['matrix'] = compute_probability_matrix(site_b)
print "done in %s" % (time.time() - start)
start = time.time()
print "Computing eigen B"
b['eigen'] = compute_eigenvalues(b['matrix'])
print "done in %s" % (time.time() - start)
start = time.time()
print "Computing correlation"
correlation = compute_correlation(a['eigen'], b['eigen'])
print "done in %s" % (time.time() - start)
print "Corelation: %s" % correlation
#benchmark()
ooniprobe-2.2.0/ooni/scripts/
ooniprobe-2.2.0/ooni/scripts/oonireport.py
from __future__ import print_function
import os
import sys
import json
import yaml
from twisted.python import usage
from twisted.internet import defer, task, reactor
from ooni.constants import CANONICAL_BOUNCER_ONION
from ooni.reporter import OONIBReporter, OONIBReportLog
from ooni.utils import log
from ooni.settings import config
from ooni.backend_client import BouncerClient, CollectorClient
from ooni import __version__
@defer.inlineCallbacks
def lookup_collector_client(report_header, bouncer):
oonib_client = BouncerClient(bouncer)
net_tests = [{
'test-helpers': [],
'input-hashes': [],
'name': report_header['test_name'],
'version': report_header['test_version'],
}]
result = yield oonib_client.lookupTestCollector(
net_tests
)
collector_client = CollectorClient(
address=result['net-tests'][0]['collector']
)
defer.returnValue(collector_client)
class NoIDFound(Exception):
pass
def report_path_to_id(report_file):
measurement_dir = os.path.dirname(report_file)
measurement_id = os.path.basename(measurement_dir)
if os.path.dirname(measurement_dir) != config.measurements_directory:
raise NoIDFound
return measurement_id
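The id is recovered purely from the directory layout: reports live at <measurements_directory>/<measurement_id>/<report_file>. A standalone version with the directory passed in explicitly (the config object is left out, and the error type is simplified for the sketch):

```python
import os

def report_path_to_id(report_file, measurements_directory):
    # The measurement id is the name of the directory directly containing
    # the report, which must itself live under measurements_directory.
    measurement_dir = os.path.dirname(report_file)
    if os.path.dirname(measurement_dir) != measurements_directory:
        raise ValueError("report is not inside the measurements directory")
    return os.path.basename(measurement_dir)
```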
@defer.inlineCallbacks
def upload(report_file, collector=None, bouncer=None, measurement_id=None):
oonib_report_log = OONIBReportLog()
collector_client = None
if collector:
collector_client = CollectorClient(address=collector)
try:
# Try to guess the measurement_id from the file path
measurement_id = report_path_to_id(report_file)
except NoIDFound:
pass
log.msg("Attempting to upload %s" % report_file)
if report_file.endswith(".njson"):
report = NJSONReportLoader(report_file)
else:
log.warn("Uploading of YAML formatted reports will be dropped in "
"future versions")
report = YAMLReportLoader(report_file)
if bouncer and collector_client is None:
collector_client = yield lookup_collector_client(report.header,
bouncer)
if collector_client is None:
if measurement_id:
report_log = yield oonib_report_log.get_report_log(measurement_id)
collector_settings = report_log['collector']
print(collector_settings)
if collector_settings is None or len(collector_settings) == 0:
log.warn("Skipping uploading of %s since this measurement "
"was run by specifying no collector." %
report_file)
defer.returnValue(None)
elif isinstance(collector_settings, dict):
collector_client = CollectorClient(settings=collector_settings)
elif isinstance(collector_settings, str):
collector_client = CollectorClient(address=collector_settings)
else:
log.msg("Looking up collector with canonical bouncer." % report_file)
collector_client = yield lookup_collector_client(report.header,
CANONICAL_BOUNCER_ONION)
oonib_reporter = OONIBReporter(report.header, collector_client)
log.msg("Creating report for %s with %s" % (report_file,
collector_client.settings))
report_id = yield oonib_reporter.createReport()
report.header['report_id'] = report_id
if measurement_id:
log.debug("Marking it as created")
yield oonib_report_log.created(measurement_id,
collector_client.settings)
log.msg("Writing report entries")
for entry in report:
yield oonib_reporter.writeReportEntry(entry)
log.msg("Written entry")
log.msg("Closing report")
yield oonib_reporter.finish()
if measurement_id:
log.debug("Closing log")
yield oonib_report_log.closed(measurement_id)
@defer.inlineCallbacks
def upload_all(collector=None, bouncer=None, upload_incomplete=False):
oonib_report_log = OONIBReportLog()
reports_to_upload = yield oonib_report_log.get_to_upload()
for report_file, value in reports_to_upload:
try:
yield upload(report_file, collector, bouncer,
value['measurement_id'])
except Exception as exc:
log.exception(exc)
if upload_incomplete:
reports_to_upload = yield oonib_report_log.get_incomplete()
for report_file, value in reports_to_upload:
try:
yield upload(report_file, collector, bouncer,
value['measurement_id'])
except Exception as exc:
log.exception(exc)
def print_report(report_file, value):
print("* %s" % report_file)
print(" %s" % value['last_update'])
@defer.inlineCallbacks
def status():
oonib_report_log = OONIBReportLog()
reports_to_upload = yield oonib_report_log.get_to_upload()
print("Reports to be uploaded")
print("----------------------")
for report_file, value in reports_to_upload:
print_report(report_file, value)
reports_in_progress = yield oonib_report_log.get_in_progress()
print("Reports in progress")
print("-------------------")
for report_file, value in reports_in_progress:
print_report(report_file, value)
reports_incomplete = yield oonib_report_log.get_incomplete()
print("Incomplete reports")
print("------------------")
for report_file, value in reports_incomplete:
print_report(report_file, value)
class ReportLoader(object):
_header_keys = (
'probe_asn',
'probe_cc',
'probe_ip',
'probe_city',
'test_start_time',
'test_name',
'test_version',
'options',
'input_hashes',
'software_name',
'software_version',
'data_format_version',
'report_id',
'test_helpers',
'annotations',
'id'
)
def __iter__(self):
return self
def close(self):
self._fp.close()
class YAMLReportLoader(ReportLoader):
def __init__(self, report_filename):
self._fp = open(report_filename)
self._yfp = yaml.safe_load_all(self._fp)
self.header = self._yfp.next()
def next(self):
try:
return self._yfp.next()
except StopIteration:
self.close()
raise StopIteration
class NJSONReportLoader(ReportLoader):
def __init__(self, report_filename):
self._fp = open(report_filename)
self.header = self._peek_header()
def _peek_header(self):
header = {}
first_entry = json.loads(next(self._fp))
for key in self._header_keys:
header[key] = first_entry.get(key, None)
self._fp.seek(0)
return header
def next(self):
try:
entry = json.loads(next(self._fp))
for key in self._header_keys:
entry.pop(key, None)
test_keys = entry.pop('test_keys')
entry.update(test_keys)
return entry
except StopIteration:
self.close()
raise StopIteration
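An njson report is one JSON object per line, where every entry repeats the header fields; the loader peeks at the first line for the header, then strips those fields from each entry and flattens the nested test_keys. A Python 3 sketch of that transformation over any file object (HEADER_KEYS here is a reduced, illustrative subset):

```python
import io
import json

HEADER_KEYS = ("probe_cc", "test_name", "report_id")  # subset, for illustration

def load_njson(fp):
    # The first line carries the header fields shared by every entry.
    first = json.loads(fp.readline())
    header = {k: first.get(k) for k in HEADER_KEYS}
    fp.seek(0)
    entries = []
    for line in fp:
        entry = json.loads(line)
        for key in HEADER_KEYS:
            entry.pop(key, None)          # drop the repeated header fields
        entry.update(entry.pop("test_keys", {}))  # flatten test_keys
        entries.append(entry)
    return header, entries
```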
class Options(usage.Options):
synopsis = """%s [options] upload | status
""" % (os.path.basename(sys.argv[0]),)
optFlags = [
["default-collector", "d", "Upload the reports to the default "
"collector that is looked up with the "
"canonical bouncer."]
]
optParameters = [
["configfile", "f", None,
"Specify the configuration file to use."],
["collector", "c", None,
"Specify the collector to upload the result to."],
["bouncer", "b", None,
"Specify the bouncer to query for a collector."]
]
def opt_version(self):
print("oonireport version: %s" % __version__)
sys.exit(0)
def parseArgs(self, *args):
if len(args) == 0:
raise usage.UsageError(
"Must specify at least one command"
)
self['command'] = args[0]
if self['command'] not in ("upload", "status"):
raise usage.UsageError(
"Must specify either command upload or status"
)
if self['command'] == "upload":
try:
self['report_file'] = args[1]
except IndexError:
self['report_file'] = None
def tor_check():
if not config.tor.socks_port:
log.err("Currently oonireport requires that you start Tor yourself "
"and set the socks_port inside of ooniprobe.conf")
sys.exit(1)
def oonireport(_reactor=reactor, _args=sys.argv[1:]):
options = Options()
try:
options.parseOptions(_args)
except Exception as exc:
print("Error: %s" % exc)
print(options)
sys.exit(2)
config.global_options = dict(options)
config.set_paths()
config.read_config_file()
if options['default-collector']:
options['bouncer'] = CANONICAL_BOUNCER_ONION
if options['command'] == "upload" and options['report_file']:
log.start()
tor_check()
return upload(options['report_file'],
options['collector'],
options['bouncer'])
elif options['command'] == "upload":
log.start()
tor_check()
return upload_all(options['collector'],
options['bouncer'])
elif options['command'] == "status":
return status()
else:
print(options)
def run():
task.react(oonireport)
if __name__ == "__main__":
run()
ooniprobe-2.2.0/ooni/scripts/oonideckgen.py
from __future__ import print_function
import os
import sys
from twisted.internet import defer, task
from twisted.python import usage
from ooni.utils import mkdir_p
from ooni.otime import prettyDateNowUTC
from ooni import errors
from ooni.geoip import probe_ip
from ooni.resources import check_for_update
from ooni.deck import NGDeck
from ooni import __version__
class Options(usage.Options):
synopsis = """%s [options]
""" % sys.argv[0]
optParameters = [
["country-code", "c", None,
"Specify the two letter country code for which we should "
"generate the deck."],
["collector", None, None, "Specify a custom collector to use when "
"submitting reports"],
["bouncer", None, None, "Specify a custom bouncer to use"],
["output", "o", None,
"Specify the path where we should be writing the deck to."]
]
def opt_version(self):
print("oonideckgen version: %s" % __version__)
sys.exit(0)
def generate_deck(options):
deck_data = {
"name": "Default ooniprobe deck",
"description": "Default ooniprobe deck generated on {0}".format(
prettyDateNowUTC()),
"schedule": "@daily",
"tasks": [
{
"ooni": {
"test_name": "http_invalid_request_line"
},
},
{
"ooni": {
"test_name": "http_header_field_manipulation"
},
},
{
"ooni": {
"test_name": "web_connectivity",
"file": "$citizenlab_${probe_cc}_urls"
},
},
{
"ooni": {
"test_name": "web_connectivity",
"file": "$citizenlab_global_urls"
}
}
]
}
if options["collector"] is not None:
deck_data["collector"] = options['collector']
if options["bouncer"] is not None:
deck_data["bouncer"] = options['bouncer']
deck = NGDeck(deck_data=deck_data)
with open(options['output'], 'w+') as fw:
deck.write(fw)
print("Deck written to {0}".format(options['output']))
print("Run ooniprobe like so:")
print("ooniprobe -i {0}".format(options['output']))
@defer.inlineCallbacks
def get_user_country_code():
yield probe_ip.lookup(include_country=True)
defer.returnValue(probe_ip.geodata['countrycode'])
@defer.inlineCallbacks
def oonideckgen(reactor):
options = Options()
try:
options.parseOptions()
except usage.UsageError as error_message:
print("%s: %s" % (sys.argv[0], error_message))
print(options)
sys.exit(1)
print("Checking for update of resources")
yield check_for_update()
if not options['output']:
options['output'] = os.getcwd()
if not options['country-code']:
try:
options['country-code'] = yield get_user_country_code()
except errors.ProbeIPUnknown:
print("Could not determine your IP address.")
print("Check your internet connection or specify a country code "
"with -c.")
sys.exit(4)
if len(options['country-code']) != 2:
print("%s: --country-code must be 2 characters" % sys.argv[0])
sys.exit(2)
if os.path.isdir(options['output']):
options['output'] = os.path.join(options['output'], 'web-full.yaml')
options['country-code'] = options['country-code'].lower()
mkdir_p(os.path.dirname(options['output']))
generate_deck(options)
def run():
task.react(oonideckgen)
if __name__ == "__main__":
run()
ooniprobe-2.2.0/ooni/scripts/__init__.py
ooniprobe-2.2.0/ooni/scripts/ooniprobe.py
#!/usr/bin/env python
import webbrowser
from multiprocessing import Process
from twisted.internet import task, defer
def ooniprobe(reactor):
from ooni.ui.cli import runWithDaemonDirector, runWithDirector
from ooni.ui.cli import setupGlobalOptions, initializeOoniprobe
from ooni.settings import config
global_options = setupGlobalOptions(logging=True, start_tor=True,
check_incoherences=True)
if global_options['info']:
config.log_info()
return defer.succeed(None)
if global_options['queue']:
return runWithDaemonDirector(global_options)
if global_options['web-ui']:
from ooni.settings import config
from ooni.scripts.ooniprobe_agent import status_agent, start_agent
if status_agent() != 0:
p = Process(target=start_agent)
p.start()
p.join()
print("Started ooniprobe-agent")
webbrowser.open_new(config.web_ui_url)
return defer.succeed(None)
if global_options['initialize']:
initializeOoniprobe(global_options)
return defer.succeed(None)
return runWithDirector(global_options)
def run():
task.react(ooniprobe)
if __name__ == "__main__":
run()
ooniprobe-2.2.0/ooni/scripts/ooniprobe_agent.py 0000644 0001750 0001750 00000014674 13042705306 020040 0 ustar irl irl from __future__ import print_function
import os
import sys
import time
import errno
import signal
from twisted.scripts import twistd
from twisted.python import usage
from ooni.utils import log, is_process_running
from ooni.settings import config
from ooni.agent.agent import AgentService
from ooni import __version__
class StartOoniprobeAgentPlugin:
tapname = "ooniprobe"
def makeService(self, so):
return AgentService(config.advanced.webui_port)
class OoniprobeTwistdConfig(twistd.ServerOptions):
subCommands = [
("StartOoniprobeAgent", None, usage.Options, "ooniprobe agent")
]
class StartOptions(usage.Options):
pass
class StopOptions(usage.Options):
pass
class StatusOptions(usage.Options):
pass
class RunOptions(usage.Options):
pass
class AgentOptions(usage.Options):
synopsis = """%s [options] command
""" % (os.path.basename(sys.argv[0]),)
subCommands = [
['start', None, StartOptions, "Start the ooniprobe-agent in the "
"background"],
['stop', None, StopOptions, "Stop the ooniprobe-agent"],
['status', None, StatusOptions, "Show status of the ooniprobe-agent"],
['run', None, RunOptions, "Run the ooniprobe-agent in the foreground"]
]
def postOptions(self):
self.twistd_args = []
def opt_version(self):
"""
Display the ooniprobe version and exit.
"""
print("ooniprobe-agent version:", __version__)
sys.exit(0)
def start_agent(options=None):
config.set_paths()
config.initialize_ooni_home()
config.read_config_file()
os.chdir(config.running_path)
# Since we are starting the logger below ourselves we make twistd log to
# a null log observer
twistd_args = ['--logger', 'ooni.utils.log.ooniloggerNull',
'--umask', '022']
twistd_config = OoniprobeTwistdConfig()
if options is not None:
twistd_args.extend(options.twistd_args)
twistd_args.append("StartOoniprobeAgent")
try:
twistd_config.parseOptions(twistd_args)
except usage.error as ue:
print("ooniprobe: usage error from twistd: {}\n".format(ue))
sys.exit(1)
twistd_config.loadedPlugins = {
"StartOoniprobeAgent": StartOoniprobeAgentPlugin()
}
try:
get_running_pidfile()
print("Stop ooniprobe-agent before attempting to start it")
return 1
except NotRunning:
pass
print("Starting ooniprobe agent.")
print("To view the GUI go to %s" % config.web_ui_url)
log.start()
twistd.runApp(twistd_config)
return 0
class NotRunning(RuntimeError):
pass
def get_running_pidfile():
"""
:return: The path of the pidfile belonging to the running ooniprobe-agent instance.
:raises: NotRunning if it's not running
"""
running_pidfile = None
for pidfile in [config.system_pid_path, config.user_pid_path]:
if not os.path.exists(pidfile):
# Didn't find the pid_file
continue
pid = open(pidfile, "r").read()
pid = int(pid)
if is_process_running(pid):
return pidfile
raise NotRunning
def is_stale_pidfile(pidfile):
try:
with open(pidfile) as fd:
pid = int(fd.read())
except Exception:
return False # that's either garbage in the pid-file or a race
return not is_process_running(pid)
def get_stale_pidfiles():
return [f for f in [config.system_pid_path, config.user_pid_path] if is_stale_pidfile(f)]
def status_agent():
try:
get_running_pidfile()
print("ooniprobe-agent is running")
return 0
except NotRunning:
print("ooniprobe-agent is NOT running")
return 1
def do_stop_agent():
# This function is borrowed from tahoe
try:
pidfile = get_running_pidfile()
except NotRunning:
print("ooniprobe-agent is NOT running. Nothing to do.")
return 2
pid = open(pidfile, "r").read()
pid = int(pid)
try:
os.kill(pid, signal.SIGTERM)
except OSError as ose:
if ose.errno == errno.ESRCH:
print("No process was running.") # it's just a race
return 2
elif ose.errno == errno.EPERM:
# The process is owned by root. We assume it's running
print("ooniprobe-agent is owned by root. We cannot stop it.")
return 3
else:
raise
# the process is expected to clean up its own pidfile
start = time.time()
time.sleep(0.1)
wait = 40
first_time = True
while True:
# poll once per second until we see the process is no longer running
try:
os.kill(pid, 0)
except OSError:
print("process %d is dead" % pid)
return
wait -= 1
if wait < 0:
if first_time:
print("It looks like pid %d is still running "
"after %d seconds" % (pid, (time.time() - start)))
print("Sending a SIGKILL and polling until it terminates "
"(interrupt to give up).")
try:
os.kill(pid, signal.SIGKILL)
except OSError as ose:
# Race condition check. It could have died already. If
# so we are happy.
if ose.errno == errno.ESRCH:
print("process %d is dead" % pid)
return
wait = 10
first_time = False
else:
print("pid %d still running after %d seconds" % \
(pid, (time.time() - start)))
wait = 10
time.sleep(1)
# we define rc=1 to mean "I think something is still running, sorry"
return 1
def stop_agent():
retval = do_stop_agent()
for pidfile in get_stale_pidfiles():
try:
os.remove(pidfile)
print("Cleaned up stale pidfile {0}".format(pidfile))
except EnvironmentError as exc:
print("Failed to delete the pidfile {0}: {1}".format(pidfile, exc))
return retval
def run():
options = AgentOptions()
options.parseOptions()
if options.subCommand is None:
print(options)
return
if options.subCommand == "stop":
return stop_agent()
if options.subCommand == "status":
return status_agent()
if options.subCommand == "run":
options.twistd_args += ("--nodaemon",)
return start_agent(options)
if __name__ == "__main__":
run()
ooniprobe-2.2.0/ooni/scripts/ooniresources.py 0000644 0001750 0001750 00000002005 12767752455 017572 0 ustar irl irl import sys
from twisted.python import usage
from ooni import __version__
class Options(usage.Options):
synopsis = """%s
[DEPRECATED] Usage of this script is deprecated and it will be deleted
in future versions of ooniprobe.
""" % sys.argv[0]
optFlags = [
["update-inputs", None, "(deprecated) update the resources needed for "
"inputs."],
["update-geoip", None, "(deprecated) Update the geoip related "
"resources."]
]
optParameters = []
def opt_version(self):
print("ooniresources version: %s" % __version__)
sys.exit(0)
def run():
options = Options()
try:
options.parseOptions()
except usage.UsageError as error_message:
print("%s: %s" % (sys.argv[0], error_message))
print("%s: Try --help for usage details." % (sys.argv[0],))
sys.exit(1)
print("WARNING: Usage of this script is deprecated. We will not do "
"anything.")
sys.exit(0)
ooniprobe-2.2.0/ooni/settings.ini 0000644 0001750 0001750 00000000302 13071150737 015151 0 ustar irl irl [directories]
usr_share = /Users/x/.virtualenvs/ooni-probe-test/share/ooni
var_lib = /Users/x/.virtualenvs/ooni-probe-test/var/lib/ooni
etc = /Users/x/.virtualenvs/ooni-probe-test/var/lib/ooni
ooniprobe-2.2.0/ooni/otime.py 0000644 0001750 0001750 00000001250 12767752455 014322 0 ustar irl irl from datetime import datetime
def prettyDateNow():
"""
Returns a good looking string for the local time.
"""
return datetime.now().ctime()
def prettyDateNowUTC():
"""
Returns a good looking string for utc time.
"""
return datetime.utcnow().ctime()
def timestampNowLongUTC():
"""
Returns a timestamp in the format of %Y-%m-%d %H:%M:%S in Universal Time
Coordinates.
"""
return datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")
def timestampNowISO8601UTC():
"""
Returns a timestamp in the format of %Y-%m-%d %H:%M:%S in Universal Time
Coordinates.
"""
return datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
ooniprobe-2.2.0/ooni/geoip.py 0000644 0001750 0001750 00000020127 13032460744 014273 0 ustar irl irl from __future__ import absolute_import
import re
import os
import json
import time
import random
from hashlib import sha256
from twisted.web import client, http_headers
client._HTTP11ClientFactory.noisy = False
from twisted.internet import reactor, defer
from ooni.utils import log
from ooni import errors
try:
from pygeoip import GeoIP
except ImportError:
try:
import GeoIP as CGeoIP
def GeoIP(database_path, *args, **kwargs):
return CGeoIP.open(database_path, CGeoIP.GEOIP_STANDARD)
except ImportError:
log.err("Unable to import pygeoip. We will not be able to run geo IP related measurements")
class GeoIPDataFilesNotFound(Exception):
pass
def ip_to_location(ipaddr):
from ooni.settings import config
country_file = config.get_data_file_path(
'resources/maxmind-geoip/GeoIP.dat'
)
asn_file = config.get_data_file_path(
'resources/maxmind-geoip/GeoIPASNum.dat'
)
location = {'city': None, 'countrycode': 'ZZ', 'asn': 'AS0'}
if not asn_file or not country_file:
log.err("Could not find GeoIP data file in data directories. "
"Try running ooniresources or "
"edit your ooniprobe.conf")
return location
country_dat = GeoIP(country_file)
asn_dat = GeoIP(asn_file)
country_code = country_dat.country_code_by_addr(ipaddr)
if country_code is not None:
location['countrycode'] = country_code
asn = asn_dat.org_by_addr(ipaddr)
if asn is not None:
location['asn'] = asn.split(' ')[0]
return location
def database_version():
from ooni.settings import config
version = {
'GeoIP': {
'sha256': None,
'timestamp': None,
},
'GeoIPASNum': {
'sha256': None,
'timestamp': None
}
}
for key in version.keys():
geoip_file = config.get_data_file_path(
"resources/maxmind-geoip/" + key + ".dat"
)
if not geoip_file or not os.path.isfile(geoip_file):
continue
timestamp = os.stat(geoip_file).st_mtime
sha256hash = sha256()
with open(geoip_file, 'rb') as f:
while True:
chunk = f.read(8192)
if not chunk:
break
sha256hash.update(chunk)
version[key]['timestamp'] = timestamp
version[key]['sha256'] = sha256hash.hexdigest()
return version
class HTTPGeoIPLookupper(object):
url = None
_agent = client.Agent
def __init__(self):
self.agent = self._agent(reactor)
def _response(self, response):
from ooni.utils.net import BodyReceiver
content_length = response.headers.getRawHeaders('content-length')
finished = defer.Deferred()
response.deliverBody(BodyReceiver(finished, content_length))
finished.addCallback(self.parseResponse)
return finished
def parseResponse(self, response_body):
"""
Override this with the logic for parsing the response.
Should return the IP address of the probe.
"""
pass
def failed(self, failure):
log.err("Failed to lookup via %s" % self.url)
log.exception(failure)
return failure
def lookup(self):
from ooni.utils.net import userAgents
headers = {}
headers['User-Agent'] = [random.choice(userAgents)]
d = self.agent.request("GET", self.url, http_headers.Headers(headers))
d.addCallback(self._response)
d.addErrback(self.failed)
return d
class UbuntuGeoIP(HTTPGeoIPLookupper):
url = "http://geoip.ubuntu.com/lookup"
def parseResponse(self, response_body):
m = re.match(".*<Ip>(.*)</Ip>.*", response_body)
probe_ip = m.group(1)
return probe_ip
INITIAL = 0
IN_PROGRESS = 1
class ProbeIP(object):
strategy = None
address = None
# How long should we consider geoip results valid?
_expire_in = 10*60
def __init__(self):
self.geoIPServices = {
'ubuntu': UbuntuGeoIP
}
self.geodata = {
'asn': 'AS0',
'city': None,
'countrycode': 'ZZ',
'ip': '127.0.0.1'
}
self._last_lookup = 0
self._reset_state()
def _reset_state(self):
self._state = INITIAL
self._looking_up = defer.Deferred()
self._looking_up.addCallback(self._looked_up)
self._looking_up.addErrback(self._lookup_failed)
def _looked_up(self, result):
self._last_lookup = time.time()
self._reset_state()
return result
def _lookup_failed(self, failure):
self._reset_state()
return failure
def resolveGeodata(self,
include_ip=None,
include_asn=None,
include_country=None):
from ooni.settings import config
self.geodata = ip_to_location(self.address)
self.geodata['ip'] = self.address
if not config.privacy.includeasn and include_asn is not True:
self.geodata['asn'] = 'AS0'
if not config.privacy.includecountry and include_country is not True:
self.geodata['countrycode'] = 'ZZ'
if not config.privacy.includeip and include_ip is not True:
self.geodata['ip'] = '127.0.0.1'
@defer.inlineCallbacks
def lookup(self, include_ip=None, include_asn=None, include_country=None):
if self._state == IN_PROGRESS:
yield self._looking_up
elif self._last_lookup < time.time() - self._expire_in:
self.address = None
if self.address:
self.resolveGeodata(include_ip, include_asn, include_country)
defer.returnValue(self.address)
else:
self._state = IN_PROGRESS
try:
yield self.askTor()
log.msg("Found your IP via Tor")
self.resolveGeodata(include_ip, include_asn, include_country)
self._looking_up.callback(self.address)
defer.returnValue(self.address)
except errors.TorStateNotFound:
log.debug("Tor is not running. Skipping IP lookup via Tor.")
except Exception:
log.msg("Unable to lookup the probe IP via Tor.")
try:
yield self.askGeoIPService()
log.msg("Found your IP via a GeoIP service")
self.resolveGeodata(include_ip, include_asn, include_country)
self._looking_up.callback(self.address)
defer.returnValue(self.address)
except Exception as exc:
log.msg("Unable to lookup the probe IP via GeoIPService")
self._looking_up.errback(defer.failure.Failure(exc))
raise
@defer.inlineCallbacks
def askGeoIPService(self):
# Shuffle the order in which we test the geoip services.
services = self.geoIPServices.items()
random.shuffle(services)
for service_name, service in services:
s = service()
log.msg("Looking up your IP address via %s" % service_name)
try:
self.address = yield s.lookup()
self.strategy = 'geo_ip_service-' + service_name
break
except Exception:
log.msg("Failed to lookup your IP via %s" % service_name)
if not self.address:
raise errors.ProbeIPUnknown
def askTor(self):
"""
Obtain the probes IP address by asking the Tor Control port via GET INFO
address.
XXX this lookup method is currently broken when there are cached descriptors or consensus documents
see: https://trac.torproject.org/projects/tor/ticket/8214
"""
from ooni.settings import config
if config.tor_state:
d = config.tor_state.protocol.get_info("address")
@d.addCallback
def cb(result):
self.strategy = 'tor_get_info_address'
self.address = result.values()[0]
return d
else:
raise errors.TorStateNotFound
probe_ip = ProbeIP()
ooniprobe-2.2.0/ooni/constants.py 0000644 0001750 0001750 00000006757 13015035407 015214 0 ustar irl irl CANONICAL_BOUNCER_ONION = 'httpo://nkvphnp3p6agi5qq.onion'
CANONICAL_BOUNCER_HTTPS = 'https://bouncer.ooni.io'
CANONICAL_BOUNCER_CLOUDFRONT = (
'https://d3kr4emv7f56qa.cloudfront.net/',
'a0.awsstatic.com'
)
MEEK_BRIDGES = [
("meek 0.0.2.0:2 B9E7141C594AF25699E0079C1F0146F409495296 "
"url=https://d2zfqthxsdq309.cloudfront.net/ front=a0.awsstatic.com"),
("meek 0.0.2.0:3 A2C13B7DFCAB1CBF3A884B6EB99A98067AB6EF44 "
"url=https://az786092.vo.msecnd.net/ front=ajax.aspnetcdn.com")
]
# These are bridges taken from TBB
OBFS4_BRIDGES = [
("obfs4 154.35.22.10:41835 8FB9F4319E89E5C6223052AA525A192AFBC85D55 "
"cert=GGGS1TX4R81m3r0HBl79wKy1OtPPNR2CZUIrHjkRg65Vc2VR8fOyo64f9kmT1UAFG7j0HQ iat-mode=0"),
("obfs4 198.245.60.50:443 752CF7825B3B9EA6A98C83AC41F7099D67007EA5 "
"cert=xpmQtKUqQ/6v5X7ijgYE/f03+l2/EuQ1dexjyUhh16wQlu"
"/cpXUGalmhDIlhuiQPNEKmKw iat-mode=0"),
("obfs4 192.99.11.54:443 7B126FAB960E5AC6A629C729434FF84FB5074EC2 "
"cert=VW5f8+IBUWpPFxF+rsiVy2wXkyTQG7vEd"
"+rHeN2jV5LIDNu8wMNEOqZXPwHdwMVEBdqXEw iat-mode=0"),
("obfs4 109.105.109.165:10527 8DFCD8FB3285E855F5A55EDDA35696C743ABFC4E "
"cert=Bvg/itxeL4TWKLP6N1MaQzSOC6tcRIBv6q57DYAZc3b2AzuM"
"+/TfB7mqTFEfXILCjEwzVA iat-mode=0"),
("obfs4 83.212.101.3:41213 A09D536DD1752D542E1FBB3C9CE4449D51298239 "
"cert=lPRQ/MXdD1t5SRZ9MquYQNT9m5DV757jtdXdlePmRCudUU9CFUOX1Tm7"
"/meFSyPOsud7Cw iat-mode=0"),
("obfs4 104.131.108.182:56880 EF577C30B9F788B0E1801CF7E433B3B77792B77A "
"cert=0SFhfDQrKjUJP8Qq6wrwSICEPf3Vl"
"/nJRsYxWbg3QRoSqhl2EB78MPS2lQxbXY4EW1wwXA iat-mode=0"),
("obfs4 109.105.109.147:13764 BBB28DF0F201E706BE564EFE690FE9577DD8386D "
"cert=KfMQN/tNMFdda61hMgpiMI7pbwU1T+wxjTulYnfw"
"+4sgvG0zSH7N7fwT10BI8MUdAD7iJA iat-mode=0"),
("obfs4 154.35.22.11:49868 A832D176ECD5C7C6B58825AE22FC4C90FA249637 "
"cert=YPbQqXPiqTUBfjGFLpm9JYEFTBvnzEJDKJxXG5Sxzrr"
"/v2qrhGU4Jls9lHjLAhqpXaEfZw iat-mode=0"),
("obfs4 154.35.22.12:80 00DC6C4FA49A65BD1472993CF6730D54F11E0DBB "
"cert=N86E9hKXXXVz6G7w2z8wFfhIDztDAzZ"
"/3poxVePHEYjbKDWzjkRDccFMAnhK75fc65pYSg iat-mode=0"),
("obfs4 154.35.22.13:443 FE7840FE1E21FE0A0639ED176EDA00A3ECA1E34D "
"cert=fKnzxr+m+jWXXQGCaXe4f2gGoPXMzbL+bTBbXMYXuK0tMotd"
"+nXyS33y2mONZWU29l81CA iat-mode=0"),
("obfs4 154.35.22.10:80 8FB9F4319E89E5C6223052AA525A192AFBC85D55 "
"cert=GGGS1TX4R81m3r0HBl79wKy1OtPPNR2CZUIrHjkRg65Vc2VR8fOyo64f9kmT1UAFG7j0HQ iat-mode=0"),
("obfs4 154.35.22.10:443 8FB9F4319E89E5C6223052AA525A192AFBC85D55 "
"cert=GGGS1TX4R81m3r0HBl79wKy1OtPPNR2CZUIrHjkRg65Vc2VR8fOyo64f9kmT1UAFG7j0HQ iat-mode=0"),
("obfs4 154.35.22.11:443 A832D176ECD5C7C6B58825AE22FC4C90FA249637 "
"cert=YPbQqXPiqTUBfjGFLpm9JYEFTBvnzEJDKJxXG5Sxzrr"
"/v2qrhGU4Jls9lHjLAhqpXaEfZw iat-mode=0"),
("obfs4 154.35.22.11:80 A832D176ECD5C7C6B58825AE22FC4C90FA249637 "
"cert=YPbQqXPiqTUBfjGFLpm9JYEFTBvnzEJDKJxXG5Sxzrr"
"/v2qrhGU4Jls9lHjLAhqpXaEfZw iat-mode=0"),
("obfs4 154.35.22.9:60873 C73ADBAC8ADFDBF0FC0F3F4E8091C0107D093716 "
"cert=gEGKc5WN/bSjFa6UkG9hOcft1tuK"
"+cV8hbZ0H6cqXiMPLqSbCh2Q3PHe5OOr6oMVORhoJA iat-mode=0"),
("obfs4 154.35.22.9:80 C73ADBAC8ADFDBF0FC0F3F4E8091C0107D093716 "
"cert=gEGKc5WN/bSjFa6UkG9hOcft1tuK"
"+cV8hbZ0H6cqXiMPLqSbCh2Q3PHe5OOr6oMVORhoJA iat-mode=0"),
("obfs4 154.35.22.9:443 C73ADBAC8ADFDBF0FC0F3F4E8091C0107D093716 "
"cert=gEGKc5WN/bSjFa6UkG9hOcft1tuK"
"+cV8hbZ0H6cqXiMPLqSbCh2Q3PHe5OOr6oMVORhoJA iat-mode=0")
]
ooniprobe-2.2.0/ooni/ui/ 0000755 0001750 0001750 00000000000 13071152230 013220 5 ustar irl irl ooniprobe-2.2.0/ooni/ui/cli.py 0000644 0001750 0001750 00000051652 13046133036 014357 0 ustar irl irl import sys
import os
import json
import random
import textwrap
import urlparse
from twisted.python import usage
from twisted.internet import defer
from ooni import errors, __version__
from ooni.settings import config, OONIPROBE_ROOT
from ooni.utils import log
class LifetimeExceeded(Exception): pass
class Options(usage.Options):
synopsis = """%s [options] [path to test].py
""" % (os.path.basename(sys.argv[0]),)
longdesc = ("ooniprobe loads and executes a suite or a set of suites of"
" network tests. These are loaded from modules, packages and"
" files listed on the command line.")
optFlags = [["help", "h"],
["no-collector", "n", "Disable writing to collector"],
["no-njson", "N", "Disable writing to disk"],
["no-geoip", "g", "Disable geoip lookup on start. "
"With this option your IP address can be disclosed in the report."],
["list", "s", "List the currently installed ooniprobe "
"nettests"],
["verbose", "v", "Show more verbose information"],
["web-ui", "w", "Start the web UI"],
["initialize", "z", "Initialize ooniprobe to begin running "
"it"],
["info", None, "Print system wide info and exit"]
]
optParameters = [
["reportfile", "o", None, "Specify the report file name to write "
"to."],
["testdeck", "i", None, "Specify as input a test deck: a yaml file "
"containing the tests to run and their "
"arguments."],
["collector", "c", None, "Specify the address of the collector for "
"test results. In most cases a user will "
"prefer to specify a bouncer over this."],
["bouncer", "b", None, "Specify the bouncer used to "
"obtain the address of the "
"collector and test helpers."],
["logfile", "l", None, "Write log messages to this filename."],
["pcapfile", "O", None, "Write a PCAP of the ooniprobe session to "
"this filename."],
["configfile", "f", None, "Specify a path to the ooniprobe "
"configuration file."],
["datadir", "d", None, "Specify a path to the ooniprobe data "
"directory."],
["annotations", "a", None, "Annotate the report with a key:value[, "
"key:value] format."],
["preferred-backend", "P", None, "Set the preferred backend to use "
"when submitting results and/or "
"communicating with test helpers. "
"Can be either onion, "
"https or cloudfront"],
["queue", "Q", None, "AMQP Queue URL "
"amqp://user:pass@host:port/vhost/queue"]
]
compData = usage.Completions(
extraActions=[usage.CompleteFiles(
"*.py", descr="file | module | package | TestCase | testMethod",
repeat=True)],)
tracer = None
def __init__(self):
usage.Options.__init__(self)
def opt_spew(self):
"""
Print an insanely verbose log of everything that happens.
Useful when debugging freezes or locks in complex code.
"""
from twisted.python.util import spewer
sys.settrace(spewer)
def opt_version(self):
"""
Display the ooniprobe version and exit.
"""
print("ooniprobe version: %s" % __version__)
sys.exit(0)
def parseArgs(self, *args):
flag_opts = ['testdeck', 'list', 'web-ui', 'info']
if any([self[opt] for opt in flag_opts]):
return
try:
self['test_file'] = args[0]
self['subargs'] = args[1:]
except IndexError:
raise usage.UsageError("No test filename specified!")
def parseOptions():
cmd_line_options = Options()
if len(sys.argv) == 1:
print(cmd_line_options.getUsage())
try:
cmd_line_options.parseOptions()
except usage.UsageError as ue:
print(cmd_line_options.getUsage())
raise SystemExit("%s: %s" % (sys.argv[0], ue))
return dict(cmd_line_options)
def director_startup_handled_failures(failure):
log.err("Could not start the director")
failure.trap(errors.TorNotRunning,
errors.InvalidOONIBCollectorAddress,
errors.UnableToLoadDeckInput,
errors.CouldNotFindTestHelper,
errors.CouldNotFindTestCollector,
errors.ProbeIPUnknown,
errors.InvalidInputFile,
errors.ConfigFileIncoherent,
SystemExit)
if isinstance(failure.value, errors.TorNotRunning):
log.err("Tor does not appear to be running")
log.err("Reporting with a collector is not possible")
log.msg(
"Try with a different collector or disable collector reporting with -n")
elif isinstance(failure.value, errors.InvalidOONIBCollectorAddress):
log.err("Invalid format for oonib collector address.")
log.msg(
"Should be in the format http://<collector_address>:<port>")
log.msg("for example: ooniprobe -c httpo://nkvphnp3p6agi5qq.onion")
elif isinstance(failure.value, errors.UnableToLoadDeckInput):
log.err("Unable to fetch the required inputs for the test deck.")
log.msg(
"Please file a ticket on our issue tracker: https://github.com/thetorproject/ooni-probe/issues")
elif isinstance(failure.value, errors.CouldNotFindTestHelper):
log.err("Unable to obtain the required test helpers.")
log.msg(
"Try with a different bouncer or check that Tor is running properly.")
elif isinstance(failure.value, errors.CouldNotFindTestCollector):
log.err("Could not find a valid collector.")
log.msg(
"Try with a different bouncer, specify a collector with -c or disable reporting to a collector with -n.")
elif isinstance(failure.value, errors.ProbeIPUnknown):
log.err("Failed to lookup probe IP address.")
log.msg("Check your internet connection.")
elif isinstance(failure.value, errors.InvalidInputFile):
log.err("Invalid input file \"%s\"" % failure.value)
elif isinstance(failure.value, errors.ConfigFileIncoherent):
log.err("Incoherent config file")
if config.advanced.debug:
log.exception(failure)
def director_startup_other_failures(failure):
log.err("An unhandled exception occurred while starting the director!")
log.exception(failure)
def initializeOoniprobe(global_options):
print("It looks like this is the first time you are running ooniprobe")
if not sys.stdin.isatty():
print("ERROR: STDIN is not attached to a tty. Quitting.")
sys.exit(8)
print("Please take a minute to read through the informed consent "
"documentation and understand the risks associated with running "
"ooniprobe.")
print("Press enter to continue...")
raw_input()
with open(os.path.join(OONIPROBE_ROOT, 'ui', 'consent-form.md')) as f:
consent_form_text = ''.join(f.readlines())
from pydoc import pager
pager(consent_form_text)
answer = ""
while answer.lower() != "yes":
print('Type "yes" if you are fully aware of the risks associated with using ooniprobe and you wish to proceed')
answer = raw_input("> ")
print("")
print("Now help us configure some things!")
answer = raw_input('Should we upload measurements to a collector? (Y/n) ')
should_upload = True
if answer.lower().startswith("n"):
should_upload = False
answer = raw_input('Should we include your IP in measurements? (y/N) ')
include_ip = False
if answer.lower().startswith("y"):
include_ip = True
answer = raw_input('Should we include your ASN (your network) in '
'measurements? (Y/n) ')
include_asn = True
if answer.lower().startswith("n"):
include_asn = False
answer = raw_input('Should we include your Country in '
'measurements? (Y/n) ')
include_country = True
if answer.lower().startswith("n"):
include_country = False
answer = raw_input('How would you like reports to be uploaded? (onion, '
'https, cloudfront) ')
preferred_backend = 'onion'
if answer.lower().startswith("https"):
preferred_backend = 'https'
elif answer.lower().startswith("cloudfront"):
preferred_backend = 'cloudfront'
config.create_config_file(include_ip=include_ip,
include_asn=include_asn,
include_country=include_country,
should_upload=should_upload,
preferred_backend=preferred_backend)
config.set_initialized()
print("ooniprobe is now initialized. You can begin using it!")
def setupGlobalOptions(logging, start_tor, check_incoherences):
global_options = parseOptions()
config.global_options = global_options
config.set_paths()
config.initialize_ooni_home()
try:
config.read_config_file(check_incoherences=check_incoherences)
except errors.ConfigFileIncoherent:
sys.exit(6)
if not config.is_initialized():
initializeOoniprobe(global_options)
if global_options['verbose']:
config.advanced.debug = True
if not start_tor:
config.advanced.start_tor = False
if logging:
log.start(global_options['logfile'])
if config.privacy.includepcap or global_options['pcapfile']:
from ooni.utils.net import hasRawSocketPermission
if hasRawSocketPermission():
from ooni.utils.txscapy import ScapyFactory
config.scapyFactory = ScapyFactory(config.advanced.interface)
else:
log.err("Insufficient Privileges to capture packets."
" See ooniprobe.conf privacy.includepcap")
sys.exit(2)
global_options['check_incoherences'] = check_incoherences
return global_options
def setupAnnotations(global_options):
annotations={}
for annotation in global_options["annotations"].split(","):
pair = annotation.split(":")
if len(pair) == 2:
key = pair[0].strip()
value = pair[1].strip()
annotations[key] = value
else:
log.err("Invalid annotation: %s" % annotation)
sys.exit(1)
global_options["annotations"] = annotations
return annotations
def setupCollector(global_options, collector_client):
from ooni.backend_client import CollectorClient
if global_options['collector']:
collector_client = CollectorClient(global_options['collector'])
elif config.reports.get('collector', None) is not None:
collector_client = CollectorClient(config.reports['collector'])
if not collector_client.isSupported():
raise errors.CollectorUnsupported
return collector_client
def createDeck(global_options, url=None):
from ooni.deck import NGDeck
from ooni.deck.legacy import subargs_to_options
if url:
log.msg("Creating deck for: %s" % (url))
test_deck_path = global_options.pop('testdeck', None)
test_name = global_options.pop('test_file', None)
no_collector = global_options.pop('no-collector', False)
try:
if test_deck_path is not None:
deck = NGDeck(
global_options=global_options,
no_collector=no_collector
)
deck.open(test_deck_path)
else:
deck = NGDeck(
global_options=global_options,
no_collector=no_collector,
arbitrary_paths=True
)
log.debug("No test deck detected")
if url is not None:
args = ('-u', url)
else:
args = tuple()
if any(global_options['subargs']):
args = global_options['subargs'] + args
test_options = subargs_to_options(args)
test_options['test_name'] = test_name
deck.load({
"tasks": [
{"ooni": test_options}
]
})
except errors.MissingRequiredOption as option_name:
log.err('Missing required option: "%s"' % option_name)
incomplete_net_test_loader = option_name.net_test_loader
map(log.msg, incomplete_net_test_loader.usageOptions().getUsage().split("\n"))
raise SystemExit(2)
except errors.NetTestNotFound as path:
log.err('Requested NetTest file not found (%s)' % path)
raise SystemExit(3)
except errors.OONIUsageError as e:
log.exception(e)
map(log.msg, e.net_test_loader.usageOptions().getUsage().split("\n"))
raise SystemExit(4)
except errors.HTTPSCollectorUnsupported:
log.err("HTTPS collectors require a twisted version of at least 14.0.2.")
raise SystemExit(6)
except errors.InsecureBackend:
log.err("Attempting to report to an insecure collector.")
log.err("To enable reporting to insecure collector set the "
"advanced->insecure_backend option to true in "
"your ooniprobe.conf file.")
raise SystemExit(7)
except Exception as e:
if config.advanced.debug:
log.exception(e)
log.err(e)
raise SystemExit(5)
return deck
def runTestWithDirector(director, global_options, url=None,
start_tor=True,
create_input_store=True):
deck = createDeck(global_options, url=url)
d = director.start(create_input_store=create_input_store,
start_tor=start_tor)
@defer.inlineCallbacks
def post_director_start(_):
try:
yield deck.setup()
yield deck.run(director, from_schedule=False)
except errors.UnableToLoadDeckInput as error:
raise defer.failure.Failure(error)
except errors.NoReachableTestHelpers as error:
raise defer.failure.Failure(error)
except errors.NoReachableCollectors as error:
raise defer.failure.Failure(error)
except SystemExit as error:
raise error
d.addCallback(post_director_start)
d.addErrback(director_startup_handled_failures)
d.addErrback(director_startup_other_failures)
return d
def runWithDirector(global_options, create_input_store=True):
"""
Instance the director, parse command line options and start an ooniprobe
test!
"""
from ooni.director import Director
start_tor = False
director = Director()
if global_options['list']:
net_tests = [net_test for net_test in director.getNetTests().items()]
log.msg("")
log.msg("Installed nettests")
log.msg("==================")
for net_test_id, net_test in net_tests:
optList = []
for name, details in net_test['arguments'].items():
optList.append({'long': name, 'doc': details['description']})
desc = ('\n' +
net_test['name'] +
'\n' +
'-'*len(net_test['name']) +
'\n' +
'\n'.join(textwrap.wrap(net_test['description'], 80)) +
'\n\n' +
'$ ooniprobe {}/{}'.format(net_test['category'],
net_test['id']) +
'\n\n' +
''.join(usage.docMakeChunks(optList))
)
map(log.msg, desc.split("\n"))
log.msg("Note: Third party tests require an external "
"application to run properly.")
raise SystemExit(0)
if global_options.get('annotations') is not None:
global_options['annotations'] = setupAnnotations(global_options)
if global_options.get('preferred-backend') is not None:
config.advanced.preferred_backend = global_options['preferred-backend']
if global_options['no-collector']:
log.msg("Not reporting using a collector")
global_options['collector'] = None
start_tor = False
elif config.advanced.get("preferred_backend", "onion") == "onion":
start_tor = True
if (global_options['collector'] and
config.advanced.get("preferred_backend", "onion") == "onion"):
start_tor |= True
return runTestWithDirector(
director=director,
start_tor=start_tor,
global_options=global_options,
create_input_store=create_input_store
)
# this variant version of runWithDirector splits the process in two,
# allowing a single director instance to be reused with multiple decks.
def runWithDaemonDirector(global_options):
"""
Instance the director, parse command line options and start an ooniprobe
test!
"""
from twisted.internet import reactor, protocol
from ooni.director import Director
try:
import pika
from pika import exceptions
from pika.adapters import twisted_connection
except ImportError:
print("Pika is required for queue connection.")
print("Install with \"pip install pika\".")
raise SystemExit(7)
director = Director()
if global_options.get('annotations') is not None:
global_options['annotations'] = setupAnnotations(global_options)
if global_options['no-collector']:
log.msg("Not reporting using a collector")
global_options['collector'] = None
start_tor = False
else:
start_tor = True
finished = defer.Deferred()
@defer.inlineCallbacks
def readmsg(_, channel, queue_object, consumer_tag, counter):
# Wait for a message and decode it.
if counter >= lifetime:
log.msg("Lifetime counter reached; stopping queue consumer")
queue_object.close(LifetimeExceeded())
yield channel.basic_cancel(consumer_tag=consumer_tag)
finished.callback(None)
else:
log.msg("Waiting for message")
try:
ch, method, properties, body = yield queue_object.get()
log.msg("Got message")
data = json.loads(body)
counter += 1
log.msg("Received %d/%d: %s" % (counter, lifetime, data['url'],))
# acknowledge the message
ch.basic_ack(delivery_tag=method.delivery_tag)
d = runTestWithDirector(director=director,
start_tor=start_tor,
global_options=global_options,
url=data['url'].encode('utf8'))
# When the test has been completed, go back to waiting for a message.
d.addCallback(readmsg, channel, queue_object, consumer_tag, counter+1)
except exceptions.AMQPError, v:
log.msg("Error")
log.exception(v)
finished.errback(v)
@defer.inlineCallbacks
def runQueue(connection, name, qos):
# Set up the queue consumer. When a message is received, run readmsg
channel = yield connection.channel()
yield channel.basic_qos(prefetch_count=qos)
queue_object, consumer_tag = yield channel.basic_consume(
queue=name,
no_ack=False)
readmsg(None, channel, queue_object, consumer_tag, 0)
# Create the AMQP connection. This could be refactored to allow test URLs
# to be submitted through an HTTP server interface or something.
urlp = urlparse.urlparse(config.global_options['queue'])
urlargs = dict(urlparse.parse_qsl(urlp.query))
# Pick a random number of messages to process before this worker exits.
lifetime = random.randint(820, 1032)
# AMQP connection details are sent through the cmdline parameter '-Q'
creds = pika.PlainCredentials(urlp.username or 'guest',
urlp.password or 'guest')
parameters = pika.ConnectionParameters(urlp.hostname,
urlp.port or 5672,
urlp.path.rsplit('/',1)[0] or '/',
creds,
heartbeat_interval=120,
)
cc = protocol.ClientCreator(reactor,
twisted_connection.TwistedProtocolConnection,
parameters)
d = cc.connectTCP(urlp.hostname, urlp.port or 5672)
d.addCallback(lambda protocol: protocol.ready)
# start the wait/process sequence.
d.addCallback(runQueue, urlp.path.rsplit('/',1)[-1], int(urlargs.get('qos',1)))
return finished
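The AMQP connection details above are extracted from the queue URL passed via the `-Q` option. As a rough standalone sketch (Python 3; the helper name is invented for illustration) of how such a URL decomposes under the same parsing rules:

```python
from urllib.parse import urlparse, parse_qsl

def parse_queue_url(url):
    """Decompose an AMQP queue URL of the form
    amqp://user:pass@host:port/vhost/queue?qos=N
    following the same rules as runWithDaemonDirector."""
    p = urlparse(url)
    args = dict(parse_qsl(p.query))
    return {
        "username": p.username or "guest",
        "password": p.password or "guest",
        "host": p.hostname,
        "port": p.port or 5672,
        # everything before the last path segment is the vhost
        "vhost": p.path.rsplit('/', 1)[0] or '/',
        # the last path segment is the queue name to consume from
        "queue": p.path.rsplit('/', 1)[-1],
        "qos": int(args.get("qos", 1)),
    }
```

For example, `parse_queue_url("amqp://u:p@mq.example/myvhost/urls?qos=5")` yields vhost `/myvhost`, queue `urls`, the default port 5672 and a prefetch count of 5.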
ooniprobe-2.2.0/ooni/ui/consent-form.md

The [Open Observatory of Network Interference
(OONI)](https://ooni.torproject.org/) is a free software project, under the [Tor
Project](https://www.torproject.org/), which collects and processes network
measurements with the aim of detecting network anomalies, such as censorship and
traffic manipulation.
Running OONI may be against the terms of service of your ISP or legally
questionable in your country. By running OONI you will connect to web services
which may be banned, and use web censorship circumvention methods such as Tor.
The OONI project will publish data submitted by probes, possibly including your
IP address or other identifying information. In addition, your use of OONI will
be clear to anybody who has access to your computer, and to anybody who can
monitor your internet connection (such as your employer, ISP or government).
By running ooniprobe, you are participating as a volunteer in this project. This
form includes information that you should be aware of and consent to *prior* to
running ooniprobe.
## OONI software tests
The OONI project has developed multiple free software tests which are designed to:
* Detect the blocking of websites
* Detect systems responsible for censorship and traffic manipulation
* Evaluate the reachability of [Tor bridges](https://bridges.torproject.org/),
proxies, VPNs, and sensitive domains
Below we provide brief descriptions of how these tests work.
## Test descriptions
The recommended set of tests that users run through the
`oonideckgen` command include the following:
**Web connectivity:** This test examines whether websites are reachable and, if
they are not, attempts to determine whether access to them is blocked through
DNS tampering, TCP connection RST/IP blocking, or a transparent HTTP
proxy. It does so by identifying the resolver of the user, performing a DNS
lookup, attempting to establish a TCP session and by sending HTTP GET requests
to the servers that are hosting tested websites.
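The sequence described above can be sketched in miniature. The following standalone snippet (Python 3; not the actual ooniprobe implementation, and the helper names are invented for illustration) performs the same three probe steps, DNS lookup, TCP connect and HTTP GET, against a single URL:

```python
import socket
from urllib.parse import urlparse

def parse_status_line(line):
    """Extract the status code from a raw HTTP status line, or None."""
    parts = line.split()
    if len(parts) >= 2 and parts[1].isdigit():
        return int(parts[1])
    return None

def check_url(url, timeout=10):
    """Run the three probe steps against one URL and record what worked."""
    result = {"dns": None, "tcp_connect": False, "http_status": None}
    host = urlparse(url).hostname
    # Step 1: DNS lookup
    try:
        result["dns"] = socket.gethostbyname(host)
    except socket.gaierror:
        return result  # resolution failed: nothing more to try
    # Step 2: establish a TCP session on port 80
    try:
        sock = socket.create_connection((result["dns"], 80), timeout=timeout)
    except OSError:
        return result  # TCP-level failure (e.g. RST or IP blocking)
    result["tcp_connect"] = True
    # Step 3: plain HTTP GET request
    request = "GET / HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n" % host
    sock.sendall(request.encode())
    result["http_status"] = parse_status_line(sock.recv(4096).split(b"\r\n", 1)[0])
    sock.close()
    return result
```

A real web connectivity measurement additionally compares these results against a control vantage point in order to decide whether a failure indicates blocking or simply an unreachable site.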
**HTTP invalid request line:** This test tries to detect the presence of network
components (“middle box”) which could be responsible for censorship and/or
traffic manipulation. Instead of sending a normal HTTP request, this test sends
an invalid HTTP request line (containing an invalid HTTP version number, an
invalid field count and a huge request method) to an echo service listening on
the standard HTTP port. If a middle box is present in the tested network, the
invalid HTTP request line will be intercepted by the middle box and this may
trigger error messages which can help identify the proxy technologies.
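As an illustration of what "invalid" means here, the snippet below builds request lines in the same spirit as the test's variants (the exact formats used by ooniprobe differ; see the ts-007 specification):

```python
import random
import string

def random_upper(n):
    """Return n random uppercase ASCII letters."""
    return ''.join(random.choice(string.ascii_uppercase) for _ in range(n))

def invalid_request_lines():
    """Build three out-of-spec request lines: a bogus HTTP version number,
    a line with one field too many, and a huge request method."""
    return [
        # invalid version number, e.g. "GET / HTTP/41.7"
        "GET / HTTP/%d.%d" % (random.randint(10, 99), random.randint(10, 99)),
        # four random fields instead of the expected three
        " ".join(random_upper(5) for _ in range(4)),
        # an absurdly long request method
        "%s / HTTP/1.1" % random_upper(1024),
    ]
```

A plain echo service returns these lines byte for byte, so any rewriting or error response reveals an intercepting middlebox on the path.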
**HTTP header field manipulation:** This test tries to detect the presence of
network components (“middle box”) which could be responsible for censorship
and/or traffic manipulation. It does so by sending HTTP requests which include
valid, but non-canonical HTTP headers to a backend control server which sends
back any data it receives. If we receive the HTTP headers exactly as we sent
them, then we assume that there is no “middle box” in the network. If,
however, such software is present in the network that we are testing, it will
likely normalize the invalid headers that we are sending or add extra headers.
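A minimal sketch of the idea (Python 3; the helper names are invented, and the real test also varies other parts of the request):

```python
import random

def randomize_capitalization(name):
    """Return a header name with each letter's case chosen at random,
    e.g. 'User-Agent' might become 'uSER-agEnT'. Header field names are
    case-insensitive, so this is valid but non-canonical."""
    return ''.join(random.choice([c.upper(), c.lower()]) for c in name)

def headers_tampered(sent, echoed):
    """Compare the exact headers we sent with what the echo backend
    reports receiving. Any difference (normalized capitalization, added
    or dropped headers) suggests a middlebox rewrote the request."""
    return sent != echoed
```

For example, if we send `aCCePT: */*` and the backend reports `Accept: */*`, something on the path normalized the header.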
Another test which attempts to detect traffic manipulation is **Multi-protocol
traceroute**, which constructs packets in such a way that they perform a
traceroute over multiple protocols and ports simultaneously. Other tests
include **Tor bridge reachability**, **Psiphon**, **Lantern**, **OpenVPN** and
**Meek fronted requests**, which examine whether these services work within a
tested network by attempting to connect to them in an automated way.
Further test descriptions can be found here.
## Risks
Many countries have a lengthy history of subjecting digital rights activists to
various forms of abuse that could make it dangerous for individuals in these
countries to run OONI. The use of OONI might therefore subject users to severe
civil, criminal, or extra-judicial penalties, and such sanctions can potentially
include:
* Imprisonment
* Physical assaults
* Large fines
* Receiving threats
* Being placed on government watch lists
* Being targeted for surveillance
While most countries don't have laws which specifically prohibit the use of
network measurement software, it's important to note that the use of OONI can
*still* potentially be criminalized in certain countries under other, broader
laws if, for example, its use is viewed as an illegal or anti-government
activity. OONI users might also face the risk of being criminalized on the
grounds of *national security* if the data obtained and published by running
OONI is viewed as "jeopardizing" the country's external or internal security. In
extreme cases, any form of active network measurement could be illegal, or even
considered a form of espionage.
We therefore strongly urge you to consult with lawyers *prior* to running
ooniprobe. You can also reach out to us with specific inquiries at
**legal@ooni.nu**. Please note though that we are *not* lawyers, but we might be
able to seek legal advice for you or to put you in touch with lawyers who could
address your questions and/or concerns.
Some relevant resources include:
* [Tor Legal FAQ](https://www.eff.org/torchallenge/faq.html)
* [EFF Know Your Rights](https://www.eff.org/issues/know-your-rights)
**Note:** The use of OONI is at your *own risk* in accordance with OONI's
software [license](https://github.com/TheTorProject/ooni-probe/blob/master/LICENSE) and
neither the OONI project nor its parent organization, the Tor Project, can be
held liable.
**Installing ooniprobe**
As with any other software, the usage of ooniprobe can leave traces. As such,
anybody with physical or remote access to your computer might be able to see
that you have downloaded, installed or run OONI.
The installation of [Tor](https://www.torproject.org/) software, which is
designed for online anonymity, is a *prerequisite* for using OONI as all
measurements are by default sent to OONI over Tor. Furthermore, one of the
recommended tests that users run through the `oonideckgen` command line (web
connectivity test) is designed to compare HTTP requests over the network of the
user and over the Tor network. Similarly, OONI's Psiphon, Lantern and OpenVPN
tests require the installation of circumvention software.
We therefore encourage you to consult with a lawyer on the legality of anonymity
software (such as Tor, a VPN or a proxy) *prior* to installing ooniprobe.
To remove traces of software usage, you can re-install your operating system or
wipe your computer and remove everything (operating system, programs and files)
from your hard drive.
**Running ooniprobe**
Third parties (such as your government, ISP and/or employer) monitoring your
internet activity will be able to see all web traffic generated by OONI,
including your IP address, and might be able to link it to you personally.
Many countries employ sophisticated surveillance measures that allow governments
to track individuals' online activities – even if they are using a VPN or a
proxy server to protect their privacy. In such countries, governments might be
able to identify you as an OONI user regardless of what measures you take to
protect your online privacy.
OONI's **[HTTP-invalid-request-line](https://github.com/TheTorProject/ooni-spec/blob/master/test-specs/ts-007-http-invalid-request-line.md)**
test (which is included in oonideckgen) probably presents the *highest risk*,
as its use *might* trigger the suspicion of your ISP (and possibly of your
government). The operators of network components affected by out-of-spec
messages might view them as attacks, and this could potentially lead to
prosecution under **computer misuse laws** (or other laws).
**Testing URLs for censorship**
When running either oonideckgen (OONI's software package) or OONI's **web
connectivity** test, you will connect to and download data from various websites
which are included in the following two lists:
* **Country-specific test list:**
https://github.com/citizenlab/test-lists/tree/master/lists
(search for your country's test list based on its country code)
* **Global test list:**
https://github.com/citizenlab/test-lists/blob/master/lists/global.csv
(including a list of globally accessed websites)
Many websites included in the above lists will likely be controversial and can
include pornography or hate speech, which might be illegal to access in your
country. We therefore recommend that you examine carefully whether you are
willing to take the risk of accessing and downloading data from such websites
through OONI tests, especially if this could potentially lead to various forms
of retribution.
If you are uncertain of the potential implications of connecting to and
downloading data from the websites listed in the above lists, you can pass your
*own* test list with the ooniprobe `-f` command line option.
**Publication of measurements**
The public (including third parties who view the usage of OONI as illegal or
"suspicious") will be able to see the information collected by OONI once it's
published through:
* [OONI Explorer](https://explorer.ooni.torproject.org/world/)
* [OONI's list of measurements](https://measurements.ooni.torproject.org/)
Unless users **[opt-out](https://github.com/TheTorProject/ooni-spec/blob/master/informed-consent/data-policy.md#opt-out)**,
all measurements that are generated
through OONI tests are by default sent to OONI's measurement collector and
automatically published through the above resources.
Published data will include your approximate location, the network (ASN) you are
connecting from, and when you ran ooniprobe. Other identifying information, such
as your IP address, is *not* deliberately collected, but might be included in
HTTP headers or other metadata. The full page content downloaded by OONI could
potentially include further information if, for example, a website includes
tracking codes or custom content based on your network location. Such
information could potentially aid third parties in detecting you as an ooniprobe
user.
## Choices
We provide you with choices regarding which tests to run, which data you would
like to be collected, and whether or not you would like to send your
measurements to our collector for publication, as outlined below.
**Tests**
You can *opt-out* from running all of the tests included in `oonideckgen` by
specifying the test(s) that you want to run and by running it/them manually. You
can view how to run each OONI test through the ooniprobe `-s` command line
option.
You can run each test included in `oonideckgen` separately through the following:
* **Web connectivity test:** `ooniprobe blocking/web_connectivity`
* **HTTP header field manipulation test:** `ooniprobe manipulation/http_header_field_manipulation`
* **HTTP invalid request line test:** `ooniprobe manipulation/http_invalid_request_line`
**Data collection and publication**
OONI software users can *opt-out* from sending OONI's measurement collector
specific types of data by [editing the ooniprobe
configuration](https://github.com/TheTorProject/ooni-probe#configuring-ooniprobe)
file at `~/.ooni/ooniprobe.conf`. Through this file, users can opt-out from
sending OONI the following types of information:
* Country code
* Autonomous System Number (ASN)
By default, OONI does *not* collect users' IP addresses, but users can choose to
*opt-in* (to provide more accurate information) through the above configuration
file.
Users can also choose to *opt-out* from sending OONI's measurement collector any
data at all, by running ooniprobe with the `-n` command line option. This option
is quite often chosen by users who prefer to *not* have their measurements
published, due to potential risks that could emerge as a result of such
publication.
Learn more about how we handle data through our Data Policy.
## Consent
My consent means the following:
I understand the requirements and the risks of running ooniprobe.
I understand that, unless I opt-out (as explained in the previous section), the
results of the tests that I run will by default be sent to the OONI project and
published by it.
PRESS q to leave this page
ooniprobe-2.2.0/ooni/ui/__init__.py

ooniprobe-2.2.0/ooni/ui/web/web.py

from twisted.web import server
from twisted.internet import reactor
from twisted.application import service
from ooni.ui.web.server import WebUIAPI
from ooni.settings import config
class WebUIService(service.MultiService):
"""This multiservice contains the ooniprobe web user interface."""
def __init__(self, director, scheduler, port_number=8842):
service.MultiService.__init__(self)
self.director = director
self.scheduler = scheduler
self.port_number = port_number
def startService(self):
service.MultiService.startService(self)
web_ui_api = WebUIAPI(config, self.director, self.scheduler)
self._port = reactor.listenTCP(
self.port_number,
server.Site(web_ui_api.app.resource()),
interface=config.advanced.webui_address
)
def stopService(self):
service.MultiService.stopService(self)
if self._port:
self._port.stopListening()
ooniprobe-2.2.0/ooni/ui/web/__init__.py

ooniprobe-2.2.0/ooni/ui/web/server.py

from __future__ import print_function
import os
import json
import errno
import string
import random
from functools import wraps
from random import SystemRandom
from glob import glob
from twisted.internet import defer, task, reactor
from twisted.python import usage
from twisted.python.filepath import FilePath, InsecurePath
from twisted.web import static
from klein import Klein
from werkzeug.exceptions import NotFound
from ooni import __version__ as ooniprobe_version
from ooni import errors
from ooni.deck import NGDeck
from ooni.deck.store import DeckNotFound, InputNotFound
from ooni.settings import config
from ooni.utils import log
from ooni.director import DirectorEvent
from ooni.measurements import get_summary, get_measurement, list_measurements
from ooni.measurements import MeasurementNotFound, MeasurementInProgress
from ooni.geoip import probe_ip
class WebUIError(Exception):
def __init__(self, code, message):
self.code = code
self.message = message
def xsrf_protect(check=True):
"""
This is a decorator that implements double submit token CSRF protection.
Basically we set a cookie and ensure that every request contains the
same value inside of the cookie and the request header.
It's based on the assumption that an attacker cannot read the cookie
that is set by the server (since it would be violating the SOP) and hence
is not possible to make a browser trigger requests that contain the
cookie value inside of the requests it sends.
If you wish to disable checking of the token set the value check to False.
This will still lead to the cookie being set.
This decorator needs to be applied after the decorator that registers
the routes.
"""
def deco(f):
@wraps(f)
def wrapper(instance, request, *a, **kw):
should_check = check and instance._enable_xsrf_protection
token_cookie = request.getCookie(b'XSRF-TOKEN')
token_header = request.getHeader(b"X-XSRF-TOKEN")
if (token_cookie != instance._xsrf_token and
instance._enable_xsrf_protection):
request.addCookie(b'XSRF-TOKEN',
instance._xsrf_token,
path=b'/')
if should_check and token_cookie != token_header:
raise WebUIError(404, "Invalid XSRF token")
return f(instance, request, *a, **kw)
return wrapper
return deco
def _requires_value(value, attrs=[]):
def deco(f):
@wraps(f)
def wrapper(instance, request, *a, **kw):
for attr in attrs:
attr_value = getattr(instance, attr)
if attr_value is not value:
raise WebUIError(400, "{0} must be {1}".format(attr,
value))
return f(instance, request, *a, **kw)
return wrapper
return deco
def requires_true(attrs=[]):
"""
This decorator is used to require that a certain set of class attributes are
set to True.
Otherwise it will trigger a WebUIError.
"""
return _requires_value(True, attrs)
def requires_false(attrs=[]):
"""
This decorator is used to require that a certain set of class attributes are
set to False.
Otherwise it will trigger a WebUIError.
"""
return _requires_value(False, attrs)
class LongPoller(object):
def __init__(self, timeout, _reactor=reactor):
self.lock = defer.DeferredLock()
self.deferred_subscribers = []
self._reactor = _reactor
self._timeout = timeout
self.timer = task.LoopingCall(
self.notify,
DirectorEvent("null", "No updates"),
)
self.timer.clock = self._reactor
def start(self):
self.timer.start(self._timeout)
def stop(self):
self.timer.stop()
def _notify(self, lock, event):
for d in self.deferred_subscribers[:]:
assert not d.called, "Deferred is already called"
d.callback(event)
self.deferred_subscribers.remove(d)
self.timer.reset()
lock.release()
def notify(self, event=None):
self.lock.acquire().addCallback(self._notify, event)
def get(self):
d = defer.Deferred()
self.deferred_subscribers.append(d)
return d
class WebUIAPI(object):
app = Klein()
# Maximum number in seconds after which to return a result even if no
# change happened.
_long_polling_timeout = 30
_reactor = reactor
_enable_xsrf_protection = True
def __init__(self, config, director, scheduler, _reactor=reactor):
self._reactor = reactor
self.director = director
self.scheduler = scheduler
self.config = config
self.measurement_path = FilePath(config.measurements_directory)
# We use a double submit token to protect against XSRF
rng = SystemRandom()
token_space = string.letters+string.digits
self._xsrf_token = b''.join([rng.choice(token_space)
for _ in range(30)])
self._director_started = False
self._is_initialized = config.is_initialized()
# We use exponential backoff to trigger retries of the startup of
# the director.
self._director_startup_retries = 0
# Maximum delay should be 30 minutes
self._director_max_retry_delay = 30*60
self.status_poller = LongPoller(
self._long_polling_timeout, _reactor)
self.director_event_poller = LongPoller(
self._long_polling_timeout, _reactor)
# XXX move this elsewhere
self.director_event_poller.start()
self.status_poller.start()
self.director.subscribe(self.handle_director_event)
if self._is_initialized:
self.start_director()
def start_director(self):
log.debug("Starting director")
d = self.director.start()
d.addCallback(self.director_started)
d.addErrback(self.director_startup_failed)
d.addBoth(lambda _: self.status_poller.notify())
@property
def status(self):
quota_warning = None
try:
with open(os.path.join(config.running_path,
"quota_warning")) as in_file:
quota_warning = in_file.read()
except IOError as ioe:
if ioe.errno != errno.ENOENT:
raise
return {
"software_version": ooniprobe_version,
"software_name": "ooniprobe",
"asn": probe_ip.geodata['asn'],
"country_code": probe_ip.geodata['countrycode'],
"director_started": self._director_started,
"initialized": self._is_initialized,
"quota_warning": quota_warning
}
def handle_director_event(self, event):
log.debug("Handling event {0}".format(event.type))
self.director_event_poller.notify(event)
def director_startup_failed(self, failure):
self._director_startup_retries += 1
# We delay the startup using binary exponential backoff with an
# upper bound.
startup_delay = random.uniform(
0, min(2**self._director_startup_retries,
self._director_max_retry_delay)
)
log.err("Failed to start the director, "
"retrying in {0}s".format(startup_delay))
self._reactor.callLater(
startup_delay,
self.start_director
)
def director_started(self, _):
log.debug("Started director")
self._director_started = True
@app.handle_errors(NotFound)
@xsrf_protect(check=False)
def not_found(self, request, _):
request.redirect('/client/')
@app.handle_errors(WebUIError)
@xsrf_protect(check=False)
def web_ui_error(self, request, failure):
error = failure.value
request.setResponseCode(error.code)
return self.render_json({
"error_code": error.code,
"error_message": error.message
}, request)
def render_json(self, obj, request):
json_string = json.dumps(obj) + "\n"
request.setHeader('Content-Type', 'application/json')
request.setHeader('Content-Length', len(json_string))
return json_string
@app.route('/api/notify', methods=["GET"])
@xsrf_protect(check=False)
def api_notify(self, request):
def got_director_event(event):
return self.render_json({
"type": event.type,
"message": event.message
}, request)
d = self.director_event_poller.get()
d.addCallback(got_director_event)
return d
@app.route('/api/status', methods=["GET"])
@xsrf_protect(check=False)
def api_status(self, request):
return self.render_json(self.status, request)
@app.route('/api/status/update', methods=["GET"])
@xsrf_protect(check=False)
def api_status_update(self, request):
def got_status_update(event):
return self.api_status(request)
d = self.status_poller.get()
d.addCallback(got_status_update)
return d
@app.route('/api/initialize', methods=["GET"])
@xsrf_protect(check=False)
@requires_false(attrs=['_is_initialized'])
def api_initialize_get(self, request):
available_decks = []
for deck_id, deck in self.director.deck_store.list():
available_decks.append({
'name': deck.name,
'description': deck.description,
'schedule': deck.schedule,
'enabled': self.director.deck_store.is_enabled(deck_id),
'id': deck_id,
'icon': deck.icon
})
return self.render_json({"available_decks": available_decks}, request)
@app.route('/api/initialize', methods=["POST"])
@xsrf_protect(check=True)
@requires_false(attrs=['_is_initialized'])
def api_initialize(self, request):
try:
initial_configuration = json.load(request.content)
except ValueError:
raise WebUIError(400, 'Invalid JSON message received')
required_keys = ['include_ip', 'include_asn', 'include_country',
'should_upload', 'preferred_backend']
options = {}
for required_key in required_keys:
try:
options[required_key] = initial_configuration[required_key]
except KeyError:
raise WebUIError(400, 'Missing required key {0}'.format(
required_key))
config.create_config_file(**options)
try:
deck_config = initial_configuration['deck_config']
except KeyError:
raise WebUIError(400, 'Missing enabled decks')
for deck_id, enabled in deck_config.items():
try:
if enabled is True:
self.director.deck_store.enable(deck_id)
elif enabled is False:
try:
self.director.deck_store.disable(deck_id)
except DeckNotFound:
# We ignore these errors, because it could be that a deck
# that is marked as disabled is already disabled
pass
except DeckNotFound:
raise WebUIError(404, 'Deck not found')
config.set_initialized()
self.scheduler.refresh_deck_list()
self._is_initialized = True
self.status_poller.notify()
self.start_director()
return self.render_json({"result": "ok"}, request)
@app.route('/api/deck//start', methods=["POST"])
@xsrf_protect(check=True)
@requires_true(attrs=['_director_started', '_is_initialized'])
def api_deck_start(self, request, deck_id):
try:
deck = self.director.deck_store.get(deck_id)
except DeckNotFound:
raise WebUIError(404, "Deck not found")
try:
self.run_deck(deck)
except:
raise WebUIError(500, "Failed to start deck")
return self.render_json({"status": "started " + deck.name}, request)
@app.route('/api/deck', methods=["GET"])
@xsrf_protect(check=False)
@requires_true(attrs=['_is_initialized'])
def api_deck_list(self, request):
deck_list = {'decks': []}
for deck_id, deck in self.director.deck_store.list():
nettests = []
for task in deck.tasks:
if task.type == 'ooni':
assert task.ooni['test_name'] is not None
nettests.append(task.ooni['test_name'])
deck_list['decks'].append({
'id': deck_id,
'name': deck.name,
'icon': deck.icon,
'running': self.director.isDeckRunning(
deck_id, from_schedule=False),
'running_scheduled': self.director.isDeckRunning(
deck_id, from_schedule=True),
'nettests': nettests,
'description': deck.description,
'schedule': deck.schedule,
'enabled': self.director.deck_store.is_enabled(deck_id)
})
return self.render_json(deck_list, request)
@app.route('/api/deck//run', methods=["POST"])
@xsrf_protect(check=True)
@requires_true(attrs=['_director_started', '_is_initialized'])
def api_deck_run(self, request, deck_id):
try:
deck = self.director.deck_store.get(deck_id)
except DeckNotFound:
raise WebUIError(404, "Deck not found")
self.run_deck(deck)
return self.render_json({"status": "starting"}, request)
@app.route('/api/deck//enable', methods=["POST"])
@xsrf_protect(check=True)
@requires_true(attrs=['_director_started', '_is_initialized'])
def api_deck_enable(self, request, deck_id):
try:
self.director.deck_store.enable(deck_id)
except DeckNotFound:
raise WebUIError(404, "Deck not found")
self.scheduler.refresh_deck_list()
return self.render_json({"status": "enabled"}, request)
@app.route('/api/deck//disable', methods=["POST"])
@xsrf_protect(check=True)
@requires_true(attrs=['_director_started', '_is_initialized'])
def api_deck_disable(self, request, deck_id):
try:
self.director.deck_store.disable(deck_id)
except DeckNotFound:
raise WebUIError(404, "Deck not found")
self.scheduler.refresh_deck_list()
return self.render_json({"status": "disabled"}, request)
@defer.inlineCallbacks
def run_deck(self, deck):
# These are dangling deferreds
try:
yield deck.setup()
yield deck.run(self.director, from_schedule=False)
self.director_event_poller.notify(DirectorEvent("success",
"Started Deck "
+ deck.id))
except:
self.director_event_poller.notify(DirectorEvent("error",
"Failed to start deck"))
@app.route('/api/nettest//start', methods=["POST"])
@xsrf_protect(check=True)
@requires_true(attrs=['_director_started', '_is_initialized'])
def api_nettest_start(self, request, test_name):
try:
_ = self.director.netTests[test_name]
except KeyError:
raise WebUIError(500, 'Could not find the specified test')
try:
test_options = json.load(request.content)
except ValueError:
raise WebUIError(500, 'Invalid JSON message received')
test_options["test_name"] = test_name
deck_data = {
"tasks": [
{"ooni": test_options}
]
}
try:
deck = NGDeck()
deck.load(deck_data)
self.run_deck(deck)
except errors.MissingRequiredOption as option_name:
raise WebUIError(
400, 'Missing required option: "{}"'.format(option_name)
)
except usage.UsageError as ue:
raise WebUIError(
400, 'Error in parsing options'
)
except errors.InsufficientPrivileges:
raise WebUIError(
400, 'Insufficient privileges'
)
except Exception as exc:
log.exception(exc)
raise WebUIError(
500, 'Failed to start nettest'
)
return self.render_json({"status": "started"}, request)
@app.route('/api/nettest', methods=["GET"])
@xsrf_protect(check=False)
@requires_true(attrs=['_is_initialized'])
def api_nettest_list(self, request):
return self.render_json(self.director.netTests, request)
@app.route('/api/input', methods=["GET"])
@xsrf_protect(check=False)
@requires_true(attrs=['_is_initialized'])
def api_input_list(self, request):
input_store_list = self.director.input_store.list()
for key, value in input_store_list.items():
value.pop('filepath')
return self.render_json(input_store_list, request)
@app.route('/api/input//content', methods=["GET"])
@xsrf_protect(check=False)
@requires_true(attrs=['_is_initialized'])
def api_input_content(self, request, input_id):
content = self.director.input_store.getContent(input_id)
request.setHeader('Content-Type', 'text/plain')
request.setHeader('Content-Length', len(content))
return content
@app.route('/api/input/', methods=["GET"])
@xsrf_protect(check=False)
@requires_true(attrs=['_is_initialized'])
def api_input_details(self, request, input_id):
input_desc = self.director.input_store.get(input_id)
input_desc.pop('filepath')
return self.render_json(
input_desc, request
)
@app.route('/api/measurement', methods=["GET"])
@xsrf_protect(check=False)
@requires_true(attrs=['_is_initialized'])
def api_measurement_list(self, request):
measurements = list_measurements(order='desc')
for measurement in measurements:
if measurement['running'] == False:
continue
try:
net_test = self.director.activeMeasurements[measurement['id']]
measurement['progress'] = net_test.completionPercentage * 100
except KeyError:
log.err("Did not find measurement with ID %s" % measurement['id'])
return self.render_json({"measurements": measurements}, request)
@app.route('/api/measurement/', methods=["GET"])
@xsrf_protect(check=False)
@requires_true(attrs=['_is_initialized'])
@defer.inlineCallbacks
def api_measurement_summary(self, request, measurement_id):
try:
measurement = get_measurement(measurement_id)
except InsecurePath:
raise WebUIError(500, "invalid measurement id")
except MeasurementNotFound:
raise WebUIError(404, "measurement not found")
except MeasurementInProgress:
raise WebUIError(400, "measurement in progress")
if measurement['completed'] is False:
raise WebUIError(400, "measurement in progress")
summary = yield get_summary(measurement_id)
defer.returnValue(self.render_json(summary, request))
@app.route('/api/measurement/', methods=["DELETE"])
@xsrf_protect(check=True)
@requires_true(attrs=['_is_initialized'])
def api_measurement_delete(self, request, measurement_id):
try:
measurement = get_measurement(measurement_id)
except InsecurePath:
raise WebUIError(500, "invalid measurement id")
except MeasurementNotFound:
raise WebUIError(404, "measurement not found")
if measurement['running'] is True:
raise WebUIError(400, "Measurement running")
try:
measurement_dir = self.measurement_path.child(measurement_id)
measurement_dir.remove()
except:
raise WebUIError(400, "Failed to delete report")
return self.render_json({"result": "ok"}, request)

    @app.route('/api/measurement/<string:measurement_id>/keep', methods=["POST"])
    @xsrf_protect(check=True)
    @requires_true(attrs=['_is_initialized'])
    def api_measurement_keep(self, request, measurement_id):
        try:
            measurement_dir = self.measurement_path.child(measurement_id)
        except InsecurePath:
            raise WebUIError(500, "invalid measurement id")
        # Create an empty "keep" marker file; its presence tells the
        # cleanup of old results to spare this measurement.
        keep_marker = measurement_dir.child("keep")
        with keep_marker.open("w+"):
            pass
        return self.render_json({"status": "ok"}, request)
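
The keep endpoint persists nothing but an empty marker file. A minimal standalone sketch of the same pattern using only the standard library (function names `mark_keep`/`should_keep` are hypothetical, not part of ooniprobe):

```python
from pathlib import Path

def mark_keep(measurement_dir):
    # Create (or truncate) an empty "keep" marker file; its mere
    # presence is the signal, the contents are irrelevant.
    marker = Path(measurement_dir) / "keep"
    marker.touch()
    return marker

def should_keep(measurement_dir):
    # A cleanup pass would check for the marker before deleting.
    return (Path(measurement_dir) / "keep").exists()
```

This keeps the retention flag on disk next to the data it protects, so it survives process restarts without any database.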

    @app.route('/api/measurement/<string:measurement_id>/<int:idx>',
               methods=["GET"])
    @xsrf_protect(check=False)
    @requires_true(attrs=['_is_initialized'])
    def api_measurement_view(self, request, measurement_id, idx):
        try:
            measurement_dir = self.measurement_path.child(measurement_id)
        except InsecurePath:
            raise WebUIError(500, "Invalid measurement id")
        measurements = measurement_dir.child("measurements.njson")
        # Scan the newline-delimited JSON file for the entry on line idx.
        # XXX maybe implement some caching here
        with measurements.open("r") as f:
            r = None
            for f_idx, line in enumerate(f):
                if f_idx == idx:
                    r = json.loads(line)
                    break
        if r is None:
            raise WebUIError(404, "Could not find measurement with this idx")
        return self.render_json(r, request)
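
The lookup above treats the `.njson` file as a line-addressed store: line N holds measurement N. The core of that scan, extracted as a self-contained sketch (the function name `lookup_measurement` is hypothetical):

```python
import json

def lookup_measurement(njson_path, idx):
    # Return the JSON document on line `idx` of a newline-delimited
    # JSON file, or None when the file has fewer lines than idx + 1.
    with open(njson_path) as f:
        for f_idx, line in enumerate(f):
            if f_idx == idx:
                return json.loads(line)
    return None
```

Because the file is streamed with `enumerate`, only one line is parsed per request, at the cost of an O(idx) scan each time, which is why the original carries the "maybe implement some caching" note.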

    @app.route('/api/logs', methods=["GET"])
    @xsrf_protect(check=True)
    @requires_true(attrs=['_is_initialized'])
    def api_get_logs(self, request):
        with open(log.oonilogger.log_filepath) as input_file:
            log_data = input_file.read()
        logs = {
            'latest': log_data,
            'older': []
        }
        # When ?all=1 is passed, also include the rotated log files
        # (<logfile>.1, <logfile>.2, ...).
        if request.args.get('all', False) is not False:
            for log_filepath in glob(log.oonilogger.log_filepath + ".*"):
                with open(log_filepath) as input_file:
                    log_data = input_file.read()
                logs['older'].append(log_data)
            logs['older'].reverse()
        return self.render_json(logs, request)
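
The response shape of `/api/logs` can be sketched as a plain function: the live log file becomes `latest`, and rotated copies become `older`. This is a stdlib-only sketch (the name `collect_logs` is hypothetical); unlike the handler it sorts the glob results, since `glob` makes no ordering guarantee:

```python
from glob import glob

def collect_logs(log_filepath, include_older=False):
    # Build the {"latest": ..., "older": [...]} structure returned by
    # the logs endpoint: the current log file plus, on request, any
    # rotated copies named <log_filepath>.1, <log_filepath>.2, ...
    with open(log_filepath) as f:
        logs = {"latest": f.read(), "older": []}
    if include_older:
        for path in sorted(glob(log_filepath + ".*")):
            with open(path) as f:
                logs["older"].append(f.read())
        logs["older"].reverse()
    return logs
```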

    @app.route('/client/', branch=True)
    @xsrf_protect(check=False)
    def static(self, request):
        return static.File(config.web_ui_directory)