unicorn-4.7.0/0000755000004100000410000000000012236653132013232 5ustar www-datawww-dataunicorn-4.7.0/.mailmap0000644000004100000410000000332312236653132014654 0ustar www-datawww-data# This list is used by "git shortlog" to fixup the ugly faux email addresses # "" that the "git svn" tool creates by default. # Eric Wong started this .mailmap file (and is the maintainer of it...) Eric Wong normalperson # This also includes all the Mongrel contributors that committed to the # Rubyforge SVN repo. Some real names were looked up on rubyforge.org # (http://rubyforge.org/users/$user), but we're not going to expose any email # addresses here without their permission. Austin Godber godber godber Bradley Taylor bktaylor Ezra Zygmuntowicz ezmobius Filipe Lautert filipe Luis Lavena luislavena Matt Pelletier bricolage MenTaLguY mental Nick Sieger nicksieger Rick Olson technoweenie Wayne E. Seguin wayneeseguin Zed A. Shaw why the lucky stiff # Evan had his email address in the git history we branched from anyways Evan Weaver evanweaver unicorn-4.7.0/README0000644000004100000410000001224012236653132014111 0ustar www-datawww-data= Unicorn: Rack HTTP server for fast clients and Unix \Unicorn is an HTTP server for Rack applications designed to only serve fast clients on low-latency, high-bandwidth connections and take advantage of features in Unix/Unix-like kernels. Slow clients should only be served by placing a reverse proxy capable of fully buffering both the request and response in between \Unicorn and slow clients. == Features * Designed for Rack, Unix, fast clients, and ease-of-debugging. We cut out everything that is better supported by the operating system, {nginx}[http://nginx.net/] or {Rack}[http://rack.rubyforge.org/]. * Compatible with both Ruby 1.8 and 1.9. Rubinius support is in-progress. * Process management: \Unicorn will reap and restart workers that die from broken apps. There is no need to manage multiple processes or ports yourself. 
\Unicorn can spawn and manage any number of worker processes you choose to scale to your backend. * Load balancing is done entirely by the operating system kernel. Requests never pile up behind a busy worker process. * Does not care if your application is thread-safe or not, workers all run within their own isolated address space and only serve one client at a time for maximum robustness. * Supports all Rack applications, along with pre-Rack versions of Ruby on Rails via a Rack wrapper. * Builtin reopening of all log files in your application via USR1 signal. This allows logrotate to rotate files atomically and quickly via rename instead of the racy and slow copytruncate method. \Unicorn also takes steps to ensure multi-line log entries from one request all stay within the same file. * nginx-style binary upgrades without losing connections. You can upgrade \Unicorn, your entire application, libraries and even your Ruby interpreter without dropping clients. * before_fork and after_fork hooks in case your application has special needs when dealing with forked processes. These should not be needed when the "preload_app" directive is false (the default). * Can be used with copy-on-write-friendly memory management to save memory (by setting "preload_app" to true). * Able to listen on multiple interfaces including UNIX sockets, each worker process can also bind to a private port via the after_fork hook for easy debugging. * Simple and easy Ruby DSL for configuration. * Decodes chunked transfers on-the-fly, thus allowing upload progress notification to be implemented as well as being able to tunnel arbitrary stream-based protocols over HTTP. == License \Unicorn is copyright 2009 by all contributors (see logs in git). It is based on Mongrel 1.1.5. Mongrel is copyright 2007 Zed A. Shaw and contributors. \Unicorn is licensed under (your choice) of the GPLv2 or later (GPLv3+ preferred), or Ruby (1.8)-specific terms. See the included LICENSE file for details. 
\Unicorn is 100% Free Software. == Install The library consists of a C extension so you'll need a C compiler and Ruby development libraries/headers. You may download the tarball from the Mongrel project page on Rubyforge and run setup.rb after unpacking it: http://rubyforge.org/frs/?group_id=1306 You may also install it via RubyGems on RubyGems.org: gem install unicorn You can get the latest source via git from the following locations (these versions may not be stable): git://bogomips.org/unicorn.git git://repo.or.cz/unicorn.git (mirror) You may browse the code from the web and download the latest snapshot tarballs here: * http://bogomips.org/unicorn.git (cgit) * http://repo.or.cz/w/unicorn.git (gitweb) See the HACKING guide on how to contribute and build prerelease gems from git. == Usage === non-Rails Rack applications In APP_ROOT, run: unicorn === for Rails applications (should work for all 1.2 or later versions) In RAILS_ROOT, run: unicorn_rails \Unicorn will bind to all interfaces on TCP port 8080 by default. You may use the +--listen/-l+ switch to bind to a different address:port or a UNIX socket. === Configuration File(s) \Unicorn will look for the config.ru file used by rackup in APP_ROOT. For deployments, it can use a config file for \Unicorn-specific options specified by the +--config-file/-c+ command-line switch. See Unicorn::Configurator for the syntax of the \Unicorn-specific options. The default settings are designed for maximum out-of-the-box compatibility with existing applications. Most command-line options for other Rack applications (above) are also supported. Run `unicorn -h` or `unicorn_rails -h` to see command-line options. == Disclaimer There is NO WARRANTY whatsoever if anything goes wrong, but {let us know}[link:ISSUES.html] and we'll try our best to fix it. \Unicorn is designed to only serve fast clients either on the local host or a fast LAN. See the PHILOSOPHY and DESIGN documents for more details regarding this. 
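As a sketch of the configuration DSL described above (the path, worker count, and hook bodies are illustrative, not defaults; see Unicorn::Configurator for the full list of directives):

```ruby
# config/unicorn.rb -- loaded via: unicorn -c config/unicorn.rb
worker_processes 4
listen "/tmp/unicorn.sock"   # a UNIX socket; "127.0.0.1:8080" also works
preload_app true             # enables copy-on-write-friendly memory use

before_fork do |server, worker|
  # disconnect shared resources (e.g. a database handle) before forking
end

after_fork do |server, worker|
  # each worker may bind a private port here for easy debugging
end
```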
== Contact All feedback (bug reports, user/development discussion, patches, pull requests) goes to the mailing list/newsgroup. See the ISSUES document for information on the {mailing list}[mailto:mongrel-unicorn@rubyforge.org]. For the latest on \Unicorn releases, you may also finger us at unicorn@bogomips.org or check our NEWS page (and subscribe to our Atom feed). unicorn-4.7.0/SIGNALS0000644000004100000410000001142412236653132014257 0ustar www-datawww-data== Signal handling In general, signals need only be sent to the master process. However, the signals Unicorn uses internally to communicate with the worker processes are documented here as well. With the exception of TTIN/TTOU, signal handling matches the behavior of {nginx}[http://nginx.net/] so it should be possible to easily share process management scripts between Unicorn and nginx. === Master Process * HUP - reloads config file and gracefully restarts all workers. If the "preload_app" directive is false (the default), then workers will also pick up any application code changes when restarted. If "preload_app" is true, then application code changes will have no effect; USR2 + QUIT (see below) must be used to load newer code in this case. When reloading the application, +Gem.refresh+ will be called so updated code for your application can pick up newly installed RubyGems. It is not recommended that you uninstall libraries your application depends on while Unicorn is running, as respawned workers may enter a spawn loop when they fail to load an uninstalled dependency. * INT/TERM - quick shutdown, kills all workers immediately * QUIT - graceful shutdown, waits for workers to finish their current request before finishing. * USR1 - reopen all logs owned by the master and all workers. See Unicorn::Util.reopen_logs for what is considered a log. * USR2 - reexecute the running binary. A separate QUIT should be sent to the original process once the child is verified to be up and running. 
* WINCH - gracefully stops workers but keeps the master running. This will only work for daemonized processes. * TTIN - increment the number of worker processes by one * TTOU - decrement the number of worker processes by one === Worker Processes Sending signals directly to the worker processes should not normally be needed. If the master process is running, any exited worker will be automatically respawned. * INT/TERM - Quick shutdown, immediately exit. Unless WINCH has been sent to the master (or the master is killed), the master process will respawn a worker to replace this one. * QUIT - Gracefully exit after finishing the current request. Unless WINCH has been sent to the master (or the master is killed), the master process will respawn a worker to replace this one. * USR1 - Reopen all logs owned by the worker process. See Unicorn::Util.reopen_logs for what is considered a log. Log files are not reopened until it is done processing the current request, so multiple log lines for one request (as done by Rails) will not be split across multiple logs. It is NOT recommended to send the USR1 signal directly to workers via "killall -USR1 unicorn" if you are using user/group-switching support in your workers. You will encounter incorrect file permissions and workers will need to be respawned. Sending USR1 to the master process first will ensure logs have the correct permissions before the master forwards the USR1 signal to workers. === Procedure to replace a running unicorn executable You may replace a running instance of unicorn with a new one without losing any incoming connections. Doing so will reload all of your application code, Unicorn config, Ruby executable, and all libraries. The only things that will not change (due to OS limitations) are: 1. The path to the unicorn executable script. If you want to change to a different installation of Ruby, you can modify the shebang line to point to your alternative interpreter. 
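In practice, the signals above are delivered with kill(1) using the master's pid file; for a daemonized binary upgrade this might look like the following sketch (the pid file path is hypothetical; ".oldbin" is the suffix unicorn appends to the old master's pid file):

```shell
pid=/path/to/unicorn.pid
kill -USR2 "$(cat $pid)"          # spawn the new master
kill -WINCH "$(cat $pid.oldbin)"  # gracefully stop the old workers
kill -QUIT "$(cat $pid.oldbin)"   # retire the old master when satisfied
```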
The procedure is exactly like that of nginx: 1. Send USR2 to the master process 2. Check your process manager or pid files to see if a new master spawned successfully. If you're using a pid file, the old process will have ".oldbin" appended to its path. You should have two master instances of unicorn running now, both of which will have workers servicing requests. Your process tree should look something like this: unicorn master (old) \_ unicorn worker[0] \_ unicorn worker[1] \_ unicorn worker[2] \_ unicorn worker[3] \_ unicorn master \_ unicorn worker[0] \_ unicorn worker[1] \_ unicorn worker[2] \_ unicorn worker[3] 3. You can now send WINCH to the old master process so only the new workers serve requests. If your unicorn process is bound to an interactive terminal, you can skip this step. Step 5 will be more difficult but you can also skip it if your process is not daemonized. 4. You should now ensure that everything is running correctly with the new workers as the old workers die off. 5. If everything seems ok, then send QUIT to the old master. You're done! If something is broken, then send HUP to the old master to reload the config and restart its workers. Then send QUIT to the new master process. unicorn-4.7.0/FAQ0000644000004100000410000000400412236653132013562 0ustar www-datawww-data= Frequently Asked Questions about Unicorn === I've installed Rack 1.1.x, why can't Unicorn load Rails (2.3.5)? Rails 2.3.5 is not compatible with Rack 1.1.x. Unicorn is compatible with both Rack 1.1.x and Rack 1.0.x, and RubyGems will load the latest version of Rack installed on the system. Uninstalling the Rack 1.1.x gem should solve gem loading issues with Rails 2.3.5. Rails 2.3.6 and later correctly support Rack 1.1.x. === Why are my redirects going to "http" URLs when my site uses https? 
If your site is entirely behind https, then Rack applications that use "rack.url_scheme" can set the following in the Unicorn config file: HttpRequest::DEFAULTS["rack.url_scheme"] = "https" For frameworks that do not use "rack.url_scheme", you can also try setting one or both of the following: HttpRequest::DEFAULTS["HTTPS"] = "on" HttpRequest::DEFAULTS["HTTP_X_FORWARDED_PROTO"] = "https" Otherwise, you can configure your proxy (nginx) to send the "X-Forwarded-Proto: https" header only for parts of the site that use https. For nginx, you can do it with the following line in appropriate "location" blocks of your nginx config file: proxy_set_header X-Forwarded-Proto https; === Why are log messages from Unicorn unformatted when using Rails? Current versions of Rails unfortunately override the default Logger formatter. You can undo this behavior with the default logger in your Unicorn config file: Configurator::DEFAULTS[:logger].formatter = Logger::Formatter.new Of course you can specify an entirely different logger as well with the "logger" directive described by Unicorn::Configurator. === Why am I getting "connection refused"/502 errors under high load? Short answer: your application cannot keep up. You can increase the size of the :backlog parameter if your kernel supports a larger listen() queue, but keep in mind having a large listen queue makes failover to a different machine more difficult. See the TUNING and Unicorn::Configurator documents for more information on :backlog-related topics. 
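The :backlog parameter mentioned above is set per-listener via the "listen" directive in the Unicorn config file; a sketch (the socket path is illustrative; 1024 is the documented default):

```ruby
# in the file given to unicorn -c; see Unicorn::Configurator#listen
listen "/tmp/unicorn.sock", :backlog => 1024
listen 8080, :backlog => 1024   # TCP listeners take :backlog as well
```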
unicorn-4.7.0/ext/0000755000004100000410000000000012236653132014032 5ustar www-datawww-dataunicorn-4.7.0/ext/unicorn_http/0000755000004100000410000000000012236653132016546 5ustar www-datawww-dataunicorn-4.7.0/ext/unicorn_http/extconf.rb0000644000004100000410000000057112236653132020544 0ustar www-datawww-data# -*- encoding: binary -*- require 'mkmf' have_macro("SIZEOF_OFF_T", "ruby.h") or check_sizeof("off_t", "sys/types.h") have_macro("SIZEOF_SIZE_T", "ruby.h") or check_sizeof("size_t", "sys/types.h") have_macro("SIZEOF_LONG", "ruby.h") or check_sizeof("long", "sys/types.h") have_func("rb_str_set_len", "ruby.h") have_func("gmtime_r", "time.h") create_makefile("unicorn_http") unicorn-4.7.0/ext/unicorn_http/common_field_optimization.h0000644000004100000410000000563312236653132024167 0ustar www-datawww-data#ifndef common_field_optimization #define common_field_optimization #include "ruby.h" #include "c_util.h" struct common_field { const signed long len; const char *name; VALUE value; }; /* * A list of common HTTP headers we expect to receive. * This allows us to avoid repeatedly creating identical string * objects to be used with rb_hash_aset(). 
*/ static struct common_field common_http_fields[] = { # define f(N) { (sizeof(N) - 1), N, Qnil } f("ACCEPT"), f("ACCEPT_CHARSET"), f("ACCEPT_ENCODING"), f("ACCEPT_LANGUAGE"), f("ALLOW"), f("AUTHORIZATION"), f("CACHE_CONTROL"), f("CONNECTION"), f("CONTENT_ENCODING"), f("CONTENT_LENGTH"), f("CONTENT_TYPE"), f("COOKIE"), f("DATE"), f("EXPECT"), f("FROM"), f("HOST"), f("IF_MATCH"), f("IF_MODIFIED_SINCE"), f("IF_NONE_MATCH"), f("IF_RANGE"), f("IF_UNMODIFIED_SINCE"), f("KEEP_ALIVE"), /* Firefox sends this */ f("MAX_FORWARDS"), f("PRAGMA"), f("PROXY_AUTHORIZATION"), f("RANGE"), f("REFERER"), f("TE"), f("TRAILER"), f("TRANSFER_ENCODING"), f("UPGRADE"), f("USER_AGENT"), f("VIA"), f("X_FORWARDED_FOR"), /* common for proxies */ f("X_FORWARDED_PROTO"), /* common for proxies */ f("X_REAL_IP"), /* common for proxies */ f("WARNING") # undef f }; #define HTTP_PREFIX "HTTP_" #define HTTP_PREFIX_LEN (sizeof(HTTP_PREFIX) - 1) /* this function is not performance-critical, called only at load time */ static void init_common_fields(void) { int i; struct common_field *cf = common_http_fields; char tmp[64]; memcpy(tmp, HTTP_PREFIX, HTTP_PREFIX_LEN); for(i = ARRAY_SIZE(common_http_fields); --i >= 0; cf++) { /* Rack doesn't like certain headers prefixed with "HTTP_" */ if (!strcmp("CONTENT_LENGTH", cf->name) || !strcmp("CONTENT_TYPE", cf->name)) { cf->value = rb_str_new(cf->name, cf->len); } else { memcpy(tmp + HTTP_PREFIX_LEN, cf->name, cf->len + 1); cf->value = rb_str_new(tmp, HTTP_PREFIX_LEN + cf->len); } cf->value = rb_obj_freeze(cf->value); rb_global_variable(&cf->value); } } /* this function is called for every header set */ static VALUE find_common_field(const char *field, size_t flen) { int i; struct common_field *cf = common_http_fields; for(i = ARRAY_SIZE(common_http_fields); --i >= 0; cf++) { if (cf->len == (long)flen && !memcmp(cf->name, field, flen)) return cf->value; } return Qnil; } /* * We got a strange header that we don't have a memoized value for. 
* Fallback to creating a new string to use as a hash key. */ static VALUE uncommon_field(const char *field, size_t flen) { VALUE f = rb_str_new(NULL, HTTP_PREFIX_LEN + flen); memcpy(RSTRING_PTR(f), HTTP_PREFIX, HTTP_PREFIX_LEN); memcpy(RSTRING_PTR(f) + HTTP_PREFIX_LEN, field, flen); assert(*(RSTRING_PTR(f) + RSTRING_LEN(f)) == '\0' && "string didn't end with \\0"); /* paranoia */ return rb_obj_freeze(f); } #endif /* common_field_optimization_h */ unicorn-4.7.0/ext/unicorn_http/unicorn_http.rl0000644000004100000410000007106612236653132021633 0ustar www-datawww-data/** * Copyright (c) 2009 Eric Wong (all bugs are Eric's fault) * Copyright (c) 2005 Zed A. Shaw * You can redistribute it and/or modify it under the same terms as Ruby 1.8 or * the GPLv2+ (GPLv3+ preferred) */ #include "ruby.h" #include "ext_help.h" #include #include #include #include "common_field_optimization.h" #include "global_variables.h" #include "c_util.h" void init_unicorn_httpdate(void); #define UH_FL_CHUNKED 0x1 #define UH_FL_HASBODY 0x2 #define UH_FL_INBODY 0x4 #define UH_FL_HASTRAILER 0x8 #define UH_FL_INTRAILER 0x10 #define UH_FL_INCHUNK 0x20 #define UH_FL_REQEOF 0x40 #define UH_FL_KAVERSION 0x80 #define UH_FL_HASHEADER 0x100 #define UH_FL_TO_CLEAR 0x200 /* all of these flags need to be set for keepalive to be supported */ #define UH_FL_KEEPALIVE (UH_FL_KAVERSION | UH_FL_REQEOF | UH_FL_HASHEADER) /* * whether or not to trust X-Forwarded-Proto and X-Forwarded-SSL when * setting rack.url_scheme */ static VALUE trust_x_forward = Qtrue; static unsigned long keepalive_requests = 100; /* same as nginx */ /* * Returns the maximum number of keepalive requests a client may make * before the parser refuses to continue. */ static VALUE ka_req(VALUE self) { return ULONG2NUM(keepalive_requests); } /* * Sets the maximum number of keepalive requests a client may make. * A special value of +nil+ causes this to be the maximum value * possible (this is architecture-dependent). 
*/ static VALUE set_ka_req(VALUE self, VALUE val) { keepalive_requests = NIL_P(val) ? ULONG_MAX : NUM2ULONG(val); return ka_req(self); } /* * Sets whether or not the parser will trust X-Forwarded-Proto and * X-Forwarded-SSL headers and set "rack.url_scheme" to "https" accordingly. * Rainbows!/Zbatery installations facing untrusted clients directly * should set this to +false+ */ static VALUE set_xftrust(VALUE self, VALUE val) { if (Qtrue == val || Qfalse == val) trust_x_forward = val; else rb_raise(rb_eTypeError, "must be true or false"); return val; } /* * returns whether or not the parser will trust X-Forwarded-Proto and * X-Forwarded-SSL headers and set "rack.url_scheme" to "https" accordingly */ static VALUE xftrust(VALUE self) { return trust_x_forward; } static size_t MAX_HEADER_LEN = 1024 * (80 + 32); /* same as Mongrel */ /* this is only intended for use with Rainbows! */ static VALUE set_maxhdrlen(VALUE self, VALUE len) { return SIZET2NUM(MAX_HEADER_LEN = NUM2SIZET(len)); } /* keep this small for Rainbows! 
since every client has one */ struct http_parser { int cs; /* Ragel internal state */ unsigned int flags; unsigned long nr_requests; size_t mark; size_t offset; union { /* these 2 fields don't nest */ size_t field; size_t query; } start; union { size_t field_len; /* only used during header processing */ size_t dest_offset; /* only used during body processing */ } s; VALUE buf; VALUE env; VALUE cont; /* Qfalse: unset, Qnil: ignored header, T_STRING: append */ union { off_t content; off_t chunk; } len; }; static ID id_clear, id_set_backtrace, id_response_start_sent; static void finalize_header(struct http_parser *hp); static void parser_raise(VALUE klass, const char *msg) { VALUE exc = rb_exc_new2(klass, msg); VALUE bt = rb_ary_new(); rb_funcall(exc, id_set_backtrace, 1, bt); rb_exc_raise(exc); } #define REMAINING (unsigned long)(pe - p) #define LEN(AT, FPC) (FPC - buffer - hp->AT) #define MARK(M,FPC) (hp->M = (FPC) - buffer) #define PTR_TO(F) (buffer + hp->F) #define STR_NEW(M,FPC) rb_str_new(PTR_TO(M), LEN(M, FPC)) #define STRIPPED_STR_NEW(M,FPC) stripped_str_new(PTR_TO(M), LEN(M, FPC)) #define HP_FL_TEST(hp,fl) ((hp)->flags & (UH_FL_##fl)) #define HP_FL_SET(hp,fl) ((hp)->flags |= (UH_FL_##fl)) #define HP_FL_UNSET(hp,fl) ((hp)->flags &= ~(UH_FL_##fl)) #define HP_FL_ALL(hp,fl) (HP_FL_TEST(hp, fl) == (UH_FL_##fl)) static int is_lws(char c) { return (c == ' ' || c == '\t'); } static VALUE stripped_str_new(const char *str, long len) { long end; for (end = len - 1; end >= 0 && is_lws(str[end]); end--); return rb_str_new(str, end + 1); } /* * handles values of the "Connection:" header, keepalive is implied * for HTTP/1.1 but needs to be explicitly enabled with HTTP/1.0 * Additionally, we require GET/HEAD requests to support keepalive. 
*/ static void hp_keepalive_connection(struct http_parser *hp, VALUE val) { if (STR_CSTR_CASE_EQ(val, "keep-alive")) { /* basically have HTTP/1.0 masquerade as HTTP/1.1+ */ HP_FL_SET(hp, KAVERSION); } else if (STR_CSTR_CASE_EQ(val, "close")) { /* * it doesn't matter what HTTP version or request method we have, * if a client says "Connection: close", we disable keepalive */ HP_FL_UNSET(hp, KAVERSION); } else { /* * client could've sent anything, ignore it for now. Maybe * "HP_FL_UNSET(hp, KAVERSION);" just in case? * Raising an exception might be too mean... */ } } static void request_method(struct http_parser *hp, const char *ptr, size_t len) { VALUE v = rb_str_new(ptr, len); rb_hash_aset(hp->env, g_request_method, v); } static void http_version(struct http_parser *hp, const char *ptr, size_t len) { VALUE v; HP_FL_SET(hp, HASHEADER); if (CONST_MEM_EQ("HTTP/1.1", ptr, len)) { /* HTTP/1.1 implies keepalive unless "Connection: close" is set */ HP_FL_SET(hp, KAVERSION); v = g_http_11; } else if (CONST_MEM_EQ("HTTP/1.0", ptr, len)) { v = g_http_10; } else { v = rb_str_new(ptr, len); } rb_hash_aset(hp->env, g_server_protocol, v); rb_hash_aset(hp->env, g_http_version, v); } static inline void hp_invalid_if_trailer(struct http_parser *hp) { if (HP_FL_TEST(hp, INTRAILER)) parser_raise(eHttpParserError, "invalid Trailer"); } static void write_cont_value(struct http_parser *hp, char *buffer, const char *p) { char *vptr; long end; long len = LEN(mark, p); long cont_len; if (hp->cont == Qfalse) parser_raise(eHttpParserError, "invalid continuation line"); if (NIL_P(hp->cont)) return; /* we're ignoring this header (probably Host:) */ assert(TYPE(hp->cont) == T_STRING && "continuation line is not a string"); assert(hp->mark > 0 && "impossible continuation line offset"); if (len == 0) return; cont_len = RSTRING_LEN(hp->cont); if (cont_len > 0) { --hp->mark; len = LEN(mark, p); } vptr = PTR_TO(mark); /* normalize tab to space */ if (cont_len > 0) { assert((' ' == *vptr || '\t' == 
*vptr) && "invalid leading white space"); *vptr = ' '; } for (end = len - 1; end >= 0 && is_lws(vptr[end]); end--); rb_str_buf_cat(hp->cont, vptr, end + 1); } static void write_value(struct http_parser *hp, const char *buffer, const char *p) { VALUE f = find_common_field(PTR_TO(start.field), hp->s.field_len); VALUE v; VALUE e; VALIDATE_MAX_LENGTH(LEN(mark, p), FIELD_VALUE); v = LEN(mark, p) == 0 ? rb_str_buf_new(128) : STRIPPED_STR_NEW(mark, p); if (NIL_P(f)) { const char *field = PTR_TO(start.field); size_t flen = hp->s.field_len; VALIDATE_MAX_LENGTH(flen, FIELD_NAME); /* * ignore "Version" headers since they conflict with the HTTP_VERSION * rack env variable. */ if (CONST_MEM_EQ("VERSION", field, flen)) { hp->cont = Qnil; return; } f = uncommon_field(field, flen); } else if (f == g_http_connection) { hp_keepalive_connection(hp, v); } else if (f == g_content_length) { hp->len.content = parse_length(RSTRING_PTR(v), RSTRING_LEN(v)); if (hp->len.content < 0) parser_raise(eHttpParserError, "invalid Content-Length"); if (hp->len.content != 0) HP_FL_SET(hp, HASBODY); hp_invalid_if_trailer(hp); } else if (f == g_http_transfer_encoding) { if (STR_CSTR_CASE_EQ(v, "chunked")) { HP_FL_SET(hp, CHUNKED); HP_FL_SET(hp, HASBODY); } hp_invalid_if_trailer(hp); } else if (f == g_http_trailer) { HP_FL_SET(hp, HASTRAILER); hp_invalid_if_trailer(hp); } else { assert(TYPE(f) == T_STRING && "memoized object is not a string"); assert_frozen(f); } e = rb_hash_aref(hp->env, f); if (NIL_P(e)) { hp->cont = rb_hash_aset(hp->env, f, v); } else if (f == g_http_host) { /* * ignored, absolute URLs in REQUEST_URI take precedence over * the Host: header (ref: rfc 2616, section 5.2.1) */ hp->cont = Qnil; } else { rb_str_buf_cat(e, ",", 1); hp->cont = rb_str_buf_append(e, v); } } /** Machine **/ %%{ machine http_parser; action mark {MARK(mark, fpc); } action start_field { MARK(start.field, fpc); } action snake_upcase_field { snake_upcase_char(deconst(fpc)); } action downcase_char { 
downcase_char(deconst(fpc)); } action write_field { hp->s.field_len = LEN(start.field, fpc); } action start_value { MARK(mark, fpc); } action write_value { write_value(hp, buffer, fpc); } action write_cont_value { write_cont_value(hp, buffer, fpc); } action request_method { request_method(hp, PTR_TO(mark), LEN(mark, fpc)); } action scheme { rb_hash_aset(hp->env, g_rack_url_scheme, STR_NEW(mark, fpc)); } action host { rb_hash_aset(hp->env, g_http_host, STR_NEW(mark, fpc)); } action request_uri { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, fpc), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, fpc)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } action fragment { VALIDATE_MAX_URI_LENGTH(LEN(mark, fpc), FRAGMENT); rb_hash_aset(hp->env, g_fragment, STR_NEW(mark, fpc)); } action start_query {MARK(start.query, fpc); } action query_string { VALIDATE_MAX_URI_LENGTH(LEN(start.query, fpc), QUERY_STRING); rb_hash_aset(hp->env, g_query_string, STR_NEW(start.query, fpc)); } action http_version { http_version(hp, PTR_TO(mark), LEN(mark, fpc)); } action request_path { VALUE val; VALIDATE_MAX_URI_LENGTH(LEN(mark, fpc), REQUEST_PATH); val = rb_hash_aset(hp->env, g_request_path, STR_NEW(mark, fpc)); /* rack says PATH_INFO must start with "/" or be empty */ if (!STR_CSTR_EQ(val, "*")) rb_hash_aset(hp->env, g_path_info, val); } action add_to_chunk_size { hp->len.chunk = step_incr(hp->len.chunk, fc, 16); if (hp->len.chunk < 0) parser_raise(eHttpParserError, "invalid chunk size"); } action header_done { finalize_header(hp); cs = http_parser_first_final; if (HP_FL_TEST(hp, HASBODY)) { HP_FL_SET(hp, INBODY); if (HP_FL_TEST(hp, CHUNKED)) cs = http_parser_en_ChunkedBody; } else { HP_FL_SET(hp, REQEOF); assert(!HP_FL_TEST(hp, CHUNKED) 
&& "chunked encoding without body!"); } /* * go back to Ruby so we can call the Rack application, we'll reenter * the parser iff the body needs to be processed. */ goto post_exec; } action end_trailers { cs = http_parser_first_final; goto post_exec; } action end_chunked_body { HP_FL_SET(hp, INTRAILER); cs = http_parser_en_Trailers; ++p; assert(p <= pe && "buffer overflow after chunked body"); goto post_exec; } action skip_chunk_data { skip_chunk_data_hack: { size_t nr = MIN((size_t)hp->len.chunk, REMAINING); memcpy(RSTRING_PTR(hp->cont) + hp->s.dest_offset, fpc, nr); hp->s.dest_offset += nr; hp->len.chunk -= nr; p += nr; assert(hp->len.chunk >= 0 && "negative chunk length"); if ((size_t)hp->len.chunk > REMAINING) { HP_FL_SET(hp, INCHUNK); goto post_exec; } else { fhold; fgoto chunk_end; } }} include unicorn_http_common "unicorn_http_common.rl"; }%% /** Data **/ %% write data; static void http_parser_init(struct http_parser *hp) { int cs = 0; hp->flags = 0; hp->mark = 0; hp->offset = 0; hp->start.field = 0; hp->s.field_len = 0; hp->len.content = 0; hp->cont = Qfalse; /* zero on MRI, should be optimized away by above */ %% write init; hp->cs = cs; } /** exec **/ static void http_parser_execute(struct http_parser *hp, char *buffer, size_t len) { const char *p, *pe; int cs = hp->cs; size_t off = hp->offset; if (cs == http_parser_first_final) return; assert(off <= len && "offset past end of buffer"); p = buffer+off; pe = buffer+len; assert((void *)(pe - p) == (void *)(len - off) && "pointers aren't same distance"); if (HP_FL_TEST(hp, INCHUNK)) { HP_FL_UNSET(hp, INCHUNK); goto skip_chunk_data_hack; } %% write exec; post_exec: /* "_out:" also goes here */ if (hp->cs != http_parser_error) hp->cs = cs; hp->offset = p - buffer; assert(p <= pe && "buffer overflow after parsing execute"); assert(hp->offset <= len && "offset longer than length"); } static struct http_parser *data_get(VALUE self) { struct http_parser *hp; Data_Get_Struct(self, struct http_parser, hp); assert(hp 
&& "failed to extract http_parser struct"); return hp; } /* * set rack.url_scheme to "https" or "http", no others are allowed by Rack * this resembles the Rack::Request#scheme method as of rack commit * 35bb5ba6746b5d346de9202c004cc926039650c7 */ static void set_url_scheme(VALUE env, VALUE *server_port) { VALUE scheme = rb_hash_aref(env, g_rack_url_scheme); if (NIL_P(scheme)) { if (trust_x_forward == Qfalse) { scheme = g_http; } else { scheme = rb_hash_aref(env, g_http_x_forwarded_ssl); if (!NIL_P(scheme) && STR_CSTR_EQ(scheme, "on")) { *server_port = g_port_443; scheme = g_https; } else { scheme = rb_hash_aref(env, g_http_x_forwarded_proto); if (NIL_P(scheme)) { scheme = g_http; } else { long len = RSTRING_LEN(scheme); if (len >= 5 && !memcmp(RSTRING_PTR(scheme), "https", 5)) { if (len != 5) scheme = g_https; *server_port = g_port_443; } else { scheme = g_http; } } } } rb_hash_aset(env, g_rack_url_scheme, scheme); } else if (STR_CSTR_EQ(scheme, "https")) { *server_port = g_port_443; } else { assert(*server_port == g_port_80 && "server_port not set"); } } /* * Parse and set the SERVER_NAME and SERVER_PORT variables * Not supporting X-Forwarded-Host/X-Forwarded-Port in here since * anybody who needs them is using an unsupported configuration and/or * incompetent. Rack::Request will handle X-Forwarded-{Port,Host} just * fine. */ static void set_server_vars(VALUE env, VALUE *server_port) { VALUE server_name = g_localhost; VALUE host = rb_hash_aref(env, g_http_host); if (!NIL_P(host)) { char *host_ptr = RSTRING_PTR(host); long host_len = RSTRING_LEN(host); char *colon; if (*host_ptr == '[') { /* ipv6 address format */ char *rbracket = memchr(host_ptr + 1, ']', host_len - 1); if (rbracket) colon = (rbracket[1] == ':') ? 
rbracket + 1 : NULL; else colon = memchr(host_ptr + 1, ':', host_len - 1); } else { colon = memchr(host_ptr, ':', host_len); } if (colon) { long port_start = colon - host_ptr + 1; server_name = rb_str_substr(host, 0, colon - host_ptr); if ((host_len - port_start) > 0) *server_port = rb_str_substr(host, port_start, host_len); } else { server_name = host; } } rb_hash_aset(env, g_server_name, server_name); rb_hash_aset(env, g_server_port, *server_port); } static void finalize_header(struct http_parser *hp) { VALUE server_port = g_port_80; set_url_scheme(hp->env, &server_port); set_server_vars(hp->env, &server_port); if (!HP_FL_TEST(hp, HASHEADER)) rb_hash_aset(hp->env, g_server_protocol, g_http_09); /* rack requires QUERY_STRING */ if (NIL_P(rb_hash_aref(hp->env, g_query_string))) rb_hash_aset(hp->env, g_query_string, rb_str_new(NULL, 0)); } static void hp_mark(void *ptr) { struct http_parser *hp = ptr; rb_gc_mark(hp->buf); rb_gc_mark(hp->env); rb_gc_mark(hp->cont); } static VALUE HttpParser_alloc(VALUE klass) { struct http_parser *hp; return Data_Make_Struct(klass, struct http_parser, hp_mark, -1, hp); } /** * call-seq: * parser.new => parser * * Creates a new parser. */ static VALUE HttpParser_init(VALUE self) { struct http_parser *hp = data_get(self); http_parser_init(hp); hp->buf = rb_str_new(NULL, 0); hp->env = rb_hash_new(); hp->nr_requests = keepalive_requests; return self; } /** * call-seq: * parser.clear => parser * * Resets the parser to its initial state so that you can reuse it * rather than making new ones. */ static VALUE HttpParser_clear(VALUE self) { struct http_parser *hp = data_get(self); http_parser_init(hp); rb_funcall(hp->env, id_clear, 0); rb_ivar_set(self, id_response_start_sent, Qfalse); return self; } /** * call-seq: * parser.dechunk! 
=> parser * * Resets the parser to a state suitable for dechunking response bodies * */ static VALUE HttpParser_dechunk_bang(VALUE self) { struct http_parser *hp = data_get(self); http_parser_init(hp); /* * we don't care about trailers in dechunk-only mode, * but if we did we'd set UH_FL_HASTRAILER and clear hp->env */ if (0) { rb_funcall(hp->env, id_clear, 0); hp->flags = UH_FL_HASTRAILER; } hp->flags |= UH_FL_HASBODY | UH_FL_INBODY | UH_FL_CHUNKED; hp->cs = http_parser_en_ChunkedBody; return self; } /** * call-seq: * parser.reset => nil * * Resets the parser to its initial state so that you can reuse it * rather than making new ones. * * This method is deprecated and to be removed in Unicorn 4.x */ static VALUE HttpParser_reset(VALUE self) { static int warned; if (!warned) { warned = 1; /* only warn once per process */ rb_warn("Unicorn::HttpParser#reset is deprecated; " "use Unicorn::HttpParser#clear instead"); } HttpParser_clear(self); return Qnil; } static void advance_str(VALUE str, off_t nr) { long len = RSTRING_LEN(str); if (len == 0) return; rb_str_modify(str); assert(nr <= len && "trying to advance past end of buffer"); len -= nr; if (len > 0) /* unlikely, len is usually 0 */ memmove(RSTRING_PTR(str), RSTRING_PTR(str) + nr, len); rb_str_set_len(str, len); } /** * call-seq: * parser.content_length => nil or Integer * * Returns the number of bytes left to run through HttpParser#filter_body. * This will initially be the value of the "Content-Length" HTTP header * after header parsing is complete and will decrease in value as * HttpParser#filter_body is called for each chunk. This should return * zero for requests with no body. * * This will return nil on "Transfer-Encoding: chunked" requests. */ static VALUE HttpParser_content_length(VALUE self) { struct http_parser *hp = data_get(self); return HP_FL_TEST(hp, CHUNKED) ?
Qnil : OFFT2NUM(hp->len.content); } /** * Document-method: parse * call-seq: * parser.parse => env or nil * * Takes a Hash and a String of data, parses the String of data filling * in the Hash returning the Hash if parsing is finished, nil otherwise * When returning the env Hash, it may modify data to point to where * body processing should begin. * * Raises HttpParserError if there are parsing errors. */ static VALUE HttpParser_parse(VALUE self) { struct http_parser *hp = data_get(self); VALUE data = hp->buf; if (HP_FL_TEST(hp, TO_CLEAR)) HttpParser_clear(self); http_parser_execute(hp, RSTRING_PTR(data), RSTRING_LEN(data)); if (hp->offset > MAX_HEADER_LEN) parser_raise(e413, "HTTP header is too large"); if (hp->cs == http_parser_first_final || hp->cs == http_parser_en_ChunkedBody) { advance_str(data, hp->offset + 1); hp->offset = 0; if (HP_FL_TEST(hp, INTRAILER)) HP_FL_SET(hp, REQEOF); return hp->env; } if (hp->cs == http_parser_error) parser_raise(eHttpParserError, "Invalid HTTP format, parsing fails."); return Qnil; } /** * Document-method: parse * call-seq: * parser.add_parse(buffer) => env or nil * * adds the contents of +buffer+ to the internal buffer and attempts to * continue parsing. Returns the +env+ Hash on success or nil if more * data is needed. * * Raises HttpParserError if there are parsing errors. 
*/ static VALUE HttpParser_add_parse(VALUE self, VALUE buffer) { struct http_parser *hp = data_get(self); Check_Type(buffer, T_STRING); rb_str_buf_append(hp->buf, buffer); return HttpParser_parse(self); } /** * Document-method: trailers * call-seq: * parser.trailers(req, data) => req or nil * * This is an alias for HttpParser#headers */ /** * Document-method: headers */ static VALUE HttpParser_headers(VALUE self, VALUE env, VALUE buf) { struct http_parser *hp = data_get(self); hp->env = env; hp->buf = buf; return HttpParser_parse(self); } static int chunked_eof(struct http_parser *hp) { return ((hp->cs == http_parser_first_final) || HP_FL_TEST(hp, INTRAILER)); } /** * call-seq: * parser.body_eof? => true or false * * Detects if we're done filtering the body or not. This can be used * to detect when to stop calling HttpParser#filter_body. */ static VALUE HttpParser_body_eof(VALUE self) { struct http_parser *hp = data_get(self); if (HP_FL_TEST(hp, CHUNKED)) return chunked_eof(hp) ? Qtrue : Qfalse; return hp->len.content == 0 ? Qtrue : Qfalse; } /** * call-seq: * parser.keepalive? => true or false * * This should be used to detect if a request can really handle * keepalives and pipelining. Currently, the rules are: * * 1. MUST be a GET or HEAD request * 2. MUST be HTTP/1.1 +or+ HTTP/1.0 with "Connection: keep-alive" * 3. MUST NOT have "Connection: close" set */ static VALUE HttpParser_keepalive(VALUE self) { struct http_parser *hp = data_get(self); return HP_FL_ALL(hp, KEEPALIVE) ? Qtrue : Qfalse; } /** * call-seq: * parser.next? => true or false * * Exactly like HttpParser#keepalive?, except it will reset the internal * parser state on next parse if it returns true. It will also respect * the maximum *keepalive_requests* value and return false if that is * reached. 
*/ static VALUE HttpParser_next(VALUE self) { struct http_parser *hp = data_get(self); if ((HP_FL_ALL(hp, KEEPALIVE)) && (hp->nr_requests-- != 0)) { HP_FL_SET(hp, TO_CLEAR); return Qtrue; } return Qfalse; } /** * call-seq: * parser.headers? => true or false * * This should be used to detect if a request has headers (and if * the response will have headers as well). HTTP/0.9 requests * should return false, all subsequent HTTP versions will return true */ static VALUE HttpParser_has_headers(VALUE self) { struct http_parser *hp = data_get(self); return HP_FL_TEST(hp, HASHEADER) ? Qtrue : Qfalse; } static VALUE HttpParser_buf(VALUE self) { return data_get(self)->buf; } static VALUE HttpParser_env(VALUE self) { return data_get(self)->env; } /** * call-seq: * parser.filter_body(dst, src) => nil/src * * Takes a String of +src+, will modify data if dechunking is done. * Returns +nil+ if there is more data left to process. Returns * +src+ if body processing is complete. When returning +src+, * it may modify +src+ so the start of the string points to where * the body ended so that trailer processing can begin. * * Raises HttpParserError if there are dechunking errors. * Basically this is a glorified memcpy(3) that copies +src+ * into +buf+ while filtering it through the dechunker. 
*/ static VALUE HttpParser_filter_body(VALUE self, VALUE dst, VALUE src) { struct http_parser *hp = data_get(self); char *srcptr; long srclen; srcptr = RSTRING_PTR(src); srclen = RSTRING_LEN(src); StringValue(dst); if (HP_FL_TEST(hp, CHUNKED)) { if (!chunked_eof(hp)) { rb_str_modify(dst); rb_str_resize(dst, srclen); /* we can never copy more than srclen bytes */ hp->s.dest_offset = 0; hp->cont = dst; hp->buf = src; http_parser_execute(hp, srcptr, srclen); if (hp->cs == http_parser_error) parser_raise(eHttpParserError, "Invalid HTTP format, parsing fails."); assert(hp->s.dest_offset <= hp->offset && "destination buffer overflow"); advance_str(src, hp->offset); rb_str_set_len(dst, hp->s.dest_offset); if (RSTRING_LEN(dst) == 0 && chunked_eof(hp)) { assert(hp->len.chunk == 0 && "chunk at EOF but more to parse"); } else { src = Qnil; } } } else { /* no need to enter the Ragel machine for unchunked transfers */ assert(hp->len.content >= 0 && "negative Content-Length"); if (hp->len.content > 0) { long nr = MIN(srclen, hp->len.content); rb_str_modify(dst); rb_str_resize(dst, nr); /* * using rb_str_replace() to avoid memcpy() doesn't help in * most cases because a GC-aware programmer will pass an explicit * buffer to env["rack.input"].read and reuse the buffer in a loop. * This causes copy-on-write behavior to be triggered anyways * when the +src+ buffer is modified (when reading off the socket). 
*/ hp->buf = src; memcpy(RSTRING_PTR(dst), srcptr, nr); hp->len.content -= nr; if (hp->len.content == 0) { HP_FL_SET(hp, REQEOF); hp->cs = http_parser_first_final; } advance_str(src, nr); src = Qnil; } } hp->offset = 0; /* for trailer parsing */ return src; } #define SET_GLOBAL(var,str) do { \ var = find_common_field(str, sizeof(str) - 1); \ assert(!NIL_P(var) && "missed global field"); \ } while (0) void Init_unicorn_http(void) { VALUE mUnicorn, cHttpParser; mUnicorn = rb_const_get(rb_cObject, rb_intern("Unicorn")); cHttpParser = rb_define_class_under(mUnicorn, "HttpParser", rb_cObject); eHttpParserError = rb_define_class_under(mUnicorn, "HttpParserError", rb_eIOError); e413 = rb_define_class_under(mUnicorn, "RequestEntityTooLargeError", eHttpParserError); e414 = rb_define_class_under(mUnicorn, "RequestURITooLongError", eHttpParserError); init_globals(); rb_define_alloc_func(cHttpParser, HttpParser_alloc); rb_define_method(cHttpParser, "initialize", HttpParser_init, 0); rb_define_method(cHttpParser, "clear", HttpParser_clear, 0); rb_define_method(cHttpParser, "reset", HttpParser_reset, 0); rb_define_method(cHttpParser, "dechunk!", HttpParser_dechunk_bang, 0); rb_define_method(cHttpParser, "parse", HttpParser_parse, 0); rb_define_method(cHttpParser, "add_parse", HttpParser_add_parse, 1); rb_define_method(cHttpParser, "headers", HttpParser_headers, 2); rb_define_method(cHttpParser, "trailers", HttpParser_headers, 2); rb_define_method(cHttpParser, "filter_body", HttpParser_filter_body, 2); rb_define_method(cHttpParser, "content_length", HttpParser_content_length, 0); rb_define_method(cHttpParser, "body_eof?", HttpParser_body_eof, 0); rb_define_method(cHttpParser, "keepalive?", HttpParser_keepalive, 0); rb_define_method(cHttpParser, "headers?", HttpParser_has_headers, 0); rb_define_method(cHttpParser, "next?", HttpParser_next, 0); rb_define_method(cHttpParser, "buf", HttpParser_buf, 0); rb_define_method(cHttpParser, "env", HttpParser_env, 0); /* * The maximum size of a
single chunk when using chunked transfer encoding. * This is only a theoretical maximum used to detect errors in clients, * it is highly unlikely to encounter clients that send more than * several kilobytes at once. */ rb_define_const(cHttpParser, "CHUNK_MAX", OFFT2NUM(UH_OFF_T_MAX)); /* * The maximum size of the body as specified by Content-Length. * This is only a theoretical maximum, the actual limit is subject * to the limits of the file system used for +Dir.tmpdir+. */ rb_define_const(cHttpParser, "LENGTH_MAX", OFFT2NUM(UH_OFF_T_MAX)); /* default value for keepalive_requests */ rb_define_const(cHttpParser, "KEEPALIVE_REQUESTS_DEFAULT", ULONG2NUM(keepalive_requests)); rb_define_singleton_method(cHttpParser, "keepalive_requests", ka_req, 0); rb_define_singleton_method(cHttpParser, "keepalive_requests=", set_ka_req, 1); rb_define_singleton_method(cHttpParser, "trust_x_forwarded=", set_xftrust, 1); rb_define_singleton_method(cHttpParser, "trust_x_forwarded?", xftrust, 0); rb_define_singleton_method(cHttpParser, "max_header_len=", set_maxhdrlen, 1); init_common_fields(); SET_GLOBAL(g_http_host, "HOST"); SET_GLOBAL(g_http_trailer, "TRAILER"); SET_GLOBAL(g_http_transfer_encoding, "TRANSFER_ENCODING"); SET_GLOBAL(g_content_length, "CONTENT_LENGTH"); SET_GLOBAL(g_http_connection, "CONNECTION"); id_clear = rb_intern("clear"); id_set_backtrace = rb_intern("set_backtrace"); id_response_start_sent = rb_intern("@response_start_sent"); init_unicorn_httpdate(); } #undef SET_GLOBAL unicorn-4.7.0/ext/unicorn_http/global_variables.h0000644000004100000410000000612012236653132022206 0ustar www-datawww-data#ifndef global_variables_h #define global_variables_h static VALUE eHttpParserError; static VALUE e413; static VALUE e414; static VALUE g_rack_url_scheme; static VALUE g_request_method; static VALUE g_request_uri; static VALUE g_fragment; static VALUE g_query_string; static VALUE g_http_version; static VALUE g_request_path; static VALUE g_path_info; static VALUE g_server_name; 
static VALUE g_server_port; static VALUE g_server_protocol; static VALUE g_http_host; static VALUE g_http_x_forwarded_proto; static VALUE g_http_x_forwarded_ssl; static VALUE g_http_transfer_encoding; static VALUE g_content_length; static VALUE g_http_trailer; static VALUE g_http_connection; static VALUE g_port_80; static VALUE g_port_443; static VALUE g_localhost; static VALUE g_http; static VALUE g_https; static VALUE g_http_09; static VALUE g_http_10; static VALUE g_http_11; /** Defines common length and error messages for input length validation. */ #define DEF_MAX_LENGTH(N, length) \ static const size_t MAX_##N##_LENGTH = length; \ static const char * const MAX_##N##_LENGTH_ERR = \ "HTTP element " # N " is longer than the " # length " allowed length." NORETURN(static void parser_raise(VALUE klass, const char *)); /** * Validates the max length of given input and throws an HttpParserError * exception if over. */ #define VALIDATE_MAX_LENGTH(len, N) do { \ if (len > MAX_##N##_LENGTH) \ parser_raise(eHttpParserError, MAX_##N##_LENGTH_ERR); \ } while (0) #define VALIDATE_MAX_URI_LENGTH(len, N) do { \ if (len > MAX_##N##_LENGTH) \ parser_raise(e414, MAX_##N##_LENGTH_ERR); \ } while (0) /** Defines global strings in the init method. 
*/ #define DEF_GLOBAL(N, val) do { \ g_##N = rb_obj_freeze(rb_str_new(val, sizeof(val) - 1)); \ rb_global_variable(&g_##N); \ } while (0) /* Defines the maximum allowed lengths for various input elements.*/ DEF_MAX_LENGTH(FIELD_NAME, 256); DEF_MAX_LENGTH(FIELD_VALUE, 80 * 1024); DEF_MAX_LENGTH(REQUEST_URI, 1024 * 15); DEF_MAX_LENGTH(FRAGMENT, 1024); /* Don't know if this length is specified somewhere or not */ DEF_MAX_LENGTH(REQUEST_PATH, 4096); /* common PATH_MAX on modern systems */ DEF_MAX_LENGTH(QUERY_STRING, (1024 * 10)); static void init_globals(void) { DEF_GLOBAL(rack_url_scheme, "rack.url_scheme"); DEF_GLOBAL(request_method, "REQUEST_METHOD"); DEF_GLOBAL(request_uri, "REQUEST_URI"); DEF_GLOBAL(fragment, "FRAGMENT"); DEF_GLOBAL(query_string, "QUERY_STRING"); DEF_GLOBAL(http_version, "HTTP_VERSION"); DEF_GLOBAL(request_path, "REQUEST_PATH"); DEF_GLOBAL(path_info, "PATH_INFO"); DEF_GLOBAL(server_name, "SERVER_NAME"); DEF_GLOBAL(server_port, "SERVER_PORT"); DEF_GLOBAL(server_protocol, "SERVER_PROTOCOL"); DEF_GLOBAL(http_x_forwarded_proto, "HTTP_X_FORWARDED_PROTO"); DEF_GLOBAL(http_x_forwarded_ssl, "HTTP_X_FORWARDED_SSL"); DEF_GLOBAL(port_80, "80"); DEF_GLOBAL(port_443, "443"); DEF_GLOBAL(localhost, "localhost"); DEF_GLOBAL(http, "http"); DEF_GLOBAL(https, "https"); DEF_GLOBAL(http_11, "HTTP/1.1"); DEF_GLOBAL(http_10, "HTTP/1.0"); DEF_GLOBAL(http_09, "HTTP/0.9"); } #undef DEF_GLOBAL #endif /* global_variables_h */ unicorn-4.7.0/ext/unicorn_http/unicorn_http_common.rl0000644000004100000410000000552212236653132023175 0ustar www-datawww-data%%{ machine unicorn_http_common; #### HTTP PROTOCOL GRAMMAR # line endings CRLF = "\r\n"; # character types CTL = (cntrl | 127); safe = ("$" | "-" | "_" | "."); extra = ("!" | "*" | "'" | "(" | ")" | ","); reserved = (";" | "/" | "?" 
| ":" | "@" | "&" | "=" | "+"); sorta_safe = ("\"" | "<" | ">"); unsafe = (CTL | " " | "#" | "%" | sorta_safe); national = any -- (alpha | digit | reserved | extra | safe | unsafe); unreserved = (alpha | digit | safe | extra | national); escape = ("%" xdigit xdigit); uchar = (unreserved | escape | sorta_safe); pchar = (uchar | ":" | "@" | "&" | "=" | "+"); tspecials = ("(" | ")" | "<" | ">" | "@" | "," | ";" | ":" | "\\" | "\"" | "/" | "[" | "]" | "?" | "=" | "{" | "}" | " " | "\t"); lws = (" " | "\t"); content = ((any -- CTL) | lws); # elements token = (ascii -- (CTL | tspecials)); # URI schemes and absolute paths scheme = ( "http"i ("s"i)? ) $downcase_char >mark %scheme; hostname = ((alnum | "-" | "." | "_")+ | ("[" (":" | xdigit)+ "]")); host_with_port = (hostname (":" digit*)?) >mark %host; userinfo = ((unreserved | escape | ";" | ":" | "&" | "=" | "+")+ "@")*; path = ( pchar+ ( "/" pchar* )* ) ; query = ( uchar | reserved )* %query_string ; param = ( pchar | "/" )* ; params = ( param ( ";" param )* ) ; rel_path = (path? (";" params)? %request_path) ("?" %start_query query)?; absolute_path = ( "/"+ rel_path ); path_uri = absolute_path > mark %request_uri; Absolute_URI = (scheme "://" userinfo host_with_port path_uri); Request_URI = ((absolute_path | "*") >mark %request_uri) | Absolute_URI; Fragment = ( uchar | reserved )* >mark %fragment; Method = (token){1,20} >mark %request_method; GetOnly = "GET" >mark %request_method; http_number = ( digit+ "." 
digit+ ) ; HTTP_Version = ( "HTTP/" http_number ) >mark %http_version ; Request_Line = ( Method " " Request_URI ("#" Fragment){0,1} " " HTTP_Version CRLF ) ; field_name = ( token -- ":" )+ >start_field $snake_upcase_field %write_field; field_value = content* >start_value %write_value; value_cont = lws+ content* >start_value %write_cont_value; message_header = ((field_name ":" lws* field_value)|value_cont) :> CRLF; chunk_ext_val = token*; chunk_ext_name = token*; chunk_extension = ( ";" " "* chunk_ext_name ("=" chunk_ext_val)? )*; last_chunk = "0"+ chunk_extension CRLF; chunk_size = (xdigit* [1-9a-fA-F] xdigit*) $add_to_chunk_size; chunk_end = CRLF; chunk_body = any >skip_chunk_data; chunk_begin = chunk_size chunk_extension CRLF; chunk = chunk_begin chunk_body chunk_end; ChunkedBody := chunk* last_chunk @end_chunked_body; Trailers := (message_header)* CRLF @end_trailers; FullRequest = Request_Line (message_header)* CRLF @header_done; SimpleRequest = GetOnly " " Request_URI ("#"Fragment){0,1} CRLF @header_done; main := FullRequest | SimpleRequest; }%% unicorn-4.7.0/ext/unicorn_http/httpdate.c0000644000004100000410000000405412236653132020532 0ustar www-datawww-data#include <ruby.h> #include <time.h> #include <stdio.h> static const size_t buf_capa = sizeof("Thu, 01 Jan 1970 00:00:00 GMT"); static VALUE buf; static char *buf_ptr; static const char week[] = "Sun\0Mon\0Tue\0Wed\0Thu\0Fri\0Sat"; static const char months[] = "Jan\0Feb\0Mar\0Apr\0May\0Jun\0" "Jul\0Aug\0Sep\0Oct\0Nov\0Dec"; /* for people on wonky systems only */ #ifndef HAVE_GMTIME_R static struct tm * my_gmtime_r(time_t *now, struct tm *tm) { struct tm *global = gmtime(now); if (global) *tm = *global; return tm; } # define gmtime_r my_gmtime_r #endif /* * Returns a string which represents the time as rfc1123-date of HTTP-date * defined by RFC 2616: * * day-of-week, DD month-name CCYY hh:mm:ss GMT * * Note that the result is always GMT.
* * This method is identical to Time#httpdate in the Ruby standard library, * except it is implemented in C for performance. We always saw * Time#httpdate at or near the top of the profiler output so we * decided to rewrite this in C. * * Caveats: it relies on a Ruby implementation with the global VM lock, * a thread-safe version will be provided when a Unix-only, GVL-free Ruby * implementation becomes viable. */ static VALUE httpdate(VALUE self) { static time_t last; time_t now = time(NULL); /* not a syscall on modern 64-bit systems */ struct tm tm; if (last == now) return buf; last = now; gmtime_r(&now, &tm); /* we can make this thread-safe later if our Ruby loses the GVL */ snprintf(buf_ptr, buf_capa, "%s, %02d %s %4d %02d:%02d:%02d GMT", week + (tm.tm_wday * 4), tm.tm_mday, months + (tm.tm_mon * 4), tm.tm_year + 1900, tm.tm_hour, tm.tm_min, tm.tm_sec); return buf; } void init_unicorn_httpdate(void) { VALUE mod = rb_const_get(rb_cObject, rb_intern("Unicorn")); mod = rb_define_module_under(mod, "HttpResponse"); buf = rb_str_new(0, buf_capa - 1); rb_global_variable(&buf); buf_ptr = RSTRING_PTR(buf); httpdate(Qnil); rb_define_method(mod, "httpdate", httpdate, 0); } unicorn-4.7.0/ext/unicorn_http/unicorn_http.c0000644000004100000410000025775012236653132021446 0ustar www-datawww-data #line 1 "unicorn_http.rl" /** * Copyright (c) 2009 Eric Wong (all bugs are Eric's fault) * Copyright (c) 2005 Zed A. 
Shaw * You can redistribute it and/or modify it under the same terms as Ruby 1.8 or * the GPLv2+ (GPLv3+ preferred) */ #include "ruby.h" #include "ext_help.h" #include #include #include #include "common_field_optimization.h" #include "global_variables.h" #include "c_util.h" void init_unicorn_httpdate(void); #define UH_FL_CHUNKED 0x1 #define UH_FL_HASBODY 0x2 #define UH_FL_INBODY 0x4 #define UH_FL_HASTRAILER 0x8 #define UH_FL_INTRAILER 0x10 #define UH_FL_INCHUNK 0x20 #define UH_FL_REQEOF 0x40 #define UH_FL_KAVERSION 0x80 #define UH_FL_HASHEADER 0x100 #define UH_FL_TO_CLEAR 0x200 /* all of these flags need to be set for keepalive to be supported */ #define UH_FL_KEEPALIVE (UH_FL_KAVERSION | UH_FL_REQEOF | UH_FL_HASHEADER) /* * whether or not to trust X-Forwarded-Proto and X-Forwarded-SSL when * setting rack.url_scheme */ static VALUE trust_x_forward = Qtrue; static unsigned long keepalive_requests = 100; /* same as nginx */ /* * Returns the maximum number of keepalive requests a client may make * before the parser refuses to continue. */ static VALUE ka_req(VALUE self) { return ULONG2NUM(keepalive_requests); } /* * Sets the maximum number of keepalive requests a client may make. * A special value of +nil+ causes this to be the maximum value * possible (this is architecture-dependent). */ static VALUE set_ka_req(VALUE self, VALUE val) { keepalive_requests = NIL_P(val) ? ULONG_MAX : NUM2ULONG(val); return ka_req(self); } /* * Sets whether or not the parser will trust X-Forwarded-Proto and * X-Forwarded-SSL headers and set "rack.url_scheme" to "https" accordingly. 
* Rainbows!/Zbatery installations facing untrusted clients directly * should set this to +false+ */ static VALUE set_xftrust(VALUE self, VALUE val) { if (Qtrue == val || Qfalse == val) trust_x_forward = val; else rb_raise(rb_eTypeError, "must be true or false"); return val; } /* * returns whether or not the parser will trust X-Forwarded-Proto and * X-Forwarded-SSL headers and set "rack.url_scheme" to "https" accordingly */ static VALUE xftrust(VALUE self) { return trust_x_forward; } static size_t MAX_HEADER_LEN = 1024 * (80 + 32); /* same as Mongrel */ /* this is only intended for use with Rainbows! */ static VALUE set_maxhdrlen(VALUE self, VALUE len) { return SIZET2NUM(MAX_HEADER_LEN = NUM2SIZET(len)); } /* keep this small for Rainbows! since every client has one */ struct http_parser { int cs; /* Ragel internal state */ unsigned int flags; unsigned long nr_requests; size_t mark; size_t offset; union { /* these 2 fields don't nest */ size_t field; size_t query; } start; union { size_t field_len; /* only used during header processing */ size_t dest_offset; /* only used during body processing */ } s; VALUE buf; VALUE env; VALUE cont; /* Qfalse: unset, Qnil: ignored header, T_STRING: append */ union { off_t content; off_t chunk; } len; }; static ID id_clear, id_set_backtrace, id_response_start_sent; static void finalize_header(struct http_parser *hp); static void parser_raise(VALUE klass, const char *msg) { VALUE exc = rb_exc_new2(klass, msg); VALUE bt = rb_ary_new(); rb_funcall(exc, id_set_backtrace, 1, bt); rb_exc_raise(exc); } #define REMAINING (unsigned long)(pe - p) #define LEN(AT, FPC) (FPC - buffer - hp->AT) #define MARK(M,FPC) (hp->M = (FPC) - buffer) #define PTR_TO(F) (buffer + hp->F) #define STR_NEW(M,FPC) rb_str_new(PTR_TO(M), LEN(M, FPC)) #define STRIPPED_STR_NEW(M,FPC) stripped_str_new(PTR_TO(M), LEN(M, FPC)) #define HP_FL_TEST(hp,fl) ((hp)->flags & (UH_FL_##fl)) #define HP_FL_SET(hp,fl) ((hp)->flags |= (UH_FL_##fl)) #define HP_FL_UNSET(hp,fl) 
((hp)->flags &= ~(UH_FL_##fl)) #define HP_FL_ALL(hp,fl) (HP_FL_TEST(hp, fl) == (UH_FL_##fl)) static int is_lws(char c) { return (c == ' ' || c == '\t'); } static VALUE stripped_str_new(const char *str, long len) { long end; for (end = len - 1; end >= 0 && is_lws(str[end]); end--); return rb_str_new(str, end + 1); } /* * handles values of the "Connection:" header, keepalive is implied * for HTTP/1.1 but needs to be explicitly enabled with HTTP/1.0 * Additionally, we require GET/HEAD requests to support keepalive. */ static void hp_keepalive_connection(struct http_parser *hp, VALUE val) { if (STR_CSTR_CASE_EQ(val, "keep-alive")) { /* basically have HTTP/1.0 masquerade as HTTP/1.1+ */ HP_FL_SET(hp, KAVERSION); } else if (STR_CSTR_CASE_EQ(val, "close")) { /* * it doesn't matter what HTTP version or request method we have, * if a client says "Connection: close", we disable keepalive */ HP_FL_UNSET(hp, KAVERSION); } else { /* * client could've sent anything, ignore it for now. Maybe * "HP_FL_UNSET(hp, KAVERSION);" just in case? * Raising an exception might be too mean... 
*/ } } static void request_method(struct http_parser *hp, const char *ptr, size_t len) { VALUE v = rb_str_new(ptr, len); rb_hash_aset(hp->env, g_request_method, v); } static void http_version(struct http_parser *hp, const char *ptr, size_t len) { VALUE v; HP_FL_SET(hp, HASHEADER); if (CONST_MEM_EQ("HTTP/1.1", ptr, len)) { /* HTTP/1.1 implies keepalive unless "Connection: close" is set */ HP_FL_SET(hp, KAVERSION); v = g_http_11; } else if (CONST_MEM_EQ("HTTP/1.0", ptr, len)) { v = g_http_10; } else { v = rb_str_new(ptr, len); } rb_hash_aset(hp->env, g_server_protocol, v); rb_hash_aset(hp->env, g_http_version, v); } static inline void hp_invalid_if_trailer(struct http_parser *hp) { if (HP_FL_TEST(hp, INTRAILER)) parser_raise(eHttpParserError, "invalid Trailer"); } static void write_cont_value(struct http_parser *hp, char *buffer, const char *p) { char *vptr; long end; long len = LEN(mark, p); long cont_len; if (hp->cont == Qfalse) parser_raise(eHttpParserError, "invalid continuation line"); if (NIL_P(hp->cont)) return; /* we're ignoring this header (probably Host:) */ assert(TYPE(hp->cont) == T_STRING && "continuation line is not a string"); assert(hp->mark > 0 && "impossible continuation line offset"); if (len == 0) return; cont_len = RSTRING_LEN(hp->cont); if (cont_len > 0) { --hp->mark; len = LEN(mark, p); } vptr = PTR_TO(mark); /* normalize tab to space */ if (cont_len > 0) { assert((' ' == *vptr || '\t' == *vptr) && "invalid leading white space"); *vptr = ' '; } for (end = len - 1; end >= 0 && is_lws(vptr[end]); end--); rb_str_buf_cat(hp->cont, vptr, end + 1); } static void write_value(struct http_parser *hp, const char *buffer, const char *p) { VALUE f = find_common_field(PTR_TO(start.field), hp->s.field_len); VALUE v; VALUE e; VALIDATE_MAX_LENGTH(LEN(mark, p), FIELD_VALUE); v = LEN(mark, p) == 0 ? 
rb_str_buf_new(128) : STRIPPED_STR_NEW(mark, p); if (NIL_P(f)) { const char *field = PTR_TO(start.field); size_t flen = hp->s.field_len; VALIDATE_MAX_LENGTH(flen, FIELD_NAME); /* * ignore "Version" headers since they conflict with the HTTP_VERSION * rack env variable. */ if (CONST_MEM_EQ("VERSION", field, flen)) { hp->cont = Qnil; return; } f = uncommon_field(field, flen); } else if (f == g_http_connection) { hp_keepalive_connection(hp, v); } else if (f == g_content_length) { hp->len.content = parse_length(RSTRING_PTR(v), RSTRING_LEN(v)); if (hp->len.content < 0) parser_raise(eHttpParserError, "invalid Content-Length"); if (hp->len.content != 0) HP_FL_SET(hp, HASBODY); hp_invalid_if_trailer(hp); } else if (f == g_http_transfer_encoding) { if (STR_CSTR_CASE_EQ(v, "chunked")) { HP_FL_SET(hp, CHUNKED); HP_FL_SET(hp, HASBODY); } hp_invalid_if_trailer(hp); } else if (f == g_http_trailer) { HP_FL_SET(hp, HASTRAILER); hp_invalid_if_trailer(hp); } else { assert(TYPE(f) == T_STRING && "memoized object is not a string"); assert_frozen(f); } e = rb_hash_aref(hp->env, f); if (NIL_P(e)) { hp->cont = rb_hash_aset(hp->env, f, v); } else if (f == g_http_host) { /* * ignored, absolute URLs in REQUEST_URI take precedence over * the Host: header (ref: rfc 2616, section 5.2.1) */ hp->cont = Qnil; } else { rb_str_buf_cat(e, ",", 1); hp->cont = rb_str_buf_append(e, v); } } /** Machine **/ #line 423 "unicorn_http.rl" /** Data **/ #line 325 "unicorn_http.c" static const int http_parser_start = 1; static const int http_parser_first_final = 122; static const int http_parser_error = 0; static const int http_parser_en_ChunkedBody = 100; static const int http_parser_en_ChunkedBody_chunk_chunk_end = 106; static const int http_parser_en_Trailers = 114; static const int http_parser_en_main = 1; #line 427 "unicorn_http.rl" static void http_parser_init(struct http_parser *hp) { int cs = 0; hp->flags = 0; hp->mark = 0; hp->offset = 0; hp->start.field = 0; hp->s.field_len = 0; hp->len.content = 0; 
hp->cont = Qfalse; /* zero on MRI, should be optimized away by above */ #line 349 "unicorn_http.c" { cs = http_parser_start; } #line 439 "unicorn_http.rl" hp->cs = cs; } /** exec **/ static void http_parser_execute(struct http_parser *hp, char *buffer, size_t len) { const char *p, *pe; int cs = hp->cs; size_t off = hp->offset; if (cs == http_parser_first_final) return; assert(off <= len && "offset past end of buffer"); p = buffer+off; pe = buffer+len; assert((void *)(pe - p) == (void *)(len - off) && "pointers aren't same distance"); if (HP_FL_TEST(hp, INCHUNK)) { HP_FL_UNSET(hp, INCHUNK); goto skip_chunk_data_hack; } #line 382 "unicorn_http.c" { if ( p == pe ) goto _test_eof; switch ( cs ) { case 1: switch( (*p) ) { case 33: goto tr0; case 71: goto tr2; case 124: goto tr0; case 126: goto tr0; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto tr0; } else if ( (*p) >= 35 ) goto tr0; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto tr0; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto tr0; } else goto tr0; } else goto tr0; goto st0; st0: cs = 0; goto _out; tr0: #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st2; st2: if ( ++p == pe ) goto _test_eof2; case 2: #line 424 "unicorn_http.c" switch( (*p) ) { case 32: goto tr3; case 33: goto st49; case 124: goto st49; case 126: goto st49; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st49; } else if ( (*p) >= 35 ) goto st49; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st49; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st49; } else goto st49; } else goto st49; goto st0; tr3: #line 328 "unicorn_http.rl" { request_method(hp, PTR_TO(mark), LEN(mark, p)); } goto st3; st3: if ( ++p == pe ) goto _test_eof3; case 3: #line 457 "unicorn_http.c" switch( (*p) ) { case 42: goto tr5; case 47: goto tr6; case 72: goto tr7; case 104: goto tr7; } goto st0; tr5: #line 319 
"unicorn_http.rl" {MARK(mark, p); } goto st4; st4: if ( ++p == pe ) goto _test_eof4; case 4: #line 473 "unicorn_http.c" switch( (*p) ) { case 32: goto tr8; case 35: goto tr9; } goto st0; tr8: #line 333 "unicorn_http.rl" { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto st5; tr37: #line 319 "unicorn_http.rl" {MARK(mark, p); } #line 348 "unicorn_http.rl" { VALIDATE_MAX_URI_LENGTH(LEN(mark, p), FRAGMENT); rb_hash_aset(hp->env, g_fragment, STR_NEW(mark, p)); } goto st5; tr40: #line 348 "unicorn_http.rl" { VALIDATE_MAX_URI_LENGTH(LEN(mark, p), FRAGMENT); rb_hash_aset(hp->env, g_fragment, STR_NEW(mark, p)); } goto st5; tr44: #line 358 "unicorn_http.rl" { VALUE val; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_PATH); val = rb_hash_aset(hp->env, g_request_path, STR_NEW(mark, p)); /* rack says PATH_INFO must start with "/" or be empty */ if (!STR_CSTR_EQ(val, "*")) rb_hash_aset(hp->env, g_path_info, val); } #line 333 "unicorn_http.rl" { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto st5; tr50: #line 352 "unicorn_http.rl" {MARK(start.query, p); } #line 353 "unicorn_http.rl" { VALIDATE_MAX_URI_LENGTH(LEN(start.query, p), QUERY_STRING); rb_hash_aset(hp->env, g_query_string, STR_NEW(start.query, p)); } #line 333 "unicorn_http.rl" { VALUE str; 
VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto st5; tr54: #line 353 "unicorn_http.rl" { VALIDATE_MAX_URI_LENGTH(LEN(start.query, p), QUERY_STRING); rb_hash_aset(hp->env, g_query_string, STR_NEW(start.query, p)); } #line 333 "unicorn_http.rl" { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto st5; st5: if ( ++p == pe ) goto _test_eof5; case 5: #line 594 "unicorn_http.c" if ( (*p) == 72 ) goto tr10; goto st0; tr10: #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st6; st6: if ( ++p == pe ) goto _test_eof6; case 6: #line 606 "unicorn_http.c" if ( (*p) == 84 ) goto st7; goto st0; st7: if ( ++p == pe ) goto _test_eof7; case 7: if ( (*p) == 84 ) goto st8; goto st0; st8: if ( ++p == pe ) goto _test_eof8; case 8: if ( (*p) == 80 ) goto st9; goto st0; st9: if ( ++p == pe ) goto _test_eof9; case 9: if ( (*p) == 47 ) goto st10; goto st0; st10: if ( ++p == pe ) goto _test_eof10; case 10: if ( 48 <= (*p) && (*p) <= 57 ) goto st11; goto st0; st11: if ( ++p == pe ) goto _test_eof11; case 11: if ( (*p) == 46 ) goto st12; if ( 48 <= (*p) && (*p) <= 57 ) goto st11; goto st0; st12: if ( ++p == pe ) goto _test_eof12; case 12: if ( 48 <= (*p) && (*p) <= 57 ) goto st13; goto st0; st13: if ( ++p == pe ) goto _test_eof13; case 13: if ( (*p) == 13 ) goto tr18; if ( 48 <= (*p) && 
(*p) <= 57 ) goto st13; goto st0; tr18: #line 357 "unicorn_http.rl" { http_version(hp, PTR_TO(mark), LEN(mark, p)); } goto st14; tr25: #line 325 "unicorn_http.rl" { MARK(mark, p); } #line 327 "unicorn_http.rl" { write_cont_value(hp, buffer, p); } goto st14; tr27: #line 327 "unicorn_http.rl" { write_cont_value(hp, buffer, p); } goto st14; tr33: #line 325 "unicorn_http.rl" { MARK(mark, p); } #line 326 "unicorn_http.rl" { write_value(hp, buffer, p); } goto st14; tr35: #line 326 "unicorn_http.rl" { write_value(hp, buffer, p); } goto st14; st14: if ( ++p == pe ) goto _test_eof14; case 14: #line 691 "unicorn_http.c" if ( (*p) == 10 ) goto st15; goto st0; st15: if ( ++p == pe ) goto _test_eof15; case 15: switch( (*p) ) { case 9: goto st16; case 13: goto st18; case 32: goto st16; case 33: goto tr22; case 124: goto tr22; case 126: goto tr22; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto tr22; } else if ( (*p) >= 35 ) goto tr22; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto tr22; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto tr22; } else goto tr22; } else goto tr22; goto st0; tr24: #line 325 "unicorn_http.rl" { MARK(mark, p); } goto st16; st16: if ( ++p == pe ) goto _test_eof16; case 16: #line 733 "unicorn_http.c" switch( (*p) ) { case 9: goto tr24; case 13: goto tr25; case 32: goto tr24; case 127: goto st0; } if ( 0 <= (*p) && (*p) <= 31 ) goto st0; goto tr23; tr23: #line 325 "unicorn_http.rl" { MARK(mark, p); } goto st17; st17: if ( ++p == pe ) goto _test_eof17; case 17: #line 751 "unicorn_http.c" switch( (*p) ) { case 13: goto tr27; case 127: goto st0; } if ( (*p) > 8 ) { if ( 10 <= (*p) && (*p) <= 31 ) goto st0; } else if ( (*p) >= 0 ) goto st0; goto st17; tr99: #line 333 "unicorn_http.rl" { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * 
in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto st18; tr102: #line 319 "unicorn_http.rl" {MARK(mark, p); } #line 348 "unicorn_http.rl" { VALIDATE_MAX_URI_LENGTH(LEN(mark, p), FRAGMENT); rb_hash_aset(hp->env, g_fragment, STR_NEW(mark, p)); } goto st18; tr105: #line 348 "unicorn_http.rl" { VALIDATE_MAX_URI_LENGTH(LEN(mark, p), FRAGMENT); rb_hash_aset(hp->env, g_fragment, STR_NEW(mark, p)); } goto st18; tr109: #line 358 "unicorn_http.rl" { VALUE val; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_PATH); val = rb_hash_aset(hp->env, g_request_path, STR_NEW(mark, p)); /* rack says PATH_INFO must start with "/" or be empty */ if (!STR_CSTR_EQ(val, "*")) rb_hash_aset(hp->env, g_path_info, val); } #line 333 "unicorn_http.rl" { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto st18; tr115: #line 352 "unicorn_http.rl" {MARK(start.query, p); } #line 353 "unicorn_http.rl" { VALIDATE_MAX_URI_LENGTH(LEN(start.query, p), QUERY_STRING); rb_hash_aset(hp->env, g_query_string, STR_NEW(start.query, p)); } #line 333 "unicorn_http.rl" { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto st18; tr119: #line 353 
"unicorn_http.rl" { VALIDATE_MAX_URI_LENGTH(LEN(start.query, p), QUERY_STRING); rb_hash_aset(hp->env, g_query_string, STR_NEW(start.query, p)); } #line 333 "unicorn_http.rl" { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto st18; st18: if ( ++p == pe ) goto _test_eof18; case 18: #line 877 "unicorn_http.c" if ( (*p) == 10 ) goto tr28; goto st0; tr28: #line 373 "unicorn_http.rl" { finalize_header(hp); cs = http_parser_first_final; if (HP_FL_TEST(hp, HASBODY)) { HP_FL_SET(hp, INBODY); if (HP_FL_TEST(hp, CHUNKED)) cs = http_parser_en_ChunkedBody; } else { HP_FL_SET(hp, REQEOF); assert(!HP_FL_TEST(hp, CHUNKED) && "chunked encoding without body!"); } /* * go back to Ruby so we can call the Rack application, we'll reenter * the parser iff the body needs to be processed. 
*/ goto post_exec; } goto st122; st122: if ( ++p == pe ) goto _test_eof122; case 122: #line 906 "unicorn_http.c" goto st0; tr22: #line 321 "unicorn_http.rl" { MARK(start.field, p); } #line 322 "unicorn_http.rl" { snake_upcase_char(deconst(p)); } goto st19; tr29: #line 322 "unicorn_http.rl" { snake_upcase_char(deconst(p)); } goto st19; st19: if ( ++p == pe ) goto _test_eof19; case 19: #line 922 "unicorn_http.c" switch( (*p) ) { case 33: goto tr29; case 58: goto tr30; case 124: goto tr29; case 126: goto tr29; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto tr29; } else if ( (*p) >= 35 ) goto tr29; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto tr29; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto tr29; } else goto tr29; } else goto tr29; goto st0; tr32: #line 325 "unicorn_http.rl" { MARK(mark, p); } goto st20; tr30: #line 324 "unicorn_http.rl" { hp->s.field_len = LEN(start.field, p); } goto st20; st20: if ( ++p == pe ) goto _test_eof20; case 20: #line 959 "unicorn_http.c" switch( (*p) ) { case 9: goto tr32; case 13: goto tr33; case 32: goto tr32; case 127: goto st0; } if ( 0 <= (*p) && (*p) <= 31 ) goto st0; goto tr31; tr31: #line 325 "unicorn_http.rl" { MARK(mark, p); } goto st21; st21: if ( ++p == pe ) goto _test_eof21; case 21: #line 977 "unicorn_http.c" switch( (*p) ) { case 13: goto tr35; case 127: goto st0; } if ( (*p) > 8 ) { if ( 10 <= (*p) && (*p) <= 31 ) goto st0; } else if ( (*p) >= 0 ) goto st0; goto st21; tr9: #line 333 "unicorn_http.rl" { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto st22; tr45: #line 358 
"unicorn_http.rl" { VALUE val; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_PATH); val = rb_hash_aset(hp->env, g_request_path, STR_NEW(mark, p)); /* rack says PATH_INFO must start with "/" or be empty */ if (!STR_CSTR_EQ(val, "*")) rb_hash_aset(hp->env, g_path_info, val); } #line 333 "unicorn_http.rl" { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto st22; tr51: #line 352 "unicorn_http.rl" {MARK(start.query, p); } #line 353 "unicorn_http.rl" { VALIDATE_MAX_URI_LENGTH(LEN(start.query, p), QUERY_STRING); rb_hash_aset(hp->env, g_query_string, STR_NEW(start.query, p)); } #line 333 "unicorn_http.rl" { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto st22; tr55: #line 353 "unicorn_http.rl" { VALIDATE_MAX_URI_LENGTH(LEN(start.query, p), QUERY_STRING); rb_hash_aset(hp->env, g_query_string, STR_NEW(start.query, p)); } #line 333 "unicorn_http.rl" { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto 
st22; st22: if ( ++p == pe ) goto _test_eof22; case 22: #line 1087 "unicorn_http.c" switch( (*p) ) { case 32: goto tr37; case 35: goto st0; case 37: goto tr38; case 127: goto st0; } if ( 0 <= (*p) && (*p) <= 31 ) goto st0; goto tr36; tr36: #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st23; st23: if ( ++p == pe ) goto _test_eof23; case 23: #line 1105 "unicorn_http.c" switch( (*p) ) { case 32: goto tr40; case 35: goto st0; case 37: goto st24; case 127: goto st0; } if ( 0 <= (*p) && (*p) <= 31 ) goto st0; goto st23; tr38: #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st24; st24: if ( ++p == pe ) goto _test_eof24; case 24: #line 1123 "unicorn_http.c" if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st25; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st25; } else goto st25; goto st0; st25: if ( ++p == pe ) goto _test_eof25; case 25: if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st23; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st23; } else goto st23; goto st0; tr6: #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st26; tr71: #line 332 "unicorn_http.rl" { rb_hash_aset(hp->env, g_http_host, STR_NEW(mark, p)); } #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st26; st26: if ( ++p == pe ) goto _test_eof26; case 26: #line 1160 "unicorn_http.c" switch( (*p) ) { case 32: goto tr44; case 35: goto tr45; case 37: goto st27; case 63: goto tr47; case 127: goto st0; } if ( 0 <= (*p) && (*p) <= 31 ) goto st0; goto st26; st27: if ( ++p == pe ) goto _test_eof27; case 27: if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st28; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st28; } else goto st28; goto st0; st28: if ( ++p == pe ) goto _test_eof28; case 28: if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st26; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st26; } else goto st26; goto st0; tr47: #line 358 "unicorn_http.rl" { VALUE val; VALIDATE_MAX_URI_LENGTH(LEN(mark, 
p), REQUEST_PATH); val = rb_hash_aset(hp->env, g_request_path, STR_NEW(mark, p)); /* rack says PATH_INFO must start with "/" or be empty */ if (!STR_CSTR_EQ(val, "*")) rb_hash_aset(hp->env, g_path_info, val); } goto st29; st29: if ( ++p == pe ) goto _test_eof29; case 29: #line 1214 "unicorn_http.c" switch( (*p) ) { case 32: goto tr50; case 35: goto tr51; case 37: goto tr52; case 127: goto st0; } if ( 0 <= (*p) && (*p) <= 31 ) goto st0; goto tr49; tr49: #line 352 "unicorn_http.rl" {MARK(start.query, p); } goto st30; st30: if ( ++p == pe ) goto _test_eof30; case 30: #line 1232 "unicorn_http.c" switch( (*p) ) { case 32: goto tr54; case 35: goto tr55; case 37: goto st31; case 127: goto st0; } if ( 0 <= (*p) && (*p) <= 31 ) goto st0; goto st30; tr52: #line 352 "unicorn_http.rl" {MARK(start.query, p); } goto st31; st31: if ( ++p == pe ) goto _test_eof31; case 31: #line 1250 "unicorn_http.c" if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st32; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st32; } else goto st32; goto st0; st32: if ( ++p == pe ) goto _test_eof32; case 32: if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st30; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st30; } else goto st30; goto st0; tr7: #line 319 "unicorn_http.rl" {MARK(mark, p); } #line 323 "unicorn_http.rl" { downcase_char(deconst(p)); } goto st33; st33: if ( ++p == pe ) goto _test_eof33; case 33: #line 1283 "unicorn_http.c" switch( (*p) ) { case 84: goto tr58; case 116: goto tr58; } goto st0; tr58: #line 323 "unicorn_http.rl" { downcase_char(deconst(p)); } goto st34; st34: if ( ++p == pe ) goto _test_eof34; case 34: #line 1297 "unicorn_http.c" switch( (*p) ) { case 84: goto tr59; case 116: goto tr59; } goto st0; tr59: #line 323 "unicorn_http.rl" { downcase_char(deconst(p)); } goto st35; st35: if ( ++p == pe ) goto _test_eof35; case 35: #line 1311 "unicorn_http.c" switch( (*p) ) { case 80: goto tr60; case 112: goto tr60; } goto st0; tr60: #line 
323 "unicorn_http.rl" { downcase_char(deconst(p)); } goto st36; st36: if ( ++p == pe ) goto _test_eof36; case 36: #line 1325 "unicorn_http.c" switch( (*p) ) { case 58: goto tr61; case 83: goto tr62; case 115: goto tr62; } goto st0; tr61: #line 329 "unicorn_http.rl" { rb_hash_aset(hp->env, g_rack_url_scheme, STR_NEW(mark, p)); } goto st37; st37: if ( ++p == pe ) goto _test_eof37; case 37: #line 1342 "unicorn_http.c" if ( (*p) == 47 ) goto st38; goto st0; st38: if ( ++p == pe ) goto _test_eof38; case 38: if ( (*p) == 47 ) goto st39; goto st0; st39: if ( ++p == pe ) goto _test_eof39; case 39: switch( (*p) ) { case 37: goto st41; case 47: goto st0; case 60: goto st0; case 91: goto tr68; case 95: goto tr67; case 127: goto st0; } if ( (*p) < 45 ) { if ( (*p) > 32 ) { if ( 34 <= (*p) && (*p) <= 35 ) goto st0; } else if ( (*p) >= 0 ) goto st0; } else if ( (*p) > 57 ) { if ( (*p) < 65 ) { if ( 62 <= (*p) && (*p) <= 64 ) goto st0; } else if ( (*p) > 90 ) { if ( 97 <= (*p) && (*p) <= 122 ) goto tr67; } else goto tr67; } else goto tr67; goto st40; st40: if ( ++p == pe ) goto _test_eof40; case 40: switch( (*p) ) { case 37: goto st41; case 47: goto st0; case 60: goto st0; case 64: goto st39; case 127: goto st0; } if ( (*p) < 34 ) { if ( 0 <= (*p) && (*p) <= 32 ) goto st0; } else if ( (*p) > 35 ) { if ( 62 <= (*p) && (*p) <= 63 ) goto st0; } else goto st0; goto st40; st41: if ( ++p == pe ) goto _test_eof41; case 41: if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st42; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st42; } else goto st42; goto st0; st42: if ( ++p == pe ) goto _test_eof42; case 42: if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st40; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st40; } else goto st40; goto st0; tr67: #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st43; st43: if ( ++p == pe ) goto _test_eof43; case 43: #line 1437 "unicorn_http.c" switch( (*p) ) { case 37: goto st41; case 47: goto tr71; case 
58: goto st44; case 60: goto st0; case 64: goto st39; case 95: goto st43; case 127: goto st0; } if ( (*p) < 45 ) { if ( (*p) > 32 ) { if ( 34 <= (*p) && (*p) <= 35 ) goto st0; } else if ( (*p) >= 0 ) goto st0; } else if ( (*p) > 57 ) { if ( (*p) < 65 ) { if ( 62 <= (*p) && (*p) <= 63 ) goto st0; } else if ( (*p) > 90 ) { if ( 97 <= (*p) && (*p) <= 122 ) goto st43; } else goto st43; } else goto st43; goto st40; st44: if ( ++p == pe ) goto _test_eof44; case 44: switch( (*p) ) { case 37: goto st41; case 47: goto tr71; case 60: goto st0; case 64: goto st39; case 127: goto st0; } if ( (*p) < 34 ) { if ( 0 <= (*p) && (*p) <= 32 ) goto st0; } else if ( (*p) > 35 ) { if ( (*p) > 57 ) { if ( 62 <= (*p) && (*p) <= 63 ) goto st0; } else if ( (*p) >= 48 ) goto st44; } else goto st0; goto st40; tr68: #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st45; st45: if ( ++p == pe ) goto _test_eof45; case 45: #line 1496 "unicorn_http.c" switch( (*p) ) { case 37: goto st41; case 47: goto st0; case 60: goto st0; case 64: goto st39; case 127: goto st0; } if ( (*p) < 48 ) { if ( (*p) > 32 ) { if ( 34 <= (*p) && (*p) <= 35 ) goto st0; } else if ( (*p) >= 0 ) goto st0; } else if ( (*p) > 58 ) { if ( (*p) < 65 ) { if ( 62 <= (*p) && (*p) <= 63 ) goto st0; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st46; } else goto st46; } else goto st46; goto st40; st46: if ( ++p == pe ) goto _test_eof46; case 46: switch( (*p) ) { case 37: goto st41; case 47: goto st0; case 60: goto st0; case 64: goto st39; case 93: goto st47; case 127: goto st0; } if ( (*p) < 48 ) { if ( (*p) > 32 ) { if ( 34 <= (*p) && (*p) <= 35 ) goto st0; } else if ( (*p) >= 0 ) goto st0; } else if ( (*p) > 58 ) { if ( (*p) < 65 ) { if ( 62 <= (*p) && (*p) <= 63 ) goto st0; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st46; } else goto st46; } else goto st46; goto st40; st47: if ( ++p == pe ) goto _test_eof47; case 47: switch( (*p) ) { case 37: goto st41; case 47: goto tr71; case 58: goto 
st44; case 60: goto st0; case 64: goto st39; case 127: goto st0; } if ( (*p) < 34 ) { if ( 0 <= (*p) && (*p) <= 32 ) goto st0; } else if ( (*p) > 35 ) { if ( 62 <= (*p) && (*p) <= 63 ) goto st0; } else goto st0; goto st40; tr62: #line 323 "unicorn_http.rl" { downcase_char(deconst(p)); } goto st48; st48: if ( ++p == pe ) goto _test_eof48; case 48: #line 1581 "unicorn_http.c" if ( (*p) == 58 ) goto tr61; goto st0; st49: if ( ++p == pe ) goto _test_eof49; case 49: switch( (*p) ) { case 32: goto tr3; case 33: goto st50; case 124: goto st50; case 126: goto st50; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st50; } else if ( (*p) >= 35 ) goto st50; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st50; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st50; } else goto st50; } else goto st50; goto st0; st50: if ( ++p == pe ) goto _test_eof50; case 50: switch( (*p) ) { case 32: goto tr3; case 33: goto st51; case 124: goto st51; case 126: goto st51; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st51; } else if ( (*p) >= 35 ) goto st51; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st51; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st51; } else goto st51; } else goto st51; goto st0; st51: if ( ++p == pe ) goto _test_eof51; case 51: switch( (*p) ) { case 32: goto tr3; case 33: goto st52; case 124: goto st52; case 126: goto st52; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st52; } else if ( (*p) >= 35 ) goto st52; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st52; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st52; } else goto st52; } else goto st52; goto st0; st52: if ( ++p == pe ) goto _test_eof52; case 52: switch( (*p) ) { case 32: goto tr3; case 33: goto st53; case 124: goto st53; case 126: goto st53; } if ( (*p) < 45 ) { 
if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st53; } else if ( (*p) >= 35 ) goto st53; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st53; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st53; } else goto st53; } else goto st53; goto st0; st53: if ( ++p == pe ) goto _test_eof53; case 53: switch( (*p) ) { case 32: goto tr3; case 33: goto st54; case 124: goto st54; case 126: goto st54; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st54; } else if ( (*p) >= 35 ) goto st54; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st54; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st54; } else goto st54; } else goto st54; goto st0; st54: if ( ++p == pe ) goto _test_eof54; case 54: switch( (*p) ) { case 32: goto tr3; case 33: goto st55; case 124: goto st55; case 126: goto st55; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st55; } else if ( (*p) >= 35 ) goto st55; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st55; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st55; } else goto st55; } else goto st55; goto st0; st55: if ( ++p == pe ) goto _test_eof55; case 55: switch( (*p) ) { case 32: goto tr3; case 33: goto st56; case 124: goto st56; case 126: goto st56; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st56; } else if ( (*p) >= 35 ) goto st56; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st56; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st56; } else goto st56; } else goto st56; goto st0; st56: if ( ++p == pe ) goto _test_eof56; case 56: switch( (*p) ) { case 32: goto tr3; case 33: goto st57; case 124: goto st57; case 126: goto st57; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st57; } else if ( (*p) >= 35 ) goto st57; } else if ( (*p) 
> 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st57; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st57; } else goto st57; } else goto st57; goto st0; st57: if ( ++p == pe ) goto _test_eof57; case 57: switch( (*p) ) { case 32: goto tr3; case 33: goto st58; case 124: goto st58; case 126: goto st58; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st58; } else if ( (*p) >= 35 ) goto st58; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st58; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st58; } else goto st58; } else goto st58; goto st0; st58: if ( ++p == pe ) goto _test_eof58; case 58: switch( (*p) ) { case 32: goto tr3; case 33: goto st59; case 124: goto st59; case 126: goto st59; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st59; } else if ( (*p) >= 35 ) goto st59; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st59; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st59; } else goto st59; } else goto st59; goto st0; st59: if ( ++p == pe ) goto _test_eof59; case 59: switch( (*p) ) { case 32: goto tr3; case 33: goto st60; case 124: goto st60; case 126: goto st60; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st60; } else if ( (*p) >= 35 ) goto st60; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st60; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st60; } else goto st60; } else goto st60; goto st0; st60: if ( ++p == pe ) goto _test_eof60; case 60: switch( (*p) ) { case 32: goto tr3; case 33: goto st61; case 124: goto st61; case 126: goto st61; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st61; } else if ( (*p) >= 35 ) goto st61; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st61; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && 
(*p) <= 122 ) goto st61; } else goto st61; } else goto st61; goto st0; st61: if ( ++p == pe ) goto _test_eof61; case 61: switch( (*p) ) { case 32: goto tr3; case 33: goto st62; case 124: goto st62; case 126: goto st62; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st62; } else if ( (*p) >= 35 ) goto st62; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st62; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st62; } else goto st62; } else goto st62; goto st0; st62: if ( ++p == pe ) goto _test_eof62; case 62: switch( (*p) ) { case 32: goto tr3; case 33: goto st63; case 124: goto st63; case 126: goto st63; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st63; } else if ( (*p) >= 35 ) goto st63; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st63; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st63; } else goto st63; } else goto st63; goto st0; st63: if ( ++p == pe ) goto _test_eof63; case 63: switch( (*p) ) { case 32: goto tr3; case 33: goto st64; case 124: goto st64; case 126: goto st64; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st64; } else if ( (*p) >= 35 ) goto st64; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st64; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st64; } else goto st64; } else goto st64; goto st0; st64: if ( ++p == pe ) goto _test_eof64; case 64: switch( (*p) ) { case 32: goto tr3; case 33: goto st65; case 124: goto st65; case 126: goto st65; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st65; } else if ( (*p) >= 35 ) goto st65; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st65; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st65; } else goto st65; } else goto st65; goto st0; st65: if ( ++p == pe ) goto _test_eof65; 
case 65: switch( (*p) ) { case 32: goto tr3; case 33: goto st66; case 124: goto st66; case 126: goto st66; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st66; } else if ( (*p) >= 35 ) goto st66; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st66; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st66; } else goto st66; } else goto st66; goto st0; st66: if ( ++p == pe ) goto _test_eof66; case 66: switch( (*p) ) { case 32: goto tr3; case 33: goto st67; case 124: goto st67; case 126: goto st67; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st67; } else if ( (*p) >= 35 ) goto st67; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st67; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st67; } else goto st67; } else goto st67; goto st0; st67: if ( ++p == pe ) goto _test_eof67; case 67: if ( (*p) == 32 ) goto tr3; goto st0; tr2: #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st68; st68: if ( ++p == pe ) goto _test_eof68; case 68: #line 2104 "unicorn_http.c" switch( (*p) ) { case 32: goto tr3; case 33: goto st49; case 69: goto st69; case 124: goto st49; case 126: goto st49; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st49; } else if ( (*p) >= 35 ) goto st49; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st49; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st49; } else goto st49; } else goto st49; goto st0; st69: if ( ++p == pe ) goto _test_eof69; case 69: switch( (*p) ) { case 32: goto tr3; case 33: goto st50; case 84: goto st70; case 124: goto st50; case 126: goto st50; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st50; } else if ( (*p) >= 35 ) goto st50; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st50; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 
122 ) goto st50; } else goto st50; } else goto st50; goto st0; st70: if ( ++p == pe ) goto _test_eof70; case 70: switch( (*p) ) { case 32: goto tr95; case 33: goto st51; case 124: goto st51; case 126: goto st51; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st51; } else if ( (*p) >= 35 ) goto st51; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st51; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st51; } else goto st51; } else goto st51; goto st0; tr95: #line 328 "unicorn_http.rl" { request_method(hp, PTR_TO(mark), LEN(mark, p)); } goto st71; st71: if ( ++p == pe ) goto _test_eof71; case 71: #line 2195 "unicorn_http.c" switch( (*p) ) { case 42: goto tr96; case 47: goto tr97; case 72: goto tr98; case 104: goto tr98; } goto st0; tr96: #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st72; st72: if ( ++p == pe ) goto _test_eof72; case 72: #line 2211 "unicorn_http.c" switch( (*p) ) { case 13: goto tr99; case 32: goto tr8; case 35: goto tr100; } goto st0; tr100: #line 333 "unicorn_http.rl" { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto st73; tr110: #line 358 "unicorn_http.rl" { VALUE val; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_PATH); val = rb_hash_aset(hp->env, g_request_path, STR_NEW(mark, p)); /* rack says PATH_INFO must start with "/" or be empty */ if (!STR_CSTR_EQ(val, "*")) rb_hash_aset(hp->env, g_path_info, val); } #line 333 "unicorn_http.rl" { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid 
request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto st73; tr116: #line 352 "unicorn_http.rl" {MARK(start.query, p); } #line 353 "unicorn_http.rl" { VALIDATE_MAX_URI_LENGTH(LEN(start.query, p), QUERY_STRING); rb_hash_aset(hp->env, g_query_string, STR_NEW(start.query, p)); } #line 333 "unicorn_http.rl" { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto st73; tr120: #line 353 "unicorn_http.rl" { VALIDATE_MAX_URI_LENGTH(LEN(start.query, p), QUERY_STRING); rb_hash_aset(hp->env, g_query_string, STR_NEW(start.query, p)); } #line 333 "unicorn_http.rl" { VALUE str; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_URI); str = rb_hash_aset(hp->env, g_request_uri, STR_NEW(mark, p)); /* * "OPTIONS * HTTP/1.1\r\n" is a valid request, but we can't have '*' * in REQUEST_PATH or PATH_INFO or else Rack::Lint will complain */ if (STR_CSTR_EQ(str, "*")) { str = rb_str_new(NULL, 0); rb_hash_aset(hp->env, g_path_info, str); rb_hash_aset(hp->env, g_request_path, str); } } goto st73; st73: if ( ++p == pe ) goto _test_eof73; case 73: #line 2317 "unicorn_http.c" switch( (*p) ) { case 13: goto tr102; case 32: goto tr37; case 35: goto st0; case 37: goto tr103; case 127: goto st0; } if ( 0 <= (*p) && (*p) <= 31 ) goto st0; goto tr101; tr101: #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st74; st74: if ( ++p == pe ) goto _test_eof74; case 74: #line 2336 "unicorn_http.c" switch( (*p) ) { case 13: goto tr105; case 32: goto tr40; case 35: goto 
st0; case 37: goto st75; case 127: goto st0; } if ( 0 <= (*p) && (*p) <= 31 ) goto st0; goto st74; tr103: #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st75; st75: if ( ++p == pe ) goto _test_eof75; case 75: #line 2355 "unicorn_http.c" if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st76; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st76; } else goto st76; goto st0; st76: if ( ++p == pe ) goto _test_eof76; case 76: if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st74; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st74; } else goto st74; goto st0; tr97: #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st77; tr136: #line 332 "unicorn_http.rl" { rb_hash_aset(hp->env, g_http_host, STR_NEW(mark, p)); } #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st77; st77: if ( ++p == pe ) goto _test_eof77; case 77: #line 2392 "unicorn_http.c" switch( (*p) ) { case 13: goto tr109; case 32: goto tr44; case 35: goto tr110; case 37: goto st78; case 63: goto tr112; case 127: goto st0; } if ( 0 <= (*p) && (*p) <= 31 ) goto st0; goto st77; st78: if ( ++p == pe ) goto _test_eof78; case 78: if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st79; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st79; } else goto st79; goto st0; st79: if ( ++p == pe ) goto _test_eof79; case 79: if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st77; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st77; } else goto st77; goto st0; tr112: #line 358 "unicorn_http.rl" { VALUE val; VALIDATE_MAX_URI_LENGTH(LEN(mark, p), REQUEST_PATH); val = rb_hash_aset(hp->env, g_request_path, STR_NEW(mark, p)); /* rack says PATH_INFO must start with "/" or be empty */ if (!STR_CSTR_EQ(val, "*")) rb_hash_aset(hp->env, g_path_info, val); } goto st80; st80: if ( ++p == pe ) goto _test_eof80; case 80: #line 2447 "unicorn_http.c" switch( (*p) ) { case 13: goto tr115; case 32: goto tr50; case 35: goto tr116; case 37: goto tr117; 
case 127: goto st0; } if ( 0 <= (*p) && (*p) <= 31 ) goto st0; goto tr114; tr114: #line 352 "unicorn_http.rl" {MARK(start.query, p); } goto st81; st81: if ( ++p == pe ) goto _test_eof81; case 81: #line 2466 "unicorn_http.c" switch( (*p) ) { case 13: goto tr119; case 32: goto tr54; case 35: goto tr120; case 37: goto st82; case 127: goto st0; } if ( 0 <= (*p) && (*p) <= 31 ) goto st0; goto st81; tr117: #line 352 "unicorn_http.rl" {MARK(start.query, p); } goto st82; st82: if ( ++p == pe ) goto _test_eof82; case 82: #line 2485 "unicorn_http.c" if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st83; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st83; } else goto st83; goto st0; st83: if ( ++p == pe ) goto _test_eof83; case 83: if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st81; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st81; } else goto st81; goto st0; tr98: #line 319 "unicorn_http.rl" {MARK(mark, p); } #line 323 "unicorn_http.rl" { downcase_char(deconst(p)); } goto st84; st84: if ( ++p == pe ) goto _test_eof84; case 84: #line 2518 "unicorn_http.c" switch( (*p) ) { case 84: goto tr123; case 116: goto tr123; } goto st0; tr123: #line 323 "unicorn_http.rl" { downcase_char(deconst(p)); } goto st85; st85: if ( ++p == pe ) goto _test_eof85; case 85: #line 2532 "unicorn_http.c" switch( (*p) ) { case 84: goto tr124; case 116: goto tr124; } goto st0; tr124: #line 323 "unicorn_http.rl" { downcase_char(deconst(p)); } goto st86; st86: if ( ++p == pe ) goto _test_eof86; case 86: #line 2546 "unicorn_http.c" switch( (*p) ) { case 80: goto tr125; case 112: goto tr125; } goto st0; tr125: #line 323 "unicorn_http.rl" { downcase_char(deconst(p)); } goto st87; st87: if ( ++p == pe ) goto _test_eof87; case 87: #line 2560 "unicorn_http.c" switch( (*p) ) { case 58: goto tr126; case 83: goto tr127; case 115: goto tr127; } goto st0; tr126: #line 329 "unicorn_http.rl" { rb_hash_aset(hp->env, g_rack_url_scheme, STR_NEW(mark, p)); } goto st88; 
st88: if ( ++p == pe ) goto _test_eof88; case 88: #line 2577 "unicorn_http.c" if ( (*p) == 47 ) goto st89; goto st0; st89: if ( ++p == pe ) goto _test_eof89; case 89: if ( (*p) == 47 ) goto st90; goto st0; st90: if ( ++p == pe ) goto _test_eof90; case 90: switch( (*p) ) { case 37: goto st92; case 47: goto st0; case 60: goto st0; case 91: goto tr133; case 95: goto tr132; case 127: goto st0; } if ( (*p) < 45 ) { if ( (*p) > 32 ) { if ( 34 <= (*p) && (*p) <= 35 ) goto st0; } else if ( (*p) >= 0 ) goto st0; } else if ( (*p) > 57 ) { if ( (*p) < 65 ) { if ( 62 <= (*p) && (*p) <= 64 ) goto st0; } else if ( (*p) > 90 ) { if ( 97 <= (*p) && (*p) <= 122 ) goto tr132; } else goto tr132; } else goto tr132; goto st91; st91: if ( ++p == pe ) goto _test_eof91; case 91: switch( (*p) ) { case 37: goto st92; case 47: goto st0; case 60: goto st0; case 64: goto st90; case 127: goto st0; } if ( (*p) < 34 ) { if ( 0 <= (*p) && (*p) <= 32 ) goto st0; } else if ( (*p) > 35 ) { if ( 62 <= (*p) && (*p) <= 63 ) goto st0; } else goto st0; goto st91; st92: if ( ++p == pe ) goto _test_eof92; case 92: if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st93; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st93; } else goto st93; goto st0; st93: if ( ++p == pe ) goto _test_eof93; case 93: if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st91; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st91; } else goto st91; goto st0; tr132: #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st94; st94: if ( ++p == pe ) goto _test_eof94; case 94: #line 2672 "unicorn_http.c" switch( (*p) ) { case 37: goto st92; case 47: goto tr136; case 58: goto st95; case 60: goto st0; case 64: goto st90; case 95: goto st94; case 127: goto st0; } if ( (*p) < 45 ) { if ( (*p) > 32 ) { if ( 34 <= (*p) && (*p) <= 35 ) goto st0; } else if ( (*p) >= 0 ) goto st0; } else if ( (*p) > 57 ) { if ( (*p) < 65 ) { if ( 62 <= (*p) && (*p) <= 63 ) goto st0; } else if ( (*p) > 90 ) { if ( 97 <= 
(*p) && (*p) <= 122 ) goto st94; } else goto st94; } else goto st94; goto st91; st95: if ( ++p == pe ) goto _test_eof95; case 95: switch( (*p) ) { case 37: goto st92; case 47: goto tr136; case 60: goto st0; case 64: goto st90; case 127: goto st0; } if ( (*p) < 34 ) { if ( 0 <= (*p) && (*p) <= 32 ) goto st0; } else if ( (*p) > 35 ) { if ( (*p) > 57 ) { if ( 62 <= (*p) && (*p) <= 63 ) goto st0; } else if ( (*p) >= 48 ) goto st95; } else goto st0; goto st91; tr133: #line 319 "unicorn_http.rl" {MARK(mark, p); } goto st96; st96: if ( ++p == pe ) goto _test_eof96; case 96: #line 2731 "unicorn_http.c" switch( (*p) ) { case 37: goto st92; case 47: goto st0; case 60: goto st0; case 64: goto st90; case 127: goto st0; } if ( (*p) < 48 ) { if ( (*p) > 32 ) { if ( 34 <= (*p) && (*p) <= 35 ) goto st0; } else if ( (*p) >= 0 ) goto st0; } else if ( (*p) > 58 ) { if ( (*p) < 65 ) { if ( 62 <= (*p) && (*p) <= 63 ) goto st0; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st97; } else goto st97; } else goto st97; goto st91; st97: if ( ++p == pe ) goto _test_eof97; case 97: switch( (*p) ) { case 37: goto st92; case 47: goto st0; case 60: goto st0; case 64: goto st90; case 93: goto st98; case 127: goto st0; } if ( (*p) < 48 ) { if ( (*p) > 32 ) { if ( 34 <= (*p) && (*p) <= 35 ) goto st0; } else if ( (*p) >= 0 ) goto st0; } else if ( (*p) > 58 ) { if ( (*p) < 65 ) { if ( 62 <= (*p) && (*p) <= 63 ) goto st0; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto st97; } else goto st97; } else goto st97; goto st91; st98: if ( ++p == pe ) goto _test_eof98; case 98: switch( (*p) ) { case 37: goto st92; case 47: goto tr136; case 58: goto st95; case 60: goto st0; case 64: goto st90; case 127: goto st0; } if ( (*p) < 34 ) { if ( 0 <= (*p) && (*p) <= 32 ) goto st0; } else if ( (*p) > 35 ) { if ( 62 <= (*p) && (*p) <= 63 ) goto st0; } else goto st0; goto st91; tr127: #line 323 "unicorn_http.rl" { downcase_char(deconst(p)); } goto st99; st99: if ( ++p == pe ) goto 
_test_eof99; case 99: #line 2816 "unicorn_http.c" if ( (*p) == 58 ) goto tr126; goto st0; st100: if ( ++p == pe ) goto _test_eof100; case 100: if ( (*p) == 48 ) goto tr140; if ( (*p) < 65 ) { if ( 49 <= (*p) && (*p) <= 57 ) goto tr141; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto tr141; } else goto tr141; goto st0; tr140: #line 368 "unicorn_http.rl" { hp->len.chunk = step_incr(hp->len.chunk, (*p), 16); if (hp->len.chunk < 0) parser_raise(eHttpParserError, "invalid chunk size"); } goto st101; st101: if ( ++p == pe ) goto _test_eof101; case 101: #line 2847 "unicorn_http.c" switch( (*p) ) { case 13: goto st102; case 48: goto tr140; case 59: goto st111; } if ( (*p) < 65 ) { if ( 49 <= (*p) && (*p) <= 57 ) goto tr141; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto tr141; } else goto tr141; goto st0; st102: if ( ++p == pe ) goto _test_eof102; case 102: if ( (*p) == 10 ) goto tr144; goto st0; tr144: #line 397 "unicorn_http.rl" { HP_FL_SET(hp, INTRAILER); cs = http_parser_en_Trailers; ++p; assert(p <= pe && "buffer overflow after chunked body"); goto post_exec; } goto st123; st123: if ( ++p == pe ) goto _test_eof123; case 123: #line 2883 "unicorn_http.c" goto st0; tr141: #line 368 "unicorn_http.rl" { hp->len.chunk = step_incr(hp->len.chunk, (*p), 16); if (hp->len.chunk < 0) parser_raise(eHttpParserError, "invalid chunk size"); } goto st103; st103: if ( ++p == pe ) goto _test_eof103; case 103: #line 2897 "unicorn_http.c" switch( (*p) ) { case 13: goto st104; case 59: goto st108; } if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto tr141; } else if ( (*p) > 70 ) { if ( 97 <= (*p) && (*p) <= 102 ) goto tr141; } else goto tr141; goto st0; st104: if ( ++p == pe ) goto _test_eof104; case 104: if ( (*p) == 10 ) goto st105; goto st0; st105: if ( ++p == pe ) goto _test_eof105; case 105: goto tr148; tr148: #line 405 "unicorn_http.rl" { skip_chunk_data_hack: { size_t nr = MIN((size_t)hp->len.chunk, REMAINING); memcpy(RSTRING_PTR(hp->cont) + 
hp->s.dest_offset, p, nr); hp->s.dest_offset += nr; hp->len.chunk -= nr; p += nr; assert(hp->len.chunk >= 0 && "negative chunk length"); if ((size_t)hp->len.chunk > REMAINING) { HP_FL_SET(hp, INCHUNK); goto post_exec; } else { p--; {goto st106;} } }} goto st106; st106: if ( ++p == pe ) goto _test_eof106; case 106: #line 2946 "unicorn_http.c" if ( (*p) == 13 ) goto st107; goto st0; st107: if ( ++p == pe ) goto _test_eof107; case 107: if ( (*p) == 10 ) goto st100; goto st0; st108: if ( ++p == pe ) goto _test_eof108; case 108: switch( (*p) ) { case 13: goto st104; case 32: goto st108; case 33: goto st109; case 59: goto st108; case 61: goto st110; case 124: goto st109; case 126: goto st109; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st109; } else if ( (*p) >= 35 ) goto st109; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st109; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st109; } else goto st109; } else goto st109; goto st0; st109: if ( ++p == pe ) goto _test_eof109; case 109: switch( (*p) ) { case 13: goto st104; case 33: goto st109; case 59: goto st108; case 61: goto st110; case 124: goto st109; case 126: goto st109; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st109; } else if ( (*p) >= 35 ) goto st109; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st109; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st109; } else goto st109; } else goto st109; goto st0; st110: if ( ++p == pe ) goto _test_eof110; case 110: switch( (*p) ) { case 13: goto st104; case 33: goto st110; case 59: goto st108; case 124: goto st110; case 126: goto st110; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st110; } else if ( (*p) >= 35 ) goto st110; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st110; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) 
goto st110; } else goto st110; } else goto st110; goto st0; st111: if ( ++p == pe ) goto _test_eof111; case 111: switch( (*p) ) { case 13: goto st102; case 32: goto st111; case 33: goto st112; case 59: goto st111; case 61: goto st113; case 124: goto st112; case 126: goto st112; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st112; } else if ( (*p) >= 35 ) goto st112; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st112; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st112; } else goto st112; } else goto st112; goto st0; st112: if ( ++p == pe ) goto _test_eof112; case 112: switch( (*p) ) { case 13: goto st102; case 33: goto st112; case 59: goto st111; case 61: goto st113; case 124: goto st112; case 126: goto st112; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st112; } else if ( (*p) >= 35 ) goto st112; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st112; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st112; } else goto st112; } else goto st112; goto st0; st113: if ( ++p == pe ) goto _test_eof113; case 113: switch( (*p) ) { case 13: goto st102; case 33: goto st113; case 59: goto st111; case 124: goto st113; case 126: goto st113; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto st113; } else if ( (*p) >= 35 ) goto st113; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto st113; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto st113; } else goto st113; } else goto st113; goto st0; st114: if ( ++p == pe ) goto _test_eof114; case 114: switch( (*p) ) { case 9: goto st115; case 13: goto st118; case 32: goto st115; case 33: goto tr157; case 124: goto tr157; case 126: goto tr157; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto tr157; } else if ( (*p) >= 35 ) goto tr157; } else if ( (*p) > 46 ) { if ( (*p) < 65 
) { if ( 48 <= (*p) && (*p) <= 57 ) goto tr157; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto tr157; } else goto tr157; } else goto tr157; goto st0; tr159: #line 325 "unicorn_http.rl" { MARK(mark, p); } goto st115; st115: if ( ++p == pe ) goto _test_eof115; case 115: #line 3175 "unicorn_http.c" switch( (*p) ) { case 9: goto tr159; case 13: goto tr160; case 32: goto tr159; case 127: goto st0; } if ( 0 <= (*p) && (*p) <= 31 ) goto st0; goto tr158; tr158: #line 325 "unicorn_http.rl" { MARK(mark, p); } goto st116; st116: if ( ++p == pe ) goto _test_eof116; case 116: #line 3193 "unicorn_http.c" switch( (*p) ) { case 13: goto tr162; case 127: goto st0; } if ( (*p) > 8 ) { if ( 10 <= (*p) && (*p) <= 31 ) goto st0; } else if ( (*p) >= 0 ) goto st0; goto st116; tr160: #line 325 "unicorn_http.rl" { MARK(mark, p); } #line 327 "unicorn_http.rl" { write_cont_value(hp, buffer, p); } goto st117; tr162: #line 327 "unicorn_http.rl" { write_cont_value(hp, buffer, p); } goto st117; tr169: #line 325 "unicorn_http.rl" { MARK(mark, p); } #line 326 "unicorn_http.rl" { write_value(hp, buffer, p); } goto st117; tr171: #line 326 "unicorn_http.rl" { write_value(hp, buffer, p); } goto st117; st117: if ( ++p == pe ) goto _test_eof117; case 117: #line 3228 "unicorn_http.c" if ( (*p) == 10 ) goto st114; goto st0; st118: if ( ++p == pe ) goto _test_eof118; case 118: if ( (*p) == 10 ) goto tr164; goto st0; tr164: #line 392 "unicorn_http.rl" { cs = http_parser_first_final; goto post_exec; } goto st124; st124: if ( ++p == pe ) goto _test_eof124; case 124: #line 3250 "unicorn_http.c" goto st0; tr157: #line 321 "unicorn_http.rl" { MARK(start.field, p); } #line 322 "unicorn_http.rl" { snake_upcase_char(deconst(p)); } goto st119; tr165: #line 322 "unicorn_http.rl" { snake_upcase_char(deconst(p)); } goto st119; st119: if ( ++p == pe ) goto _test_eof119; case 119: #line 3266 "unicorn_http.c" switch( (*p) ) { case 33: goto tr165; case 58: goto tr166; case 124: goto tr165; case 126: goto 
tr165; } if ( (*p) < 45 ) { if ( (*p) > 39 ) { if ( 42 <= (*p) && (*p) <= 43 ) goto tr165; } else if ( (*p) >= 35 ) goto tr165; } else if ( (*p) > 46 ) { if ( (*p) < 65 ) { if ( 48 <= (*p) && (*p) <= 57 ) goto tr165; } else if ( (*p) > 90 ) { if ( 94 <= (*p) && (*p) <= 122 ) goto tr165; } else goto tr165; } else goto tr165; goto st0; tr168: #line 325 "unicorn_http.rl" { MARK(mark, p); } goto st120; tr166: #line 324 "unicorn_http.rl" { hp->s.field_len = LEN(start.field, p); } goto st120; st120: if ( ++p == pe ) goto _test_eof120; case 120: #line 3303 "unicorn_http.c" switch( (*p) ) { case 9: goto tr168; case 13: goto tr169; case 32: goto tr168; case 127: goto st0; } if ( 0 <= (*p) && (*p) <= 31 ) goto st0; goto tr167; tr167: #line 325 "unicorn_http.rl" { MARK(mark, p); } goto st121; st121: if ( ++p == pe ) goto _test_eof121; case 121: #line 3321 "unicorn_http.c" switch( (*p) ) { case 13: goto tr171; case 127: goto st0; } if ( (*p) > 8 ) { if ( 10 <= (*p) && (*p) <= 31 ) goto st0; } else if ( (*p) >= 0 ) goto st0; goto st121; } _test_eof2: cs = 2; goto _test_eof; _test_eof3: cs = 3; goto _test_eof; _test_eof4: cs = 4; goto _test_eof; _test_eof5: cs = 5; goto _test_eof; _test_eof6: cs = 6; goto _test_eof; _test_eof7: cs = 7; goto _test_eof; _test_eof8: cs = 8; goto _test_eof; _test_eof9: cs = 9; goto _test_eof; _test_eof10: cs = 10; goto _test_eof; _test_eof11: cs = 11; goto _test_eof; _test_eof12: cs = 12; goto _test_eof; _test_eof13: cs = 13; goto _test_eof; _test_eof14: cs = 14; goto _test_eof; _test_eof15: cs = 15; goto _test_eof; _test_eof16: cs = 16; goto _test_eof; _test_eof17: cs = 17; goto _test_eof; _test_eof18: cs = 18; goto _test_eof; _test_eof122: cs = 122; goto _test_eof; _test_eof19: cs = 19; goto _test_eof; _test_eof20: cs = 20; goto _test_eof; _test_eof21: cs = 21; goto _test_eof; _test_eof22: cs = 22; goto _test_eof; _test_eof23: cs = 23; goto _test_eof; _test_eof24: cs = 24; goto _test_eof; _test_eof25: cs = 25; goto _test_eof; _test_eof26: cs = 26; 
goto _test_eof; _test_eof27: cs = 27; goto _test_eof; _test_eof28: cs = 28; goto _test_eof; _test_eof29: cs = 29; goto _test_eof; _test_eof30: cs = 30; goto _test_eof; _test_eof31: cs = 31; goto _test_eof; _test_eof32: cs = 32; goto _test_eof; _test_eof33: cs = 33; goto _test_eof; _test_eof34: cs = 34; goto _test_eof; _test_eof35: cs = 35; goto _test_eof; _test_eof36: cs = 36; goto _test_eof; _test_eof37: cs = 37; goto _test_eof; _test_eof38: cs = 38; goto _test_eof; _test_eof39: cs = 39; goto _test_eof; _test_eof40: cs = 40; goto _test_eof; _test_eof41: cs = 41; goto _test_eof; _test_eof42: cs = 42; goto _test_eof; _test_eof43: cs = 43; goto _test_eof; _test_eof44: cs = 44; goto _test_eof; _test_eof45: cs = 45; goto _test_eof; _test_eof46: cs = 46; goto _test_eof; _test_eof47: cs = 47; goto _test_eof; _test_eof48: cs = 48; goto _test_eof; _test_eof49: cs = 49; goto _test_eof; _test_eof50: cs = 50; goto _test_eof; _test_eof51: cs = 51; goto _test_eof; _test_eof52: cs = 52; goto _test_eof; _test_eof53: cs = 53; goto _test_eof; _test_eof54: cs = 54; goto _test_eof; _test_eof55: cs = 55; goto _test_eof; _test_eof56: cs = 56; goto _test_eof; _test_eof57: cs = 57; goto _test_eof; _test_eof58: cs = 58; goto _test_eof; _test_eof59: cs = 59; goto _test_eof; _test_eof60: cs = 60; goto _test_eof; _test_eof61: cs = 61; goto _test_eof; _test_eof62: cs = 62; goto _test_eof; _test_eof63: cs = 63; goto _test_eof; _test_eof64: cs = 64; goto _test_eof; _test_eof65: cs = 65; goto _test_eof; _test_eof66: cs = 66; goto _test_eof; _test_eof67: cs = 67; goto _test_eof; _test_eof68: cs = 68; goto _test_eof; _test_eof69: cs = 69; goto _test_eof; _test_eof70: cs = 70; goto _test_eof; _test_eof71: cs = 71; goto _test_eof; _test_eof72: cs = 72; goto _test_eof; _test_eof73: cs = 73; goto _test_eof; _test_eof74: cs = 74; goto _test_eof; _test_eof75: cs = 75; goto _test_eof; _test_eof76: cs = 76; goto _test_eof; _test_eof77: cs = 77; goto _test_eof; _test_eof78: cs = 78; goto _test_eof; 
_test_eof79: cs = 79; goto _test_eof; _test_eof80: cs = 80; goto _test_eof; _test_eof81: cs = 81; goto _test_eof; _test_eof82: cs = 82; goto _test_eof; _test_eof83: cs = 83; goto _test_eof; _test_eof84: cs = 84; goto _test_eof; _test_eof85: cs = 85; goto _test_eof; _test_eof86: cs = 86; goto _test_eof; _test_eof87: cs = 87; goto _test_eof; _test_eof88: cs = 88; goto _test_eof; _test_eof89: cs = 89; goto _test_eof; _test_eof90: cs = 90; goto _test_eof; _test_eof91: cs = 91; goto _test_eof; _test_eof92: cs = 92; goto _test_eof; _test_eof93: cs = 93; goto _test_eof; _test_eof94: cs = 94; goto _test_eof; _test_eof95: cs = 95; goto _test_eof; _test_eof96: cs = 96; goto _test_eof; _test_eof97: cs = 97; goto _test_eof; _test_eof98: cs = 98; goto _test_eof; _test_eof99: cs = 99; goto _test_eof; _test_eof100: cs = 100; goto _test_eof; _test_eof101: cs = 101; goto _test_eof; _test_eof102: cs = 102; goto _test_eof; _test_eof123: cs = 123; goto _test_eof; _test_eof103: cs = 103; goto _test_eof; _test_eof104: cs = 104; goto _test_eof; _test_eof105: cs = 105; goto _test_eof; _test_eof106: cs = 106; goto _test_eof; _test_eof107: cs = 107; goto _test_eof; _test_eof108: cs = 108; goto _test_eof; _test_eof109: cs = 109; goto _test_eof; _test_eof110: cs = 110; goto _test_eof; _test_eof111: cs = 111; goto _test_eof; _test_eof112: cs = 112; goto _test_eof; _test_eof113: cs = 113; goto _test_eof; _test_eof114: cs = 114; goto _test_eof; _test_eof115: cs = 115; goto _test_eof; _test_eof116: cs = 116; goto _test_eof; _test_eof117: cs = 117; goto _test_eof; _test_eof118: cs = 118; goto _test_eof; _test_eof124: cs = 124; goto _test_eof; _test_eof119: cs = 119; goto _test_eof; _test_eof120: cs = 120; goto _test_eof; _test_eof121: cs = 121; goto _test_eof; _test_eof: {} _out: {} } #line 466 "unicorn_http.rl" post_exec: /* "_out:" also goes here */ if (hp->cs != http_parser_error) hp->cs = cs; hp->offset = p - buffer; assert(p <= pe && "buffer overflow after parsing execute"); assert(hp->offset 
<= len && "offset longer than length"); } static struct http_parser *data_get(VALUE self) { struct http_parser *hp; Data_Get_Struct(self, struct http_parser, hp); assert(hp && "failed to extract http_parser struct"); return hp; } /* * set rack.url_scheme to "https" or "http", no others are allowed by Rack * this resembles the Rack::Request#scheme method as of rack commit * 35bb5ba6746b5d346de9202c004cc926039650c7 */ static void set_url_scheme(VALUE env, VALUE *server_port) { VALUE scheme = rb_hash_aref(env, g_rack_url_scheme); if (NIL_P(scheme)) { if (trust_x_forward == Qfalse) { scheme = g_http; } else { scheme = rb_hash_aref(env, g_http_x_forwarded_ssl); if (!NIL_P(scheme) && STR_CSTR_EQ(scheme, "on")) { *server_port = g_port_443; scheme = g_https; } else { scheme = rb_hash_aref(env, g_http_x_forwarded_proto); if (NIL_P(scheme)) { scheme = g_http; } else { long len = RSTRING_LEN(scheme); if (len >= 5 && !memcmp(RSTRING_PTR(scheme), "https", 5)) { if (len != 5) scheme = g_https; *server_port = g_port_443; } else { scheme = g_http; } } } } rb_hash_aset(env, g_rack_url_scheme, scheme); } else if (STR_CSTR_EQ(scheme, "https")) { *server_port = g_port_443; } else { assert(*server_port == g_port_80 && "server_port not set"); } } /* * Parse and set the SERVER_NAME and SERVER_PORT variables * Not supporting X-Forwarded-Host/X-Forwarded-Port in here since * anybody who needs them is using an unsupported configuration and/or * incompetent. Rack::Request will handle X-Forwarded-{Port,Host} just * fine. */ static void set_server_vars(VALUE env, VALUE *server_port) { VALUE server_name = g_localhost; VALUE host = rb_hash_aref(env, g_http_host); if (!NIL_P(host)) { char *host_ptr = RSTRING_PTR(host); long host_len = RSTRING_LEN(host); char *colon; if (*host_ptr == '[') { /* ipv6 address format */ char *rbracket = memchr(host_ptr + 1, ']', host_len - 1); if (rbracket) colon = (rbracket[1] == ':') ? 
rbracket + 1 : NULL; else colon = memchr(host_ptr + 1, ':', host_len - 1); } else { colon = memchr(host_ptr, ':', host_len); } if (colon) { long port_start = colon - host_ptr + 1; server_name = rb_str_substr(host, 0, colon - host_ptr); if ((host_len - port_start) > 0) *server_port = rb_str_substr(host, port_start, host_len); } else { server_name = host; } } rb_hash_aset(env, g_server_name, server_name); rb_hash_aset(env, g_server_port, *server_port); } static void finalize_header(struct http_parser *hp) { VALUE server_port = g_port_80; set_url_scheme(hp->env, &server_port); set_server_vars(hp->env, &server_port); if (!HP_FL_TEST(hp, HASHEADER)) rb_hash_aset(hp->env, g_server_protocol, g_http_09); /* rack requires QUERY_STRING */ if (NIL_P(rb_hash_aref(hp->env, g_query_string))) rb_hash_aset(hp->env, g_query_string, rb_str_new(NULL, 0)); } static void hp_mark(void *ptr) { struct http_parser *hp = ptr; rb_gc_mark(hp->buf); rb_gc_mark(hp->env); rb_gc_mark(hp->cont); } static VALUE HttpParser_alloc(VALUE klass) { struct http_parser *hp; return Data_Make_Struct(klass, struct http_parser, hp_mark, -1, hp); } /** * call-seq: * parser.new => parser * * Creates a new parser. */ static VALUE HttpParser_init(VALUE self) { struct http_parser *hp = data_get(self); http_parser_init(hp); hp->buf = rb_str_new(NULL, 0); hp->env = rb_hash_new(); hp->nr_requests = keepalive_requests; return self; } /** * call-seq: * parser.clear => parser * * Resets the parser to its initial state so that you can reuse it * rather than making new ones. */ static VALUE HttpParser_clear(VALUE self) { struct http_parser *hp = data_get(self); http_parser_init(hp); rb_funcall(hp->env, id_clear, 0); rb_ivar_set(self, id_response_start_sent, Qfalse); return self; } /** * call-seq: * parser.dechunk!
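The Host-header splitting rules used by set_server_vars() above can be illustrated in isolation: strip an optional ":port" suffix, but for an IPv6 literal only treat a colon appearing after the closing ']' as the port separator. This is a hedged sketch operating on plain C strings rather than Ruby VALUEs; host_port_colon() is an illustrative name, not part of Unicorn:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * illustrative analog of the colon-finding logic in set_server_vars():
 * returns a pointer to the ':' separating host from port, or NULL if
 * no port is present.  For "[::1]:8080" only the colon after ']' counts.
 */
static const char *host_port_colon(const char *host, size_t len)
{
  if (len && *host == '[') { /* IPv6 literal: "[::1]" or "[::1]:8080" */
    const char *rbracket = memchr(host + 1, ']', len - 1);

    if (rbracket)
      return rbracket[1] == ':' ? rbracket + 1 : NULL;
    /* unterminated '[': fall back to the first ':', as the code above does */
    return memchr(host + 1, ':', len - 1);
  }
  return memchr(host, ':', len);
}
```

As in the VALUE-based code above, a missing colon means the host carries no explicit port and the default (derived from the URL scheme) applies.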
=> parser * * Resets the parser to a state suitable for dechunking response bodies * */ static VALUE HttpParser_dechunk_bang(VALUE self) { struct http_parser *hp = data_get(self); http_parser_init(hp); /* * we don't care about trailers in dechunk-only mode, * but if we did we'd set UH_FL_HASTRAILER and clear hp->env */ if (0) { rb_funcall(hp->env, id_clear, 0); hp->flags = UH_FL_HASTRAILER; } hp->flags |= UH_FL_HASBODY | UH_FL_INBODY | UH_FL_CHUNKED; hp->cs = http_parser_en_ChunkedBody; return self; } /** * call-seq: * parser.reset => nil * * Resets the parser to its initial state so that you can reuse it * rather than making new ones. * * This method is deprecated and to be removed in Unicorn 4.x */ static VALUE HttpParser_reset(VALUE self) { static int warned; if (!warned) { rb_warn("Unicorn::HttpParser#reset is deprecated; " "use Unicorn::HttpParser#clear instead"); warned = 1; } HttpParser_clear(self); return Qnil; } static void advance_str(VALUE str, off_t nr) { long len = RSTRING_LEN(str); if (len == 0) return; rb_str_modify(str); assert(nr <= len && "trying to advance past end of buffer"); len -= nr; if (len > 0) /* unlikely, len is usually 0 */ memmove(RSTRING_PTR(str), RSTRING_PTR(str) + nr, len); rb_str_set_len(str, len); } /** * call-seq: * parser.content_length => nil or Integer * * Returns the number of bytes left to run through HttpParser#filter_body. * This will initially be the value of the "Content-Length" HTTP header * after header parsing is complete and will decrease in value as * HttpParser#filter_body is called for each chunk. This should return * zero for requests with no body. * * This will return nil on "Transfer-Encoding: chunked" requests. */ static VALUE HttpParser_content_length(VALUE self) { struct http_parser *hp = data_get(self); return HP_FL_TEST(hp, CHUNKED) ?
Qnil : OFFT2NUM(hp->len.content); } /** * Document-method: parse * call-seq: * parser.parse => env or nil * * Takes a Hash and a String of data, parses the String of data filling * in the Hash returning the Hash if parsing is finished, nil otherwise * When returning the env Hash, it may modify data to point to where * body processing should begin. * * Raises HttpParserError if there are parsing errors. */ static VALUE HttpParser_parse(VALUE self) { struct http_parser *hp = data_get(self); VALUE data = hp->buf; if (HP_FL_TEST(hp, TO_CLEAR)) HttpParser_clear(self); http_parser_execute(hp, RSTRING_PTR(data), RSTRING_LEN(data)); if (hp->offset > MAX_HEADER_LEN) parser_raise(e413, "HTTP header is too large"); if (hp->cs == http_parser_first_final || hp->cs == http_parser_en_ChunkedBody) { advance_str(data, hp->offset + 1); hp->offset = 0; if (HP_FL_TEST(hp, INTRAILER)) HP_FL_SET(hp, REQEOF); return hp->env; } if (hp->cs == http_parser_error) parser_raise(eHttpParserError, "Invalid HTTP format, parsing fails."); return Qnil; } /** * Document-method: parse * call-seq: * parser.add_parse(buffer) => env or nil * * adds the contents of +buffer+ to the internal buffer and attempts to * continue parsing. Returns the +env+ Hash on success or nil if more * data is needed. * * Raises HttpParserError if there are parsing errors. 
*/ static VALUE HttpParser_add_parse(VALUE self, VALUE buffer) { struct http_parser *hp = data_get(self); Check_Type(buffer, T_STRING); rb_str_buf_append(hp->buf, buffer); return HttpParser_parse(self); } /** * Document-method: trailers * call-seq: * parser.trailers(req, data) => req or nil * * This is an alias for HttpParser#headers */ /** * Document-method: headers */ static VALUE HttpParser_headers(VALUE self, VALUE env, VALUE buf) { struct http_parser *hp = data_get(self); hp->env = env; hp->buf = buf; return HttpParser_parse(self); } static int chunked_eof(struct http_parser *hp) { return ((hp->cs == http_parser_first_final) || HP_FL_TEST(hp, INTRAILER)); } /** * call-seq: * parser.body_eof? => true or false * * Detects if we're done filtering the body or not. This can be used * to detect when to stop calling HttpParser#filter_body. */ static VALUE HttpParser_body_eof(VALUE self) { struct http_parser *hp = data_get(self); if (HP_FL_TEST(hp, CHUNKED)) return chunked_eof(hp) ? Qtrue : Qfalse; return hp->len.content == 0 ? Qtrue : Qfalse; } /** * call-seq: * parser.keepalive? => true or false * * This should be used to detect if a request can really handle * keepalives and pipelining. Currently, the rules are: * * 1. MUST be a GET or HEAD request * 2. MUST be HTTP/1.1 +or+ HTTP/1.0 with "Connection: keep-alive" * 3. MUST NOT have "Connection: close" set */ static VALUE HttpParser_keepalive(VALUE self) { struct http_parser *hp = data_get(self); return HP_FL_ALL(hp, KEEPALIVE) ? Qtrue : Qfalse; } /** * call-seq: * parser.next? => true or false * * Exactly like HttpParser#keepalive?, except it will reset the internal * parser state on next parse if it returns true. It will also respect * the maximum *keepalive_requests* value and return false if that is * reached. 
*/ static VALUE HttpParser_next(VALUE self) { struct http_parser *hp = data_get(self); if ((HP_FL_ALL(hp, KEEPALIVE)) && (hp->nr_requests-- != 0)) { HP_FL_SET(hp, TO_CLEAR); return Qtrue; } return Qfalse; } /** * call-seq: * parser.headers? => true or false * * This should be used to detect if a request has headers (and if * the response will have headers as well). HTTP/0.9 requests * should return false, all subsequent HTTP versions will return true */ static VALUE HttpParser_has_headers(VALUE self) { struct http_parser *hp = data_get(self); return HP_FL_TEST(hp, HASHEADER) ? Qtrue : Qfalse; } static VALUE HttpParser_buf(VALUE self) { return data_get(self)->buf; } static VALUE HttpParser_env(VALUE self) { return data_get(self)->env; } /** * call-seq: * parser.filter_body(dst, src) => nil/src * * Takes a String of +src+, will modify data if dechunking is done. * Returns +nil+ if there is more data left to process. Returns * +src+ if body processing is complete. When returning +src+, * it may modify +src+ so the start of the string points to where * the body ended so that trailer processing can begin. * * Raises HttpParserError if there are dechunking errors. * Basically this is a glorified memcpy(3) that copies +src+ * into +buf+ while filtering it through the dechunker. 
*/ static VALUE HttpParser_filter_body(VALUE self, VALUE dst, VALUE src) { struct http_parser *hp = data_get(self); char *srcptr; long srclen; srcptr = RSTRING_PTR(src); srclen = RSTRING_LEN(src); StringValue(dst); if (HP_FL_TEST(hp, CHUNKED)) { if (!chunked_eof(hp)) { rb_str_modify(dst); rb_str_resize(dst, srclen); /* we can never copy more than srclen bytes */ hp->s.dest_offset = 0; hp->cont = dst; hp->buf = src; http_parser_execute(hp, srcptr, srclen); if (hp->cs == http_parser_error) parser_raise(eHttpParserError, "Invalid HTTP format, parsing fails."); assert(hp->s.dest_offset <= hp->offset && "destination buffer overflow"); advance_str(src, hp->offset); rb_str_set_len(dst, hp->s.dest_offset); if (RSTRING_LEN(dst) == 0 && chunked_eof(hp)) { assert(hp->len.chunk == 0 && "chunk at EOF but more to parse"); } else { src = Qnil; } } } else { /* no need to enter the Ragel machine for unchunked transfers */ assert(hp->len.content >= 0 && "negative Content-Length"); if (hp->len.content > 0) { long nr = MIN(srclen, hp->len.content); rb_str_modify(dst); rb_str_resize(dst, nr); /* * using rb_str_replace() to avoid memcpy() doesn't help in * most cases because a GC-aware programmer will pass an explicit * buffer to env["rack.input"].read and reuse the buffer in a loop. * This causes copy-on-write behavior to be triggered anyways * when the +src+ buffer is modified (when reading off the socket). 
       */
      hp->buf = src;
      memcpy(RSTRING_PTR(dst), srcptr, nr);
      hp->len.content -= nr;
      if (hp->len.content == 0) {
        HP_FL_SET(hp, REQEOF);
        hp->cs = http_parser_first_final;
      }
      advance_str(src, nr);
      src = Qnil;
    }
  }
  hp->offset = 0; /* for trailer parsing */
  return src;
}

#define SET_GLOBAL(var,str) do { \
  var = find_common_field(str, sizeof(str) - 1); \
  assert(!NIL_P(var) && "missed global field"); \
} while (0)

void Init_unicorn_http(void)
{
  VALUE mUnicorn, cHttpParser;

  mUnicorn = rb_const_get(rb_cObject, rb_intern("Unicorn"));
  cHttpParser = rb_define_class_under(mUnicorn, "HttpParser", rb_cObject);
  eHttpParserError =
      rb_define_class_under(mUnicorn, "HttpParserError", rb_eIOError);
  e413 = rb_define_class_under(mUnicorn, "RequestEntityTooLargeError",
                               eHttpParserError);
  e414 = rb_define_class_under(mUnicorn, "RequestURITooLongError",
                               eHttpParserError);

  init_globals();
  rb_define_alloc_func(cHttpParser, HttpParser_alloc);
  rb_define_method(cHttpParser, "initialize", HttpParser_init, 0);
  rb_define_method(cHttpParser, "clear", HttpParser_clear, 0);
  rb_define_method(cHttpParser, "reset", HttpParser_reset, 0);
  rb_define_method(cHttpParser, "dechunk!", HttpParser_dechunk_bang, 0);
  rb_define_method(cHttpParser, "parse", HttpParser_parse, 0);
  rb_define_method(cHttpParser, "add_parse", HttpParser_add_parse, 1);
  rb_define_method(cHttpParser, "headers", HttpParser_headers, 2);
  rb_define_method(cHttpParser, "trailers", HttpParser_headers, 2);
  rb_define_method(cHttpParser, "filter_body", HttpParser_filter_body, 2);
  rb_define_method(cHttpParser, "content_length", HttpParser_content_length, 0);
  rb_define_method(cHttpParser, "body_eof?", HttpParser_body_eof, 0);
  rb_define_method(cHttpParser, "keepalive?", HttpParser_keepalive, 0);
  rb_define_method(cHttpParser, "headers?", HttpParser_has_headers, 0);
  rb_define_method(cHttpParser, "next?", HttpParser_next, 0);
  rb_define_method(cHttpParser, "buf", HttpParser_buf, 0);
  rb_define_method(cHttpParser, "env", HttpParser_env, 0);

  /*
   * The maximum size of a single chunk when using chunked transfer encoding.
   * This is only a theoretical maximum used to detect errors in clients;
   * it is highly unlikely to encounter clients that send more than
   * several kilobytes at once.
   */
  rb_define_const(cHttpParser, "CHUNK_MAX", OFFT2NUM(UH_OFF_T_MAX));

  /*
   * The maximum size of the body as specified by Content-Length.
   * This is only a theoretical maximum; the actual limit is subject
   * to the limits of the file system used for +Dir.tmpdir+.
   */
  rb_define_const(cHttpParser, "LENGTH_MAX", OFFT2NUM(UH_OFF_T_MAX));

  /* default value for keepalive_requests */
  rb_define_const(cHttpParser, "KEEPALIVE_REQUESTS_DEFAULT",
                  ULONG2NUM(keepalive_requests));

  rb_define_singleton_method(cHttpParser, "keepalive_requests", ka_req, 0);
  rb_define_singleton_method(cHttpParser, "keepalive_requests=", set_ka_req, 1);
  rb_define_singleton_method(cHttpParser, "trust_x_forwarded=", set_xftrust, 1);
  rb_define_singleton_method(cHttpParser, "trust_x_forwarded?", xftrust, 0);
  rb_define_singleton_method(cHttpParser, "max_header_len=", set_maxhdrlen, 1);

  init_common_fields();
  SET_GLOBAL(g_http_host, "HOST");
  SET_GLOBAL(g_http_trailer, "TRAILER");
  SET_GLOBAL(g_http_transfer_encoding, "TRANSFER_ENCODING");
  SET_GLOBAL(g_content_length, "CONTENT_LENGTH");
  SET_GLOBAL(g_http_connection, "CONNECTION");
  id_clear = rb_intern("clear");
  id_set_backtrace = rb_intern("set_backtrace");
  id_response_start_sent = rb_intern("@response_start_sent");
  init_unicorn_httpdate();
}
#undef SET_GLOBAL

unicorn-4.7.0/ext/unicorn_http/CFLAGS

# CFLAGS used for development (gcc-dependent)
# source this file if you want/need them
CFLAGS=
CFLAGS="$CFLAGS -Wall"
CFLAGS="$CFLAGS -Wwrite-strings"
CFLAGS="$CFLAGS -Wdeclaration-after-statement"
CFLAGS="$CFLAGS -Wcast-qual"
CFLAGS="$CFLAGS -Wstrict-prototypes"
CFLAGS="$CFLAGS -Wshadow"
CFLAGS="$CFLAGS -Wextra"
CFLAGS="$CFLAGS -Wno-deprecated-declarations"
CFLAGS="$CFLAGS -Waggregate-return"
CFLAGS="$CFLAGS -Wchar-subscripts"

unicorn-4.7.0/ext/unicorn_http/c_util.h

/*
 * Generic C functions and macros go here, there are no dependencies
 * on Unicorn internal structures or the Ruby C API in here.
 */

#ifndef UH_util_h
#define UH_util_h

/* the <...> header names were stripped by extraction; restored here */
#include <unistd.h>
#include <assert.h>

#define MIN(a,b) (a < b ? a : b)
#define ARRAY_SIZE(x) (sizeof(x)/sizeof(x[0]))

#ifndef SIZEOF_OFF_T
# define SIZEOF_OFF_T 4
# warning SIZEOF_OFF_T not defined, guessing 4.  Did you run extconf.rb?
#endif

#if SIZEOF_OFF_T == 4
# define UH_OFF_T_MAX 0x7fffffff
#elif SIZEOF_OFF_T == 8
# if SIZEOF_LONG == 4
#  define UH_OFF_T_MAX 0x7fffffffffffffffLL
# else
#  define UH_OFF_T_MAX 0x7fffffffffffffff
# endif
#else
# error off_t size unknown for this platform!
#endif /* SIZEOF_OFF_T check */

/*
 * ragel enforces fpc as a const, and merely casting can make picky
 * compilers unhappy, so we have this little helper do our dirty work
 */
static inline void *deconst(const void *in)
{
  union { const void *in; void *out; } tmp;

  tmp.in = in;
  return tmp.out;
}

/*
 * capitalizes all lower-case ASCII characters and converts dashes
 * to underscores for HTTP headers.  Locale-agnostic.
 */
static void snake_upcase_char(char *c)
{
  if (*c >= 'a' && *c <= 'z')
    *c &= ~0x20;
  else if (*c == '-')
    *c = '_';
}

/* Downcases a single ASCII character.  Locale-agnostic. */
static void downcase_char(char *c)
{
  if (*c >= 'A' && *c <= 'Z')
    *c |= 0x20;
}

static int hexchar2int(int xdigit)
{
  if (xdigit >= 'A' && xdigit <= 'F')
    return xdigit - 'A' + 10;
  if (xdigit >= 'a' && xdigit <= 'f')
    return xdigit - 'a' + 10;

  /* Ragel already does runtime range checking for us in Unicorn: */
  assert(xdigit >= '0' && xdigit <= '9' && "invalid digit character");
  return xdigit - '0';
}

/*
 * multiplies +i+ by +base+ and increments the result by the parsed
 * integer value of +xdigit+.  +xdigit+ is a character byte
 * representing a number in the range of 0..(base-1)
 * returns the new value of +i+ on success
 * returns -1 on errors (including overflow)
 */
static off_t step_incr(off_t i, int xdigit, const int base)
{
  static const off_t max = UH_OFF_T_MAX;
  const off_t next_max = (max - (max % base)) / base;
  off_t offset = hexchar2int(xdigit);

  if (offset > (base - 1)) return -1;
  if (i > next_max) return -1;
  i *= base;
  if ((offset > (base - 1)) || ((max - i) < offset)) return -1;

  return i + offset;
}

/*
 * parses a non-negative length according to base-10 and
 * returns it as an off_t value.  Returns -1 on errors
 * (including overflow).
 */
static off_t parse_length(const char *value, size_t length)
{
  off_t rv;

  for (rv = 0; length-- && rv >= 0; ++value) {
    if (*value >= '0' && *value <= '9')
      rv = step_incr(rv, *value, 10);
    else
      return -1;
  }

  return rv;
}

#define CONST_MEM_EQ(const_p, buf, len) \
  ((sizeof(const_p) - 1) == len && !memcmp(const_p, buf, sizeof(const_p) - 1))

#endif /* UH_util_h */

unicorn-4.7.0/ext/unicorn_http/ext_help.h

#ifndef ext_help_h
#define ext_help_h

#ifndef RSTRING_PTR
#define RSTRING_PTR(s) (RSTRING(s)->ptr)
#endif /* !defined(RSTRING_PTR) */

#ifndef RSTRING_LEN
#define RSTRING_LEN(s) (RSTRING(s)->len)
#endif /* !defined(RSTRING_LEN) */

#ifndef HAVE_RB_STR_SET_LEN
# ifdef RUBINIUS
#  error we should never get here with current Rubinius (1.x)
# endif
/* this is taken from Ruby 1.8.7, 1.8.6 may not have it */
static void rb_18_str_set_len(VALUE str, long len)
{
  RSTRING(str)->len = len;
  RSTRING(str)->ptr[len] = '\0';
}
# define rb_str_set_len(str,len) rb_18_str_set_len(str,len)
#endif /* !defined(HAVE_RB_STR_SET_LEN) */

/* not all Ruby implementations support frozen objects (Rubinius does not) */
#if defined(OBJ_FROZEN)
# define assert_frozen(f) assert(OBJ_FROZEN(f) && "unfrozen object")
#else
# define assert_frozen(f) do {} while (0)
#endif /* !defined(OBJ_FROZEN) */

#if !defined(OFFT2NUM)
# if SIZEOF_OFF_T == SIZEOF_LONG
#  define OFFT2NUM(n) LONG2NUM(n)
# else
#  define OFFT2NUM(n) LL2NUM(n)
# endif
#endif /* ! defined(OFFT2NUM) */

#if !defined(SIZET2NUM)
# if SIZEOF_SIZE_T == SIZEOF_LONG
#  define SIZET2NUM(n) ULONG2NUM(n)
# else
#  define SIZET2NUM(n) ULL2NUM(n)
# endif
#endif /* ! defined(SIZET2NUM) */

#if !defined(NUM2SIZET)
# if SIZEOF_SIZE_T == SIZEOF_LONG
#  define NUM2SIZET(n) ((size_t)NUM2ULONG(n))
# else
#  define NUM2SIZET(n) ((size_t)NUM2ULL(n))
# endif
#endif /* ! defined(NUM2SIZET) */

static inline int str_cstr_eq(VALUE val, const char *ptr, long len)
{
  return (RSTRING_LEN(val) == len && !memcmp(ptr, RSTRING_PTR(val), len));
}

#define STR_CSTR_EQ(val, const_str) \
  str_cstr_eq(val, const_str, sizeof(const_str) - 1)

/* strcasecmp isn't locale independent */
static int str_cstr_case_eq(VALUE val, const char *ptr, long len)
{
  if (RSTRING_LEN(val) == len) {
    const char *v = RSTRING_PTR(val);

    for (; len--; ++ptr, ++v) {
      if ((*ptr == *v) || (*v >= 'A' && *v <= 'Z' && (*v | 0x20) == *ptr))
        continue;
      return 0;
    }
    return 1;
  }
  return 0;
}

#define STR_CSTR_CASE_EQ(val, const_str) \
  str_cstr_case_eq(val, const_str, sizeof(const_str) - 1)

#endif /* ext_help_h */

unicorn-4.7.0/CONTRIBUTORS

Unicorn developers (let us know if we forgot you):

* Eric Wong (BDFL, BOFH)
* Suraj N. Kurapati
* Andrey Stikheev
* Wayne Larsen
* Iñaki Baz Castillo
* Augusto Becciu
* Hongli Lai
* ... (help wanted)

We would like to thank the following folks for helping make Unicorn possible:

* Ezra Zygmuntowicz - for helping Eric decide on a sane configuration
  format and reasonable defaults.

* Christian Neukirchen - for Rack, which let us put more focus on the
  server and drastically cut down on the amount of code we have to
  maintain.

* Zed A. Shaw - for Mongrel, without which Unicorn would not be possible

The original Mongrel contributors:

* Luis Lavena
* Wilson Bilkovich
* why the lucky stiff
* Dan Kubb
* MenTaLguY
* Filipe Lautert
* Rick Olson
* Wayne E. Seguin
* Kirk Haines
* Bradley Taylor
* Matt Pelletier
* Ry Dahl
* Nick Sieger
* Evan Weaver
* Marc-André Cournoyer

unicorn-4.7.0/test/aggregate.rb

#!/usr/bin/ruby -n
# -*- encoding: binary -*-
BEGIN { $tests = $assertions = $failures = $errors = 0 }

$_ =~ /(\d+) tests, (\d+) assertions, (\d+) failures, (\d+) errors/ or next

$tests += $1.to_i
$assertions += $2.to_i
$failures += $3.to_i
$errors += $4.to_i

END {
  printf("\n%d tests, %d assertions, %d failures, %d errors\n",
         $tests, $assertions, $failures, $errors)
}

unicorn-4.7.0/test/test_helper.rb

# -*- encoding: binary -*-

# Copyright (c) 2005 Zed A. Shaw
# You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
# the GPLv2+ (GPLv3+ preferred)
#
# Additional work donated by contributors.
# See http://mongrel.rubyforge.org/attributions.html for more information.
STDIN.sync = STDOUT.sync = STDERR.sync = true # buffering makes debugging hard

# FIXME: move curl-dependent tests into t/
ENV['NO_PROXY'] ||= ENV['UNICORN_TEST_ADDR'] || '127.0.0.1'

# Some tests watch a log file or a pid file to spring up to check state
# Can't rely on inotify on non-Linux and logging to a pipe makes things
# more complicated
DEFAULT_TRIES = 1000
DEFAULT_RES = 0.2

require 'test/unit'
require 'net/http'
require 'digest/sha1'
require 'uri'
require 'stringio'
require 'pathname'
require 'tempfile'
require 'fileutils'
require 'logger'
require 'unicorn'

if ENV['DEBUG']
  require 'ruby-debug'
  Debugger.start
end

def redirect_test_io
  orig_err = STDERR.dup
  orig_out = STDOUT.dup
  STDERR.reopen("test_stderr.#{$$}.log", "a")
  STDOUT.reopen("test_stdout.#{$$}.log", "a")
  STDERR.sync = STDOUT.sync = true

  at_exit do
    File.unlink("test_stderr.#{$$}.log") rescue nil
    File.unlink("test_stdout.#{$$}.log") rescue nil
  end

  begin
    yield
  ensure
    STDERR.reopen(orig_err)
    STDOUT.reopen(orig_out)
  end
end

# which(1) exit codes cannot be trusted on some systems
# We use UNIX shell utilities in some tests because we don't trust
# ourselves to write Ruby 100% correctly :)
def which(bin)
  ex = ENV['PATH'].split(/:/).detect do |x|
    x << "/#{bin}"
    File.executable?(x)
  end or warn "`#{bin}' not found in PATH=#{ENV['PATH']}"
  ex
end

# Either takes a string to do a get request against, or a tuple of
# [URI, HTTP] where HTTP is some kind of Net::HTTP request object
# (POST, HEAD, etc.)
def hit(uris)
  results = []
  uris.each do |u|
    res = nil

    if u.kind_of? String
      u = 'http://127.0.0.1:8080/' if u == 'http://0.0.0.0:8080/'
      res = Net::HTTP.get(URI.parse(u))
    else
      url = URI.parse(u[0])
      res = Net::HTTP.new(url.host, url.port).start { |h| h.request(u[1]) }
    end

    assert res != nil, "Didn't get a response: #{u}"

    results << res
  end

  return results
end

# unused_port provides an unused port on +addr+ usable for TCP that is
# guaranteed to be unused across all unicorn builds on that system.  It
# prevents race conditions by using a lock file other unicorn builds
# will see.  This is required if you perform several builds in parallel
# with a continuous integration system or run tests in parallel via
# gmake.  This is NOT guaranteed to be race-free if you run other
# processes that bind to random ports for testing (but the window
# for a race condition is very small).  You may also set
# UNICORN_TEST_ADDR to override the default test address (127.0.0.1).
def unused_port(addr = '127.0.0.1')
  retries = 100
  base = 5000
  port = sock = nil

  begin
    begin
      port = base + rand(32768 - base)
      while port == Unicorn::Const::DEFAULT_PORT
        port = base + rand(32768 - base)
      end

      sock = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)
      sock.bind(Socket.pack_sockaddr_in(port, addr))
      sock.listen(5)
    rescue Errno::EADDRINUSE, Errno::EACCES
      sock.close rescue nil
      retry if (retries -= 1) >= 0
    end

    # since we'll end up closing the random port we just got, there's a race
    # condition that could allow the random port we just chose to reselect
    # itself when running tests in parallel with gmake.  Create a lock file
    # while we have the port here to ensure that does not happen.
    lock_path = "#{Dir::tmpdir}/unicorn_test.#{addr}:#{port}.lock"
    File.open(lock_path, File::WRONLY|File::CREAT|File::EXCL, 0600).close
    at_exit { File.unlink(lock_path) rescue nil }
  rescue Errno::EEXIST
    sock.close rescue nil
    retry
  end
  sock.close rescue nil
  port
end

def try_require(lib)
  begin
    require lib
    true
  rescue LoadError
    false
  end
end

# sometimes the server may not come up right away
def retry_hit(uris = [])
  tries = DEFAULT_TRIES
  begin
    hit(uris)
  rescue Errno::EINVAL, Errno::ECONNREFUSED => err
    if (tries -= 1) > 0
      sleep DEFAULT_RES
      retry
    end
    raise err
  end
end

def assert_shutdown(pid)
  wait_master_ready("test_stderr.#{pid}.log")
  Process.kill(:QUIT, pid)
  pid, status = Process.waitpid2(pid)
  assert status.success?, "exited successfully"
end

def wait_workers_ready(path, nr_workers)
  tries = DEFAULT_TRIES
  lines = []
  while (tries -= 1) > 0
    begin
      lines = File.readlines(path).grep(/worker=\d+ ready/)
      lines.size == nr_workers and return
    rescue Errno::ENOENT
    end
    sleep DEFAULT_RES
  end
  raise "#{nr_workers} workers never became ready:" \
        "\n\t#{lines.join("\n\t")}\n"
end

def wait_master_ready(master_log)
  tries = DEFAULT_TRIES
  while (tries -= 1) > 0
    begin
      File.readlines(master_log).grep(/master process ready/)[0] and return
    rescue Errno::ENOENT
    end
    sleep DEFAULT_RES
  end
  raise "master process never became ready"
end

def reexec_usr2_quit_test(pid, pid_file)
  assert File.exist?(pid_file), "pid file OK"
  assert ! File.exist?("#{pid_file}.oldbin"), "oldbin pid file"
  Process.kill(:USR2, pid)
  retry_hit(["http://#{@addr}:#{@port}/"])
  wait_for_file("#{pid_file}.oldbin")
  wait_for_file(pid_file)

  old_pid = File.read("#{pid_file}.oldbin").to_i
  new_pid = File.read(pid_file).to_i

  # kill old master process
  assert_not_equal pid, new_pid
  assert_equal pid, old_pid
  Process.kill(:QUIT, old_pid)
  retry_hit(["http://#{@addr}:#{@port}/"])
  wait_for_death(old_pid)

  assert_equal new_pid, File.read(pid_file).to_i
  retry_hit(["http://#{@addr}:#{@port}/"])
  Process.kill(:QUIT, new_pid)
end

def reexec_basic_test(pid, pid_file)
  results = retry_hit(["http://#{@addr}:#{@port}/"])
  assert_equal String, results[0].class
  Process.kill(0, pid)
  master_log = "#{@tmpdir}/test_stderr.#{pid}.log"
  wait_master_ready(master_log)
  File.truncate(master_log, 0)
  nr = 50
  kill_point = 2
  nr.times do |i|
    hit(["http://#{@addr}:#{@port}/#{i}"])
    i == kill_point and Process.kill(:HUP, pid)
  end
  wait_master_ready(master_log)
  assert File.exist?(pid_file), "pid=#{pid_file} exists"
  new_pid = File.read(pid_file).to_i
  assert_not_equal pid, new_pid
  Process.kill(0, new_pid)
  Process.kill(:QUIT, new_pid)
end

def wait_for_file(path)
  tries = DEFAULT_TRIES
  while (tries -= 1) > 0 && ! File.exist?(path)
    sleep DEFAULT_RES
  end
  assert File.exist?(path), "path=#{path} exists #{caller.inspect}"
end

def xfork(&block)
  fork do
    ObjectSpace.each_object(Tempfile) do |tmp|
      ObjectSpace.undefine_finalizer(tmp)
    end
    yield
  end
end

# can't waitpid on detached processes
def wait_for_death(pid)
  tries = DEFAULT_TRIES
  while (tries -= 1) > 0
    begin
      Process.kill(0, pid)
      begin
        Process.waitpid(pid, Process::WNOHANG)
      rescue Errno::ECHILD
      end
      sleep(DEFAULT_RES)
    rescue Errno::ESRCH
      return
    end
  end
  raise "PID:#{pid} never died!"
end

# executes +cmd+ and chunks its STDOUT
def chunked_spawn(stdout, *cmd)
  fork {
    crd, cwr = IO.pipe
    crd.binmode
    cwr.binmode
    crd.sync = cwr.sync = true

    pid = fork {
      STDOUT.reopen(cwr)
      crd.close
      cwr.close
      exec(*cmd)
    }
    cwr.close
    begin
      buf = crd.readpartial(16384)
      stdout.write("#{'%x' % buf.size}\r\n#{buf}")
    rescue EOFError
      stdout.write("0\r\n")
      # Process.waitpid only returns the pid; waitpid2 is needed
      # to get the status object used below
      pid, status = Process.waitpid2(pid)
      exit status.exitstatus
    end while true
  }
end

def reset_sig_handlers
  sigs = %w(CHLD).concat(Unicorn::HttpServer::QUEUE_SIGS)
  sigs.each { |sig| trap(sig, "DEFAULT") }
end

unicorn-4.7.0/test/unit/test_http_parser.rb

# -*- encoding: binary -*-

# Copyright (c) 2005 Zed A. Shaw
# You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
# the GPLv2+ (GPLv3+ preferred)
#
# Additional work donated by contributors.
# See http://mongrel.rubyforge.org/attributions.html for more information.

require 'test/test_helper'

include Unicorn

class HttpParserTest < Test::Unit::TestCase

  def test_parse_simple
    parser = HttpParser.new
    req = parser.env
    http = parser.buf
    http << "GET / HTTP/1.1\r\n\r\n"
    assert_equal req, parser.parse
    assert_equal '', http

    assert_equal 'HTTP/1.1', req['SERVER_PROTOCOL']
    assert_equal '/', req['REQUEST_PATH']
    assert_equal 'HTTP/1.1', req['HTTP_VERSION']
    assert_equal '/', req['REQUEST_URI']
    assert_equal 'GET', req['REQUEST_METHOD']
    assert_nil req['FRAGMENT']
    assert_equal '', req['QUERY_STRING']

    assert parser.keepalive?
    parser.clear
    req.clear

    http << "G"
    assert_nil parser.parse
    assert_equal "G", http
    assert req.empty?
# try parsing again to ensure we were reset correctly http << "ET /hello-world HTTP/1.1\r\n\r\n" assert parser.parse assert_equal 'HTTP/1.1', req['SERVER_PROTOCOL'] assert_equal '/hello-world', req['REQUEST_PATH'] assert_equal 'HTTP/1.1', req['HTTP_VERSION'] assert_equal '/hello-world', req['REQUEST_URI'] assert_equal 'GET', req['REQUEST_METHOD'] assert_nil req['FRAGMENT'] assert_equal '', req['QUERY_STRING'] assert_equal '', http assert parser.keepalive? end def test_tab_lws parser = HttpParser.new req = parser.env parser.buf << "GET / HTTP/1.1\r\nHost:\tfoo.bar\r\n\r\n" assert_equal req.object_id, parser.parse.object_id assert_equal "foo.bar", req['HTTP_HOST'] end def test_connection_close_no_ka parser = HttpParser.new req = parser.env parser.buf << "GET / HTTP/1.1\r\nConnection: close\r\n\r\n" assert_equal req.object_id, parser.parse.object_id assert_equal "GET", req['REQUEST_METHOD'] assert ! parser.keepalive? end def test_connection_keep_alive_ka parser = HttpParser.new req = parser.env parser.buf << "HEAD / HTTP/1.1\r\nConnection: keep-alive\r\n\r\n" assert_equal req.object_id, parser.parse.object_id assert parser.keepalive? end def test_connection_keep_alive_no_body parser = HttpParser.new req = parser.env parser.buf << "POST / HTTP/1.1\r\nConnection: keep-alive\r\n\r\n" assert_equal req.object_id, parser.parse.object_id assert parser.keepalive? end def test_connection_keep_alive_no_body_empty parser = HttpParser.new req = parser.env parser.buf << "POST / HTTP/1.1\r\n" \ "Content-Length: 0\r\n" \ "Connection: keep-alive\r\n\r\n" assert_equal req.object_id, parser.parse.object_id assert parser.keepalive? end def test_connection_keep_alive_ka_bad_version parser = HttpParser.new req = parser.env parser.buf << "GET / HTTP/1.0\r\nConnection: keep-alive\r\n\r\n" assert_equal req.object_id, parser.parse.object_id assert parser.keepalive? 
end def test_parse_server_host_default_port parser = HttpParser.new req = parser.env parser.buf << "GET / HTTP/1.1\r\nHost: foo\r\n\r\n" assert_equal req, parser.parse assert_equal 'foo', req['SERVER_NAME'] assert_equal '80', req['SERVER_PORT'] assert_equal '', parser.buf assert parser.keepalive? end def test_parse_server_host_alt_port parser = HttpParser.new req = parser.env parser.buf << "GET / HTTP/1.1\r\nHost: foo:999\r\n\r\n" assert_equal req, parser.parse assert_equal 'foo', req['SERVER_NAME'] assert_equal '999', req['SERVER_PORT'] assert_equal '', parser.buf assert parser.keepalive? end def test_parse_server_host_empty_port parser = HttpParser.new req = parser.env parser.buf << "GET / HTTP/1.1\r\nHost: foo:\r\n\r\n" assert_equal req, parser.parse assert_equal 'foo', req['SERVER_NAME'] assert_equal '80', req['SERVER_PORT'] assert_equal '', parser.buf assert parser.keepalive? end def test_parse_server_host_xfp_https parser = HttpParser.new req = parser.env parser.buf << "GET / HTTP/1.1\r\nHost: foo:\r\n" \ "X-Forwarded-Proto: https\r\n\r\n" assert_equal req, parser.parse assert_equal 'foo', req['SERVER_NAME'] assert_equal '443', req['SERVER_PORT'] assert_equal '', parser.buf assert parser.keepalive? 
end def test_parse_xfp_https_chained parser = HttpParser.new req = parser.env parser.buf << "GET / HTTP/1.0\r\n" \ "X-Forwarded-Proto: https,http\r\n\r\n" assert_equal req, parser.parse assert_equal '443', req['SERVER_PORT'], req.inspect assert_equal 'https', req['rack.url_scheme'], req.inspect assert_equal '', parser.buf end def test_parse_xfp_https_chained_backwards parser = HttpParser.new req = parser.env parser.buf << "GET / HTTP/1.0\r\n" \ "X-Forwarded-Proto: http,https\r\n\r\n" assert_equal req, parser.parse assert_equal '80', req['SERVER_PORT'], req.inspect assert_equal 'http', req['rack.url_scheme'], req.inspect assert_equal '', parser.buf end def test_parse_xfp_gopher_is_ignored parser = HttpParser.new req = parser.env parser.buf << "GET / HTTP/1.0\r\n" \ "X-Forwarded-Proto: gopher\r\n\r\n" assert_equal req, parser.parse assert_equal '80', req['SERVER_PORT'], req.inspect assert_equal 'http', req['rack.url_scheme'], req.inspect assert_equal '', parser.buf end def test_parse_x_forwarded_ssl_on parser = HttpParser.new req = parser.env parser.buf << "GET / HTTP/1.0\r\n" \ "X-Forwarded-Ssl: on\r\n\r\n" assert_equal req, parser.parse assert_equal '443', req['SERVER_PORT'], req.inspect assert_equal 'https', req['rack.url_scheme'], req.inspect assert_equal '', parser.buf end def test_parse_x_forwarded_ssl_off parser = HttpParser.new req = parser.env parser.buf << "GET / HTTP/1.0\r\nX-Forwarded-Ssl: off\r\n\r\n" assert_equal req, parser.parse assert_equal '80', req['SERVER_PORT'], req.inspect assert_equal 'http', req['rack.url_scheme'], req.inspect assert_equal '', parser.buf end def test_parse_strange_headers parser = HttpParser.new req = parser.env should_be_good = "GET / HTTP/1.1\r\naaaaaaaaaaaaa:++++++++++\r\n\r\n" parser.buf << should_be_good assert_equal req, parser.parse assert_equal '', parser.buf assert parser.keepalive? end # legacy test case from Mongrel that we never supported before... 
# I still consider Pound irrelevant, unfortunately stupid clients that # send extremely big headers do exist and they've managed to find Unicorn... def test_nasty_pound_header parser = HttpParser.new nasty_pound_header = "GET / HTTP/1.1\r\nX-SSL-Bullshit: -----BEGIN CERTIFICATE-----\r\n\tMIIFbTCCBFWgAwIBAgICH4cwDQYJKoZIhvcNAQEFBQAwcDELMAkGA1UEBhMCVUsx\r\n\tETAPBgNVBAoTCGVTY2llbmNlMRIwEAYDVQQLEwlBdXRob3JpdHkxCzAJBgNVBAMT\r\n\tAkNBMS0wKwYJKoZIhvcNAQkBFh5jYS1vcGVyYXRvckBncmlkLXN1cHBvcnQuYWMu\r\n\tdWswHhcNMDYwNzI3MTQxMzI4WhcNMDcwNzI3MTQxMzI4WjBbMQswCQYDVQQGEwJV\r\n\tSzERMA8GA1UEChMIZVNjaWVuY2UxEzARBgNVBAsTCk1hbmNoZXN0ZXIxCzAJBgNV\r\n\tBAcTmrsogriqMWLAk1DMRcwFQYDVQQDEw5taWNoYWVsIHBhcmQYJKoZIhvcNAQEB\r\n\tBQADggEPADCCAQoCggEBANPEQBgl1IaKdSS1TbhF3hEXSl72G9J+WC/1R64fAcEF\r\n\tW51rEyFYiIeZGx/BVzwXbeBoNUK41OK65sxGuflMo5gLflbwJtHBRIEKAfVVp3YR\r\n\tgW7cMA/s/XKgL1GEC7rQw8lIZT8RApukCGqOVHSi/F1SiFlPDxuDfmdiNzL31+sL\r\n\t0iwHDdNkGjy5pyBSB8Y79dsSJtCW/iaLB0/n8Sj7HgvvZJ7x0fr+RQjYOUUfrePP\r\n\tu2MSpFyf+9BbC/aXgaZuiCvSR+8Snv3xApQY+fULK/xY8h8Ua51iXoQ5jrgu2SqR\r\n\twgA7BUi3G8LFzMBl8FRCDYGUDy7M6QaHXx1ZWIPWNKsCAwEAAaOCAiQwggIgMAwG\r\n\tA1UdEwEB/wQCMAAwEQYJYIZIAYb4QgEBBAQDAgWgMA4GA1UdDwEB/wQEAwID6DAs\r\n\tBglghkgBhvhCAQ0EHxYdVUsgZS1TY2llbmNlIFVzZXIgQ2VydGlmaWNhdGUwHQYD\r\n\tVR0OBBYEFDTt/sf9PeMaZDHkUIldrDYMNTBZMIGaBgNVHSMEgZIwgY+AFAI4qxGj\r\n\tloCLDdMVKwiljjDastqooXSkcjBwMQswCQYDVQQGEwJVSzERMA8GA1UEChMIZVNj\r\n\taWVuY2UxEjAQBgNVBAsTCUF1dGhvcml0eTELMAkGA1UEAxMCQ0ExLTArBgkqhkiG\r\n\t9w0BCQEWHmNhLW9wZXJhdG9yQGdyaWQtc3VwcG9ydC5hYy51a4IBADApBgNVHRIE\r\n\tIjAggR5jYS1vcGVyYXRvckBncmlkLXN1cHBvcnQuYWMudWswGQYDVR0gBBIwEDAO\r\n\tBgwrBgEEAdkvAQEBAQYwPQYJYIZIAYb4QgEEBDAWLmh0dHA6Ly9jYS5ncmlkLXN1\r\n\tcHBvcnQuYWMudmT4sopwqlBWsvcHViL2NybC9jYWNybC5jcmwwPQYJYIZIAYb4QgEDBDAWLmh0\r\n\tdHA6Ly9jYS5ncmlkLXN1cHBvcnQuYWMudWsvcHViL2NybC9jYWNybC5jcmwwPwYD\r\n\tVR0fBDgwNjA0oDKgMIYuaHR0cDovL2NhLmdyaWQt5hYy51ay9wdWIv\r\n\tY3JsL2NhY3JsLmNybDANBgkqhkiG9w0BAQUFAAOCAQEAS/U4iiooBENGW/Hwmmd3\r\n\tXCy6Zrt08YjKCzGNjorT98g8uGsqYjS
xv/hmi0qlnlHs+k/3Iobc3LjS5AMYr5L8\r\n\tUO7OSkgFFlLHQyC9JzPfmLCAugvzEbyv4Olnsr8hbxF1MbKZoQxUZtMVu29wjfXk\r\n\thTeApBv7eaKCWpSp7MCbvgzm74izKhu3vlDk9w6qVrxePfGgpKPqfHiOoGhFnbTK\r\n\twTC6o2xq5y0qZ03JonF7OJspEd3I5zKY3E+ov7/ZhW6DqT8UFvsAdjvQbXyhV8Eu\r\n\tYhixw1aKEPzNjNowuIseVogKOLXxWI5vAi5HgXdS0/ES5gDGsABo4fqovUKlgop3\r\n\tRA==\r\n\t-----END CERTIFICATE-----\r\n\r\n" req = parser.env parser.buf << nasty_pound_header.dup assert nasty_pound_header =~ /(-----BEGIN .*--END CERTIFICATE-----)/m expect = $1.dup expect.gsub!(/\r\n\t/, ' ') assert_equal req, parser.parse assert_equal '', parser.buf assert_equal expect, req['HTTP_X_SSL_BULLSHIT'] end def test_continuation_eats_leading_spaces parser = HttpParser.new header = "GET / HTTP/1.1\r\n" \ "X-ASDF: \r\n" \ "\t\r\n" \ " \r\n" \ " ASDF\r\n\r\n" parser.buf << header req = parser.env assert_equal req, parser.parse assert_equal '', parser.buf assert_equal 'ASDF', req['HTTP_X_ASDF'] end def test_continuation_eats_scattered_leading_spaces parser = HttpParser.new header = "GET / HTTP/1.1\r\n" \ "X-ASDF: hi\r\n" \ " y\r\n" \ "\t\r\n" \ " x\r\n" \ " ASDF\r\n\r\n" req = parser.env parser.buf << header assert_equal req, parser.parse assert_equal '', parser.buf assert_equal 'hi y x ASDF', req['HTTP_X_ASDF'] end def test_continuation_eats_trailing_spaces parser = HttpParser.new header = "GET / HTTP/1.1\r\n" \ "X-ASDF: \r\n" \ "\t\r\n" \ " b \r\n" \ " ASDF\r\n\r\n" parser.buf << header req = parser.env assert_equal req, parser.parse assert_equal '', parser.buf assert_equal 'b ASDF', req['HTTP_X_ASDF'] end def test_continuation_with_absolute_uri_and_ignored_host_header parser = HttpParser.new header = "GET http://example.com/ HTTP/1.1\r\n" \ "Host: \r\n" \ " YHBT.net\r\n" \ "\r\n" parser.buf << header req = parser.env assert_equal req, parser.parse assert_equal 'example.com', req['HTTP_HOST'] end # this may seem to be testing more of an implementation detail, but # it also helps ensure we're safe in the presence of multiple parsers # in 
case we ever go multithreaded/evented... def test_resumable_continuations nr = 1000 header = "GET / HTTP/1.1\r\n" \ "X-ASDF: \r\n" \ " hello\r\n" tmp = [] nr.times { |i| parser = HttpParser.new req = parser.env parser.buf << "#{header} #{i}\r\n" assert parser.parse.nil? asdf = req['HTTP_X_ASDF'] assert_equal "hello #{i}", asdf tmp << [ parser, asdf ] } tmp.each_with_index { |(parser, asdf), i| parser.buf << " .\r\n\r\n" assert parser.parse assert_equal "hello #{i} .", asdf } end def test_invalid_continuation parser = HttpParser.new header = "GET / HTTP/1.1\r\n" \ " y\r\n" \ "Host: hello\r\n" \ "\r\n" parser.buf << header assert_raises(HttpParserError) { parser.parse } end def test_parse_ie6_urls %w(/some/random/path" /some/random/path> /some/random/path< /we/love/you/ie6?q=<""> /url?<="&>=" /mal"formed"? ).each do |path| parser = HttpParser.new req = parser.env sorta_safe = %(GET #{path} HTTP/1.1\r\n\r\n) assert_equal req, parser.headers(req, sorta_safe) assert_equal path, req['REQUEST_URI'] assert_equal '', sorta_safe assert parser.keepalive? end end def test_parse_error parser = HttpParser.new req = parser.env bad_http = "GET / SsUTF/1.1" assert_raises(HttpParserError) { parser.headers(req, bad_http) } # make sure we can recover parser.clear req.clear assert_equal req, parser.headers(req, "GET / HTTP/1.0\r\n\r\n") assert ! parser.keepalive? 
end def test_piecemeal parser = HttpParser.new req = parser.env http = "GET" assert_nil parser.headers(req, http) assert_nil parser.headers(req, http) assert_nil parser.headers(req, http << " / HTTP/1.0") assert_equal '/', req['REQUEST_PATH'] assert_equal '/', req['REQUEST_URI'] assert_equal 'GET', req['REQUEST_METHOD'] assert_nil parser.headers(req, http << "\r\n") assert_equal 'HTTP/1.0', req['HTTP_VERSION'] assert_nil parser.headers(req, http << "\r") assert_equal req, parser.headers(req, http << "\n") assert_equal 'HTTP/1.0', req['SERVER_PROTOCOL'] assert_nil req['FRAGMENT'] assert_equal '', req['QUERY_STRING'] assert_equal "", http assert ! parser.keepalive? end # not common, but underscores do appear in practice def test_absolute_uri_underscores parser = HttpParser.new req = parser.env http = "GET http://under_score.example.com/foo?q=bar HTTP/1.0\r\n\r\n" parser.buf << http assert_equal req, parser.parse assert_equal 'http', req['rack.url_scheme'] assert_equal '/foo?q=bar', req['REQUEST_URI'] assert_equal '/foo', req['REQUEST_PATH'] assert_equal 'q=bar', req['QUERY_STRING'] assert_equal 'under_score.example.com', req['HTTP_HOST'] assert_equal 'under_score.example.com', req['SERVER_NAME'] assert_equal '80', req['SERVER_PORT'] assert_equal "", parser.buf assert ! parser.keepalive? end # some dumb clients add users because they're stupid def test_absolute_uri_w_user parser = HttpParser.new req = parser.env http = "GET http://user%20space@example.com/foo?q=bar HTTP/1.0\r\n\r\n" parser.buf << http assert_equal req, parser.parse assert_equal 'http', req['rack.url_scheme'] assert_equal '/foo?q=bar', req['REQUEST_URI'] assert_equal '/foo', req['REQUEST_PATH'] assert_equal 'q=bar', req['QUERY_STRING'] assert_equal 'example.com', req['HTTP_HOST'] assert_equal 'example.com', req['SERVER_NAME'] assert_equal '80', req['SERVER_PORT'] assert_equal "", parser.buf assert ! parser.keepalive? 
end # since Mongrel supported anything URI.parse supported, we're stuck # supporting everything URI.parse supports def test_absolute_uri_uri_parse "#{URI::REGEXP::PATTERN::UNRESERVED};:&=+$,".split(//).each do |char| parser = HttpParser.new req = parser.env http = "GET http://#{char}@example.com/ HTTP/1.0\r\n\r\n" assert_equal req, parser.headers(req, http) assert_equal 'http', req['rack.url_scheme'] assert_equal '/', req['REQUEST_URI'] assert_equal '/', req['REQUEST_PATH'] assert_equal '', req['QUERY_STRING'] assert_equal 'example.com', req['HTTP_HOST'] assert_equal 'example.com', req['SERVER_NAME'] assert_equal '80', req['SERVER_PORT'] assert_equal "", http assert ! parser.keepalive? end end def test_absolute_uri parser = HttpParser.new req = parser.env parser.buf << "GET http://example.com/foo?q=bar HTTP/1.0\r\n\r\n" assert_equal req, parser.parse assert_equal 'http', req['rack.url_scheme'] assert_equal '/foo?q=bar', req['REQUEST_URI'] assert_equal '/foo', req['REQUEST_PATH'] assert_equal 'q=bar', req['QUERY_STRING'] assert_equal 'example.com', req['HTTP_HOST'] assert_equal 'example.com', req['SERVER_NAME'] assert_equal '80', req['SERVER_PORT'] assert_equal "", parser.buf assert ! parser.keepalive? end # X-Forwarded-Proto is not in rfc2616, absolute URIs are, however... def test_absolute_uri_https parser = HttpParser.new req = parser.env http = "GET https://example.com/foo?q=bar HTTP/1.1\r\n" \ "X-Forwarded-Proto: http\r\n\r\n" parser.buf << http assert_equal req, parser.parse assert_equal 'https', req['rack.url_scheme'] assert_equal '/foo?q=bar', req['REQUEST_URI'] assert_equal '/foo', req['REQUEST_PATH'] assert_equal 'q=bar', req['QUERY_STRING'] assert_equal 'example.com', req['HTTP_HOST'] assert_equal 'example.com', req['SERVER_NAME'] assert_equal '443', req['SERVER_PORT'] assert_equal "", parser.buf assert parser.keepalive? 
end # Host: header should be ignored for absolute URIs def test_absolute_uri_with_port parser = HttpParser.new req = parser.env parser.buf << "GET http://example.com:8080/foo?q=bar HTTP/1.2\r\n" \ "Host: bad.example.com\r\n\r\n" assert_equal req, parser.parse assert_equal 'http', req['rack.url_scheme'] assert_equal '/foo?q=bar', req['REQUEST_URI'] assert_equal '/foo', req['REQUEST_PATH'] assert_equal 'q=bar', req['QUERY_STRING'] assert_equal 'example.com:8080', req['HTTP_HOST'] assert_equal 'example.com', req['SERVER_NAME'] assert_equal '8080', req['SERVER_PORT'] assert_equal "", parser.buf assert ! parser.keepalive? # TODO: read HTTP/1.2 when it's final end def test_absolute_uri_with_empty_port parser = HttpParser.new req = parser.env parser.buf << "GET https://example.com:/foo?q=bar HTTP/1.1\r\n" \ "Host: bad.example.com\r\n\r\n" assert_equal req, parser.parse assert_equal 'https', req['rack.url_scheme'] assert_equal '/foo?q=bar', req['REQUEST_URI'] assert_equal '/foo', req['REQUEST_PATH'] assert_equal 'q=bar', req['QUERY_STRING'] assert_equal 'example.com:', req['HTTP_HOST'] assert_equal 'example.com', req['SERVER_NAME'] assert_equal '443', req['SERVER_PORT'] assert_equal "", parser.buf assert parser.keepalive? # TODO: read HTTP/1.2 when it's final end def test_absolute_ipv6_uri parser = HttpParser.new req = parser.env url = "http://[::1]/foo?q=bar" http = "GET #{url} HTTP/1.1\r\n" \ "Host: bad.example.com\r\n\r\n" assert_equal req, parser.headers(req, http) assert_equal 'http', req['rack.url_scheme'] assert_equal '/foo?q=bar', req['REQUEST_URI'] assert_equal '/foo', req['REQUEST_PATH'] assert_equal 'q=bar', req['QUERY_STRING'] uri = URI.parse(url) assert_equal "[::1]", uri.host, "URI.parse changed upstream for #{url}? host=#{uri.host}" assert_equal "[::1]", req['HTTP_HOST'] assert_equal "[::1]", req['SERVER_NAME'] assert_equal '80', req['SERVER_PORT'] assert_equal "", http assert parser.keepalive? 
    # TODO: read HTTP/1.2 when it's final
  end

  def test_absolute_ipv6_uri_alpha
    parser = HttpParser.new
    req = parser.env
    url = "http://[::a]/"
    http = "GET #{url} HTTP/1.1\r\n" \
           "Host: bad.example.com\r\n\r\n"
    assert_equal req, parser.headers(req, http)
    assert_equal 'http', req['rack.url_scheme']
    uri = URI.parse(url)
    assert_equal "[::a]", uri.host,
                 "URI.parse changed upstream for #{url}? host=#{uri.host}"
    assert_equal "[::a]", req['HTTP_HOST']
    assert_equal "[::a]", req['SERVER_NAME']
    assert_equal '80', req['SERVER_PORT']
  end

  def test_absolute_ipv6_uri_alpha_2
    parser = HttpParser.new
    req = parser.env
    url = "http://[::B]/"
    http = "GET #{url} HTTP/1.1\r\n" \
           "Host: bad.example.com\r\n\r\n"
    assert_equal req, parser.headers(req, http)
    assert_equal 'http', req['rack.url_scheme']
    uri = URI.parse(url)
    assert_equal "[::B]", uri.host,
                 "URI.parse changed upstream for #{url}? host=#{uri.host}"
    assert_equal "[::B]", req['HTTP_HOST']
    assert_equal "[::B]", req['SERVER_NAME']
    assert_equal '80', req['SERVER_PORT']
  end

  def test_absolute_ipv6_uri_with_empty_port
    parser = HttpParser.new
    req = parser.env
    url = "https://[::1]:/foo?q=bar"
    http = "GET #{url} HTTP/1.1\r\n" \
           "Host: bad.example.com\r\n\r\n"
    assert_equal req, parser.headers(req, http)
    assert_equal 'https', req['rack.url_scheme']
    assert_equal '/foo?q=bar', req['REQUEST_URI']
    assert_equal '/foo', req['REQUEST_PATH']
    assert_equal 'q=bar', req['QUERY_STRING']
    uri = URI.parse(url)
    assert_equal "[::1]", uri.host,
                 "URI.parse changed upstream for #{url}? host=#{uri.host}"
    assert_equal "[::1]:", req['HTTP_HOST']
    assert_equal "[::1]", req['SERVER_NAME']
    assert_equal '443', req['SERVER_PORT']
    assert_equal "", http
    assert parser.keepalive?
    # TODO: read HTTP/1.2 when it's final
  end

  def test_absolute_ipv6_uri_with_port
    parser = HttpParser.new
    req = parser.env
    url = "https://[::1]:666/foo?q=bar"
    http = "GET #{url} HTTP/1.1\r\n" \
           "Host: bad.example.com\r\n\r\n"
    assert_equal req, parser.headers(req, http)
    assert_equal 'https', req['rack.url_scheme']
    assert_equal '/foo?q=bar', req['REQUEST_URI']
    assert_equal '/foo', req['REQUEST_PATH']
    assert_equal 'q=bar', req['QUERY_STRING']
    uri = URI.parse(url)
    assert_equal "[::1]", uri.host,
                 "URI.parse changed upstream for #{url}? host=#{uri.host}"
    assert_equal "[::1]:666", req['HTTP_HOST']
    assert_equal "[::1]", req['SERVER_NAME']
    assert_equal '666', req['SERVER_PORT']
    assert_equal "", http
    assert parser.keepalive? # TODO: read HTTP/1.2 when it's final
  end

  def test_ipv6_host_header
    parser = HttpParser.new
    req = parser.env
    parser.buf << "GET / HTTP/1.1\r\n" \
                  "Host: [::1]\r\n\r\n"
    assert_equal req, parser.parse
    assert_equal "[::1]", req['HTTP_HOST']
    assert_equal "[::1]", req['SERVER_NAME']
    assert_equal '80', req['SERVER_PORT']
    assert_equal "", parser.buf
    assert parser.keepalive? # TODO: read HTTP/1.2 when it's final
  end

  def test_ipv6_host_header_with_port
    parser = HttpParser.new
    req = parser.env
    parser.buf << "GET / HTTP/1.1\r\n" \
                  "Host: [::1]:666\r\n\r\n"
    assert_equal req, parser.parse
    assert_equal "[::1]", req['SERVER_NAME']
    assert_equal '666', req['SERVER_PORT']
    assert_equal "[::1]:666", req['HTTP_HOST']
    assert_equal "", parser.buf
    assert parser.keepalive? # TODO: read HTTP/1.2 when it's final
  end

  def test_ipv6_host_header_with_empty_port
    parser = HttpParser.new
    req = parser.env
    parser.buf << "GET / HTTP/1.1\r\nHost: [::1]:\r\n\r\n"
    assert_equal req, parser.parse
    assert_equal "[::1]", req['SERVER_NAME']
    assert_equal '80', req['SERVER_PORT']
    assert_equal "[::1]:", req['HTTP_HOST']
    assert_equal "", parser.buf
    assert parser.keepalive?
    # TODO: read HTTP/1.2 when it's final
  end

  # XXX Highly unlikely..., just make sure we don't segfault or assert on it
  def test_broken_ipv6_host_header
    parser = HttpParser.new
    req = parser.env
    parser.buf << "GET / HTTP/1.1\r\nHost: [::1:\r\n\r\n"
    assert_equal req, parser.parse
    assert_equal "[", req['SERVER_NAME']
    assert_equal ':1:', req['SERVER_PORT']
    assert_equal "[::1:", req['HTTP_HOST']
    assert_equal "", parser.buf
  end

  def test_put_body_oneshot
    parser = HttpParser.new
    req = parser.env
    parser.buf << "PUT / HTTP/1.0\r\nContent-Length: 5\r\n\r\nabcde"
    assert_equal req, parser.parse
    assert_equal '/', req['REQUEST_PATH']
    assert_equal '/', req['REQUEST_URI']
    assert_equal 'PUT', req['REQUEST_METHOD']
    assert_equal 'HTTP/1.0', req['HTTP_VERSION']
    assert_equal 'HTTP/1.0', req['SERVER_PROTOCOL']
    assert_equal "abcde", parser.buf
    assert ! parser.keepalive? # TODO: read HTTP/1.2 when it's final
  end

  def test_put_body_later
    parser = HttpParser.new
    req = parser.env
    parser.buf << "PUT /l HTTP/1.0\r\nContent-Length: 5\r\n\r\n"
    assert_equal req, parser.parse
    assert_equal '/l', req['REQUEST_PATH']
    assert_equal '/l', req['REQUEST_URI']
    assert_equal 'PUT', req['REQUEST_METHOD']
    assert_equal 'HTTP/1.0', req['HTTP_VERSION']
    assert_equal 'HTTP/1.0', req['SERVER_PROTOCOL']
    assert_equal "", parser.buf
    assert ! parser.keepalive? # TODO: read HTTP/1.2 when it's final
  end

  def test_unknown_methods
    %w(GETT HEADR XGET XHEAD).each { |m|
      parser = HttpParser.new
      req = parser.env
      s = "#{m} /forums/1/topics/2375?page=1#posts-17408 HTTP/1.1\r\n\r\n"
      ok = parser.headers(req, s)
      assert ok
      assert_equal '/forums/1/topics/2375?page=1', req['REQUEST_URI']
      assert_equal 'posts-17408', req['FRAGMENT']
      assert_equal 'page=1', req['QUERY_STRING']
      assert_equal "", s
      assert_equal m, req['REQUEST_METHOD']
      assert parser.keepalive?
      # TODO: read HTTP/1.2 when it's final
    }
  end

  def test_fragment_in_uri
    parser = HttpParser.new
    req = parser.env
    get = "GET /forums/1/topics/2375?page=1#posts-17408 HTTP/1.1\r\n\r\n"
    parser.buf << get
    ok = parser.parse
    assert ok
    assert_equal '/forums/1/topics/2375?page=1', req['REQUEST_URI']
    assert_equal 'posts-17408', req['FRAGMENT']
    assert_equal 'page=1', req['QUERY_STRING']
    assert_equal '', parser.buf
    assert parser.keepalive?
  end

  # lame random garbage maker
  def rand_data(min, max, readable=true)
    count = min + ((rand(max)+1) *10).to_i
    res = count.to_s + "/"

    if readable
      res << Digest::SHA1.hexdigest(rand(count * 100).to_s) * (count / 40)
    else
      res << Digest::SHA1.digest(rand(count * 100).to_s) * (count / 20)
    end

    return res
  end

  def test_horrible_queries
    parser = HttpParser.new

    # then that large header names are caught
    10.times do |c|
      get = "GET /#{rand_data(10,120)} HTTP/1.1\r\nX-#{rand_data(1024, 1024+(c*1024))}: Test\r\n\r\n"
      assert_raises(Unicorn::HttpParserError,Unicorn::RequestURITooLongError) do
        parser.buf << get
        parser.parse
        parser.clear
      end
    end

    # then that large mangled field values are caught
    10.times do |c|
      get = "GET /#{rand_data(10,120)} HTTP/1.1\r\nX-Test: #{rand_data(1024, 1024+(c*1024), false)}\r\n\r\n"
      assert_raises(Unicorn::HttpParserError,Unicorn::RequestURITooLongError) do
        parser.buf << get
        parser.parse
        parser.clear
      end
    end

    # then large headers are rejected too
    get = "GET /#{rand_data(10,120)} HTTP/1.1\r\n"
    get << "X-Test: test\r\n" * (80 * 1024)
    parser.buf << get
    assert_raises(Unicorn::HttpParserError,Unicorn::RequestURITooLongError) do
      parser.parse
    end
    parser.clear

    # finally just that random garbage gets blocked all the time
    10.times do |c|
      get = "GET #{rand_data(1024, 1024+(c*1024), false)} #{rand_data(1024, 1024+(c*1024), false)}\r\n\r\n"
      assert_raises(Unicorn::HttpParserError,Unicorn::RequestURITooLongError) do
        parser.buf << get
        parser.parse
        parser.clear
      end
    end
  end

  def test_leading_tab
    parser = HttpParser.new
    get = "GET / HTTP/1.1\r\nHost:\texample.com\r\n\r\n"
    assert parser.add_parse(get)
    assert_equal 'example.com', parser.env['HTTP_HOST']
  end

  def test_trailing_whitespace
    parser = HttpParser.new
    get = "GET / HTTP/1.1\r\nHost: example.com \r\n\r\n"
    assert parser.add_parse(get)
    assert_equal 'example.com', parser.env['HTTP_HOST']
  end

  def test_trailing_tab
    parser = HttpParser.new
    get = "GET / HTTP/1.1\r\nHost: example.com\t\r\n\r\n"
    assert parser.add_parse(get)
    assert_equal 'example.com', parser.env['HTTP_HOST']
  end

  def test_trailing_multiple_linear_whitespace
    parser = HttpParser.new
    get = "GET / HTTP/1.1\r\nHost: example.com\t \t \t\r\n\r\n"
    assert parser.add_parse(get)
    assert_equal 'example.com', parser.env['HTTP_HOST']
  end

  def test_embedded_linear_whitespace_ok
    parser = HttpParser.new
    get = "GET / HTTP/1.1\r\nX-Space: hello\t world\t \r\n\r\n"
    assert parser.add_parse(get)
    assert_equal "hello\t world", parser.env["HTTP_X_SPACE"]
  end

  def test_null_byte_header
    parser = HttpParser.new
    get = "GET / HTTP/1.1\r\nHost: \0\r\n\r\n"
    assert_raises(HttpParserError) { parser.add_parse(get) }
  end

  def test_null_byte_in_middle
    parser = HttpParser.new
    get = "GET / HTTP/1.1\r\nHost: hello\0world\r\n\r\n"
    assert_raises(HttpParserError) { parser.add_parse(get) }
  end

  def test_null_byte_at_end
    parser = HttpParser.new
    get = "GET / HTTP/1.1\r\nHost: hello\0\r\n\r\n"
    assert_raises(HttpParserError) { parser.add_parse(get) }
  end

  def test_empty_header
    parser = HttpParser.new
    get = "GET / HTTP/1.1\r\nHost: \r\n\r\n"
    assert parser.add_parse(get)
    assert_equal '', parser.env['HTTP_HOST']
  end

  # so we don't care about the portability of this test
  # if it doesn't leak on Linux, it won't leak anywhere else
  # unless your C compiler or platform is otherwise broken
  LINUX_PROC_PID_STATUS = "/proc/self/status"
  def test_memory_leak
    match_rss = /^VmRSS:\s+(\d+)/
    if File.read(LINUX_PROC_PID_STATUS) =~ match_rss
      before = $1.to_i
      1000000.times { Unicorn::HttpParser.new }
      File.read(LINUX_PROC_PID_STATUS) =~ match_rss
      after =
        $1.to_i
      diff = after - before
      assert(diff < 10000, "memory grew more than 10M: #{diff}")
    end
  end if RUBY_PLATFORM =~ /linux/ &&
         File.readable?(LINUX_PROC_PID_STATUS) &&
         !defined?(RUBY_ENGINE)
end
unicorn-4.7.0/test/unit/test_configurator.rb
# -*- encoding: binary -*-

require 'test/unit'
require 'tempfile'
require 'unicorn'

TestStruct = Struct.new(
  *(Unicorn::Configurator::DEFAULTS.keys + %w(listener_opts listeners)))

class TestConfigurator < Test::Unit::TestCase

  def test_config_init
    Unicorn::Configurator.new {}
  end

  def test_expand_addr
    meth = Unicorn::Configurator.new.method(:expand_addr)

    assert_equal "/var/run/unicorn.sock", meth.call("/var/run/unicorn.sock")
    assert_equal "#{Dir.pwd}/foo/bar.sock", meth.call("unix:foo/bar.sock")

    path = meth.call("~/foo/bar.sock")
    assert_equal "/", path[0..0]
    assert_match %r{/foo/bar\.sock\z}, path

    path = meth.call("~root/foo/bar.sock")
    assert_equal "/", path[0..0]
    assert_match %r{/foo/bar\.sock\z}, path

    assert_equal "1.2.3.4:2007", meth.call('1.2.3.4:2007')
    assert_equal "0.0.0.0:2007", meth.call('0.0.0.0:2007')
    assert_equal "0.0.0.0:2007", meth.call(':2007')
    assert_equal "0.0.0.0:2007", meth.call('*:2007')
    assert_equal "0.0.0.0:2007", meth.call('2007')
    assert_equal "0.0.0.0:2007", meth.call(2007)

    %w([::1]:2007 [::]:2007).each do |addr|
      assert_equal addr, meth.call(addr.dup)
    end

    # for Rainbows!
    # users only
    assert_equal "[::]:80", meth.call("[::]")
    assert_equal "127.6.6.6:80", meth.call("127.6.6.6")

    # the next two aren't portable, consider them unsupported for now
    # assert_match %r{\A\d+\.\d+\.\d+\.\d+:2007\z}, meth.call('1:2007')
    # assert_match %r{\A\d+\.\d+\.\d+\.\d+:2007\z}, meth.call('2:2007')
  end

  def test_config_invalid
    tmp = Tempfile.new('unicorn_config')
    tmp.syswrite(%q(asdfasdf "hello-world"))
    assert_raises(NoMethodError) do
      Unicorn::Configurator.new(:config_file => tmp.path)
    end
  end

  def test_config_non_existent
    tmp = Tempfile.new('unicorn_config')
    path = tmp.path
    tmp.close!
    assert_raises(Errno::ENOENT) do
      Unicorn::Configurator.new(:config_file => path)
    end
  end

  def test_config_defaults
    cfg = Unicorn::Configurator.new(:use_defaults => true)
    test_struct = TestStruct.new
    cfg.commit!(test_struct)
    Unicorn::Configurator::DEFAULTS.each do |key,value|
      assert_equal value, test_struct.__send__(key)
    end
  end

  def test_config_defaults_skip
    cfg = Unicorn::Configurator.new(:use_defaults => true)
    skip = [ :logger ]
    test_struct = TestStruct.new
    cfg.commit!(test_struct, :skip => skip)
    Unicorn::Configurator::DEFAULTS.each do |key,value|
      next if skip.include?(key)
      assert_equal value, test_struct.__send__(key)
    end
    assert_nil test_struct.logger
  end

  def test_listen_options
    tmp = Tempfile.new('unicorn_config')
    expect = { :sndbuf => 1, :rcvbuf => 2, :backlog => 10 }.freeze
    listener = "127.0.0.1:12345"
    tmp.syswrite("listen '#{listener}', #{expect.inspect}\n")
    cfg = Unicorn::Configurator.new(:config_file => tmp.path)
    test_struct = TestStruct.new
    cfg.commit!(test_struct)
    assert(listener_opts = test_struct.listener_opts)
    assert_equal expect, listener_opts[listener]
  end

  def test_listen_option_bad
    tmp = Tempfile.new('unicorn_config')
    expect = { :sndbuf => "five" }
    listener = "127.0.0.1:12345"
    tmp.syswrite("listen '#{listener}', #{expect.inspect}\n")
    assert_raises(ArgumentError) do
      Unicorn::Configurator.new(:config_file => tmp.path)
    end
  end

  def test_listen_option_bad_delay
    tmp =
      Tempfile.new('unicorn_config')
    expect = { :delay => "five" }
    listener = "127.0.0.1:12345"
    tmp.syswrite("listen '#{listener}', #{expect.inspect}\n")
    assert_raises(ArgumentError) do
      Unicorn::Configurator.new(:config_file => tmp.path)
    end
  end

  def test_listen_option_float_delay
    tmp = Tempfile.new('unicorn_config')
    expect = { :delay => 0.5 }
    listener = "127.0.0.1:12345"
    tmp.syswrite("listen '#{listener}', #{expect.inspect}\n")
    Unicorn::Configurator.new(:config_file => tmp.path)
  end

  def test_listen_option_int_delay
    tmp = Tempfile.new('unicorn_config')
    expect = { :delay => 5 }
    listener = "127.0.0.1:12345"
    tmp.syswrite("listen '#{listener}', #{expect.inspect}\n")
    Unicorn::Configurator.new(:config_file => tmp.path)
  end

  def test_check_client_connection
    tmp = Tempfile.new('unicorn_config')
    test_struct = TestStruct.new
    tmp.syswrite("check_client_connection true\n")
    assert_nothing_raised do
      Unicorn::Configurator.new(:config_file => tmp.path).commit!(test_struct)
    end
    assert test_struct.check_client_connection
  end

  def test_check_client_connection_with_tcp_bad
    tmp = Tempfile.new('unicorn_config')
    test_struct = TestStruct.new
    listener = "127.0.0.1:12345"
    tmp.syswrite("check_client_connection true\n")
    tmp.syswrite("listen '#{listener}', :tcp_nopush => true\n")
    assert_raises(ArgumentError) do
      Unicorn::Configurator.new(:config_file => tmp.path).commit!(test_struct)
    end
  end

  def test_after_fork_proc
    test_struct = TestStruct.new
    [ proc { |a,b| }, Proc.new { |a,b| }, lambda { |a,b| } ].each do |my_proc|
      Unicorn::Configurator.new(:after_fork => my_proc).commit!(test_struct)
      assert_equal my_proc, test_struct.after_fork
    end
  end

  def test_after_fork_wrong_arity
    [ proc { |a| }, Proc.new { }, lambda { |a,b,c| } ].each do |my_proc|
      assert_raises(ArgumentError) do
        Unicorn::Configurator.new(:after_fork => my_proc)
      end
    end
  end
end
unicorn-4.7.0/test/unit/test_request.rb
# -*- encoding: binary -*-

# Copyright (c) 2009 Eric Wong
#
# You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
# the GPLv2+ (GPLv3+ preferred)

require 'test/test_helper'

include Unicorn

class RequestTest < Test::Unit::TestCase

  class MockRequest < StringIO
    alias_method :readpartial, :sysread
    alias_method :kgio_read!, :sysread
    alias_method :read_nonblock, :sysread
    def kgio_addr
      '127.0.0.1'
    end
  end

  def setup
    @request = HttpRequest.new
    @app = lambda do |env|
      [ 200, { 'Content-Length' => '0', 'Content-Type' => 'text/plain' }, [] ]
    end
    @lint = Rack::Lint.new(@app)
  end

  def test_options
    client = MockRequest.new("OPTIONS * HTTP/1.1\r\n" \
                             "Host: foo\r\n\r\n")
    env = @request.read(client)
    assert_equal '', env['REQUEST_PATH']
    assert_equal '', env['PATH_INFO']
    assert_equal '*', env['REQUEST_URI']
    res = @lint.call(env)
  end

  def test_absolute_uri_with_query
    client = MockRequest.new("GET http://e:3/x?y=z HTTP/1.1\r\n" \
                             "Host: foo\r\n\r\n")
    env = @request.read(client)
    assert_equal '/x', env['REQUEST_PATH']
    assert_equal '/x', env['PATH_INFO']
    assert_equal 'y=z', env['QUERY_STRING']
    res = @lint.call(env)
  end

  def test_absolute_uri_with_fragment
    client = MockRequest.new("GET http://e:3/x#frag HTTP/1.1\r\n" \
                             "Host: foo\r\n\r\n")
    env = @request.read(client)
    assert_equal '/x', env['REQUEST_PATH']
    assert_equal '/x', env['PATH_INFO']
    assert_equal '', env['QUERY_STRING']
    assert_equal 'frag', env['FRAGMENT']
    res = @lint.call(env)
  end

  def test_absolute_uri_with_query_and_fragment
    client = MockRequest.new("GET http://e:3/x?a=b#frag HTTP/1.1\r\n" \
                             "Host: foo\r\n\r\n")
    env = @request.read(client)
    assert_equal '/x', env['REQUEST_PATH']
    assert_equal '/x', env['PATH_INFO']
    assert_equal 'a=b', env['QUERY_STRING']
    assert_equal 'frag', env['FRAGMENT']
    res = @lint.call(env)
  end

  def test_absolute_uri_unsupported_schemes
    %w(ssh+http://e/ ftp://e/x http+ssh://e/x).each do |abs_uri|
      client = MockRequest.new("GET #{abs_uri} HTTP/1.1\r\n" \
                               "Host: foo\r\n\r\n")
      assert_raises(HttpParserError) { @request.read(client) }
    end
  end

  def test_x_forwarded_proto_https
    client = MockRequest.new("GET / HTTP/1.1\r\n" \
                             "X-Forwarded-Proto: https\r\n" \
                             "Host: foo\r\n\r\n")
    env = @request.read(client)
    assert_equal "https", env['rack.url_scheme']
    res = @lint.call(env)
  end

  def test_x_forwarded_proto_http
    client = MockRequest.new("GET / HTTP/1.1\r\n" \
                             "X-Forwarded-Proto: http\r\n" \
                             "Host: foo\r\n\r\n")
    env = @request.read(client)
    assert_equal "http", env['rack.url_scheme']
    res = @lint.call(env)
  end

  def test_x_forwarded_proto_invalid
    client = MockRequest.new("GET / HTTP/1.1\r\n" \
                             "X-Forwarded-Proto: ftp\r\n" \
                             "Host: foo\r\n\r\n")
    env = @request.read(client)
    assert_equal "http", env['rack.url_scheme']
    res = @lint.call(env)
  end

  def test_rack_lint_get
    client = MockRequest.new("GET / HTTP/1.1\r\nHost: foo\r\n\r\n")
    env = @request.read(client)
    assert_equal "http", env['rack.url_scheme']
    assert_equal '127.0.0.1', env['REMOTE_ADDR']
    res = @lint.call(env)
  end

  def test_no_content_stringio
    client = MockRequest.new("GET / HTTP/1.1\r\nHost: foo\r\n\r\n")
    env = @request.read(client)
    assert_equal StringIO, env['rack.input'].class
  end

  def test_zero_content_stringio
    client = MockRequest.new("PUT / HTTP/1.1\r\n" \
                             "Content-Length: 0\r\n" \
                             "Host: foo\r\n\r\n")
    env = @request.read(client)
    assert_equal StringIO, env['rack.input'].class
  end

  def test_real_content_not_stringio
    client = MockRequest.new("PUT / HTTP/1.1\r\n" \
                             "Content-Length: 1\r\n" \
                             "Host: foo\r\n\r\n")
    env = @request.read(client)
    assert_equal Unicorn::TeeInput, env['rack.input'].class
  end

  def test_rack_lint_put
    client = MockRequest.new(
      "PUT / HTTP/1.1\r\n" \
      "Host: foo\r\n" \
      "Content-Length: 5\r\n" \
      "\r\n" \
      "abcde")
    env = @request.read(client)
    assert !
      env.include?(:http_body)
    res = @lint.call(env)
  end

  def test_rack_lint_big_put
    count = 100
    bs = 0x10000
    buf = (' ' * bs).freeze
    length = bs * count
    client = Tempfile.new('big_put')
    def client.kgio_addr; '127.0.0.1'; end
    def client.kgio_read(*args)
      readpartial(*args)
    rescue EOFError
    end
    def client.kgio_read!(*args)
      readpartial(*args)
    end
    client.syswrite(
      "PUT / HTTP/1.1\r\n" \
      "Host: foo\r\n" \
      "Content-Length: #{length}\r\n" \
      "\r\n")
    count.times { assert_equal bs, client.syswrite(buf) }
    assert_equal 0, client.sysseek(0)
    env = @request.read(client)
    assert ! env.include?(:http_body)
    assert_equal length, env['rack.input'].size
    count.times {
      tmp = env['rack.input'].read(bs)
      tmp << env['rack.input'].read(bs - tmp.size) if tmp.size != bs
      assert_equal buf, tmp
    }
    assert_nil env['rack.input'].read(bs)
    env['rack.input'].rewind
    res = @lint.call(env)
  end
end
unicorn-4.7.0/test/unit/test_upload.rb
# -*- encoding: binary -*-

# Copyright (c) 2009 Eric Wong
require 'test/test_helper'
require 'digest/md5'

include Unicorn

class UploadTest < Test::Unit::TestCase

  def setup
    @addr = ENV['UNICORN_TEST_ADDR'] || '127.0.0.1'
    @port = unused_port
    @hdr = {'Content-Type' => 'text/plain', 'Content-Length' => '0'}
    @bs = 4096
    @count = 256
    @server = nil

    # we want random binary data to test 1.9 encoding-aware IO craziness
    @random = File.open('/dev/urandom','rb')
    @sha1 = Digest::SHA1.new
    @sha1_app = lambda do |env|
      input = env['rack.input']
      resp = {}

      @sha1.reset
      while buf = input.read(@bs)
        @sha1.update(buf)
      end
      resp[:sha1] = @sha1.hexdigest

      # rewind and read again
      input.rewind
      @sha1.reset
      while buf = input.read(@bs)
        @sha1.update(buf)
      end

      if resp[:sha1] == @sha1.hexdigest
        resp[:sysread_read_byte_match] = true
      end

      if expect_size = env['HTTP_X_EXPECT_SIZE']
        if expect_size.to_i == input.size
          resp[:expect_size_match] = true
        end
      end
      resp[:size] = input.size
      resp[:content_md5] = env['HTTP_CONTENT_MD5']

      [ 200, @hdr.merge({'X-Resp' =>
        resp.inspect}), [] ]
    end
  end

  def teardown
    redirect_test_io { @server.stop(false) } if @server
    @random.close
    reset_sig_handlers
  end

  def test_put
    start_server(@sha1_app)
    sock = TCPSocket.new(@addr, @port)
    sock.syswrite("PUT / HTTP/1.0\r\nContent-Length: #{length}\r\n\r\n")
    @count.times do |i|
      buf = @random.sysread(@bs)
      @sha1.update(buf)
      sock.syswrite(buf)
    end
    read = sock.read.split(/\r\n/)
    assert_equal "HTTP/1.1 200 OK", read[0]
    resp = eval(read.grep(/^X-Resp: /).first.sub!(/X-Resp: /, ''))
    assert_equal length, resp[:size]
    assert_equal @sha1.hexdigest, resp[:sha1]
  end

  def test_put_content_md5
    md5 = Digest::MD5.new
    start_server(@sha1_app)
    sock = TCPSocket.new(@addr, @port)
    sock.syswrite("PUT / HTTP/1.0\r\nTransfer-Encoding: chunked\r\n" \
                  "Trailer: Content-MD5\r\n\r\n")
    @count.times do |i|
      buf = @random.sysread(@bs)
      @sha1.update(buf)
      md5.update(buf)
      sock.syswrite("#{'%x' % buf.size}\r\n")
      sock.syswrite(buf << "\r\n")
    end
    sock.syswrite("0\r\n")
    content_md5 = [ md5.digest! ].pack('m').strip.freeze
    sock.syswrite("Content-MD5: #{content_md5}\r\n\r\n")
    read = sock.read.split(/\r\n/)
    assert_equal "HTTP/1.1 200 OK", read[0]
    resp = eval(read.grep(/^X-Resp: /).first.sub!(/X-Resp: /, ''))
    assert_equal length, resp[:size]
    assert_equal @sha1.hexdigest, resp[:sha1]
    assert_equal content_md5, resp[:content_md5]
  end

  def test_put_trickle_small
    @count, @bs = 2, 128
    start_server(@sha1_app)
    assert_equal 256, length
    sock = TCPSocket.new(@addr, @port)
    hdr = "PUT / HTTP/1.0\r\nContent-Length: #{length}\r\n\r\n"
    @count.times do
      buf = @random.sysread(@bs)
      @sha1.update(buf)
      hdr << buf
      sock.syswrite(hdr)
      hdr = ''
      sleep 0.6
    end
    read = sock.read.split(/\r\n/)
    assert_equal "HTTP/1.1 200 OK", read[0]
    resp = eval(read.grep(/^X-Resp: /).first.sub!(/X-Resp: /, ''))
    assert_equal length, resp[:size]
    assert_equal @sha1.hexdigest, resp[:sha1]
  end

  def test_put_keepalive_truncates_small_overwrite
    start_server(@sha1_app)
    sock = TCPSocket.new(@addr, @port)
    to_upload = length + 1
    sock.syswrite("PUT / HTTP/1.0\r\nContent-Length: #{to_upload}\r\n\r\n")
    @count.times do
      buf = @random.sysread(@bs)
      @sha1.update(buf)
      sock.syswrite(buf)
    end
    sock.syswrite('12345') # write 4 bytes more than we expected
    @sha1.update('1')

    buf = sock.readpartial(4096)
    while buf !~ /\r\n\r\n/
      buf << sock.readpartial(4096)
    end
    read = buf.split(/\r\n/)
    assert_equal "HTTP/1.1 200 OK", read[0]
    resp = eval(read.grep(/^X-Resp: /).first.sub!(/X-Resp: /, ''))
    assert_equal to_upload, resp[:size]
    assert_equal @sha1.hexdigest, resp[:sha1]
  end

  def test_put_excessive_overwrite_closed
    tmp = Tempfile.new('overwrite_check')
    tmp.sync = true
    start_server(lambda { |env|
      nr = 0
      while buf = env['rack.input'].read(65536)
        nr += buf.size
      end
      tmp.write(nr.to_s)
      [ 200, @hdr, [] ]
    })
    sock = TCPSocket.new(@addr, @port)
    buf = ' ' * @bs
    sock.syswrite("PUT / HTTP/1.0\r\nContent-Length: #{length}\r\n\r\n")
    @count.times { sock.syswrite(buf) }
    assert_raise(Errno::ECONNRESET, Errno::EPIPE) do
      ::Unicorn::Const::CHUNK_SIZE.times { sock.syswrite(buf) }
    end
    sock.gets
    tmp.rewind
    assert_equal length, tmp.read.to_i
  end

  # Despite reading numerous articles and inspecting the 1.9.1-p0 C
  # source, Eric Wong will never trust that we're always handling
  # encoding-aware IO objects correctly.  Thus this test uses shell
  # utilities that should always operate on files/sockets on a
  # byte-level.
  def test_uncomfortable_with_onenine_encodings
    # POSIX doesn't require all of these to be present on a system
    which('curl') or return
    which('sha1sum') or return
    which('dd') or return

    start_server(@sha1_app)

    tmp = Tempfile.new('dd_dest')
    assert(system("dd", "if=#{@random.path}", "of=#{tmp.path}",
                  "bs=#{@bs}", "count=#{@count}"),
           "dd #@random to #{tmp}")
    sha1_re = %r!\b([a-f0-9]{40})\b!
    sha1_out = `sha1sum #{tmp.path}`
    assert $?.success?, 'sha1sum ran OK'
    assert_match(sha1_re, sha1_out)
    sha1 = sha1_re.match(sha1_out)[1]
    resp = `curl -isSfN -T#{tmp.path} http://#@addr:#@port/`
    assert $?.success?, 'curl ran OK'
    assert_match(%r!\b#{sha1}\b!, resp)
    assert_match(/sysread_read_byte_match/, resp)

    # small StringIO path
    assert(system("dd", "if=#{@random.path}", "of=#{tmp.path}",
                  "bs=1024", "count=1"),
           "dd #@random to #{tmp}")
    sha1_re = %r!\b([a-f0-9]{40})\b!
    sha1_out = `sha1sum #{tmp.path}`
    assert $?.success?, 'sha1sum ran OK'
    assert_match(sha1_re, sha1_out)
    sha1 = sha1_re.match(sha1_out)[1]
    resp = `curl -isSfN -T#{tmp.path} http://#@addr:#@port/`
    assert $?.success?, 'curl ran OK'
    assert_match(%r!\b#{sha1}\b!, resp)
    assert_match(/sysread_read_byte_match/, resp)
  end

  def test_chunked_upload_via_curl
    # POSIX doesn't require all of these to be present on a system
    which('curl') or return
    which('sha1sum') or return
    which('dd') or return

    start_server(@sha1_app)

    tmp = Tempfile.new('dd_dest')
    assert(system("dd", "if=#{@random.path}", "of=#{tmp.path}",
                  "bs=#{@bs}", "count=#{@count}"),
           "dd #@random to #{tmp}")
    sha1_re = %r!\b([a-f0-9]{40})\b!
    sha1_out = `sha1sum #{tmp.path}`
    assert $?.success?, 'sha1sum ran OK'
    assert_match(sha1_re, sha1_out)
    sha1 = sha1_re.match(sha1_out)[1]
    cmd = "curl -H 'X-Expect-Size: #{tmp.size}' --tcp-nodelay \
           -isSf --no-buffer -T- " \
          "http://#@addr:#@port/"
    resp = Tempfile.new('resp')
    resp.sync = true

    rd, wr = IO.pipe
    wr.sync = rd.sync = true
    pid = fork {
      STDIN.reopen(rd)
      rd.close
      wr.close
      STDOUT.reopen(resp)
      exec cmd
    }
    rd.close

    tmp.rewind
    @count.times { |i|
      wr.write(tmp.read(@bs))
      sleep(rand / 10) if 0 == i % 8
    }
    wr.close
    pid, status = Process.waitpid2(pid)

    resp.rewind
    resp = resp.read
    assert status.success?, 'curl ran OK'
    assert_match(%r!\b#{sha1}\b!, resp)
    assert_match(/sysread_read_byte_match/, resp)
    assert_match(/expect_size_match/, resp)
  end

  def test_curl_chunked_small
    # POSIX doesn't require all of these to be present on a system
    which('curl') or return
    which('sha1sum') or return
    which('dd') or return

    start_server(@sha1_app)

    tmp = Tempfile.new('dd_dest')
    # small StringIO path
    assert(system("dd", "if=#{@random.path}", "of=#{tmp.path}",
                  "bs=1024", "count=1"),
           "dd #@random to #{tmp}")
    sha1_re = %r!\b([a-f0-9]{40})\b!
    sha1_out = `sha1sum #{tmp.path}`
    assert $?.success?, 'sha1sum ran OK'
    assert_match(sha1_re, sha1_out)
    sha1 = sha1_re.match(sha1_out)[1]
    resp = `curl -H 'X-Expect-Size: #{tmp.size}' --tcp-nodelay \
            -isSf --no-buffer -T- http://#@addr:#@port/ < #{tmp.path}`
    assert $?.success?, 'curl ran OK'
    assert_match(%r!\b#{sha1}\b!, resp)
    assert_match(/sysread_read_byte_match/, resp)
    assert_match(/expect_size_match/, resp)
  end

  private

  def length
    @bs * @count
  end

  def start_server(app)
    redirect_test_io do
      @server = HttpServer.new(app, :listeners => [ "#{@addr}:#{@port}" ] )
      @server.start
    end
  end
end
unicorn-4.7.0/test/unit/test_response.rb
# -*- encoding: binary -*-

# Copyright (c) 2005 Zed A. Shaw
# You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
# the GPLv2+ (GPLv3+ preferred)
#
# Additional work donated by contributors.  See http://mongrel.rubyforge.org/attributions.html
# for more information.

require 'test/test_helper'
require 'time'

include Unicorn

class ResponseTest < Test::Unit::TestCase
  include Unicorn::HttpResponse

  def test_httpdate
    before = Time.now.to_i - 1
    str = httpdate
    assert_kind_of(String, str)
    middle = Time.parse(str).to_i
    after = Time.now.to_i
    assert before <= middle
    assert middle <= after
  end

  def test_response_headers
    out = StringIO.new
    http_response_write(out, 200, {"X-Whatever" => "stuff"}, ["cool"])
    assert ! out.closed?
    assert out.length > 0, "output didn't have data"
  end

  def test_response_string_status
    out = StringIO.new
    http_response_write(out,'200', {}, [])
    assert ! out.closed?
    assert out.length > 0, "output didn't have data"
    assert_equal 1, out.string.split(/\r\n/).grep(/^Status: 200 OK/).size
  end

  def test_response_200
    io = StringIO.new
    http_response_write(io, 200, {}, [])
    assert ! io.closed?
    assert io.length > 0, "output didn't have data"
  end

  def test_response_with_default_reason
    code = 400
    io = StringIO.new
    http_response_write(io, code, {}, [])
    assert ! io.closed?
    lines = io.string.split(/\r\n/)
    assert_match(/.* Bad Request$/, lines.first,
                 "wrong default reason phrase")
  end

  def test_rack_multivalue_headers
    out = StringIO.new
    http_response_write(out,200, {"X-Whatever" => "stuff\nbleh"}, [])
    assert ! out.closed?
    assert_match(/^X-Whatever: stuff\r\nX-Whatever: bleh\r\n/, out.string)
  end

  # Even though Rack explicitly forbids "Status" in the header hash,
  # some broken clients still rely on it
  def test_status_header_added
    out = StringIO.new
    http_response_write(out,200, {"X-Whatever" => "stuff"}, [])
    assert ! out.closed?
    assert_equal 1, out.string.split(/\r\n/).grep(/^Status: 200 OK/i).size
  end

  def test_body_closed
    expect_body = %w(1 2 3 4).join("\n")
    body = StringIO.new(expect_body)
    body.rewind
    out = StringIO.new
    http_response_write(out,200, {}, body)
    assert ! out.closed?
    assert body.closed?
    assert_match(expect_body, out.string.split(/\r\n/).last)
  end

  def test_unknown_status_pass_through
    out = StringIO.new
    http_response_write(out,"666 I AM THE BEAST", {}, [] )
    assert ! out.closed?
    headers = out.string.split(/\r\n\r\n/).first.split(/\r\n/)
    assert %r{\AHTTP/\d\.\d 666 I AM THE BEAST\z}.match(headers[0])
    status = headers.grep(/\AStatus:/i).first
    assert status
    assert_equal "Status: 666 I AM THE BEAST", status
  end
end
unicorn-4.7.0/test/unit/test_droplet.rb
require 'test/unit'
require 'unicorn'

class TestDroplet < Test::Unit::TestCase
  def test_create_many_droplets
    now = Time.now.to_i
    tmp = (0..1024).map do |i|
      droplet = Unicorn::Worker.new(i)
      assert droplet.respond_to?(:tick)
      assert_equal 0, droplet.tick
      assert_equal(now, droplet.tick = now)
      assert_equal now, droplet.tick
      assert_equal(0, droplet.tick = 0)
      assert_equal 0, droplet.tick
    end
  end

  def test_shared_process
    droplet = Unicorn::Worker.new(0)
    _, status = Process.waitpid2(fork { droplet.tick += 1; exit!(0) })
    assert status.success?, status.inspect
    assert_equal 1, droplet.tick

    _, status = Process.waitpid2(fork { droplet.tick += 1; exit!(0) })
    assert status.success?, status.inspect
    assert_equal 2, droplet.tick
  end
end
unicorn-4.7.0/test/unit/test_sni_hostnames.rb
# -*- encoding: binary -*-

require "test/unit"
require "unicorn"

# this tests an implementation detail, it may change so this test
# can be removed later.
class TestSniHostnames < Test::Unit::TestCase
  include Unicorn::SSLServer

  def setup
    GC.start
  end

  def teardown
    GC.start
  end

  def test_host_name_detect_one
    app = Rack::Builder.new do
      map "http://sni1.example.com/" do
        use Rack::ContentLength
        use Rack::ContentType, "text/plain"
        run lambda { |env| [ 200, {}, [] ] }
      end
    end.to_app
    hostnames = rack_sni_hostnames(app)
    assert hostnames.include?("sni1.example.com")
  end

  def test_host_name_detect_multiple
    app = Rack::Builder.new do
      map "http://sni2.example.com/" do
        use Rack::ContentLength
        use Rack::ContentType, "text/plain"
        run lambda { |env| [ 200, {}, [] ] }
      end
      map "http://sni3.example.com/" do
        use Rack::ContentLength
        use Rack::ContentType, "text/plain"
        run lambda { |env| [ 200, {}, [] ] }
      end
    end.to_app
    hostnames = rack_sni_hostnames(app)
    assert hostnames.include?("sni2.example.com")
    assert hostnames.include?("sni3.example.com")
  end
end
unicorn-4.7.0/test/unit/test_server.rb
# -*- encoding: binary -*-

# Copyright (c) 2005 Zed A. Shaw
# You can redistribute it and/or modify it under the same terms as Ruby 1.8 or
# the GPLv2+ (GPLv3+ preferred)
#
# Additional work donated by contributors.  See http://mongrel.rubyforge.org/attributions.html
# for more information.
require 'test/test_helper' include Unicorn class TestHandler def call(env) while env['rack.input'].read(4096) end [200, { 'Content-Type' => 'text/plain' }, ['hello!\n']] rescue Unicorn::ClientShutdown, Unicorn::HttpParserError => e $stderr.syswrite("#{e.class}: #{e.message} #{e.backtrace.empty?}\n") raise e end end class WebServerTest < Test::Unit::TestCase def setup @valid_request = "GET / HTTP/1.1\r\nHost: www.zedshaw.com\r\nContent-Type: text/plain\r\n\r\n" @port = unused_port @tester = TestHandler.new redirect_test_io do @server = HttpServer.new(@tester, :listeners => [ "127.0.0.1:#{@port}" ] ) @server.start end end def teardown redirect_test_io do wait_workers_ready("test_stderr.#$$.log", 1) File.truncate("test_stderr.#$$.log", 0) @server.stop(false) end reset_sig_handlers end def test_preload_app_config teardown tmp = Tempfile.new('test_preload_app_config') ObjectSpace.undefine_finalizer(tmp) app = lambda { || tmp.sysseek(0) tmp.truncate(0) tmp.syswrite($$) lambda { |env| [ 200, { 'Content-Type' => 'text/plain' }, [ "#$$\n" ] ] } } redirect_test_io do @server = HttpServer.new(app, :listeners => [ "127.0.0.1:#@port"] ) @server.start end results = hit(["http://localhost:#@port/"]) worker_pid = results[0].to_i assert worker_pid != 0 tmp.sysseek(0) loader_pid = tmp.sysread(4096).to_i assert loader_pid != 0 assert_equal worker_pid, loader_pid teardown redirect_test_io do @server = HttpServer.new(app, :listeners => [ "127.0.0.1:#@port"], :preload_app => true) @server.start end results = hit(["http://localhost:#@port/"]) worker_pid = results[0].to_i assert worker_pid != 0 tmp.sysseek(0) loader_pid = tmp.sysread(4096).to_i assert_equal $$, loader_pid assert worker_pid != loader_pid ensure tmp.close! 
end def test_broken_app teardown app = lambda { |env| raise RuntimeError, "hello" } # [200, {}, []] } redirect_test_io do @server = HttpServer.new(app, :listeners => [ "127.0.0.1:#@port"] ) @server.start end sock = TCPSocket.new('127.0.0.1', @port) sock.syswrite("GET / HTTP/1.0\r\n\r\n") assert_match %r{\AHTTP/1.[01] 500\b}, sock.sysread(4096) assert_nil sock.close end def test_simple_server results = hit(["http://localhost:#{@port}/test"]) assert_equal 'hello!\n', results[0], "Handler didn't really run" end def test_client_shutdown_writes bs = 15609315 * rand sock = TCPSocket.new('127.0.0.1', @port) sock.syswrite("PUT /hello HTTP/1.1\r\n") sock.syswrite("Host: example.com\r\n") sock.syswrite("Transfer-Encoding: chunked\r\n") sock.syswrite("Trailer: X-Foo\r\n") sock.syswrite("\r\n") sock.syswrite("%x\r\n" % [ bs ]) sock.syswrite("F" * bs) sock.syswrite("\r\n0\r\nX-") "Foo: bar\r\n\r\n".each_byte do |x| sock.syswrite x.chr sleep 0.05 end # we wrote the entire request before shutting down, server should # continue to process our request and never hit EOFError on our sock sock.shutdown(Socket::SHUT_WR) buf = sock.read assert_equal 'hello!\n', buf.split(/\r\n\r\n/).last next_client = Net::HTTP.get(URI.parse("http://127.0.0.1:#@port/")) assert_equal 'hello!\n', next_client lines = File.readlines("test_stderr.#$$.log") assert lines.grep(/^Unicorn::ClientShutdown: /).empty? 
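The request body the shutdown tests write is framed with HTTP/1.1 chunked transfer encoding: each chunk is its size in hex, CRLF, the payload, CRLF, with a zero-length chunk (optionally followed by trailers) marking the end. A minimal sketch of that framing — `chunked` is an illustrative helper, not part of this test suite:

```ruby
# Frame each part as "<hex size>\r\n<data>\r\n" and terminate with a
# zero-length chunk, mirroring what the syswrite calls above do by hand.
def chunked(*parts)
  parts.map { |p| "%x\r\n%s\r\n" % [ p.bytesize, p ] }.join << "0\r\n\r\n"
end

puts chunked("hello", " world").inspect
# => "5\r\nhello\r\n6\r\n world\r\n0\r\n\r\n"
```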
assert_nil sock.close end def test_client_shutdown_write_truncates bs = 15609315 * rand sock = TCPSocket.new('127.0.0.1', @port) sock.syswrite("PUT /hello HTTP/1.1\r\n") sock.syswrite("Host: example.com\r\n") sock.syswrite("Transfer-Encoding: chunked\r\n") sock.syswrite("Trailer: X-Foo\r\n") sock.syswrite("\r\n") sock.syswrite("%x\r\n" % [ bs ]) sock.syswrite("F" * (bs / 2.0)) # shutdown prematurely, this will force the server to abort # processing on us even during app dispatch sock.shutdown(Socket::SHUT_WR) IO.select([sock], nil, nil, 60) or raise "Timed out" buf = sock.read assert_equal "", buf next_client = Net::HTTP.get(URI.parse("http://127.0.0.1:#@port/")) assert_equal 'hello!\n', next_client lines = File.readlines("test_stderr.#$$.log") lines = lines.grep(/^Unicorn::ClientShutdown: bytes_read=\d+/) assert_equal 1, lines.size assert_match %r{\AUnicorn::ClientShutdown: bytes_read=\d+ true$}, lines[0] assert_nil sock.close end def test_client_malformed_body bs = 15653984 sock = TCPSocket.new('127.0.0.1', @port) sock.syswrite("PUT /hello HTTP/1.1\r\n") sock.syswrite("Host: example.com\r\n") sock.syswrite("Transfer-Encoding: chunked\r\n") sock.syswrite("Trailer: X-Foo\r\n") sock.syswrite("\r\n") sock.syswrite("%x\r\n" % [ bs ]) sock.syswrite("F" * bs) begin File.open("/dev/urandom", "rb") { |fp| sock.syswrite(fp.sysread(16384)) } rescue end assert_nil sock.close next_client = Net::HTTP.get(URI.parse("http://127.0.0.1:#@port/")) assert_equal 'hello!\n', next_client lines = File.readlines("test_stderr.#$$.log") lines = lines.grep(/^Unicorn::HttpParserError: .* true$/) assert_equal 1, lines.size end def do_test(string, chunk, close_after=nil, shutdown_delay=0) # Do not use instance variables here, because it needs to be thread safe socket = TCPSocket.new("127.0.0.1", @port); request = StringIO.new(string) chunks_out = 0 while data = request.read(chunk) chunks_out += socket.write(data) socket.flush sleep 0.2 if close_after and chunks_out > close_after socket.close 
sleep 1 end end sleep(shutdown_delay) socket.write(" ") # Some platforms only raise the exception on attempted write socket.flush end def test_trickle_attack do_test(@valid_request, 3) end def test_close_client assert_raises IOError do do_test(@valid_request, 10, 20) end end def test_bad_client redirect_test_io do do_test("GET /test HTTP/BAD", 3) end end def test_logger_set assert_equal @server.logger, Unicorn::HttpRequest::DEFAULTS["rack.logger"] end def test_logger_changed tmp = Logger.new($stdout) @server.logger = tmp assert_equal tmp, Unicorn::HttpRequest::DEFAULTS["rack.logger"] end def test_bad_client_400 sock = TCPSocket.new('127.0.0.1', @port) sock.syswrite("GET / HTTP/1.0\r\nHost: foo\rbar\r\n\r\n") assert_match %r{\AHTTP/1.[01] 400\b}, sock.sysread(4096) assert_nil sock.close end def test_http_0_9 sock = TCPSocket.new('127.0.0.1', @port) sock.syswrite("GET /hello\r\n") assert_match 'hello!\n', sock.sysread(4096) assert_nil sock.close end def test_header_is_too_long redirect_test_io do long = "GET /test HTTP/1.1\r\n" + ("X-Big: stuff\r\n" * 15000) + "\r\n" assert_raises Errno::ECONNRESET, Errno::EPIPE, Errno::ECONNABORTED, Errno::EINVAL, IOError do do_test(long, long.length/2, 10) end end end def test_file_streamed_request body = "a" * (Unicorn::Const::MAX_BODY * 2) long = "PUT /test HTTP/1.1\r\nContent-length: #{body.length}\r\n\r\n" + body do_test(long, Unicorn::Const::CHUNK_SIZE * 2 - 400) end def test_file_streamed_request_bad_body body = "a" * (Unicorn::Const::MAX_BODY * 2) long = "GET /test HTTP/1.1\r\nContent-ength: #{body.length}\r\n\r\n" + body assert_raises(EOFError,Errno::ECONNRESET,Errno::EPIPE,Errno::EINVAL, Errno::EBADF) { do_test(long, Unicorn::Const::CHUNK_SIZE * 2 - 400) } end def test_listener_names assert_equal [ "127.0.0.1:#@port" ], Unicorn.listener_names end end unicorn-4.7.0/test/unit/test_util.rb0000644000004100000410000000645512236653132017543 0ustar www-datawww-data# -*- encoding: binary -*- require 'test/test_helper' require 
'tempfile' class TestUtil < Test::Unit::TestCase EXPECT_FLAGS = File::WRONLY | File::APPEND def test_reopen_logs_noop tmp = Tempfile.new('') fp = File.open(tmp.path, 'ab') fp.sync = true ext = fp.external_encoding rescue nil int = fp.internal_encoding rescue nil before = fp.stat.inspect Unicorn::Util.reopen_logs assert_equal before, File.stat(fp.path).inspect assert_equal ext, (fp.external_encoding rescue nil) assert_equal int, (fp.internal_encoding rescue nil) assert_equal(EXPECT_FLAGS, EXPECT_FLAGS & fp.fcntl(Fcntl::F_GETFL)) tmp.close! fp.close end def test_reopen_logs_renamed tmp = Tempfile.new('') tmp_path = tmp.path.freeze fp = File.open(tmp_path, 'ab') fp.sync = true ext = fp.external_encoding rescue nil int = fp.internal_encoding rescue nil before = fp.stat.inspect to = Tempfile.new('') File.rename(tmp_path, to.path) assert ! File.exist?(tmp_path) Unicorn::Util.reopen_logs assert_equal tmp_path, tmp.path assert File.exist?(tmp_path) assert before != File.stat(tmp_path).inspect assert_equal fp.stat.inspect, File.stat(tmp_path).inspect assert_equal ext, (fp.external_encoding rescue nil) assert_equal int, (fp.internal_encoding rescue nil) assert_equal(EXPECT_FLAGS, EXPECT_FLAGS & fp.fcntl(Fcntl::F_GETFL)) assert fp.sync tmp.close! to.close! fp.close end def test_reopen_logs_renamed_with_encoding tmp = Tempfile.new('') tmp_path = tmp.path.dup.freeze Encoding.list.each { |encoding| File.open(tmp_path, "a:#{encoding.to_s}") { |fp| fp.sync = true assert_equal encoding, fp.external_encoding assert_nil fp.internal_encoding File.unlink(tmp_path) assert ! File.exist?(tmp_path) Unicorn::Util.reopen_logs assert_equal tmp_path, fp.path assert File.exist?(tmp_path) assert_equal fp.stat.inspect, File.stat(tmp_path).inspect assert_equal encoding, fp.external_encoding assert_nil fp.internal_encoding assert_equal(EXPECT_FLAGS, EXPECT_FLAGS & fp.fcntl(Fcntl::F_GETFL)) assert fp.sync } } tmp.close! 
end if STDIN.respond_to?(:external_encoding) def test_reopen_logs_renamed_with_internal_encoding tmp = Tempfile.new('') tmp_path = tmp.path.dup.freeze Encoding.list.each { |ext| Encoding.list.each { |int| next if ext == int File.open(tmp_path, "a:#{ext.to_s}:#{int.to_s}") { |fp| fp.sync = true assert_equal ext, fp.external_encoding if ext != Encoding::BINARY assert_equal int, fp.internal_encoding end File.unlink(tmp_path) assert ! File.exist?(tmp_path) Unicorn::Util.reopen_logs assert_equal tmp_path, fp.path assert File.exist?(tmp_path) assert_equal fp.stat.inspect, File.stat(tmp_path).inspect assert_equal ext, fp.external_encoding if ext != Encoding::BINARY assert_equal int, fp.internal_encoding end assert_equal(EXPECT_FLAGS, EXPECT_FLAGS & fp.fcntl(Fcntl::F_GETFL)) assert fp.sync } } } tmp.close! end if STDIN.respond_to?(:external_encoding) end unicorn-4.7.0/test/unit/test_http_parser_ng.rb0000644000004100000410000005415512236653132021605 0ustar www-datawww-data# -*- encoding: binary -*- require 'test/test_helper' require 'digest/md5' include Unicorn class HttpParserNgTest < Test::Unit::TestCase def setup HttpParser.keepalive_requests = HttpParser::KEEPALIVE_REQUESTS_DEFAULT @parser = HttpParser.new end def test_next_clear r = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n" @parser.buf << r @parser.parse @parser.response_start_sent = true assert @parser.keepalive? assert @parser.next? assert @parser.response_start_sent # persistent client makes another request: @parser.buf << r @parser.parse assert @parser.keepalive? assert @parser.next? 
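test_next_clear exercises keep-alive: after one request completes, a persistent client writes another request on the same connection and the parser resumes from the same buffer. The wire-level situation in miniature — two complete requests back to back, each header section ending at a blank line (a toy illustration, not the parser's actual buffering):

```ruby
# Two pipelined requests on one connection buffer; each request's
# header section ends at the "\r\n\r\n" blank line.
buf = "GET /a HTTP/1.1\r\nHost: x\r\n\r\n" \
      "GET /b HTTP/1.1\r\nHost: x\r\n\r\n"
requests = buf.split(/(?<=\r\n\r\n)/)   # zero-width split after each blank line
puts requests.size                      # two requests queued on one buffer
puts requests.first[/\AGET \S+/]
```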
assert_equal false, @parser.response_start_sent end def test_keepalive_requests_default_constant assert_kind_of Integer, HttpParser::KEEPALIVE_REQUESTS_DEFAULT assert HttpParser::KEEPALIVE_REQUESTS_DEFAULT >= 0 end def test_keepalive_requests_setting HttpParser.keepalive_requests = 0 assert_equal 0, HttpParser.keepalive_requests HttpParser.keepalive_requests = nil assert HttpParser.keepalive_requests >= 0xffffffff HttpParser.keepalive_requests = 1 assert_equal 1, HttpParser.keepalive_requests HttpParser.keepalive_requests = 666 assert_equal 666, HttpParser.keepalive_requests assert_raises(TypeError) { HttpParser.keepalive_requests = "666" } assert_raises(TypeError) { HttpParser.keepalive_requests = [] } end def test_connection_TE @parser.buf << "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: TE\r\n" @parser.buf << "TE: trailers\r\n\r\n" @parser.parse assert @parser.keepalive? assert @parser.next? end def test_keepalive_requests_with_next? req = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n".freeze expect = { "SERVER_NAME" => "example.com", "HTTP_HOST" => "example.com", "rack.url_scheme" => "http", "REQUEST_PATH" => "/", "SERVER_PROTOCOL" => "HTTP/1.1", "PATH_INFO" => "/", "HTTP_VERSION" => "HTTP/1.1", "REQUEST_URI" => "/", "SERVER_PORT" => "80", "REQUEST_METHOD" => "GET", "QUERY_STRING" => "" }.freeze HttpParser::KEEPALIVE_REQUESTS_DEFAULT.times do |nr| @parser.buf << req assert_equal expect, @parser.parse assert @parser.next? end @parser.buf << req assert_equal expect, @parser.parse assert ! @parser.next? end def test_fewer_keepalive_requests_with_next? 
HttpParser.keepalive_requests = 5 @parser = HttpParser.new req = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n".freeze expect = { "SERVER_NAME" => "example.com", "HTTP_HOST" => "example.com", "rack.url_scheme" => "http", "REQUEST_PATH" => "/", "SERVER_PROTOCOL" => "HTTP/1.1", "PATH_INFO" => "/", "HTTP_VERSION" => "HTTP/1.1", "REQUEST_URI" => "/", "SERVER_PORT" => "80", "REQUEST_METHOD" => "GET", "QUERY_STRING" => "" }.freeze 5.times do |nr| @parser.buf << req assert_equal expect, @parser.parse assert @parser.next? end @parser.buf << req assert_equal expect, @parser.parse assert ! @parser.next? end def test_default_keepalive_is_off assert ! @parser.keepalive? assert ! @parser.next? @parser.buf << "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n" @parser.parse assert @parser.keepalive? @parser.clear assert ! @parser.keepalive? assert ! @parser.next? end def test_identity_byte_headers req = @parser.env str = "PUT / HTTP/1.1\r\n" str << "Content-Length: 123\r\n" str << "\r" hdr = @parser.buf str.each_byte { |byte| hdr << byte.chr assert_nil @parser.parse } hdr << "\n" assert_equal req.object_id, @parser.parse.object_id assert_equal '123', req['CONTENT_LENGTH'] assert_equal 0, hdr.size assert ! @parser.keepalive? assert @parser.headers? assert_equal 123, @parser.content_length dst = "" buf = '.' * 123 @parser.filter_body(dst, buf) assert_equal '.' * 123, dst assert_equal "", buf assert @parser.keepalive? end def test_identity_step_headers req = @parser.env str = @parser.buf str << "PUT / HTTP/1.1\r\n" assert ! @parser.parse str << "Content-Length: 123\r\n" assert ! @parser.parse str << "\r\n" assert_equal req.object_id, @parser.parse.object_id assert_equal '123', req['CONTENT_LENGTH'] assert_equal 0, str.size assert ! @parser.keepalive? assert @parser.headers? dst = "" buf = '.' * 123 @parser.filter_body(dst, buf) assert_equal '.' * 123, dst assert_equal "", buf assert @parser.keepalive? 
end def test_identity_oneshot_header req = @parser.env str = @parser.buf str << "PUT / HTTP/1.1\r\nContent-Length: 123\r\n\r\n" assert_equal req.object_id, @parser.parse.object_id assert_equal '123', req['CONTENT_LENGTH'] assert_equal 0, str.size assert ! @parser.keepalive? assert @parser.headers? dst = "" buf = '.' * 123 @parser.filter_body(dst, buf) assert_equal '.' * 123, dst assert_equal "", buf end def test_identity_oneshot_header_with_body body = ('a' * 123).freeze req = @parser.env str = @parser.buf str << "PUT / HTTP/1.1\r\n" \ "Content-Length: #{body.length}\r\n" \ "\r\n#{body}" assert_equal req.object_id, @parser.parse.object_id assert_equal '123', req['CONTENT_LENGTH'] assert_equal 123, str.size assert_equal body, str tmp = '' assert_nil @parser.filter_body(tmp, str) assert_equal 0, str.size assert_equal tmp, body assert_equal "", @parser.filter_body(tmp, str) assert @parser.keepalive? end def test_identity_oneshot_header_with_body_partial str = @parser.buf str << "PUT / HTTP/1.1\r\nContent-Length: 123\r\n\r\na" assert_equal Hash, @parser.parse.class assert_equal 1, str.size assert_equal 'a', str tmp = '' assert_nil @parser.filter_body(tmp, str) assert_equal "", str assert_equal "a", tmp str << ' ' * 122 rv = @parser.filter_body(tmp, str) assert_equal 122, tmp.size assert_nil rv assert_equal "", str assert_equal str.object_id, @parser.filter_body(tmp, str).object_id assert @parser.keepalive? end def test_identity_oneshot_header_with_body_slop str = @parser.buf str << "PUT / HTTP/1.1\r\nContent-Length: 1\r\n\r\naG" assert_equal Hash, @parser.parse.class assert_equal 2, str.size assert_equal 'aG', str tmp = '' assert_nil @parser.filter_body(tmp, str) assert_equal "G", str assert_equal "G", @parser.filter_body(tmp, str) assert_equal 1, tmp.size assert_equal "a", tmp assert @parser.keepalive? 
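The identity-body tests stream a Content-Length-delimited body through filter_body. The invariant they check — consume exactly Content-Length bytes and leave any trailing bytes (e.g. a pipelined request) untouched in the buffer — can be sketched in plain Ruby; this is a simplification for illustration, not the C parser's logic:

```ruby
require 'stringio'

# Read exactly content_length bytes from io; anything after that
# belongs to the next request and must stay unread.
def read_identity_body(io, content_length, chunk = 4096)
  body = ''.dup
  while body.bytesize < content_length
    body << io.read([chunk, content_length - body.bytesize].min)
  end
  body
end

io = StringIO.new(('.' * 123) + 'PUT /next')
body = read_identity_body(io, 123)
puts body.bytesize   # 123 -- exactly Content-Length bytes consumed
puts io.read         # "PUT /next" is left for the next parse
```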
end def test_chunked str = @parser.buf req = @parser.env str << "PUT / HTTP/1.1\r\ntransfer-Encoding: chunked\r\n\r\n" assert_equal req, @parser.parse, "msg=#{str}" assert_equal 0, str.size tmp = "" assert_nil @parser.filter_body(tmp, str << "6") assert_equal 0, tmp.size assert_nil @parser.filter_body(tmp, str << "\r\n") assert_equal 0, str.size assert_equal 0, tmp.size tmp = "" assert_nil @parser.filter_body(tmp, str << "..") assert_equal "..", tmp assert_nil @parser.filter_body(tmp, str << "abcd\r\n0\r\n") assert_equal "abcd", tmp assert_equal str.object_id, @parser.filter_body(tmp, str << "PUT").object_id assert_equal "PUT", str assert ! @parser.keepalive? str << "TY: FOO\r\n\r\n" assert_equal req, @parser.parse assert_equal "FOO", req["HTTP_PUTTY"] assert @parser.keepalive? end def test_chunked_empty str = @parser.buf req = @parser.env str << "PUT / HTTP/1.1\r\ntransfer-Encoding: chunked\r\n\r\n" assert_equal req, @parser.parse, "msg=#{str}" assert_equal 0, str.size tmp = "" assert_equal str, @parser.filter_body(tmp, str << "0\r\n\r\n") assert_equal "", tmp end def test_two_chunks str = @parser.buf str << "PUT / HTTP/1.1\r\ntransfer-Encoding: chunked\r\n\r\n" req = @parser.env assert_equal req, @parser.parse assert_equal 0, str.size tmp = "" assert_nil @parser.filter_body(tmp, str << "6") assert_equal 0, tmp.size assert_nil @parser.filter_body(tmp, str << "\r\n") assert_equal "", str assert_equal 0, tmp.size tmp = "" assert_nil @parser.filter_body(tmp, str << "..") assert_equal 2, tmp.size assert_equal "..", tmp assert_nil @parser.filter_body(tmp, str << "abcd\r\n1") assert_equal "abcd", tmp assert_nil @parser.filter_body(tmp, str << "\r") assert_equal "", tmp assert_nil @parser.filter_body(tmp, str << "\n") assert_equal "", tmp assert_nil @parser.filter_body(tmp, str << "z") assert_equal "z", tmp assert_nil @parser.filter_body(tmp, str << "\r\n") assert_nil @parser.filter_body(tmp, str << "0") assert_nil @parser.filter_body(tmp, str << "\r") rv = 
@parser.filter_body(tmp, str << "\nGET") assert_equal "GET", rv assert_equal str.object_id, rv.object_id assert ! @parser.keepalive? end def test_big_chunk str = @parser.buf str << "PUT / HTTP/1.1\r\ntransfer-Encoding: chunked\r\n\r\n" \ "4000\r\nabcd" req = @parser.env assert_equal req, @parser.parse tmp = '' assert_nil @parser.filter_body(tmp, str) assert_equal '', str str << ' ' * 16300 assert_nil @parser.filter_body(tmp, str) assert_equal '', str str << ' ' * 80 assert_nil @parser.filter_body(tmp, str) assert_equal '', str assert ! @parser.body_eof? assert_equal "", @parser.filter_body(tmp, str << "\r\n0\r\n") assert_equal "", tmp assert @parser.body_eof? str << "\r\n" assert_equal req, @parser.parse assert_equal "", str assert @parser.body_eof? assert @parser.keepalive? end def test_two_chunks_oneshot str = @parser.buf req = @parser.env str << "PUT / HTTP/1.1\r\ntransfer-Encoding: chunked\r\n\r\n" \ "1\r\na\r\n2\r\n..\r\n0\r\n" assert_equal req, @parser.parse tmp = '' assert_nil @parser.filter_body(tmp, str) assert_equal 'a..', tmp rv = @parser.filter_body(tmp, str) assert_equal rv.object_id, str.object_id assert ! @parser.keepalive? end def test_chunks_bytewise chunked = "10\r\nabcdefghijklmnop\r\n11\r\n0123456789abcdefg\r\n0\r\n" str = "PUT / HTTP/1.1\r\ntransfer-Encoding: chunked\r\n\r\n" buf = @parser.buf buf << str req = @parser.env assert_equal req, @parser.parse assert_equal "", buf tmp = '' body = '' str = chunked[0..-2] str.each_byte { |byte| assert_nil @parser.filter_body(tmp, buf << byte.chr) body << tmp } assert_equal 'abcdefghijklmnop0123456789abcdefg', body rv = @parser.filter_body(tmp, buf<< "\n") assert_equal rv.object_id, buf.object_id assert ! @parser.keepalive? 
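test_chunks_bytewise feeds the dechunker a single byte at a time, which the real parser handles incrementally with no lookahead. For contrast, here is a naive decoder of the same format that needs the whole body up front — illustration only, and deliberately without the real parser's error handling:

```ruby
# Naive, non-incremental chunked decoder: walks "<hex size>\r\n<data>\r\n"
# frames until the terminating zero-length chunk.
def dechunk(str)
  out = ''.dup
  until str.empty?
    size, str = str.split("\r\n", 2)
    size = size.to_i(16)
    break if size == 0
    out << str[0, size]
    str = str[(size + 2)..-1]   # skip the chunk data and its trailing CRLF
  end
  out
end

puts dechunk("10\r\nabcdefghijklmnop\r\n11\r\n0123456789abcdefg\r\n0\r\n\r\n")
# => abcdefghijklmnop0123456789abcdefg
```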
end def test_trailers req = @parser.env str = @parser.buf str << "PUT / HTTP/1.1\r\n" \ "Trailer: Content-MD5\r\n" \ "transfer-Encoding: chunked\r\n\r\n" \ "1\r\na\r\n2\r\n..\r\n0\r\n" assert_equal req, @parser.parse assert_equal 'Content-MD5', req['HTTP_TRAILER'] assert_nil req['HTTP_CONTENT_MD5'] tmp = '' assert_nil @parser.filter_body(tmp, str) assert_equal 'a..', tmp md5_b64 = [ Digest::MD5.digest(tmp) ].pack('m').strip.freeze rv = @parser.filter_body(tmp, str) assert_equal rv.object_id, str.object_id assert_equal '', str md5_hdr = "Content-MD5: #{md5_b64}\r\n".freeze str << md5_hdr assert_nil @parser.trailers(req, str) assert_equal md5_b64, req['HTTP_CONTENT_MD5'] assert_equal "CONTENT_MD5: #{md5_b64}\r\n", str str << "\r" assert_nil @parser.parse str << "\nGET / " assert_equal req, @parser.parse assert_equal "GET / ", str assert @parser.keepalive? end def test_trailers_slowly str = @parser.buf str << "PUT / HTTP/1.1\r\n" \ "Trailer: Content-MD5\r\n" \ "transfer-Encoding: chunked\r\n\r\n" \ "1\r\na\r\n2\r\n..\r\n0\r\n" req = @parser.env assert_equal req, @parser.parse assert_equal 'Content-MD5', req['HTTP_TRAILER'] assert_nil req['HTTP_CONTENT_MD5'] tmp = '' assert_nil @parser.filter_body(tmp, str) assert_equal 'a..', tmp md5_b64 = [ Digest::MD5.digest(tmp) ].pack('m').strip.freeze rv = @parser.filter_body(tmp, str) assert_equal rv.object_id, str.object_id assert_equal '', str assert_nil @parser.trailers(req, str) md5_hdr = "Content-MD5: #{md5_b64}\r\n".freeze md5_hdr.each_byte { |byte| str << byte.chr assert_nil @parser.trailers(req, str) } assert_equal md5_b64, req['HTTP_CONTENT_MD5'] assert_equal "CONTENT_MD5: #{md5_b64}\r\n", str str << "\r" assert_nil @parser.parse str << "\n" assert_equal req, @parser.parse end def test_max_chunk str = @parser.buf str << "PUT / HTTP/1.1\r\n" \ "transfer-Encoding: chunked\r\n\r\n" \ "#{HttpParser::CHUNK_MAX.to_s(16)}\r\na\r\n2\r\n..\r\n0\r\n" req = @parser.env assert_equal req, @parser.parse assert_nil 
@parser.content_length @parser.filter_body('', str) assert ! @parser.keepalive? end def test_max_body n = HttpParser::LENGTH_MAX @parser.buf << "PUT / HTTP/1.1\r\nContent-Length: #{n}\r\n\r\n" req = @parser.env @parser.headers(req, @parser.buf) assert_equal n, req['CONTENT_LENGTH'].to_i assert ! @parser.keepalive? end def test_overflow_chunk n = HttpParser::CHUNK_MAX + 1 str = @parser.buf req = @parser.env str << "PUT / HTTP/1.1\r\n" \ "transfer-Encoding: chunked\r\n\r\n" \ "#{n.to_s(16)}\r\na\r\n2\r\n..\r\n0\r\n" assert_equal req, @parser.parse assert_nil @parser.content_length assert_raise(HttpParserError) { @parser.filter_body('', str) } end def test_overflow_content_length n = HttpParser::LENGTH_MAX + 1 @parser.buf << "PUT / HTTP/1.1\r\nContent-Length: #{n}\r\n\r\n" assert_raise(HttpParserError) { @parser.parse } end def test_bad_chunk @parser.buf << "PUT / HTTP/1.1\r\n" \ "transfer-Encoding: chunked\r\n\r\n" \ "#zzz\r\na\r\n2\r\n..\r\n0\r\n" req = @parser.env assert_equal req, @parser.parse assert_nil @parser.content_length assert_raise(HttpParserError) { @parser.filter_body("", @parser.buf) } end def test_bad_content_length @parser.buf << "PUT / HTTP/1.1\r\nContent-Length: 7ff\r\n\r\n" assert_raise(HttpParserError) { @parser.parse } end def test_bad_trailers str = @parser.buf req = @parser.env str << "PUT / HTTP/1.1\r\n" \ "Trailer: Transfer-Encoding\r\n" \ "transfer-Encoding: chunked\r\n\r\n" \ "1\r\na\r\n2\r\n..\r\n0\r\n" assert_equal req, @parser.parse assert_equal 'Transfer-Encoding', req['HTTP_TRAILER'] tmp = '' assert_nil @parser.filter_body(tmp, str) assert_equal 'a..', tmp assert_equal '', str str << "Transfer-Encoding: identity\r\n\r\n" assert_raise(HttpParserError) { @parser.parse } end def test_repeat_headers str = "PUT / HTTP/1.1\r\n" \ "Trailer: Content-MD5\r\n" \ "Trailer: Content-SHA1\r\n" \ "transfer-Encoding: chunked\r\n\r\n" \ "1\r\na\r\n2\r\n..\r\n0\r\n" req = @parser.env @parser.buf << str assert_equal req, @parser.parse assert_equal 
'Content-MD5,Content-SHA1', req['HTTP_TRAILER'] assert ! @parser.keepalive? end def test_parse_simple_request parser = HttpParser.new req = parser.env parser.buf << "GET /read-rfc1945-if-you-dont-believe-me\r\n" assert_equal req, parser.parse assert_equal '', parser.buf expect = { "SERVER_NAME"=>"localhost", "rack.url_scheme"=>"http", "REQUEST_PATH"=>"/read-rfc1945-if-you-dont-believe-me", "PATH_INFO"=>"/read-rfc1945-if-you-dont-believe-me", "REQUEST_URI"=>"/read-rfc1945-if-you-dont-believe-me", "SERVER_PORT"=>"80", "SERVER_PROTOCOL"=>"HTTP/0.9", "REQUEST_METHOD"=>"GET", "QUERY_STRING"=>"" } assert_equal expect, req assert ! parser.headers? end def test_path_info_semicolon qs = "QUERY_STRING" pi = "PATH_INFO" req = {} str = "GET %s HTTP/1.1\r\nHost: example.com\r\n\r\n" { "/1;a=b?c=d&e=f" => { qs => "c=d&e=f", pi => "/1;a=b" }, "/1?c=d&e=f" => { qs => "c=d&e=f", pi => "/1" }, "/1;a=b" => { qs => "", pi => "/1;a=b" }, "/1;a=b?" => { qs => "", pi => "/1;a=b" }, "/1?a=b;c=d&e=f" => { qs => "a=b;c=d&e=f", pi => "/1" }, "*" => { qs => "", pi => "" }, }.each do |uri,expect| assert_equal req, @parser.headers(req.clear, str % [ uri ]) req = req.dup @parser.clear assert_equal uri, req["REQUEST_URI"], "REQUEST_URI mismatch" assert_equal expect[qs], req[qs], "#{qs} mismatch" assert_equal expect[pi], req[pi], "#{pi} mismatch" next if uri == "*" uri = URI.parse("http://example.com#{uri}") assert_equal uri.query.to_s, req[qs], "#{qs} mismatch URI.parse disagrees" assert_equal uri.path, req[pi], "#{pi} mismatch URI.parse disagrees" end end def test_path_info_semicolon_absolute qs = "QUERY_STRING" pi = "PATH_INFO" req = {} str = "GET http://example.com%s HTTP/1.1\r\nHost: www.example.com\r\n\r\n" { "/1;a=b?c=d&e=f" => { qs => "c=d&e=f", pi => "/1;a=b" }, "/1?c=d&e=f" => { qs => "c=d&e=f", pi => "/1" }, "/1;a=b" => { qs => "", pi => "/1;a=b" }, "/1;a=b?" 
=> { qs => "", pi => "/1;a=b" }, "/1?a=b;c=d&e=f" => { qs => "a=b;c=d&e=f", pi => "/1" }, }.each do |uri,expect| assert_equal req, @parser.headers(req.clear, str % [ uri ]) req = req.dup @parser.clear assert_equal uri, req["REQUEST_URI"], "REQUEST_URI mismatch" assert_equal "example.com", req["HTTP_HOST"], "Host: mismatch" assert_equal expect[qs], req[qs], "#{qs} mismatch" assert_equal expect[pi], req[pi], "#{pi} mismatch" end end def test_negative_content_length req = {} str = "PUT / HTTP/1.1\r\n" \ "Content-Length: -1\r\n" \ "\r\n" assert_raises(HttpParserError) do @parser.headers(req, str) end end def test_invalid_content_length req = {} str = "PUT / HTTP/1.1\r\n" \ "Content-Length: zzzzz\r\n" \ "\r\n" assert_raises(HttpParserError) do @parser.headers(req, str) end end def test_backtrace_is_empty begin @parser.headers({}, "AAADFSFDSFD\r\n\r\n") assert false, "should never get here line:#{__LINE__}" rescue HttpParserError => e assert_equal [], e.backtrace return end assert false, "should never get here line:#{__LINE__}" end def test_ignore_version_header @parser.buf << "GET / HTTP/1.1\r\nVersion: hello\r\n\r\n" req = @parser.env assert_equal req, @parser.parse assert_equal '', @parser.buf expect = { "SERVER_NAME" => "localhost", "rack.url_scheme" => "http", "REQUEST_PATH" => "/", "SERVER_PROTOCOL" => "HTTP/1.1", "PATH_INFO" => "/", "HTTP_VERSION" => "HTTP/1.1", "REQUEST_URI" => "/", "SERVER_PORT" => "80", "REQUEST_METHOD" => "GET", "QUERY_STRING" => "" } assert_equal expect, req end def test_pipelined_requests host = "example.com" expect = { "HTTP_HOST" => host, "SERVER_NAME" => host, "REQUEST_PATH" => "/", "rack.url_scheme" => "http", "SERVER_PROTOCOL" => "HTTP/1.1", "PATH_INFO" => "/", "HTTP_VERSION" => "HTTP/1.1", "REQUEST_URI" => "/", "SERVER_PORT" => "80", "REQUEST_METHOD" => "GET", "QUERY_STRING" => "" } req1 = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n" req2 = "GET / HTTP/1.1\r\nHost: www.example.com\r\n\r\n" @parser.buf << (req1 + req2) env1 = 
@parser.parse.dup assert_equal expect, env1 assert_equal req2, @parser.buf assert ! @parser.env.empty? assert @parser.next? assert @parser.keepalive? assert @parser.headers? assert_equal expect, @parser.env env2 = @parser.parse.dup host.replace "www.example.com" assert_equal "www.example.com", expect["HTTP_HOST"] assert_equal "www.example.com", expect["SERVER_NAME"] assert_equal expect, env2 assert_equal "", @parser.buf end def test_keepalive_requests_disabled req = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n".freeze expect = { "SERVER_NAME" => "example.com", "HTTP_HOST" => "example.com", "rack.url_scheme" => "http", "REQUEST_PATH" => "/", "SERVER_PROTOCOL" => "HTTP/1.1", "PATH_INFO" => "/", "HTTP_VERSION" => "HTTP/1.1", "REQUEST_URI" => "/", "SERVER_PORT" => "80", "REQUEST_METHOD" => "GET", "QUERY_STRING" => "" }.freeze HttpParser.keepalive_requests = 0 @parser = HttpParser.new @parser.buf << req assert_equal expect, @parser.parse assert ! @parser.next? end def test_chunk_only tmp = "" assert_equal @parser, @parser.dechunk! assert_nil @parser.filter_body(tmp, "6\r\n") assert_equal "", tmp assert_nil @parser.filter_body(tmp, "abcdef") assert_equal "abcdef", tmp assert_nil @parser.filter_body(tmp, "\r\n") assert_equal "", tmp src = "0\r\n\r\n" assert_equal src.object_id, @parser.filter_body(tmp, src).object_id assert_equal "", tmp end def test_chunk_only_bad_align tmp = "" assert_equal @parser, @parser.dechunk! assert_nil @parser.filter_body(tmp, "6\r\na") assert_equal "a", tmp assert_nil @parser.filter_body(tmp, "bcde") assert_equal "bcde", tmp assert_nil @parser.filter_body(tmp, "f\r") assert_equal "f", tmp src = "\n0\r\n\r\n" assert_equal src.object_id, @parser.filter_body(tmp, src).object_id assert_equal "", tmp end def test_chunk_only_reset_ok tmp = "" assert_equal @parser, @parser.dechunk! 
src = "1\r\na\r\n0\r\n\r\n" assert_nil @parser.filter_body(tmp, src) assert_equal "a", tmp assert_equal src.object_id, @parser.filter_body(tmp, src).object_id assert_equal @parser, @parser.dechunk! src = "0\r\n\r\n" assert_equal src.object_id, @parser.filter_body(tmp, src).object_id assert_equal "", tmp assert_equal src, @parser.filter_body(tmp, src) end end unicorn-4.7.0/test/unit/test_socket_helper.rb0000644000004100000410000001427212236653132021411 0ustar www-datawww-data# -*- encoding: binary -*- require 'test/test_helper' require 'tempfile' class TestSocketHelper < Test::Unit::TestCase include Unicorn::SocketHelper attr_reader :logger GET_SLASH = "GET / HTTP/1.0\r\n\r\n".freeze def setup @log_tmp = Tempfile.new 'logger' @logger = Logger.new(@log_tmp.path) @test_addr = ENV['UNICORN_TEST_ADDR'] || '127.0.0.1' @test6_addr = ENV['UNICORN_TEST6_ADDR'] || '::1' GC.disable end def teardown GC.enable end def test_bind_listen_tcp port = unused_port @test_addr @tcp_listener_name = "#@test_addr:#{port}" @tcp_listener = bind_listen(@tcp_listener_name) assert TCPServer === @tcp_listener assert_equal @tcp_listener_name, sock_name(@tcp_listener) end def test_bind_listen_options port = unused_port @test_addr tcp_listener_name = "#@test_addr:#{port}" tmp = Tempfile.new 'unix.sock' unix_listener_name = tmp.path File.unlink(tmp.path) [ { :backlog => 5 }, { :sndbuf => 4096 }, { :rcvbuf => 4096 }, { :backlog => 16, :rcvbuf => 4096, :sndbuf => 4096 } ].each do |opts| tcp_listener = bind_listen(tcp_listener_name, opts) assert TCPServer === tcp_listener tcp_listener.close unix_listener = bind_listen(unix_listener_name, opts) assert UNIXServer === unix_listener unix_listener.close end end def test_bind_listen_unix old_umask = File.umask(0777) tmp = Tempfile.new 'unix.sock' @unix_listener_path = tmp.path File.unlink(@unix_listener_path) @unix_listener = bind_listen(@unix_listener_path) assert UNIXServer === @unix_listener assert_equal @unix_listener_path, sock_name(@unix_listener) 
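The UNIX-socket bind tests check that umask is honored when the listener is created (e.g. mode 0140700: a socket with permission bits 0700). The underlying mechanics can be seen with the stdlib UNIXServer directly; the socket path below is illustrative:

```ruby
require 'socket'
require 'tmpdir'

Dir.mktmpdir do |dir|
  path = File.join(dir, 'demo.sock')
  old_umask = File.umask(0077)   # mask group/other bits at creation time
  begin
    srv = UNIXServer.new(path)
    printf("%o\n", File.stat(path).mode & 0777)   # 700 under umask 0077 on Linux
    srv.close
  ensure
    File.umask(old_umask)        # always restore the process-wide umask
  end
end
```

Because umask is process-wide state, restoring it in an ensure block (as the tests also do in teardown-like fashion) matters for anything else running in the process.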
assert File.readable?(@unix_listener_path), "not readable" assert File.writable?(@unix_listener_path), "not writable" assert_equal 0777, File.umask ensure File.umask(old_umask) end def test_bind_listen_unix_umask old_umask = File.umask(0777) tmp = Tempfile.new 'unix.sock' @unix_listener_path = tmp.path File.unlink(@unix_listener_path) @unix_listener = bind_listen(@unix_listener_path, :umask => 077) assert UNIXServer === @unix_listener assert_equal @unix_listener_path, sock_name(@unix_listener) assert_equal 0140700, File.stat(@unix_listener_path).mode assert_equal 0777, File.umask ensure File.umask(old_umask) end def test_bind_listen_unix_idempotent test_bind_listen_unix a = bind_listen(@unix_listener) assert_equal a.fileno, @unix_listener.fileno unix_server = server_cast(@unix_listener) assert UNIXServer === unix_server a = bind_listen(unix_server) assert_equal a.fileno, unix_server.fileno assert_equal a.fileno, @unix_listener.fileno end def test_bind_listen_tcp_idempotent test_bind_listen_tcp a = bind_listen(@tcp_listener) assert_equal a.fileno, @tcp_listener.fileno tcp_server = server_cast(@tcp_listener) assert TCPServer === tcp_server a = bind_listen(tcp_server) assert_equal a.fileno, tcp_server.fileno assert_equal a.fileno, @tcp_listener.fileno end def test_bind_listen_unix_rebind test_bind_listen_unix new_listener = nil assert_raises(Errno::EADDRINUSE) do new_listener = bind_listen(@unix_listener_path) end File.unlink(@unix_listener_path) new_listener = bind_listen(@unix_listener_path) assert UNIXServer === new_listener assert new_listener.fileno != @unix_listener.fileno assert_equal sock_name(new_listener), sock_name(@unix_listener) assert_equal @unix_listener_path, sock_name(new_listener) pid = fork do client = server_cast(new_listener).accept client.syswrite('abcde') exit 0 end s = UNIXSocket.new(@unix_listener_path) IO.select([s]) assert_equal 'abcde', s.sysread(5) pid, status = Process.waitpid2(pid) assert status.success? 
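The TCP_DEFER_ACCEPT and SO_REUSEPORT tests in this file all read a socket option back with getsockopt(...).unpack('i'). The same pattern against SO_REUSEADDR, which is portable everywhere — the ephemeral listener here is just for illustration:

```ruby
require 'socket'

srv = TCPServer.new('127.0.0.1', 0)   # port 0: kernel assigns a free port
srv.setsockopt(Socket::SOL_SOCKET, Socket::SO_REUSEADDR, 1)
opt = srv.getsockopt(Socket::SOL_SOCKET, Socket::SO_REUSEADDR)
puts opt.unpack('i')[0] >= 1          # non-zero once the option is set
srv.close
```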
end def test_server_cast test_bind_listen_unix test_bind_listen_tcp unix_listener_socket = Socket.for_fd(@unix_listener.fileno) assert Socket === unix_listener_socket @unix_server = server_cast(unix_listener_socket) assert_equal @unix_listener.fileno, @unix_server.fileno assert UNIXServer === @unix_server assert_equal(@unix_server.path, @unix_listener.path, "##{@unix_server.path} != #{@unix_listener.path}") assert File.socket?(@unix_server.path) assert_equal @unix_listener_path, sock_name(@unix_server) tcp_listener_socket = Socket.for_fd(@tcp_listener.fileno) assert Socket === tcp_listener_socket @tcp_server = server_cast(tcp_listener_socket) assert_equal @tcp_listener.fileno, @tcp_server.fileno assert TCPServer === @tcp_server assert_equal @tcp_listener_name, sock_name(@tcp_server) end def test_sock_name test_server_cast sock_name(@unix_server) end def test_tcp_defer_accept_default port = unused_port @test_addr name = "#@test_addr:#{port}" sock = bind_listen(name) cur = sock.getsockopt(Socket::SOL_TCP, TCP_DEFER_ACCEPT).unpack('i')[0] assert cur >= 1 end if defined?(TCP_DEFER_ACCEPT) def test_tcp_defer_accept_disable port = unused_port @test_addr name = "#@test_addr:#{port}" sock = bind_listen(name, :tcp_defer_accept => false) cur = sock.getsockopt(Socket::SOL_TCP, TCP_DEFER_ACCEPT).unpack('i')[0] assert_equal 0, cur end if defined?(TCP_DEFER_ACCEPT) def test_tcp_defer_accept_nr port = unused_port @test_addr name = "#@test_addr:#{port}" sock = bind_listen(name, :tcp_defer_accept => 60) cur = sock.getsockopt(Socket::SOL_TCP, TCP_DEFER_ACCEPT).unpack('i')[0] assert cur > 1 end if defined?(TCP_DEFER_ACCEPT) def test_ipv6only port = begin unused_port "#@test6_addr" rescue Errno::EINVAL return end sock = bind_listen "[#@test6_addr]:#{port}", :ipv6only => true cur = sock.getsockopt(:IPPROTO_IPV6, :IPV6_V6ONLY).unpack('i')[0] assert_equal 1, cur rescue Errno::EAFNOSUPPORT end if RUBY_VERSION >= "1.9.2" def test_reuseport port = unused_port @test_addr name = 
"#@test_addr:#{port}" sock = bind_listen(name, :reuseport => true) cur = sock.getsockopt(Socket::SOL_SOCKET, SO_REUSEPORT).unpack('i')[0] assert_equal 1, cur end if defined?(SO_REUSEPORT) end unicorn-4.7.0/test/unit/test_signals.rb0000644000004100000410000001311212236653132020212 0ustar www-datawww-data# -*- encoding: binary -*- # Copyright (c) 2009 Eric Wong # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or # the GPLv2+ (GPLv3+ preferred) # # Ensure we stay sane in the face of signals being sent to us require 'test/test_helper' include Unicorn class Dd def initialize(bs, count) @count = count @buf = ' ' * bs end def each(&block) @count.times { yield @buf } end end class SignalsTest < Test::Unit::TestCase def setup @bs = 1 * 1024 * 1024 @count = 100 @port = unused_port @sock = Tempfile.new('unicorn.sock') @tmp = Tempfile.new('unicorn.write') @tmp.sync = true File.unlink(@sock.path) File.unlink(@tmp.path) @server_opts = { :listeners => [ "127.0.0.1:#@port", @sock.path ], :after_fork => lambda { |server,worker| trap(:HUP) { @tmp.syswrite('.') } }, } @server = nil end def teardown reset_sig_handlers end def test_worker_dies_on_dead_master pid = fork { app = lambda { |env| [ 200, {'X-Pid' => "#$$" }, [] ] } opts = @server_opts.merge(:timeout => 3) redirect_test_io { HttpServer.new(app, opts).start.join } } wait_workers_ready("test_stderr.#{pid}.log", 1) sock = TCPSocket.new('127.0.0.1', @port) sock.syswrite("GET / HTTP/1.0\r\n\r\n") buf = sock.readpartial(4096) assert_nil sock.close buf =~ /\bX-Pid: (\d+)\b/ or raise Exception child = $1.to_i wait_master_ready("test_stderr.#{pid}.log") wait_workers_ready("test_stderr.#{pid}.log", 1) Process.kill(:KILL, pid) Process.waitpid(pid) File.unlink("test_stderr.#{pid}.log", "test_stdout.#{pid}.log") t0 = Time.now assert child assert t0 assert_raises(Errno::ESRCH) { loop { Process.kill(0, child); sleep 0.2 } } assert((Time.now - t0) < 60) end def test_sleepy_kill rd, wr = IO.pipe pid = fork { 
rd.close app = lambda { |env| wr.syswrite('.'); sleep; [ 200, {}, [] ] } redirect_test_io { HttpServer.new(app, @server_opts).start.join } } wr.close wait_workers_ready("test_stderr.#{pid}.log", 1) sock = TCPSocket.new('127.0.0.1', @port) sock.syswrite("GET / HTTP/1.0\r\n\r\n") buf = rd.readpartial(1) wait_master_ready("test_stderr.#{pid}.log") Process.kill(:INT, pid) Process.waitpid(pid) assert_equal '.', buf buf = nil assert_raises(EOFError,Errno::ECONNRESET,Errno::EPIPE,Errno::EINVAL, Errno::EBADF) do buf = sock.sysread(4096) end assert_nil buf end def test_timeout_slow_response pid = fork { app = lambda { |env| sleep } opts = @server_opts.merge(:timeout => 3) redirect_test_io { HttpServer.new(app, opts).start.join } } t0 = Time.now wait_workers_ready("test_stderr.#{pid}.log", 1) sock = TCPSocket.new('127.0.0.1', @port) sock.syswrite("GET / HTTP/1.0\r\n\r\n") buf = nil assert_raises(EOFError,Errno::ECONNRESET,Errno::EPIPE,Errno::EINVAL, Errno::EBADF) do buf = sock.sysread(4096) end diff = Time.now - t0 assert_nil buf assert diff > 1.0, "diff was #{diff.inspect}" assert diff < 60.0 ensure Process.kill(:TERM, pid) rescue nil end def test_response_write app = lambda { |env| [ 200, { 'Content-Type' => 'text/plain', 'X-Pid' => Process.pid.to_s }, Dd.new(@bs, @count) ] } redirect_test_io { @server = HttpServer.new(app, @server_opts).start } wait_workers_ready("test_stderr.#{$$}.log", 1) sock = TCPSocket.new('127.0.0.1', @port) sock.syswrite("GET / HTTP/1.0\r\n\r\n") buf = '' header_len = pid = nil buf = sock.sysread(16384, buf) pid = buf[/\r\nX-Pid: (\d+)\r\n/, 1].to_i header_len = buf[/\A(.+?\r\n\r\n)/m, 1].size assert pid > 0, "pid not positive: #{pid.inspect}" read = buf.size size_before = @tmp.stat.size assert_raises(EOFError,Errno::ECONNRESET,Errno::EPIPE,Errno::EINVAL, Errno::EBADF) do loop do 3.times { Process.kill(:HUP, pid) } sock.sysread(16384, buf) read += buf.size 3.times { Process.kill(:HUP, pid) } end end redirect_test_io { @server.stop(true) } # can't 
check for == since pending signals get merged assert size_before < @tmp.stat.size got = read - header_len expect = @bs * @count assert_equal(expect, got, "expect=#{expect} got=#{got}") assert_nil sock.close end def test_request_read app = lambda { |env| while env['rack.input'].read(4096) end [ 200, {'Content-Type'=>'text/plain', 'X-Pid'=>Process.pid.to_s}, [] ] } redirect_test_io { @server = HttpServer.new(app, @server_opts).start } wait_workers_ready("test_stderr.#{$$}.log", 1) sock = TCPSocket.new('127.0.0.1', @port) sock.syswrite("GET / HTTP/1.0\r\n\r\n") pid = sock.sysread(4096)[/\r\nX-Pid: (\d+)\r\n/, 1].to_i assert_nil sock.close assert pid > 0, "pid not positive: #{pid.inspect}" sock = TCPSocket.new('127.0.0.1', @port) sock.syswrite("PUT / HTTP/1.0\r\n") sock.syswrite("Content-Length: #{@bs * @count}\r\n\r\n") 1000.times { Process.kill(:HUP, pid) } size_before = @tmp.stat.size killer = fork { loop { Process.kill(:HUP, pid); sleep(0.01) } } buf = ' ' * @bs @count.times { sock.syswrite(buf) } Process.kill(:KILL, killer) Process.waitpid2(killer) redirect_test_io { @server.stop(true) } # can't check for == since pending signals get merged assert size_before < @tmp.stat.size assert_equal pid, sock.sysread(4096)[/\r\nX-Pid: (\d+)\r\n/, 1].to_i assert_nil sock.close end end unicorn-4.7.0/test/unit/test_tee_input.rb0000644000004100000410000001672612236653132020564 0ustar www-datawww-data# -*- encoding: binary -*- require 'test/unit' require 'digest/sha1' require 'unicorn' class TeeInput < Unicorn::TeeInput attr_accessor :tmp, :len end class TestTeeInput < Test::Unit::TestCase def setup @rs = $/ @rd, @wr = Kgio::UNIXSocket.pair @rd.sync = @wr.sync = true @start_pid = $$ end def teardown return if $$ != @start_pid $/ = @rs @rd.close rescue nil @wr.close rescue nil begin Process.wait rescue Errno::ECHILD break end while true end def test_gets_long r = init_request("hello", 5 + (4096 * 4 * 3) + "#$/foo#$/".size) ti = TeeInput.new(@rd, r) status = line = nil pid = fork { 
@rd.close 3.times { @wr.write("ffff" * 4096) } @wr.write "#$/foo#$/" @wr.close } @wr.close line = ti.gets assert_equal(4096 * 4 * 3 + 5 + $/.size, line.size) assert_equal("hello" << ("ffff" * 4096 * 3) << "#$/", line) line = ti.gets assert_equal "foo#$/", line assert_nil ti.gets pid, status = Process.waitpid2(pid) assert status.success? end def test_gets_short r = init_request("hello", 5 + "#$/foo".size) ti = TeeInput.new(@rd, r) status = line = nil pid = fork { @rd.close @wr.write "#$/foo" @wr.close } @wr.close line = ti.gets assert_equal("hello#$/", line) line = ti.gets assert_equal "foo", line assert_nil ti.gets pid, status = Process.waitpid2(pid) assert status.success? end def test_small_body r = init_request('hello') ti = TeeInput.new(@rd, r) assert_equal 0, @parser.content_length assert @parser.body_eof? assert_equal StringIO, ti.tmp.class assert_equal 0, ti.tmp.pos assert_equal 5, ti.size assert_equal 'hello', ti.read assert_equal '', ti.read assert_nil ti.read(4096) assert_equal 5, ti.size end def test_read_with_buffer r = init_request('hello') ti = TeeInput.new(@rd, r) buf = '' rv = ti.read(4, buf) assert_equal 'hell', rv assert_equal 'hell', buf assert_equal rv.object_id, buf.object_id assert_equal 'o', ti.read assert_equal nil, ti.read(5, buf) assert_equal 0, ti.rewind assert_equal 'hello', ti.read(5, buf) assert_equal 'hello', buf end def test_big_body r = init_request('.' * Unicorn::Const::MAX_BODY << 'a') ti = TeeInput.new(@rd, r) assert_equal 0, @parser.content_length assert @parser.body_eof? assert_kind_of File, ti.tmp assert_equal 0, ti.tmp.pos assert_equal Unicorn::Const::MAX_BODY + 1, ti.size end def test_read_in_full_if_content_length a, b = 300, 3 r = init_request('.' * b, 300) assert_equal 300, @parser.content_length ti = TeeInput.new(@rd, r) pid = fork { @wr.write('.' * 197) sleep 1 # still a *potential* race here that would make the test moot... @wr.write('.' 
* 100) } assert_equal a, ti.read(a).size _, status = Process.waitpid2(pid) assert status.success? @wr.close end def test_big_body_multi r = init_request('.', Unicorn::Const::MAX_BODY + 1) ti = TeeInput.new(@rd, r) assert_equal Unicorn::Const::MAX_BODY, @parser.content_length assert ! @parser.body_eof? assert_kind_of File, ti.tmp assert_equal 0, ti.tmp.pos assert_equal Unicorn::Const::MAX_BODY + 1, ti.size nr = Unicorn::Const::MAX_BODY / 4 pid = fork { @rd.close nr.times { @wr.write('....') } @wr.close } @wr.close assert_equal '.', ti.read(1) assert_equal Unicorn::Const::MAX_BODY + 1, ti.size nr.times { |x| assert_equal '....', ti.read(4), "nr=#{x}" assert_equal Unicorn::Const::MAX_BODY + 1, ti.size } assert_nil ti.read(1) pid, status = Process.waitpid2(pid) assert status.success? end def test_chunked @parser = Unicorn::HttpParser.new @parser.buf << "POST / HTTP/1.1\r\n" \ "Host: localhost\r\n" \ "Transfer-Encoding: chunked\r\n" \ "\r\n" assert @parser.parse assert_equal "", @parser.buf pid = fork { @rd.close 5.times { @wr.write("5\r\nabcde\r\n") } @wr.write("0\r\n\r\n") } @wr.close ti = TeeInput.new(@rd, @parser) assert_nil @parser.content_length assert_nil ti.len assert ! @parser.body_eof? assert_equal 25, ti.size assert @parser.body_eof? assert_equal 25, ti.len assert_equal 0, ti.tmp.pos ti.rewind assert_equal 0, ti.tmp.pos assert_equal 'abcdeabcdeabcdeabcde', ti.read(20) assert_equal 20, ti.tmp.pos ti.rewind assert_equal 0, ti.tmp.pos assert_kind_of File, ti.tmp status = nil pid, status = Process.waitpid2(pid) assert status.success? end def test_chunked_ping_pong @parser = Unicorn::HttpParser.new buf = @parser.buf buf << "POST / HTTP/1.1\r\n" \ "Host: localhost\r\n" \ "Transfer-Encoding: chunked\r\n" \ "\r\n" assert @parser.parse assert_equal "", buf chunks = %w(aa bbb cccc dddd eeee) rd, wr = IO.pipe pid = fork { chunks.each do |chunk| rd.read(1) == "." 
and @wr.write("#{'%x' % [ chunk.size]}\r\n#{chunk}\r\n") end @wr.write("0\r\n\r\n") } ti = TeeInput.new(@rd, @parser) assert_nil @parser.content_length assert_nil ti.len assert ! @parser.body_eof? chunks.each do |chunk| wr.write('.') assert_equal chunk, ti.read(16384) end _, status = Process.waitpid2(pid) assert status.success? end def test_chunked_with_trailer @parser = Unicorn::HttpParser.new buf = @parser.buf buf << "POST / HTTP/1.1\r\n" \ "Host: localhost\r\n" \ "Trailer: Hello\r\n" \ "Transfer-Encoding: chunked\r\n" \ "\r\n" assert @parser.parse assert_equal "", buf pid = fork { @rd.close 5.times { @wr.write("5\r\nabcde\r\n") } @wr.write("0\r\n") @wr.write("Hello: World\r\n\r\n") } @wr.close ti = TeeInput.new(@rd, @parser) assert_nil @parser.content_length assert_nil ti.len assert ! @parser.body_eof? assert_equal 25, ti.size assert_equal "World", @parser.env['HTTP_HELLO'] pid, status = Process.waitpid2(pid) assert status.success? end def test_chunked_and_size_slow @parser = Unicorn::HttpParser.new buf = @parser.buf buf << "POST / HTTP/1.1\r\n" \ "Host: localhost\r\n" \ "Trailer: Hello\r\n" \ "Transfer-Encoding: chunked\r\n" \ "\r\n" assert @parser.parse assert_equal "", buf @wr.write("9\r\nabcde") ti = TeeInput.new(@rd, @parser) assert_nil @parser.content_length assert_equal "abcde", ti.read(9) assert ! @parser.body_eof? 
@wr.write("fghi\r\n0\r\nHello: World\r\n\r\n") assert_equal 9, ti.size assert_equal "fghi", ti.read(9) assert_equal nil, ti.read(9) assert_equal "World", @parser.env['HTTP_HELLO'] end def test_gets_read_mix r = init_request("hello\nasdfasdf") ti = Unicorn::TeeInput.new(@rd, r) assert_equal "hello\n", ti.gets assert_equal "asdfasdf", ti.read(9) assert_nil ti.read(9) end private def init_request(body, size = nil) @parser = Unicorn::HttpParser.new body = body.to_s.freeze buf = @parser.buf buf << "POST / HTTP/1.1\r\n" \ "Host: localhost\r\n" \ "Content-Length: #{size || body.size}\r\n" \ "\r\n#{body}" assert @parser.parse assert_equal body, buf @buf = buf @parser end end unicorn-4.7.0/test/unit/test_stream_input.rb0000644000004100000410000001230112236653132021263 0ustar www-datawww-data# -*- encoding: binary -*- require 'test/unit' require 'digest/sha1' require 'unicorn' class TestStreamInput < Test::Unit::TestCase def setup @rs = $/ @env = {} @rd, @wr = Kgio::UNIXSocket.pair @rd.sync = @wr.sync = true @start_pid = $$ end def teardown return if $$ != @start_pid $/ = @rs @rd.close rescue nil @wr.close rescue nil Process.waitall end def test_read_negative r = init_request('hello') si = Unicorn::StreamInput.new(@rd, r) assert_raises(ArgumentError) { si.read(-1) } assert_equal 'hello', si.read end def test_read_small r = init_request('hello') si = Unicorn::StreamInput.new(@rd, r) assert_equal 'hello', si.read assert_equal '', si.read assert_nil si.read(5) assert_nil si.gets end def test_gets_oneliner r = init_request('hello') si = Unicorn::StreamInput.new(@rd, r) assert_equal 'hello', si.gets assert_nil si.gets end def test_gets_multiline r = init_request("a\nb\n\n") si = Unicorn::StreamInput.new(@rd, r) assert_equal "a\n", si.gets assert_equal "b\n", si.gets assert_equal "\n", si.gets assert_nil si.gets end def test_gets_empty_rs $/ = nil r = init_request("a\nb\n\n") si = Unicorn::StreamInput.new(@rd, r) assert_equal "a\nb\n\n", si.gets assert_nil si.gets end def 
test_read_with_equal_len r = init_request("abcde") si = Unicorn::StreamInput.new(@rd, r) assert_equal "abcde", si.read(5) assert_nil si.read(5) end def test_big_body_multi r = init_request('.', Unicorn::Const::MAX_BODY + 1) si = Unicorn::StreamInput.new(@rd, r) assert_equal Unicorn::Const::MAX_BODY, @parser.content_length assert ! @parser.body_eof? nr = Unicorn::Const::MAX_BODY / 4 pid = fork { @rd.close nr.times { @wr.write('....') } @wr.close } @wr.close assert_equal '.', si.read(1) nr.times { |x| assert_equal '....', si.read(4), "nr=#{x}" } assert_nil si.read(1) pid, status = Process.waitpid2(pid) assert status.success? end def test_gets_long r = init_request("hello", 5 + (4096 * 4 * 3) + "#$/foo#$/".size) si = Unicorn::StreamInput.new(@rd, r) status = line = nil pid = fork { @rd.close 3.times { @wr.write("ffff" * 4096) } @wr.write "#$/foo#$/" @wr.close } @wr.close line = si.gets assert_equal(4096 * 4 * 3 + 5 + $/.size, line.size) assert_equal("hello" << ("ffff" * 4096 * 3) << "#$/", line) line = si.gets assert_equal "foo#$/", line assert_nil si.gets pid, status = Process.waitpid2(pid) assert status.success? 
end def test_read_with_buffer r = init_request('hello') si = Unicorn::StreamInput.new(@rd, r) buf = '' rv = si.read(4, buf) assert_equal 'hell', rv assert_equal 'hell', buf assert_equal rv.object_id, buf.object_id assert_equal 'o', si.read assert_equal nil, si.read(5, buf) end def test_read_with_buffer_clobbers r = init_request('hello') si = Unicorn::StreamInput.new(@rd, r) buf = 'foo' assert_equal 'hello', si.read(nil, buf) assert_equal 'hello', buf assert_equal '', si.read(nil, buf) assert_equal '', buf buf = 'asdf' assert_nil si.read(5, buf) assert_equal '', buf end def test_read_zero r = init_request('hello') si = Unicorn::StreamInput.new(@rd, r) assert_equal '', si.read(0) buf = 'asdf' rv = si.read(0, buf) assert_equal rv.object_id, buf.object_id assert_equal '', buf assert_equal 'hello', si.read assert_nil si.read(5) assert_equal '', si.read(0) buf = 'hello' rv = si.read(0, buf) assert_equal rv.object_id, buf.object_id assert_equal '', rv end def test_gets_read_mix r = init_request("hello\nasdfasdf") si = Unicorn::StreamInput.new(@rd, r) assert_equal "hello\n", si.gets assert_equal "asdfasdf", si.read(9) assert_nil si.read(9) end def test_gets_read_mix_chunked r = @parser = Unicorn::HttpParser.new body = "6\r\nhello" @buf = "POST / HTTP/1.1\r\n" \ "Host: localhost\r\n" \ "Transfer-Encoding: chunked\r\n" \ "\r\n#{body}" assert_equal @env, @parser.headers(@env, @buf) assert_equal body, @buf si = Unicorn::StreamInput.new(@rd, r) @wr.syswrite "\n\r\n" assert_equal "hello\n", si.gets @wr.syswrite "8\r\nasdfasdf\r\n" assert_equal"asdfasdf", si.read(9) + si.read(9) @wr.syswrite "0\r\n\r\n" assert_nil si.read(9) end def test_gets_read_mix_big r = init_request("hello\n#{'.' * 65536}") si = Unicorn::StreamInput.new(@rd, r) assert_equal "hello\n", si.gets assert_equal '.' * 16384, si.read(16384) assert_equal '.' * 16383, si.read(16383) assert_equal '.' * 16384, si.read(16384) assert_equal '.' 
* 16385, si.read(16385)
    assert_nil si.gets
  end

  def init_request(body, size = nil)
    @parser = Unicorn::HttpParser.new
    body = body.to_s.freeze
    @buf = "POST / HTTP/1.1\r\n" \
           "Host: localhost\r\n" \
           "Content-Length: #{size || body.size}\r\n" \
           "\r\n#{body}"
    assert_equal @env, @parser.headers(@env, @buf)
    assert_equal body, @buf
    @parser
  end
end
unicorn-4.7.0/test/unit/test_http_parser_xftrust.rb
# -*- encoding: binary -*-
require 'test/test_helper'

include Unicorn

class HttpParserXFTrustTest < Test::Unit::TestCase
  def setup
    assert HttpParser.trust_x_forwarded?
  end

  def test_xf_trust_false_xfp
    HttpParser.trust_x_forwarded = false
    parser = HttpParser.new
    parser.buf << "GET / HTTP/1.1\r\nHost: foo:\r\n" \
                  "X-Forwarded-Proto: https\r\n\r\n"
    env = parser.parse
    assert_kind_of Hash, env
    assert_equal 'foo', env['SERVER_NAME']
    assert_equal '80', env['SERVER_PORT']
    assert_equal 'http', env['rack.url_scheme']
  end

  def test_xf_trust_false_xfs
    HttpParser.trust_x_forwarded = false
    parser = HttpParser.new
    parser.buf << "GET / HTTP/1.1\r\nHost: foo:\r\n" \
                  "X-Forwarded-SSL: on\r\n\r\n"
    env = parser.parse
    assert_kind_of Hash, env
    assert_equal 'foo', env['SERVER_NAME']
    assert_equal '80', env['SERVER_PORT']
    assert_equal 'http', env['rack.url_scheme']
  end

  def teardown
    HttpParser.trust_x_forwarded = true
  end
end
unicorn-4.7.0/test/exec/README
These tests require the "unicorn" executable script to be installed in
PATH and rack being directly "require"-able ("rubygems" will not be
loaded for you).  The tester is responsible for setting up RUBYLIB and
PATH environment variables (or running tests via GNU Make instead of
Rake).
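The RUBYLIB/PATH setup the README leaves to the tester can be sketched as a small shell wrapper. This is only an illustration, not part of unicorn itself: it assumes you are running from the top of a unicorn checkout with lib/ and bin/ directories, which the README does not guarantee.

```shell
#!/bin/sh
# Hypothetical environment setup for running the exec tests without GNU Make.
# Assumes $PWD is the top of a unicorn source checkout (an assumption, not
# something the README states).
UNICORN_ROOT="$PWD"

# Let "require 'unicorn'" resolve without rubygems; rack's lib directory
# would need to be appended the same way if it is not already on RUBYLIB.
RUBYLIB="$UNICORN_ROOT/lib${RUBYLIB:+:$RUBYLIB}"

# Put the "unicorn" executable script first in PATH:
PATH="$UNICORN_ROOT/bin:$PATH"

export RUBYLIB PATH
echo "RUBYLIB=$RUBYLIB"
```

With those variables exported, the exec tests could then be run directly with ruby instead of through the Makefile.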
unicorn-4.7.0/test/exec/test_exec.rb0000644000004100000410000007714312236653132017461 0ustar www-datawww-data# -*- encoding: binary -*- # Copyright (c) 2009 Eric Wong FLOCK_PATH = File.expand_path(__FILE__) require 'test/test_helper' do_test = true $unicorn_bin = ENV['UNICORN_TEST_BIN'] || "unicorn" redirect_test_io do do_test = system($unicorn_bin, '-v') end unless do_test warn "#{$unicorn_bin} not found in PATH=#{ENV['PATH']}, " \ "skipping this test" end unless try_require('rack') warn "Unable to load Rack, skipping this test" do_test = false end class ExecTest < Test::Unit::TestCase trap(:QUIT, 'IGNORE') HI = <<-EOS use Rack::ContentLength run proc { |env| [ 200, { 'Content-Type' => 'text/plain' }, [ "HI\\n" ] ] } EOS SHOW_RACK_ENV = <<-EOS use Rack::ContentLength run proc { |env| [ 200, { 'Content-Type' => 'text/plain' }, [ ENV['RACK_ENV'] ] ] } EOS HELLO = <<-EOS class Hello def call(env) [ 200, { 'Content-Type' => 'text/plain' }, [ "HI\\n" ] ] end end EOS COMMON_TMP = Tempfile.new('unicorn_tmp') unless defined?(COMMON_TMP) HEAVY_CFG = <<-EOS worker_processes 4 timeout 30 logger Logger.new('#{COMMON_TMP.path}') before_fork do |server, worker| server.logger.info "before_fork: worker=\#{worker.nr}" end EOS WORKING_DIRECTORY_CHECK_RU = <<-EOS use Rack::ContentLength run lambda { |env| pwd = ENV['PWD'] a = ::File.stat(pwd) b = ::File.stat(Dir.pwd) if (a.ino == b.ino && a.dev == b.dev) [ 200, { 'Content-Type' => 'text/plain' }, [ pwd ] ] else [ 404, { 'Content-Type' => 'text/plain' }, [] ] end } EOS def setup @pwd = Dir.pwd @tmpfile = Tempfile.new('unicorn_exec_test') @tmpdir = @tmpfile.path @tmpfile.close! 
Dir.mkdir(@tmpdir) Dir.chdir(@tmpdir) @addr = ENV['UNICORN_TEST_ADDR'] || '127.0.0.1' @port = unused_port(@addr) @sockets = [] @start_pid = $$ end def teardown return if @start_pid != $$ Dir.chdir(@pwd) FileUtils.rmtree(@tmpdir) @sockets.each { |path| File.unlink(path) rescue nil } loop do Process.kill('-QUIT', 0) begin Process.waitpid(-1, Process::WNOHANG) or break rescue Errno::ECHILD break end end end def test_working_directory_rel_path_config_file other = Tempfile.new('unicorn.wd') File.unlink(other.path) Dir.mkdir(other.path) File.open("config.ru", "wb") do |fp| fp.syswrite WORKING_DIRECTORY_CHECK_RU end FileUtils.cp("config.ru", other.path + "/config.ru") Dir.chdir(@tmpdir) tmp = File.open('unicorn.config', 'wb') tmp.syswrite < 0 end rescue Errno::ENOENT (sleep(DEFAULT_RES) and (tries -= 1) > 0) and retry end assert_equal current_pid, File.read(pid_file).to_i tries = DEFAULT_TRIES while File.exist?(old_file) (sleep(DEFAULT_RES) and (tries -= 1) > 0) or break end assert ! File.exist?(old_file), "oldbin=#{old_file} gone" port2 = unused_port(@addr) # fix the bug ucfg.sysseek(0) ucfg.truncate(0) ucfg.syswrite("listen %(#@addr:#@port)\n") ucfg.syswrite("listen %(#@addr:#{port2})\n") ucfg.syswrite("pid %(#{pid_file})\n") Process.kill(:USR2, current_pid) wait_for_file(old_file) wait_for_file(pid_file) new_pid = File.read(pid_file).to_i assert_not_equal current_pid, new_pid assert_equal current_pid, File.read(old_file).to_i results = retry_hit(["http://#{@addr}:#{@port}/", "http://#{@addr}:#{port2}/"]) assert_equal String, results[0].class assert_equal String, results[1].class Process.kill(:QUIT, current_pid) Process.kill(:QUIT, new_pid) end def test_broken_reexec_ru File.open("config.ru", "wb") { |fp| fp.syswrite(HI) } pid_file = "#{@tmpdir}/test.pid" old_file = "#{pid_file}.oldbin" ucfg = Tempfile.new('unicorn_test_config') ucfg.syswrite("pid %(#{pid_file})\n") ucfg.syswrite("logger Logger.new(%(#{@tmpdir}/log))\n") pid = xfork do redirect_test_io do 
exec($unicorn_bin, "-D", "-l#{@addr}:#{@port}", "-c#{ucfg.path}") end end results = retry_hit(["http://#{@addr}:#{@port}/"]) assert_equal String, results[0].class wait_for_file(pid_file) Process.waitpid(pid) Process.kill(:USR2, File.read(pid_file).to_i) wait_for_file(old_file) wait_for_file(pid_file) old_pid = File.read(old_file).to_i Process.kill(:QUIT, old_pid) wait_for_death(old_pid) File.unlink("config.ru") # break reloading current_pid = File.read(pid_file).to_i Process.kill(:USR2, current_pid) # wait for pid_file to restore itself tries = DEFAULT_TRIES begin while current_pid != File.read(pid_file).to_i sleep(DEFAULT_RES) and (tries -= 1) > 0 end rescue Errno::ENOENT (sleep(DEFAULT_RES) and (tries -= 1) > 0) and retry end tries = DEFAULT_TRIES while File.exist?(old_file) (sleep(DEFAULT_RES) and (tries -= 1) > 0) or break end assert ! File.exist?(old_file), "oldbin=#{old_file} gone" assert_equal current_pid, File.read(pid_file).to_i # fix the bug File.open("config.ru", "wb") { |fp| fp.syswrite(HI) } Process.kill(:USR2, current_pid) wait_for_file(old_file) wait_for_file(pid_file) new_pid = File.read(pid_file).to_i assert_not_equal current_pid, new_pid assert_equal current_pid, File.read(old_file).to_i results = retry_hit(["http://#{@addr}:#{@port}/"]) assert_equal String, results[0].class Process.kill(:QUIT, current_pid) Process.kill(:QUIT, new_pid) end def test_unicorn_config_listener_swap port_cli = unused_port File.open("config.ru", "wb") { |fp| fp.syswrite(HI) } ucfg = Tempfile.new('unicorn_test_config') ucfg.syswrite("listen '#@addr:#@port'\n") pid = xfork do redirect_test_io do exec($unicorn_bin, "-c#{ucfg.path}", "-l#@addr:#{port_cli}") end end results = retry_hit(["http://#@addr:#{port_cli}/"]) assert_equal String, results[0].class results = retry_hit(["http://#@addr:#@port/"]) assert_equal String, results[0].class port2 = unused_port(@addr) ucfg.sysseek(0) ucfg.truncate(0) ucfg.syswrite("listen '#@addr:#{port2}'\n") Process.kill(:HUP, pid) results = 
retry_hit(["http://#@addr:#{port2}/"]) assert_equal String, results[0].class results = retry_hit(["http://#@addr:#{port_cli}/"]) assert_equal String, results[0].class reuse = TCPServer.new(@addr, @port) reuse.close assert_shutdown(pid) end def test_unicorn_config_listen_with_options File.open("config.ru", "wb") { |fp| fp.syswrite(HI) } ucfg = Tempfile.new('unicorn_test_config') ucfg.syswrite("listen '#{@addr}:#{@port}', :backlog => 512,\n") ucfg.syswrite(" :rcvbuf => 4096,\n") ucfg.syswrite(" :sndbuf => 4096\n") pid = xfork do redirect_test_io { exec($unicorn_bin, "-c#{ucfg.path}") } end results = retry_hit(["http://#{@addr}:#{@port}/"]) assert_equal String, results[0].class assert_shutdown(pid) end def test_unicorn_config_per_worker_listen port2 = unused_port pid_spit = 'use Rack::ContentLength;' \ 'run proc { |e| [ 200, {"Content-Type"=>"text/plain"}, ["#$$\\n"] ] }' File.open("config.ru", "wb") { |fp| fp.syswrite(pid_spit) } tmp = Tempfile.new('test.socket') File.unlink(tmp.path) ucfg = Tempfile.new('unicorn_test_config') ucfg.syswrite("listen '#@addr:#@port'\n") ucfg.syswrite("after_fork { |s,w|\n") ucfg.syswrite(" s.listen('#{tmp.path}', :backlog => 5, :sndbuf => 8192)\n") ucfg.syswrite(" s.listen('#@addr:#{port2}', :rcvbuf => 8192)\n") ucfg.syswrite("\n}\n") pid = xfork do redirect_test_io { exec($unicorn_bin, "-c#{ucfg.path}") } end results = retry_hit(["http://#{@addr}:#{@port}/"]) assert_equal String, results[0].class worker_pid = results[0].to_i assert_not_equal pid, worker_pid s = UNIXSocket.new(tmp.path) s.syswrite("GET / HTTP/1.0\r\n\r\n") results = '' loop { results << s.sysread(4096) } rescue nil s.close assert_equal worker_pid, results.split(/\r\n/).last.to_i results = hit(["http://#@addr:#{port2}/"]) assert_equal String, results[0].class assert_equal worker_pid, results[0].to_i assert_shutdown(pid) end def test_unicorn_config_listen_augments_cli port2 = unused_port(@addr) File.open("config.ru", "wb") { |fp| fp.syswrite(HI) } ucfg = 
Tempfile.new('unicorn_test_config') ucfg.syswrite("listen '#{@addr}:#{@port}'\n") pid = xfork do redirect_test_io do exec($unicorn_bin, "-c#{ucfg.path}", "-l#{@addr}:#{port2}") end end uris = [@port, port2].map { |i| "http://#{@addr}:#{i}/" } results = retry_hit(uris) assert_equal results.size, uris.size assert_equal String, results[0].class assert_equal String, results[1].class assert_shutdown(pid) end def test_weird_config_settings File.open("config.ru", "wb") { |fp| fp.syswrite(HI) } ucfg = Tempfile.new('unicorn_test_config') ucfg.syswrite(HEAVY_CFG) pid = xfork do redirect_test_io do exec($unicorn_bin, "-c#{ucfg.path}", "-l#{@addr}:#{@port}") end end results = retry_hit(["http://#{@addr}:#{@port}/"]) assert_equal String, results[0].class wait_master_ready(COMMON_TMP.path) wait_workers_ready(COMMON_TMP.path, 4) bf = File.readlines(COMMON_TMP.path).grep(/\bbefore_fork: worker=/) assert_equal 4, bf.size rotate = Tempfile.new('unicorn_rotate') File.rename(COMMON_TMP.path, rotate.path) Process.kill(:USR1, pid) wait_for_file(COMMON_TMP.path) assert File.exist?(COMMON_TMP.path), "#{COMMON_TMP.path} exists" # USR1 should've been passed to all workers tries = DEFAULT_TRIES log = File.readlines(rotate.path) while (tries -= 1) > 0 && log.grep(/reopening logs\.\.\./).size < 5 sleep DEFAULT_RES log = File.readlines(rotate.path) end assert_equal 5, log.grep(/reopening logs\.\.\./).size assert_equal 0, log.grep(/done reopening logs/).size tries = DEFAULT_TRIES log = File.readlines(COMMON_TMP.path) while (tries -= 1) > 0 && log.grep(/done reopening logs/).size < 5 sleep DEFAULT_RES log = File.readlines(COMMON_TMP.path) end assert_equal 5, log.grep(/done reopening logs/).size assert_equal 0, log.grep(/reopening logs\.\.\./).size Process.kill(:QUIT, pid) pid, status = Process.waitpid2(pid) assert status.success?, "exited successfully" end def test_read_embedded_cli_switches File.open("config.ru", "wb") do |fp| fp.syswrite("#\\ -p #{@port} -o #{@addr}\n") fp.syswrite(HI) end pid 
= fork { redirect_test_io { exec($unicorn_bin) } } results = retry_hit(["http://#{@addr}:#{@port}/"]) assert_equal String, results[0].class assert_shutdown(pid) end def test_config_ru_alt_path config_path = "#{@tmpdir}/foo.ru" File.open(config_path, "wb") { |fp| fp.syswrite(HI) } pid = fork do redirect_test_io do Dir.chdir("/") exec($unicorn_bin, "-l#{@addr}:#{@port}", config_path) end end results = retry_hit(["http://#{@addr}:#{@port}/"]) assert_equal String, results[0].class assert_shutdown(pid) end def test_load_module libdir = "#{@tmpdir}/lib" FileUtils.mkpath([ libdir ]) config_path = "#{libdir}/hello.rb" File.open(config_path, "wb") { |fp| fp.syswrite(HELLO) } pid = fork do redirect_test_io do Dir.chdir("/") exec($unicorn_bin, "-l#{@addr}:#{@port}", config_path) end end results = retry_hit(["http://#{@addr}:#{@port}/"]) assert_equal String, results[0].class assert_shutdown(pid) end def test_reexec File.open("config.ru", "wb") { |fp| fp.syswrite(HI) } pid_file = "#{@tmpdir}/test.pid" pid = fork do redirect_test_io do exec($unicorn_bin, "-l#{@addr}:#{@port}", "-P#{pid_file}") end end reexec_basic_test(pid, pid_file) end def test_reexec_alt_config config_file = "#{@tmpdir}/foo.ru" File.open(config_file, "wb") { |fp| fp.syswrite(HI) } pid_file = "#{@tmpdir}/test.pid" pid = fork do redirect_test_io do exec($unicorn_bin, "-l#{@addr}:#{@port}", "-P#{pid_file}", config_file) end end reexec_basic_test(pid, pid_file) end def test_socket_unlinked_restore results = nil sock = Tempfile.new('unicorn_test_sock') sock_path = sock.path @sockets << sock_path sock.close! 
ucfg = Tempfile.new('unicorn_test_config') ucfg.syswrite("listen \"#{sock_path}\"\n") File.open("config.ru", "wb") { |fp| fp.syswrite(HI) } pid = xfork { redirect_test_io { exec($unicorn_bin, "-c#{ucfg.path}") } } wait_for_file(sock_path) assert File.socket?(sock_path) sock = UNIXSocket.new(sock_path) sock.syswrite("GET / HTTP/1.0\r\n\r\n") results = sock.sysread(4096) assert_equal String, results.class File.unlink(sock_path) Process.kill(:HUP, pid) wait_for_file(sock_path) assert File.socket?(sock_path) sock = UNIXSocket.new(sock_path) sock.syswrite("GET / HTTP/1.0\r\n\r\n") results = sock.sysread(4096) assert_equal String, results.class end def test_unicorn_config_file pid_file = "#{@tmpdir}/test.pid" sock = Tempfile.new('unicorn_test_sock') sock_path = sock.path sock.close! @sockets << sock_path log = Tempfile.new('unicorn_test_log') ucfg = Tempfile.new('unicorn_test_config') ucfg.syswrite("listen \"#{sock_path}\"\n") ucfg.syswrite("pid \"#{pid_file}\"\n") ucfg.syswrite("logger Logger.new('#{log.path}')\n") ucfg.close File.open("config.ru", "wb") { |fp| fp.syswrite(HI) } pid = xfork do redirect_test_io do exec($unicorn_bin, "-l#{@addr}:#{@port}", "-P#{pid_file}", "-c#{ucfg.path}") end end results = retry_hit(["http://#{@addr}:#{@port}/"]) assert_equal String, results[0].class wait_master_ready(log.path) assert File.exist?(pid_file), "pid_file created" assert_equal pid, File.read(pid_file).to_i assert File.socket?(sock_path), "socket created" sock = UNIXSocket.new(sock_path) sock.syswrite("GET / HTTP/1.0\r\n\r\n") results = sock.sysread(4096) assert_equal String, results.class # try reloading the config sock = Tempfile.new('new_test_sock') new_sock_path = sock.path @sockets << new_sock_path sock.close! 
new_log = Tempfile.new('unicorn_test_log') new_log.sync = true assert_equal 0, new_log.size ucfg = File.open(ucfg.path, "wb") ucfg.syswrite("listen \"#{sock_path}\"\n") ucfg.syswrite("listen \"#{new_sock_path}\"\n") ucfg.syswrite("pid \"#{pid_file}\"\n") ucfg.syswrite("logger Logger.new('#{new_log.path}')\n") ucfg.close Process.kill(:HUP, pid) wait_for_file(new_sock_path) assert File.socket?(new_sock_path), "socket exists" @sockets.each do |path| sock = UNIXSocket.new(path) sock.syswrite("GET / HTTP/1.0\r\n\r\n") results = sock.sysread(4096) assert_equal String, results.class end assert_not_equal 0, new_log.size reexec_usr2_quit_test(pid, pid_file) end def test_daemonize_reexec pid_file = "#{@tmpdir}/test.pid" log = Tempfile.new('unicorn_test_log') ucfg = Tempfile.new('unicorn_test_config') ucfg.syswrite("pid \"#{pid_file}\"\n") ucfg.syswrite("logger Logger.new('#{log.path}')\n") ucfg.close File.open("config.ru", "wb") { |fp| fp.syswrite(HI) } pid = xfork do redirect_test_io do exec($unicorn_bin, "-D", "-l#{@addr}:#{@port}", "-c#{ucfg.path}") end end results = retry_hit(["http://#{@addr}:#{@port}/"]) assert_equal String, results[0].class wait_for_file(pid_file) new_pid = File.read(pid_file).to_i assert_not_equal pid, new_pid pid, status = Process.waitpid2(pid) assert status.success?, "original process exited successfully" Process.kill(0, new_pid) reexec_usr2_quit_test(new_pid, pid_file) end def test_daemonize_redirect_fail pid_file = "#{@tmpdir}/test.pid" ucfg = Tempfile.new('unicorn_test_config') ucfg.syswrite("pid #{pid_file}\"\n") err = Tempfile.new('stderr') out = Tempfile.new('stdout ') File.open("config.ru", "wb") { |fp| fp.syswrite(HI) } pid = xfork do $stderr.reopen(err.path, "a") $stdout.reopen(out.path, "a") exec($unicorn_bin, "-D", "-l#{@addr}:#{@port}", "-c#{ucfg.path}") end pid, status = Process.waitpid2(pid) assert ! 
status.success?, "original process exited successfully" sleep 1 # can't waitpid on a daemonized process :< assert err.stat.size > 0 end def test_reexec_fd_leak unless RUBY_PLATFORM =~ /linux/ # Solaris may work, too, but I forget... warn "FD leak test only works on Linux at the moment" return end pid_file = "#{@tmpdir}/test.pid" log = Tempfile.new('unicorn_test_log') log.sync = true ucfg = Tempfile.new('unicorn_test_config') ucfg.syswrite("pid \"#{pid_file}\"\n") ucfg.syswrite("logger Logger.new('#{log.path}')\n") ucfg.syswrite("stderr_path '#{log.path}'\n") ucfg.syswrite("stdout_path '#{log.path}'\n") ucfg.close File.open("config.ru", "wb") { |fp| fp.syswrite(HI) } pid = xfork do redirect_test_io do exec($unicorn_bin, "-D", "-l#{@addr}:#{@port}", "-c#{ucfg.path}") end end wait_master_ready(log.path) wait_workers_ready(log.path, 1) File.truncate(log.path, 0) wait_for_file(pid_file) orig_pid = pid = File.read(pid_file).to_i orig_fds = `ls -l /proc/#{pid}/fd`.split(/\n/) assert $?.success? expect_size = orig_fds.size Process.kill(:USR2, pid) wait_for_file("#{pid_file}.oldbin") Process.kill(:QUIT, pid) wait_for_death(pid) wait_master_ready(log.path) wait_workers_ready(log.path, 1) File.truncate(log.path, 0) wait_for_file(pid_file) pid = File.read(pid_file).to_i assert_not_equal orig_pid, pid curr_fds = `ls -l /proc/#{pid}/fd`.split(/\n/) assert $?.success? # we could've inherited descriptors the first time around assert expect_size >= curr_fds.size, curr_fds.inspect expect_size = curr_fds.size Process.kill(:USR2, pid) wait_for_file("#{pid_file}.oldbin") Process.kill(:QUIT, pid) wait_for_death(pid) wait_master_ready(log.path) wait_workers_ready(log.path, 1) File.truncate(log.path, 0) wait_for_file(pid_file) pid = File.read(pid_file).to_i curr_fds = `ls -l /proc/#{pid}/fd`.split(/\n/) assert $?.success? 
assert_equal expect_size, curr_fds.size, curr_fds.inspect Process.kill(:QUIT, pid) wait_for_death(pid) end def hup_test_common(preload, check_client=false) File.open("config.ru", "wb") { |fp| fp.syswrite(HI.gsub("HI", '#$$')) } pid_file = Tempfile.new('pid') ucfg = Tempfile.new('unicorn_test_config') ucfg.syswrite("listen '#@addr:#@port'\n") ucfg.syswrite("pid '#{pid_file.path}'\n") ucfg.syswrite("preload_app true\n") if preload ucfg.syswrite("check_client_connection true\n") if check_client ucfg.syswrite("stderr_path 'test_stderr.#$$.log'\n") ucfg.syswrite("stdout_path 'test_stdout.#$$.log'\n") pid = xfork { redirect_test_io { exec($unicorn_bin, "-D", "-c", ucfg.path) } } _, status = Process.waitpid2(pid) assert status.success? wait_master_ready("test_stderr.#$$.log") wait_workers_ready("test_stderr.#$$.log", 1) uri = URI.parse("http://#@addr:#@port/") pids = Tempfile.new('worker_pids') r, w = IO.pipe hitter = fork { r.close bodies = Hash.new(0) at_exit { pids.syswrite(bodies.inspect) } trap(:TERM) { exit(0) } nr = 0 loop { rv = Net::HTTP.get(uri) pid = rv.to_i exit!(1) if pid <= 0 bodies[pid] += 1 nr += 1 if nr == 1 w.syswrite('1') elsif bodies.size > 1 w.syswrite('2') sleep end } } w.close assert_equal '1', r.read(1) daemon_pid = File.read(pid_file.path).to_i assert daemon_pid > 0 Process.kill(:HUP, daemon_pid) assert_equal '2', r.read(1) Process.kill(:TERM, hitter) _, hitter_status = Process.waitpid2(hitter) assert(hitter_status.success?, "invalid: #{hitter_status.inspect} #{File.read(pids.path)}" \ "#{File.read("test_stderr.#$$.log")}") pids.sysseek(0) pids = eval(pids.read) assert_kind_of(Hash, pids) assert_equal 2, pids.size pids.keys.each { |x| assert_kind_of(Integer, x) assert x > 0 assert pids[x] > 0 } Process.kill(:QUIT, daemon_pid) wait_for_death(daemon_pid) end def test_preload_app_hup hup_test_common(true) end def test_hup hup_test_common(false) end def test_check_client_hup hup_test_common(false, true) end def test_default_listen_hup_holds_listener 
default_listen_lock do res, pid_path = default_listen_setup daemon_pid = File.read(pid_path).to_i Process.kill(:HUP, daemon_pid) wait_workers_ready("test_stderr.#$$.log", 1) res2 = hit(["http://#{Unicorn::Const::DEFAULT_LISTEN}/"]) assert_match %r{\d+}, res2.first assert res2.first != res.first Process.kill(:QUIT, daemon_pid) wait_for_death(daemon_pid) end end def test_default_listen_upgrade_holds_listener default_listen_lock do res, pid_path = default_listen_setup daemon_pid = File.read(pid_path).to_i Process.kill(:USR2, daemon_pid) wait_for_file("#{pid_path}.oldbin") wait_for_file(pid_path) Process.kill(:QUIT, daemon_pid) wait_for_death(daemon_pid) daemon_pid = File.read(pid_path).to_i wait_workers_ready("test_stderr.#$$.log", 1) File.truncate("test_stderr.#$$.log", 0) res2 = hit(["http://#{Unicorn::Const::DEFAULT_LISTEN}/"]) assert_match %r{\d+}, res2.first assert res2.first != res.first Process.kill(:HUP, daemon_pid) wait_workers_ready("test_stderr.#$$.log", 1) File.truncate("test_stderr.#$$.log", 0) res3 = hit(["http://#{Unicorn::Const::DEFAULT_LISTEN}/"]) assert res2.first != res3.first Process.kill(:QUIT, daemon_pid) wait_for_death(daemon_pid) end end def default_listen_setup File.open("config.ru", "wb") { |fp| fp.syswrite(HI.gsub("HI", '#$$')) } pid_path = (tmp = Tempfile.new('pid')).path tmp.close! ucfg = Tempfile.new('unicorn_test_config') ucfg.syswrite("pid '#{pid_path}'\n") ucfg.syswrite("stderr_path 'test_stderr.#$$.log'\n") ucfg.syswrite("stdout_path 'test_stdout.#$$.log'\n") pid = xfork { redirect_test_io { exec($unicorn_bin, "-D", "-c", ucfg.path) } } _, status = Process.waitpid2(pid) assert status.success? 
wait_master_ready("test_stderr.#$$.log") wait_workers_ready("test_stderr.#$$.log", 1) File.truncate("test_stderr.#$$.log", 0) res = hit(["http://#{Unicorn::Const::DEFAULT_LISTEN}/"]) assert_match %r{\d+}, res.first [ res, pid_path ] end # we need to flock() something to prevent these tests from running def default_listen_lock(&block) fp = File.open(FLOCK_PATH, "rb") begin fp.flock(File::LOCK_EX) begin TCPServer.new(Unicorn::Const::DEFAULT_HOST, Unicorn::Const::DEFAULT_PORT).close rescue Errno::EADDRINUSE, Errno::EACCES warn "can't bind to #{Unicorn::Const::DEFAULT_LISTEN}" return false end # unused_port should never take this, but we may run an environment # where tests are being run against older unicorns... lock_path = "#{Dir::tmpdir}/unicorn_test." \ "#{Unicorn::Const::DEFAULT_LISTEN}.lock" begin File.open(lock_path, File::WRONLY|File::CREAT|File::EXCL, 0600) yield rescue Errno::EEXIST lock_path = nil return false ensure File.unlink(lock_path) if lock_path end ensure fp.flock(File::LOCK_UN) end end end if do_test unicorn-4.7.0/test/benchmark/0000755000004100000410000000000012236653132016143 5ustar www-datawww-dataunicorn-4.7.0/test/benchmark/README0000644000004100000410000000337512236653132017033 0ustar www-datawww-data= Performance Unicorn is pretty fast, and we want it to get faster. Unicorn strives to get HTTP requests to your application and write HTTP responses back as quickly as possible. Unicorn does not do any background processing while your app runs, so your app will get all the CPU time provided to it by your OS kernel. A gentle reminder: Unicorn is NOT for serving clients over slow network connections. Use nginx (or something similar) to complement Unicorn if you have slow clients. == dd.ru This is a pure I/O benchmark. In the context of Unicorn, this is the only one that matters. It is a standard rackup-compatible .ru file and may be used with other Rack-compatible servers. 
unicorn -E none dd.ru You can change the size and number of chunks in the response with the "bs" and "count" environment variables. The following command will cause dd.ru to return 4 chunks of 16384 bytes each, leading to a 65536-byte response: bs=16384 count=4 unicorn -E none dd.ru Or if you want to add logging (small performance impact): unicorn -E deployment dd.ru Eric then runs clients on a LAN in several different ways: client@host1 -> unicorn@host1(tcp) client@host2 -> unicorn@host1(tcp) client@host3 -> nginx@host1 -> unicorn@host1(tcp) client@host3 -> nginx@host1 -> unicorn@host1(unix) client@host3 -> nginx@host2 -> unicorn@host1(tcp) The benchmark client is usually httperf. Another gentle reminder: performance with slow networks/clients is NOT our problem. That is the job of nginx (or similar). == Contributors This directory is maintained independently in the "benchmark" branch based against v0.1.0. Only changes to this directory (test/benchmarks) are committed to this branch although the master branch may merge this branch occasionally. unicorn-4.7.0/test/benchmark/stack.ru0000644000004100000410000000024112236653132017615 0ustar www-datawww-datarun(lambda { |env| body = "#{caller.size}\n" h = { "Content-Length" => body.size.to_s, "Content-Type" => "text/plain", } [ 200, h, [ body ] ] }) unicorn-4.7.0/test/benchmark/dd.ru0000644000004100000410000000136212236653132017104 0ustar www-datawww-data# This benchmark is the simplest test of the I/O facilities in # unicorn. It is meant to return a fixed-sized blob to test # the performance of things in Unicorn, _NOT_ the app. # # Adjusting this benchmark is done via the "bs" (byte size) and "count" # environment variables. "count" designates the count of elements of # "bs" length in the Rack response body. The defaults are bs=4096, count=1 # to return one 4096-byte chunk. bs = ENV['bs'] ? ENV['bs'].to_i : 4096 count = ENV['count'] ?
ENV['count'].to_i : 1 slice = (' ' * bs).freeze body = (1..count).map { slice }.freeze hdr = { 'Content-Length' => (bs * count).to_s.freeze, 'Content-Type' => 'text/plain'.freeze }.freeze response = [ 200, hdr, body ].freeze run(lambda { |env| response }) unicorn-4.7.0/.wrongdoc.yml0000644000004100000410000000052112236653132015653 0ustar www-datawww-data--- cgit_url: http://bogomips.org/unicorn.git git_url: git://bogomips.org/unicorn.git rdoc_url: http://unicorn.bogomips.org/ changelog_start: v1.1.5 merge_html: unicorn_1: Documentation/unicorn.1.html unicorn_rails_1: Documentation/unicorn_rails.1.html public_email: mongrel-unicorn@rubyforge.org private_email: unicorn@bogomips.org unicorn-4.7.0/examples/0000755000004100000410000000000012236653132015050 5ustar www-datawww-dataunicorn-4.7.0/examples/logrotate.conf0000644000004100000410000000165712236653132017730 0ustar www-datawww-data# example logrotate config file, I usually keep this in # /etc/logrotate.d/unicorn_app on my Debian systems # # See the logrotate(8) manpage for more information: # http://linux.die.net/man/8/logrotate # Modify the following glob to match the logfiles your app writes to: /var/log/unicorn_app/*.log { # this first block is mostly just personal preference, though # I wish logrotate offered an "hourly" option... daily missingok rotate 180 compress # must use with delaycompress below dateext # this is important if using "compress" since we need to call # the "lastaction" script below before compressing: delaycompress # note the lack of the evil "copytruncate" option in this # config. 
Unicorn supports the USR1 signal and we send it # as our "lastaction" action: lastaction # assuming your pid file is in /var/run/unicorn_app/pid pid=/var/run/unicorn_app/pid test -s $pid && kill -USR1 "$(cat $pid)" endscript } unicorn-4.7.0/examples/nginx.conf0000644000004100000410000001422312236653132017044 0ustar www-datawww-data# This example contains the bare minimum to get nginx going with # Unicorn or Rainbows! servers. Generally these configuration settings # are applicable to other HTTP application servers (and not just Ruby # ones), so if you have one working well for proxying another app # server, feel free to continue using it. # # The only setting we feel strongly about is the fail_timeout=0 # directive in the "upstream" block. max_fails=0 also has the same # effect as fail_timeout=0 for current versions of nginx and may be # used in its place. # # Users are strongly encouraged to refer to nginx documentation for more # details and search for other example configs. # you generally only need one nginx worker unless you're serving # large amounts of static files which require blocking disk reads worker_processes 1; # # drop privileges, root is needed on most systems for binding to port 80 # # (or anything < 1024).
Capability-based security may be available for # # your system and worth checking out so you won't need to be root to # # start nginx to bind on 80 user nobody nogroup; # for systems with a "nogroup" # user nobody nobody; # for systems with "nobody" as a group instead # Feel free to change all paths to suit your needs here, of course pid /path/to/nginx.pid; error_log /path/to/nginx.error.log; events { worker_connections 1024; # increase if you have lots of clients accept_mutex off; # "on" if nginx worker_processes > 1 # use epoll; # enable for Linux 2.6+ # use kqueue; # enable for FreeBSD, OSX } http { # nginx will find this file in the config directory set at nginx build time include mime.types; # fallback in case we can't determine a type default_type application/octet-stream; # click tracking! access_log /path/to/nginx.access.log combined; # you generally want to serve static files with nginx since neither # Unicorn nor Rainbows! is optimized for it at the moment sendfile on; tcp_nopush on; # off may be better for *some* Comet/long-poll stuff tcp_nodelay off; # on may be better for some Comet/long-poll stuff # we haven't checked to see if Rack::Deflate on the app server is # faster or not than doing compression via nginx. It's easier # to configure it all in one place here for static files and also # to disable gzip for clients who don't get gzip/deflate right. # There are other gzip settings that may be needed to deal with # bad clients out there, see http://wiki.nginx.org/NginxHttpGzipModule gzip on; gzip_http_version 1.0; gzip_proxied any; gzip_min_length 500; gzip_disable "MSIE [1-6]\."; gzip_types text/plain text/html text/xml text/css text/comma-separated-values text/javascript application/x-javascript application/atom+xml; # this can be any application server, not just Unicorn/Rainbows!
upstream app_server { # fail_timeout=0 means we always retry an upstream even if it failed # to return a good HTTP response (in case the Unicorn master nukes a # single worker for timing out). # for UNIX domain socket setups: server unix:/path/to/.unicorn.sock fail_timeout=0; # for TCP setups, point these to your backend servers # server 192.168.0.7:8080 fail_timeout=0; # server 192.168.0.8:8080 fail_timeout=0; # server 192.168.0.9:8080 fail_timeout=0; } server { # enable one of the following if you're on Linux or FreeBSD # listen 80 default deferred; # for Linux # listen 80 default accept_filter=httpready; # for FreeBSD # If you have IPv6, you'll likely want to have two separate listeners. # One on IPv4 only (the default), and another on IPv6 only instead # of a single dual-stack listener. A dual-stack listener will make # for ugly IPv4 addresses in $remote_addr (e.g ":ffff:10.0.0.1" # instead of just "10.0.0.1") and potentially trigger bugs in # some software. # listen [::]:80 ipv6only=on; # deferred or accept_filter recommended client_max_body_size 4G; server_name _; # ~2 seconds is often enough for most folks to parse HTML/CSS and # retrieve needed images/icons/frames, connections are cheap in # nginx so increasing this is generally safe... keepalive_timeout 5; # path for static files root /path/to/app/current/public; # Prefer to serve static files directly from nginx to avoid unnecessary # data copies from the application server. # # try_files directive appeared in in nginx 0.7.27 and has stabilized # over time. Older versions of nginx (e.g. 
0.6.x) requires # "if (!-f $request_filename)" which was less efficient: # http://bogomips.org/unicorn.git/tree/examples/nginx.conf?id=v3.3.1#n127 try_files $uri/index.html $uri.html $uri @app; location @app { # an HTTP header important enough to have its own Wikipedia entry: # http://en.wikipedia.org/wiki/X-Forwarded-For proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # enable this if you forward HTTPS traffic to unicorn, # this helps Rack set the proper URL scheme for doing redirects: # proxy_set_header X-Forwarded-Proto $scheme; # pass the Host: header from the client right along so redirects # can be set properly within the Rack application proxy_set_header Host $http_host; # we don't want nginx trying to do something clever with # redirects, we set the Host: header above already. proxy_redirect off; # set "proxy_buffering off" *only* for Rainbows! when doing # Comet/long-poll/streaming. It's also safe to set if you're # only serving fast clients with Unicorn + nginx, but not slow # clients. You normally want nginx to buffer responses to slow # clients, even with Rails 3.1 streaming because otherwise a slow # client can become a bottleneck of Unicorn. # # The Rack application may also set "X-Accel-Buffering (yes|no)" # in the response headers to disable/enable buffering on a # per-response basis. # proxy_buffering off; proxy_pass http://app_server; } # Rails error pages error_page 500 502 503 504 /500.html; location = /500.html { root /path/to/app/current/public; } } } unicorn-4.7.0/examples/logger_mp_safe.rb0000644000004100000410000000150312236653132020345 0ustar www-datawww-data# Multi-Processing-safe monkey patch for Logger # # This monkey patch fixes the case where "preload_app true" is used and # the application spawns a background thread upon being loaded. # # This removes all locking from the Logger code and solely relies on the # underlying filesystem to handle write(2) system calls atomically when # O_APPEND is used.
This is safe in the presence of both multiple # threads (native or green) and multiple processes when writing to # a filesystem with POSIX O_APPEND semantics. # # It should be noted that the original locking on Logger could _never_ be # considered reliable on non-POSIX filesystems with multiple processes, # either, so nothing is lost in that case. require 'logger' class Logger::LogDevice def write(message) @dev.syswrite(message) end def close @dev.close end end unicorn-4.7.0/examples/init.sh0000644000004100000410000000257312236653132016356 0ustar www-datawww-data#!/bin/sh set -e # Example init script, this can be used with nginx, too, # since nginx and unicorn accept the same signals # Feel free to change any of the following variables for your app: TIMEOUT=${TIMEOUT-60} APP_ROOT=/home/x/my_app/current PID=$APP_ROOT/tmp/pids/unicorn.pid CMD="/usr/bin/unicorn -D -c $APP_ROOT/config/unicorn.rb" INIT_CONF=$APP_ROOT/config/init.conf action="$1" set -u test -f "$INIT_CONF" && . $INIT_CONF old_pid="$PID.oldbin" cd $APP_ROOT || exit 1 sig () { test -s "$PID" && kill -$1 `cat $PID` } oldsig () { test -s $old_pid && kill -$1 `cat $old_pid` } case $action in start) sig 0 && echo >&2 "Already running" && exit 0 $CMD ;; stop) sig QUIT && exit 0 echo >&2 "Not running" ;; force-stop) sig TERM && exit 0 echo >&2 "Not running" ;; restart|reload) sig HUP && echo reloaded OK && exit 0 echo >&2 "Couldn't reload, starting '$CMD' instead" $CMD ;; upgrade) if sig USR2 && sleep 2 && sig 0 && oldsig QUIT then n=$TIMEOUT while test -s $old_pid && test $n -ge 0 do printf '.' 
&& sleep 1 && n=$(( $n - 1 )) done echo if test $n -lt 0 && test -s $old_pid then echo >&2 "$old_pid still exists after $TIMEOUT seconds" exit 1 fi exit 0 fi echo >&2 "Couldn't upgrade, starting '$CMD' instead" $CMD ;; reopen-logs) sig USR1 ;; *) echo >&2 "Usage: $0 <start|stop|force-stop|restart|reload|upgrade|reopen-logs>" exit 1 ;; esac unicorn-4.7.0/examples/unicorn.conf.minimal.rb0000644000004100000410000000112312236653132021420 0ustar www-datawww-data# Minimal sample configuration file for Unicorn (not Rack) when used # with daemonization (unicorn -D) started in your working directory. # # See http://unicorn.bogomips.org/Unicorn/Configurator.html for complete # documentation. # See also http://unicorn.bogomips.org/examples/unicorn.conf.rb for # a more verbose configuration using more features. listen 2007 # by default Unicorn listens on port 8080 worker_processes 2 # this should be >= nr_cpus pid "/path/to/app/shared/pids/unicorn.pid" stderr_path "/path/to/app/shared/log/unicorn.log" stdout_path "/path/to/app/shared/log/unicorn.log" unicorn-4.7.0/examples/unicorn.conf.rb0000644000004100000410000001044412236653132020001 0ustar www-datawww-data# Sample verbose configuration file for Unicorn (not Rack) # # This configuration file documents many features of Unicorn # that may not be needed for some applications. See # http://unicorn.bogomips.org/examples/unicorn.conf.minimal.rb # for a much simpler configuration file. # # See http://unicorn.bogomips.org/Unicorn/Configurator.html for complete # documentation. # Use at least one worker per core if you're on a dedicated server, # more will usually help for _short_ waits on databases/caches. worker_processes 4 # Since Unicorn is never exposed to outside clients, it does not need to # run on the standard HTTP port (80), so there is no reason to start Unicorn # as root unless it's from system init scripts.
# If running the master process as root and the workers as an unprivileged # user, do this to switch euid/egid in the workers (also chowns logs): # user "unprivileged_user", "unprivileged_group" # Help ensure your application will always spawn in the symlinked # "current" directory that Capistrano sets up. working_directory "/path/to/app/current" # available in 0.94.0+ # listen on both a Unix domain socket and a TCP port, # we use a shorter backlog for quicker failover when busy listen "/path/to/.unicorn.sock", :backlog => 64 listen 8080, :tcp_nopush => true # nuke workers after 30 seconds instead of 60 seconds (the default) timeout 30 # feel free to point this anywhere accessible on the filesystem pid "/path/to/app/shared/pids/unicorn.pid" # By default, the Unicorn logger will write to stderr. # Additionally, some applications/frameworks log to stderr or stdout, # so prevent them from going to /dev/null when daemonized here: stderr_path "/path/to/app/shared/log/unicorn.stderr.log" stdout_path "/path/to/app/shared/log/unicorn.stdout.log" # combine Ruby 2.0.0dev or REE with "preload_app true" for memory savings # http://rubyenterpriseedition.com/faq.html#adapt_apps_for_cow preload_app true GC.respond_to?(:copy_on_write_friendly=) and GC.copy_on_write_friendly = true # Enable this flag to have unicorn test client connections by writing the # beginning of the HTTP headers before calling the application. This # prevents calling the application for connections that have disconnected # while queued. This is only guaranteed to detect clients on the same # host unicorn runs on, and unlikely to detect disconnects even on a # fast LAN. check_client_connection false before_fork do |server, worker| # the following is highly recommended for Rails + "preload_app true" # as there's no need for the master process to hold a connection defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
# The following is only recommended for memory/DB-constrained # installations. It is not needed if your system can house # twice as many worker_processes as you have configured. # # # This allows a new master process to incrementally # # phase out the old master process with SIGTTOU to avoid a # # thundering herd (especially in the "preload_app false" case) # # when doing a transparent upgrade. The last worker spawned # # will then kill off the old master process with a SIGQUIT. # old_pid = "#{server.config[:pid]}.oldbin" # if old_pid != server.pid # begin # sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU # Process.kill(sig, File.read(old_pid).to_i) # rescue Errno::ENOENT, Errno::ESRCH # end # end # # Throttle the master from forking too quickly by sleeping. Due # to the implementation of standard Unix signal handlers, this # helps (but does not completely) prevent identical, repeated signals # from being lost when the receiving process is busy. # sleep 1 end after_fork do |server, worker| # per-process listener ports for debugging/admin/migrations # addr = "127.0.0.1:#{9293 + worker.nr}" # server.listen(addr, :tries => -1, :delay => 5, :tcp_nopush => true) # the following is *required* for Rails + "preload_app true", defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection # if preload_app is true, then you may also want to check and # restart any other shared sockets/descriptors such as Memcached, # and Redis. TokyoCabinet file handles are safe to reuse # between any number of forked children (assuming your kernel # correctly implements pread()/pwrite() system calls) end unicorn-4.7.0/examples/git.ru0000644000004100000410000000067212236653132016210 0ustar www-datawww-data#\-E none # See http://thread.gmane.org/gmane.comp.web.curl.general/10473/raw on # how to setup git for this. A better version of the above patch was # accepted and committed on June 15, 2009, so you can pull the latest # curl CVS snapshot to try this out. 
require 'unicorn/app/inetd' use Rack::Lint use Rack::Chunked # important! run Unicorn::App::Inetd.new( *%w(git daemon --verbose --inetd --export-all --base-path=/home/ew/unicorn) ) unicorn-4.7.0/examples/echo.ru0000644000004100000410000000121512236653132016335 0ustar www-datawww-data#\-E none # # Example application that echoes read data back to the HTTP client. # This emulates the old echo protocol people used to run. # # An example of using this in a client would be to run: # curl --no-buffer -T- http://host:port/ # # Then type random stuff in your terminal to watch it get echoed back! class EchoBody < Struct.new(:input) def each(&block) while buf = input.read(4096) yield buf end self end end use Rack::Chunked run lambda { |env| /\A100-continue\z/i =~ env['HTTP_EXPECT'] and return [100, {}, []] [ 200, { 'Content-Type' => 'application/octet-stream' }, EchoBody.new(env['rack.input']) ] } unicorn-4.7.0/examples/big_app_gc.rb0000644000004100000410000000022212236653132017443 0ustar www-datawww-data# see {Unicorn::OobGC}[http://unicorn.bogomips.org/Unicorn/OobGC.html] # Unicorn::OobGC was broken in Unicorn v3.3.1 - v3.6.1 and fixed in v3.6.2 unicorn-4.7.0/LICENSE0000644000004100000410000000573312236653132014247 0ustar www-datawww-dataUnicorn is copyrighted free software by all contributors, see logs in revision control for names and email addresses of all of them. You can redistribute it and/or modify it under either the terms of the GNU General Public License (GPL) as published by the Free Software Foundation (FSF), either version 2 of the License, or (at your option) any later version. We currently prefer the GPLv3 or later for derivative works, but the GPLv2 is fine. 
The complete texts of the GPLv2 and GPLv3 are below: GPLv2 - http://www.gnu.org/licenses/gpl-2.0.txt GPLv3 - http://www.gnu.org/licenses/gpl-3.0.txt You may (against our _preference_) also use the Ruby 1.8 license terms which we inherited from the original Mongrel project when we forked it: === Ruby 1.8-specific terms (if you're not using the GPL) 1. You may make and give away verbatim copies of the source form of the software without restriction, provided that you duplicate all of the original copyright notices and associated disclaimers. 2. You may modify your copy of the software in any way, provided that you do at least ONE of the following: a) place your modifications in the Public Domain or otherwise make them Freely Available, such as by posting said modifications to Usenet or an equivalent medium, or by allowing the author to include your modifications in the software. b) use the modified software only within your corporation or organization. c) rename any non-standard executables so the names do not conflict with standard executables, which must also be provided. d) make other distribution arrangements with the author. 3. You may distribute the software in object code or executable form, provided that you do at least ONE of the following: a) distribute the executables and library files of the software, together with instructions (in the manual page or equivalent) on where to get the original distribution. b) accompany the distribution with the machine-readable source of the software. c) give non-standard executables non-standard names, with instructions on where to get the original software distribution. d) make other distribution arrangements with the author. 4. You may modify and include the part of the software into any other software (possibly commercial). But some files in the distribution are not written by the author, so that they are not under this terms. 5. 
The scripts and library files supplied as input to or produced as output from the software do not automatically fall under the copyright of the software, but belong to whomever generated them, and may be sold commercially, and may be aggregated with this software. 6. THIS SOFTWARE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. unicorn-4.7.0/HACKING0000644000004100000410000001077412236653132014230 0ustar www-datawww-data= Unicorn Hacker's Guide == Polyglot Infrastructure Like Mongrel, we use Ruby where it makes sense, and Ragel with C where it helps performance. All of the code that actually runs your Rack application is written in Ruby, Ragel or C. As far as tests and documentation go, we're not afraid to embrace Unix and use traditional Unix tools where they make sense and get the job done. === Tests Tests are good, but slow tests make development slow, so we make tests faster (in parallel) with GNU make (instead of Rake) and by avoiding RubyGems. Users of GNU-based systems (such as GNU/Linux) usually have GNU make installed as "make" instead of "gmake". Since we don't load RubyGems by default, loading Rack properly requires setting up RUBYLIB to point to where Rack is located. Not loading RubyGems drastically lowers the time to run the full test suite. You may set up a "local.mk" file in the top-level working directory to set up your RUBYLIB and any other environment variables. A "local.mk.sample" file is provided for reference. Running the entire test suite with 4 tests in parallel: gmake -j4 test Running just one unit test: gmake test/unit/test_http_parser.rb Running just one test case in a unit test: gmake test/unit/test_http_parser.rb--test_parse_simple.n === HttpServer We strive to write as little code as possible while still maintaining readability.
However, readability and flexibility may be sacrificed for performance
in hot code paths.  For Ruby, less code generally means faster code.

Memory allocation should be minimized as much as practically possible.
Buffers for IO#readpartial are preallocated in the hot paths to avoid
building up garbage.  Hash assignments use frozen strings to avoid
duplication behind the scenes.

We spend as little time as possible inside signal handlers and instead
defer handling them for predictability and robustness.

Most of the Unix-specific things are in the Unicorn::HttpServer class.
Unix systems programming experience will come in handy (or be learned)
here.

=== Documentation

We use RDoc 2.5.x with Darkfish for documentation as much as possible;
if you're on Ruby 1.8, you'll want to install the latest "rdoc" gem.
Due to the lack of RDoc-to-manpage converters we know about, we're
writing manpages in Markdown and converting to troff/HTML with Pandoc.

Please wrap documentation at 72 characters-per-line or less (long URLs
are exempt) so it is comfortably readable from terminals.

When referencing mailing list posts, use
"http://mid.gmane.org/$MESSAGE_ID" if possible since the Message-ID
remains searchable even if Gmane becomes unavailable.

=== Ruby/C Compatibility

We target Ruby 1.8.6+ and 1.9, and will target Rubinius as it becomes
production-ready.  We need the Ruby implementation to support fork,
exec, pipe, UNIX signals, access to integer file descriptors and the
ability to use unlinked files.

All of our C code is OS-independent and should run on compilers
supported by the versions of Ruby we target.

=== Ragel Compatibility

We target the latest released version of Ragel and will update our code
to keep up with new releases.  Packaged tarballs and gems include the
generated source code so they will remain usable if compatibility is
broken.
== Contributing

Contributions are welcome in the form of patches, pull requests, code
review, testing, documentation, user support or any other feedback.
The mailing list is the central coordination point for all user and
developer feedback and bug reports.

=== Submitting Patches

Follow conventions already established in the code and do not exceed
80 characters per line.

Inline patches (from "git format-patch -M") to the mailing list are
preferred because they allow code review and comments in the reply to
the patch.

We will adhere to mostly the same conventions for patch submissions as
git itself.  See the Documentation/SubmittingPatches document
distributed with git on patch submission guidelines to follow.  Just
don't email the git mailing list or maintainer with Unicorn patches :)

== Building a Gem

In order to build the gem, you must install the following components:

* wrongdoc
* pandoc

You can build the Unicorn gem with the following command:

  gmake gem

== Running Development Versions

It is easy to install the contents of your git working directory:

Via RubyGems (RubyGems 1.3.5+ recommended for prerelease versions):

  gmake install-gem

Without RubyGems (via setup.rb):

  gmake install

It is not at all recommended to mix a RubyGems installation with an
installation done without RubyGems, however.

unicorn-4.7.0/ISSUES

= Issues

The {mailing list}[mailto:mongrel-unicorn@rubyforge.org] is the best
place to report bugs, submit patches and/or obtain support after you
have searched the mailing list archives and
{documentation}[http://unicorn.bogomips.org].

* No subscription is needed to post to the mailing list; let us know
  that we need to Cc: replies to you if you're unsubscribed.
* Do not {top post}[http://catb.org/jargon/html/T/top-post.html] in replies
* Quote only the relevant portions of the message you're replying to
* Do not send HTML mail

If your issue is of a sensitive nature or you're just shy in public,
then feel free to email us privately at mailto:unicorn@bogomips.org
instead and your issue will be handled discreetly.

If you don't get a response within a few days, we may have forgotten
about it, so feel free to ask again.

== Submitting Patches

See the HACKING document (and additionally, the
Documentation/SubmittingPatches document distributed with git) on
guidelines for patch submission.

== Mailing List Info

* subscribe: http://rubyforge.org/mailman/listinfo/mongrel-unicorn
* post: mailto:mongrel-unicorn@rubyforge.org
* private: mailto:unicorn@bogomips.org

== Mailing List Archives

* nntp://news.gmane.org/gmane.comp.lang.ruby.unicorn.general
* http://rubyforge.org/pipermail/mongrel-unicorn

unicorn-4.7.0/script/isolate_for_tests

#!/usr/bin/env ruby
# scripts/Makefiles can read and eval the output of this script and
# use it as RUBYLIB
require 'rubygems'
require 'isolate'

fp = File.open(__FILE__, "rb")
fp.flock(File::LOCK_EX)

ruby_engine = defined?(RUBY_ENGINE) ? RUBY_ENGINE : 'ruby'
opts = {
  :system => false,
  # we want "ruby-1.8.7" and not "ruby-1.8", so disable :multiruby
  :multiruby => false,
  :path => "tmp/isolate/#{ruby_engine}-#{RUBY_VERSION}",
}

pid = fork do
  Isolate.now!(opts) do
    gem 'raindrops', '0.12.0'
    gem 'kgio-monkey', '0.4.0'
    gem 'kgio', '2.8.1'
    gem 'rack', '1.5.2'
  end
end
_, status = Process.waitpid2(pid)
status.success?
  or abort status.inspect

lib_paths = Dir["#{opts[:path]}/gems/*-*/lib"].map { |x| File.expand_path(x) }
dst = "tmp/isolate/#{ruby_engine}-#{RUBY_VERSION}.mk"
File.open("#{dst}.#$$", "w") do |fp|
  fp.puts "ISOLATE_LIBS=#{lib_paths.join(':')}"
end
File.rename("#{dst}.#$$", dst)

unicorn-4.7.0/Rakefile

# -*- encoding: binary -*-
autoload :Gem, 'rubygems'
require 'wrongdoc'

cgit_url = Wrongdoc.config[:cgit_url]
git_url = Wrongdoc.config[:git_url]

desc "post to FM"
task :fm_update do
  require 'tempfile'
  require 'net/http'
  require 'net/netrc'
  require 'json'
  version = ENV['VERSION'] or abort "VERSION= needed"
  uri = URI.parse('https://freecode.com/projects/unicorn/releases.json')
  rc = Net::Netrc.locate('unicorn-fm') or abort "~/.netrc not found"
  api_token = rc.password
  _, subject, body = `git cat-file tag v#{version}`.split(/\n\n/, 3)
  tmp = Tempfile.new('fm-changelog')
  tmp.puts subject
  tmp.puts
  tmp.puts body
  tmp.flush
  system(ENV["VISUAL"], tmp.path) or abort "#{ENV["VISUAL"]} failed: #$?"
  changelog = File.read(tmp.path).strip
  req = {
    "auth_code" => api_token,
    "release" => {
      "tag_list" => "Experimental",
      "version" => version,
      "changelog" => changelog,
    },
  }.to_json
  if ! changelog.strip.empty?
     && version =~ %r{\A[\d\.]+\d+\z}
    Net::HTTP.start(uri.host, uri.port, :use_ssl => true) do |http|
      p http.post(uri.path, req, {'Content-Type'=>'application/json'})
    end
  else
    warn "not updating freshmeat for v#{version}"
  end
end

# optional rake-compiler support in case somebody needs to cross compile
begin
  mk = "ext/unicorn_http/Makefile"
  if File.readable?(mk)
    warn "run 'gmake -C ext/unicorn_http clean' and\n" \
         "remove #{mk} before using rake-compiler"
  elsif ENV['VERSION']
    unless File.readable?("ext/unicorn_http/unicorn_http.c")
      abort "run 'gmake ragel' or 'make ragel' to generate the Ragel source"
    end
    spec = Gem::Specification.load('unicorn.gemspec')
    require 'rake/extensiontask'
    Rake::ExtensionTask.new('unicorn_http', spec)
  end
rescue LoadError
end

unicorn-4.7.0/ChangeLog

ChangeLog from http://bogomips.org/unicorn.git (v1.1.5..v4.7.0)

commit 9c8747d290dfc7ab4bc11c4f88b3c284cc5ba949
Author: Eric Wong
Date:   Mon Nov 4 06:28:56 2013 +0000

    unicorn 4.7.0 - minor updates, license tweak

    * support SO_REUSEPORT on new listeners (:reuseport)
      This allows users to start an independent instance of unicorn on a
      the same port as a running unicorn (as long as both instances use
      :reuseport).
      ref: https://lwn.net/Articles/542629/

    * unicorn is now GPLv2-or-later and Ruby 1.8-licensed
      (instead of GPLv2-only, GPLv3-only, and Ruby 1.8-licensed)

      This changes nothing at the moment.  Once the FSF publishes the
      next version of the GPL, users may choose the newer GPL version
      without the unicorn BDFL approving it.  Two years ago when I got
      permission to add GPLv3 to the license options, I also got
      permission from all past contributors to approve future versions
      of the GPL.  So now I'm approving all future versions of the GPL
      for use with unicorn.
Reasoning below: In case the GPLv4 arrives and I am not alive to approve/review it, the lesser of evils is have give blanket approval of all future GPL versions (as published by the FSF). The worse evil is to be stuck with a license which cannot guarantee the Free-ness of this project in the future. This unfortunately means the FSF can theoretically come out with license terms I do not agree with, but the GPLv2 and GPLv3 will always be an option to all users. Note: we currently prefer GPLv3 Two improvements thanks to Ernest W. Durbin III: * USR2 redirects fixed for Ruby 1.8.6 (broken since 4.1.0) * unicorn(1) and unicorn_rails(1) enforces valid integer for -p/--port A few more odd, minor tweaks and fixes: * attempt to rename PID file when possible (on USR2) * workaround reopen atomicity issues for stdio vs non-stdio * improve handling of client-triggerable socket errors commit d5870ccc714a4bb442a46aedd4c68c547e8e56f4 Author: Eric Wong Date: Fri Nov 1 20:02:47 2013 +0000 bin/*: enforce -p/--port argument to be a valid integer Users may confuse '-p' with the (to-be-deprecated) '-P/--pid' option, leading to surprising behavior if a pathname is passed as a port, because String#to_i would convert it to zero, causing: TCPServer.new(host, port = 0) to bind to a random, unused port. commit 7e9e4c740aba24096f768f578779dc1053cb8b70 Author: Ernest W. Durbin III Date: Fri Nov 1 10:12:33 2013 -0400 construct listener_fds Hash in 1.8.6 compatible way This renables the ability for Ruby 1.8.6 environments to perform reexecs [ew: clarified this is for 1.8.6, favor literal {} over Hash.new, tweaked LISTENERS.map => LISTENERS.each, thanks to Hleb Valoshka ] Signed-off-by: Eric Wong commit 03580a19afe5ce76323a7366b92243a94d445de1 Author: Eric Wong Date: Tue Oct 29 00:36:49 2013 +0000 configurator: validate :reuseport for boolean-ess In case we (and Linux) supports other values in the future, we can update it then. Until now, ensure users only set true or false for this option. 
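The :reuseport flag validated in the commit above is passed
per-listener in a unicorn config file.  A minimal sketch (the listen
address is hypothetical; requires a kernel with SO_REUSEPORT support,
e.g. Linux 3.9+):

```ruby
# hypothetical unicorn config fragment
listen "0.0.0.0:8080", :reuseport => true
```

With both instances configured this way, a second unicorn can bind the
same port while the first is still running, and the kernel distributes
incoming connections between them.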
commit f078eb93d343bb27cf5c6dc84efbe7c598d572fb Author: Eric Wong Date: Sat Oct 26 07:05:10 2013 +0000 license: allow all future versions of the GNU GPL There is currently no GPLv4, so this change has no effect at the moment. In case the GPLv4 arrives and I am not alive to approve/review it, the lesser of evils is have give blanket approval of all future GPL versions (as published by the FSF). The worse evil is to be stuck with a license which cannot guarantee the Free-ness of this project in the future. This unfortunately means the FSF can theoretically come out with license terms I do not agree with, but the GPLv2 and GPLv3 will always be an option to all users. commit d9c0db79e9eef9839aaada1be1105b3ff8ceae5c Author: Eric Wong Date: Fri Oct 25 19:56:47 2013 +0000 http_server: fixup comments for PID file renaming Thanks to Hongli Lai for noticing my typo. While we're at it, finish up a halfway-written comment for the EXDEV case commit e025cd99beee500f175a3bcc302a1307b39ffb77 Author: Eric Wong Date: Fri Oct 25 19:45:15 2013 +0000 avoid IO_PURGATORY on Ruby 1.9+ Ruby 1.9 and later includes IO#autoclose=, so we can use it and prevent some dead IO objects from hanging around. commit 7c125886b5862bf20711bae22e6697ad46141434 Author: Eric Wong Date: Fri Oct 25 19:27:05 2013 +0000 support SO_REUSEPORT on new listeners (:reuseport) This allows users to start an independent instance of unicorn on a the same port as a running unicorn (as long as both instances use :reuseport). ref: https://lwn.net/Articles/542629/ commit 1dc099228ee0f59c13385a3e7346a2cb37d85153 Author: Eric Wong Date: Fri Oct 25 19:54:39 2013 +0000 tests: limit oobgc check to accepted sockets Otherwise these tests fail if we start using IO#autoclose=true on Ruby 1.9 (and also if we use IPv6 sockets for tests). 
commit 7d6ac0c17eb29a00a5b74099dbb3d4d015999f27 Author: Eric Wong Date: Thu Oct 24 22:11:17 2013 +0000 attempt to rename PID file when possible This will preserve mtime on successful renames for comparisions. While we're at it, avoid writing the new PID until the listeners are inherited successfully. This can be useful to avoid accidentally clobbering a good PID if binding the listener or building the app (preload_app==true) fails commit d90eebe1e50e2bdb9632b64591e4b84cbc0049a1 Author: Eric Wong Date: Sun Oct 20 04:29:55 2013 +0000 workaround reopen atomicity issues for stdio vs non-stdio In multithreaded apps, we must use dup2/dup3 with a temporary descriptor to reopen log files atomically. This is the only way to protect all concurrent userspace access to a file when reopening. ref: http://bugs.ruby-lang.org/issues/9036 ref: yahns commit bcb10abe53cfb1d6a8ef7daef59eb10ced397c8a commit a9dfd48f9668d0a6e04cf009cea0c4ede962144d Author: Eric Wong Date: Mon Sep 30 18:17:20 2013 +0000 Rakefile: kill raa_update task RAA is dead. commit 0c2213dfe23f177c91d76c0c70aec5a01f5a7f55 Author: Eric Wong Date: Wed Sep 11 00:49:35 2013 +0000 tests: upgrade several gems (rack, kgio, raindrops) All tests seem to pass. commit 849348f82830326e7778e50a5a7f2efeeb4460e5 Author: Eric Wong Date: Wed Sep 4 19:21:57 2013 +0000 Sandbox: document SIGUSR2 + bundler issue with 2.0.0 Thanks to Eric Chapweske for the heads up. 
ref: http://mid.gmane.org/loom.20130904T205308-432@post.gmane.org commit 9af083d7f6b97c0f5ebbdd9a42b58478a6f874b7 Author: Eric Wong Date: Fri Aug 16 22:08:11 2013 +0000 test_util: fix encoding test for Ruby trunk (2.1.0dev) As of r40610 in ruby trunk, internal encoding is ignored if external coding is ASCII-8BIT (binary) ref: r40610 http://svn.ruby-lang.org/repos/ruby/trunk commit 24b9f66dcdda44378b4053645333ce9ce336b413 Author: Eric Wong Date: Sat Aug 17 01:09:46 2013 +0000 http_server: improve handling of client-triggerable socket errors We do not attempt to write HTTP responses for socket errors if clients disconnect from us unexpectedly. Additionally, we do not hide backtraces EINVAL/EBADF errors, since they are indicative of real bugs which must be fixed. We do continue to hide hide EOF, ECONNRESET, ENOTCONN, and EPIPE because clients (even "friendly") ones will break connections due to client crashes or network failure (which is common for me :P), and the backtraces from those will cause excessive logging and even become a DoS vector. commit 2f5174d4ca9764313d6be4c092e9e6c2e4f9d1e1 Author: Eric Wong Date: Fri Jun 21 08:00:09 2013 +0000 unicorn 4.6.3 - fix --no-default-middleware option Thanks to Micah Chalmer for this fix. There are also minor documentation updates and internal cleanups. commit 56b0c0c3d26304beeef54d8fe95bead97424f147 Author: Micah Chalmer Date: Thu Jun 6 23:03:36 2013 -0400 Make -N/--no-default-middleware option work This fixes the -N (a.k.a. --no-defaut-middleware) option, which was not working. The problem was that Unicorn::Configurator::RACKUP is cleared before the lambda returned by Unicorn.builder is run, which means that checking whether the :no_default_middleware option was set from the lambda could not detect anything. This patch copies it to a local variable that won't get clobbered, restoring the feature. 
[ew: squashed test commit into the fix, whitespace fixes] Signed-off-by: Eric Wong commit 421f5a8573484b1203fceebc65aee5d011d63c63 Author: Eric Wong Date: Wed May 8 22:57:23 2013 +0000 HttpParser#next? becomes response_start_sent-aware This could allow servers with persistent connection support[1] to support our check_client_connection in the future. [1] - Rainbows!/zbatery, possibly others commit c3c79fcdb71c599e426f9ce83d45dc8cc3d9cd3c Author: Eric Wong Date: Fri May 3 22:08:15 2013 +0000 test_signals: increase delay between Process.kill Otherwise, the signalled process may take too long to react to and process all the signals on machines with few CPUs. commit 9f846a26d24d7bfaf17cacad16cfbae7eec39c74 Author: Eric Wong Date: Fri Apr 5 21:38:17 2013 +0000 doc: update documentation for systemd + PrivateTmp users The PrivateTmp feature of systemd breaks the usage of /tmp for the shared Unix domain socket between nginx and unicorn, so discourage the use of /tmp in that case. While we're at it, use consistent paths for everything and use an obviously intended-for-user-customization "/path/to" prefix instead of "/tmp" ML-Ref: CAKLVLx_t+9zWMhquMWDfStrxS7xrNoGmN0ZDsjSCUE=VxU+oyQ@mail.gmail.com Reported-by: David Wilkins commit 04bcc147d0081433069235a87f779055fa7b6f3c Author: Eric Wong Date: Tue Feb 26 02:57:24 2013 +0000 unicorn 4.6.2 - HTTP parser fix for Rainbows! This release fixes a bug in Unicorn::HttpParser#filter_body which affected some configurations of Rainbows! There is also a minor size reduction in the DSO. 
commit f7ee06592d7709e96f64efb5e7a9485b54415c9d Author: Eric Wong Date: Tue Feb 26 02:52:37 2013 +0000 http: avoid frozen string bug in filter_body Our rb_str_modify() became no-ops due to incomplete reverts of workarounds for old Rubinius, causing rb_str_set_len to fail with: can't set length of shared string (RuntimeError) This bug was introduced due to improper workarounds for old versions of Rubinius in 2009 and 2010: commit 5e8979ad38efdc4de3a69cc53aea33710d478406 ("http: cleanups for latest Rubinius") commit f37c23704cb73d57e9e478295d1641df1d9104c7 ("http: no-op rb_str_modify() for Rubies without it") commit 3ef703179891fa3f6f9d03f2ae58d289c691738e Author: Eric Wong Date: Tue Feb 19 11:36:18 2013 +0000 httpdate: minor size reduction in DSO Extra pointers waste space in the DSO. Normally I wouldn't care, but the string lengths are identical and this code already made it into another project in this form. size(1) output: text data bss dec hex filename before: 42881 2040 336 45257 b0c9 unicorn_http.so after: 42499 1888 336 44723 aeb3 unicorn_http.so ref: http://www.akkadia.org/drepper/dsohowto.pdf commit f8829e69e28bb93dbbf9a220cdff163a6ba182d5 Author: Eric Wong Date: Thu Feb 21 08:36:35 2013 +0000 unicorn 4.6.1 - minor cleanups Unicorn::Const::UNICORN_VERSION is now auto-generated from GIT-VERSION-GEN and always correct. Minor cleanups for hijacking. commit 15c23106ffc9b7a03fdc2353f41c239f89ac9822 Author: Eric Wong Date: Sat Feb 9 01:13:17 2013 +0000 http_request: drop conditional assignment for hijack As far as I can tell, this was never necessary. 
commit ed28a361d234847dca550e839f22f0cc779f6ce0 Author: Eric Wong Date: Fri Feb 8 22:48:03 2013 +0000 http_request: remove FIXME for rack.version clarification commit a9474624a148fe58e0944664190b259787dcf51e in rack.git commit cb0623f25db7f06660e563e8e746bfe0ae5ba9c5 Author: Eric Wong Date: Fri Feb 8 18:50:07 2013 +0000 auto-generate Unicorn::Const::UNICORN_VERSION This DRYs out our code and prevents snafus like the 4.6.0 release where UNICORN_VERSION stayed at 4.5.0 Reported-by: Maurizio De Santis commit 1b3352ec9b5c9eeb58cf330d6b9ce8753af4ec16 Author: Eric Wong Date: Wed Feb 6 11:20:57 2013 +0000 unicorn 4.6.0 - hijacking support This pre-release adds hijacking support for Rack 1.5 users. See Rack documentation for more information about hijacking. There is also a new --no-default-middleware/-N option for the `unicorn' command to ignore RACK_ENV within unicorn thanks to Lin Jen-Shin. There are only documentation and test-portability updates since 4.6.0pre1, no code changes. commit 9cd8554749a9f120b010c93933d09d2dd27b1280 Author: Eric Wong Date: Mon Feb 4 12:39:09 2013 +0000 tests: "wc -l" portability for *BSDs On FreeBSD 9.0, "wc -l" emits leading whitespace, so filter it through tr -d '[:space:]' to eliminate it. commit 2a2163594ea2b515e98fbe9f909bcf90e4c35fe8 Author: Eric Wong Date: Mon Feb 4 12:29:00 2013 +0000 tests: "wc -c" portability for *BSDs On FreeBSD 9.0, "wc -c" emits leading whitespace, so filter it through tr -d '[:space:]' to eliminate it. This is commit 8a6117a22a7d01eeb5adc63d3152acf435cd3176 in rainbows.git commit 85223902e8229bd460ce0b4ad126f42b1db42a46 Author: Eric Wong Date: Mon Feb 4 10:36:18 2013 +0000 tests: replace non-portable "date +%s" with ruby equivalent "date +%s" is not in POSIX (it is in GNU, and at least FreeBSD 9.0, possibly earlier). The Ruby equivalent should be sufficiently portable between different Ruby versions. 
This change was automated via: perl -i -p -e 's/date \+%s/unix_time/' t/*.sh This is commit 0ba6fc3c30b9cf530faf7fcf5ce7be519ec13fe7 in rainbows.git commit a09a622b4988b5eee819487c96a4563e71f753f7 Author: Eric Wong Date: Mon Feb 4 10:30:25 2013 +0000 tests: remove utee POSIX already stipulates tee(1) must be unbuffered. I think my decision to use utee was due to my being misled by a bug in older curl where -N did not work as advertised (but --no-buffer did). N.B. we don't use tee in unicorn tests, this just matches commit cbff7b0892148b037581541184364e0e91d2a138 in rainbows commit 64765b95df06256d39daefdeebde97c874770131 Author: Eric Wong Date: Tue Jan 29 21:19:22 2013 +0000 manpage: update middleware-related documentation -N/--no-default-middleware needs a corresponding manpage entry. Additionally, the Rack::Chunked/ContentLength middleware comment is out-of-date as of unicorn v4.1.0 commit db919d18e01f6b2339915cbd057fba9dc040988b Author: Eric Wong Date: Tue Jan 29 21:02:55 2013 +0000 unicorn 4.6.0pre1 - hijacking support This pre-release adds hijacking support for Rack 1.5 users. See Rack documentation for more information about hijacking. There is also a new --no-default-middleware/-N option for the `unicorn' command to ignore RACK_ENV within unicorn. commit b73299a053b305098d5d68634fa928ec71aa4eac Merge: c43113e fedb5e5 Author: Eric Wong Date: Tue Jan 29 21:00:32 2013 +0000 Merge branch 'hijack' * hijack: ignore normal Rack response at request-time hijack support for Rack hijack in request and response commit c43113e350aabb78c30ba64884328458db85c901 Author: Lin Jen-Shin Date: Tue Jan 29 11:21:19 2013 +0800 Add -N or --no-default-middleware option. This would prevent Unicorn from adding default middleware, as if RACK_ENV were always none. (not development nor deployment) This should also be applied to `rainbows' and `zbatery' as well. One of the reasons to add this is to avoid conflicting RAILS_ENV and RACK_ENV. 
It would be helpful in the case where a Rails application and Rack application are composed together, while we want Rails app runs under development and Rack app runs under none (if we don't want those default middleware), and we don't really want to make RAILS_ENV set to development and RACK_ENV to none because it might be confusing. Note that Rails would also look into RACK_ENV. Another reason for this is that only `rackup' would be inserting those default middleware. Both `thin' and `puma' would not do this, nor does Rack::Handler.get.run which is used in Sinatra. So using this option would make it work differently from `rackup' but somehow more similar to `thin' or `puma'. Discussion thread on the mailing list: http://rubyforge.org/pipermail/mongrel-unicorn/2013-January/001675.html Signed-off-by: Eric Wong commit fdd7c851e5664c1e629a904e21d147a9dfc950d7 Author: Eric Wong Date: Tue Jan 29 03:56:16 2013 +0000 test_exec: do not count '\n' as column width This off-by-one error was incorrectly rejecting a line which would've been readable without wrapping on an 80-column terminal. commit 89071a412e161a3ea24a9574611932a1f0acc8c7 Author: Eric Wong Date: Tue Jan 29 03:37:20 2013 +0000 tests: upgrade to rack 1.5.1 This fixes a Rack::Lint regression discovered in t0005. commit fedb5e50829e6dfad30ca18ea525c812eccbec70 Author: Eric Wong Date: Tue Jan 22 23:52:14 2013 +0000 ignore normal Rack response at request-time hijack Once a connection is hijacked, we ignore it completely and leave the connection at the mercy of the application. commit 705cf5fcf8ccb37deef5d2b922d6d78d34765c5b Author: Eric Wong Date: Tue Jan 22 11:04:52 2013 +0000 support for Rack hijack in request and response Rack 1.5.0 (protocol version [1,2]) adds support for hijacking the client socket (removing it from the control of unicorn (or any other Rack webserver)). Tested with rack 1.5.0. 
commit faf1edc74c9bb35cf4e131d794c1923bf124aa1c Author: Eric Wong Date: Tue Jan 22 09:48:54 2013 +0000 tests: version bumps for rack, kgio, and raindrops Ensure the latest versions work in tests. commit 1bcc4ee4400152fe73a20dedf4f5823475393112 Author: Eric Wong Date: Mon Jan 7 20:10:43 2013 +0000 tests: bump tests to use rack 1.4.3 It's the latest and greatest! \o/ commit c4e5b936e5b6b535d56eff30c509a063d77710e1 Author: Eric Wong Date: Fri Dec 7 22:15:56 2012 +0000 unicorn 4.5.0 - check_client_connection option The new check_client_connection option allows unicorn to detect most disconnected local clients before potentially expensive application processing begins. This feature is useful for applications experiencing spikes of traffic leading to undesirable queue times, as clients will disconnect (and perhaps even retry, compounding the problem) before unicorn can even start processing the request. To enable this feature, add the following line to a unicorn config file: check_client_connection true This feature only works when nginx (or any other HTTP/1.0+ client) is on the same machine as unicorn. A huge thanks to Tom Burns for implementing and testing this change in production with real traffic (including mitigating an unexpected DoS attack). ref: http://mid.gmane.org/CAK4qKG3rkfVYLyeqEqQyuNEh_nZ8yw0X_cwTxJfJ+TOU+y8F+w@mail.gmail.com This release fixes broken Rainbows! compatibility in 4.5.0pre1. commit bc4c412f15a05a37ec40f374239efa83d2dbdb1e Author: Peter Marsh Date: Mon Dec 3 16:37:30 2012 +0000 gemspec: enable licenses metadata attribute This enables compatibility with metadata scanners such as LicenseFinder[1]. The previously commented-out accessor was commented out in September 2009 when ancient RubyGems were more prevalent. By now (December 2012), those ancient versions of RubyGems are unlikely to be around. 
[1] https://github.com/pivotal/LicenseFinder [ew: rewritten commit message] Signed-off-by: Eric Wong commit fd0192c134acd1d5037a9aa45ad7b5375c28c29c Author: Eric Wong Date: Mon Dec 3 21:19:44 2012 +0000 README: clarify license and copyright Since Ruby 1.9.3, (Matz) Ruby is licensed under a 2-clause BSDL. Thus we need to clarify we inherited the license terms from Ruby 1.8 to prevent misunderstanding. (The Ruby license change cannot alter the license of other projects automatically) Since we added the GPLv3 as an additional license in 2011, the license terms of unicorn no longer matches Mongrel 1.1.5. This is NOT a change to the unicorn license at all, just a wording clarification. commit 69e6a793d34ff71da7c8ca59962d627e2fb508d8 Author: Eric Wong Date: Tue Dec 4 02:35:26 2012 +0000 fix const error responses for Rainbows! Rainbows! relies on the ERROR_XXX_RESPONSE constants of unicorn 4.x. Changing the constants in unicorn 4.x will break existing versions of Rainbows!, so remove the dependency on the constants and generate the error response dynamically. Unlike Mongrel, unicorn is unlikely to see malicious traffic and thus unlikely to benefit from making error messages constant. For unicorn 5.x, we will drop these constants entirely. (Rainbows! most likely cannot support check_client_connection consistently across all concurrency models since some of them pessimistically buffer all writes in userspace. However, the extra concurrency of Rainbows! makes it less likely to be overloaded than unicorn, so this feature is likely less useful for Rainbows!) commit 32333a4d233f73f6fc9d904301f97a4406c446fa Author: Eric Wong Date: Thu Nov 29 23:00:45 2012 +0000 unicorn 4.5.0pre1 - check_client_connection option The new check_client_connection option allows unicorn to detect most disconnected clients before potentially expensive application processing begins. 
This feature is useful for applications experiencing spikes of traffic leading to undesirable queue times, as clients will disconnect (and perhaps even retry, compounding the problem) before unicorn can even start processing the request. To enable this feature, add the following line to a unicorn config file: check_client_connection true A huge thanks to Tom Burns for implementing and testing this change in production with real traffic (including mitigating an unexpected DoS attack). commit 90db7b14eab449da8cef4ef22ab76ae00f654361 Author: Eric Wong Date: Thu Nov 29 21:48:31 2012 +0000 check_client_connection: document local-only requirement In my testing, only dropped clients over Unix domain sockets or loopback TCP were detected with this option. Since many nginx+unicorn combinations run on the same host, this is not a problem. Furthermore, tcp_nodelay:true appears to work over loopback, so remove the requirement for tcp_nodelay:false. commit 5c700fc2cf398848ddcf71a2aa3f0f2a6563e87b Author: Tom Burns Date: Tue Oct 30 16:22:21 2012 -0400 Begin writing HTTP request headers early to detect disconnected clients This patch checks incoming connections and avoids calling the application if the connection has been closed. It works by sending the beginning of the HTTP response before calling the application to see if the socket can successfully be written to. By enabling this feature users can avoid wasting application rendering time only to find the connection is closed when attempting to write, and throwing out the result. When a client disconnects while being queued or processed, Nginx will log HTTP response 499 but the application will log a 200. Enabling this feature will minimize the time window during which the problem can arise. The feature is disabled by default and can be enabled by adding 'check_client_connection true' to the unicorn config. 
[ew: After testing this change, Tom Burns wrote: So we just finished the US Black Friday / Cyber Monday weekend running unicorn forked with the last version of the patch I had sent you. It worked splendidly and helped us handle huge flash sales without increased response time over the weekend. Whereas in previous flash traffic scenarios we would see the number of HTTP 499 responses grow past the number of real HTTP 200 responses, over the weekend we saw no growth in 499s during flash sales. Unexpectedly the patch also helped us ward off a DoS attack where the attackers were disconnecting immediately after making a request. ref: ] Signed-off-by: Eric Wong commit f4af812a28b03508c96853739aea53f7a6714abf Author: Eric Wong Date: Tue Nov 13 20:22:13 2012 +0000 tests: remove assert_nothing_raised (part 2) assert_nothing_raised ends up hiding errors and backtraces, making things harder to debug. Since Test::Unit already fails on uncaught exceptions, there is no need to assert on the lack of exceptions for a successful test run. This is a followup to commit 5acf5522295c947d3118926d1a1077007f615de9 commit 4bd0dbdf2d27672dc941746e06b647ea26fe63ee Author: Eric Wong Date: Thu Oct 11 09:16:51 2012 +0000 Rakefile: fm_update task updated for HTTPS Freecode.com now requires HTTPS. commit f0a31e43676f59762d5bf53707cd8cc21fed0727 Author: Eric Wong Date: Wed Oct 10 21:33:46 2012 +0000 unicorn 4.4.0 - minor updates Non-regular files are no longer reopened on SIGUSR1. This allows users to specify FIFOs as log destinations. TCP_NOPUSH/TCP_CORK is no longer set/unset by default. Use :tcp_nopush explicitly with the "listen" directive if you wish to enable TCP_NOPUSH/TCP_CORK. Listen sockets are now bound _after_ loading the application for preload_app(true) users. This prevents load balancers from sending traffic to an application server while the application is still loading. There are also minor test suite cleanups. 
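The option described in the commits above is a one-line addition to
the unicorn config.  A hypothetical fragment (socket path invented),
keeping in mind the check only detects disconnects reliably when nginx
and unicorn share a host:

```ruby
# hypothetical unicorn config fragment
listen "/path/to/unicorn.sock"  # same-host Unix socket (or loopback TCP)
check_client_connection true
```

Writing the start of the response early lets the worker notice a
closed socket and skip the Rack application call for that request.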
commit 032791b9a367f67febbe7534f6ea4cac127e7897 Author: Eric Wong Date: Mon Oct 1 21:18:02 2012 -0700 util: only consider regular files as logs If a user specifies a non-regular file for stderr_path or stdout_path, we should not attempt to reopen or chown it. This should also allow users to specify FIFOs as log destinations. commit 5acf5522295c947d3118926d1a1077007f615de9 Author: Eric Wong Date: Mon Aug 6 13:34:34 2012 -0700 avoid assert_nothing_raised in unit tests It's better to show errors and backtraces when stuff breaks commit 7b107d66e84ad2e958d5574cb00770265dd117c2 Author: Eric Wong Date: Mon Aug 6 20:15:46 2012 +0000 do not touch TCP_NOPUSH/TCP_CORK at all by default On a certain FreeBSD 8.1 installation, explicitly setting TCP_NOPUSH to zero (off) can cause EADDRNOTAVAIL errors and also resets the listen backlog to 5. Enabling TCP_NOPUSH explicitly did not exhibit this issue for the user who (privately) reported this issue. To be on the safe side, we won't set/unset TCP_NOPUSH/TCP_CORK at all, which will leave it off on all current systems. commit 53c375dc933b62b24df2c54d3938b03fa9da1f06 Author: Eric Wong Date: Fri Jun 29 16:22:17 2012 -0700 bind listeners after loading for preload_app users In the case where preload_app is true, delay binding new listeners until after loading the application. Some applications have very long load times (especially Rails apps with Ruby 1.9.2). Binding listeners early may cause a load balancer to incorrectly believe the unicorn workers are ready to serve traffic even while the app is being loaded. Once a listener is bound, connect() requests from the load balancer succeed until the listen backlog is filled. This allows requests to pile up for a bit (depending on backlog size) before getting rejected by the kernel. By the time the application is loaded and ready-to-run, requests in the listen backlog are likely stale and not useful to process. 
    Processes inheriting listeners do not suffer this effect, as
    the old process should still be capable of serving new
    requests.

    This change does not improve the situation for the
    preload_app=false (default) use case. There may not be a
    solution for preload_app=false users using large applications.
    Fortunately Ruby 1.9.3+ improves load times of large
    applications significantly over 1.9.2 so this should be less
    of a problem in the future.

    Reported via private email sent on 2012-06-29T22:59:10Z

commit 91a3cde091d4ae6ff436681f155b3907daae1c04
Author: Eric Wong
Date:   Thu Jul 26 23:44:04 2012 +0000

    remove Rails-oriented integration tests

    It's too much overhead to keep Rails-specific tests working,
    especially when it's hauling in an ancient version of SQLite3.
    Since Rails 3 has settled down with Rack and unicorn_rails is
    unlikely to need changing in the future, we can drop these
    tests.

commit f4f2de4a526f3a88573f2f839e6865637c67dbe5
Author: Eric Wong
Date:   Sun Apr 29 07:00:48 2012 +0000

    unicorn 4.3.1 - shutdown() fixes

    * Call shutdown(2) if a client EOFs on us during upload. We
      can avoid holding a socket open if the Rack app forked a
      process during uploads.

    * ignore potential Errno::ENOTCONN errors (from shutdown(2)).
      Even on LANs, connections can occasionally be accept()-ed
      but be unusable afterwards.

    Thanks to Joel Nimety, Matt Smith and George on the
    mongrel-unicorn@rubyforge.org mailing list for their feedback
    and testing for this release.

commit 60b9275410277acc6adcf49a81c177c443d1d392
Author: Eric Wong
Date:   Sun Apr 29 06:49:23 2012 +0000

    isolate_for_tests: upgrade to kgio-monkey 0.4.0

    Seems to work well enough...

commit 4551c8ad4d63d4031c618f76d39532b39e88f9be
Author: Eric Wong
Date:   Fri Apr 27 14:42:38 2012 -0700

    stream_input: call shutdown(2) if a client EOFs on us

    In case the Rack app forks before a client upload is complete,
    shutdown(2) the socket to ensure the client isn't attempting
    to read from us (even if it explicitly stopped writes).
commit 04901da5ae0b4655c83be05d24ae737f1b572002
Author: Eric Wong
Date:   Fri Apr 27 11:48:16 2012 -0700

    http_server: ignore ENOTCONN (mostly from shutdown(2))

    Since there's nothing unicorn can do to avoid this error on
    unconnected/halfway-connected clients, ignoring ENOTCONN is a
    safe bet. Rainbows! has long had this rescue as it called
    getpeername(2) on untrusted sockets

commit 8c1aff1e6335f8a55723907e2661dcb09ea16205
Author: Eric Wong
Date:   Tue Apr 17 21:32:07 2012 +0000

    unicorn 4.3.0 - minor fixes and updates

    * PATH_INFO (aka REQUEST_PATH) increased to 4096 (from 1024).
      This allows requests with longer path components and matches
      the system PATH_MAX value common to GNU/Linux systems for
      serving filesystem components with long names.

    * Apps that fork() (but do not exec()) internally for
      background tasks now indicate the end-of-request immediately
      after writing the Rack response.

    Thanks to Hongli Lai, Lawrence Pit, Patrick Wenger and Nuo Yan
    for their valuable feedback for this release.

commit e7f5de575b3fd58c65014191c31ed2a59bd05265
Author: Eric Wong
Date:   Tue Apr 17 21:10:51 2012 +0000

    tests: set executable bit on integration shell scripts

    These should be made executable for ease-of-understanding and
    consistency, regardless of whether we actually execute them.

commit 7eccef471a609c87281bb90d9d3b3d7a7b35709e
Author: Eric Wong
Date:   Thu Apr 12 07:40:46 2012 +0000

    http: increase REQUEST_PATH maximum length to 4K

    The previous REQUEST_PATH limit of 1024 is relatively small
    and some users encounter problems with long URLs. 4K is a
    common limit for PATH_MAX on modern GNU/Linux systems and
    REQUEST_PATH is likely to translate to a filesystem path name.

    Thanks to Nuo Yan and Lawrence Pit for their feedback on this
    issue.
    ref: http://mid.gmane.org/CB935F19-72B8-4EC2-8A1D-5084B37C09F2@gmail.com

commit b26d3e2c4387707ca958cd9c63c213fc7ac558fa
Author: Eric Wong
Date:   Thu Apr 12 16:46:24 2012 -0700

    shutdown client socket for apps which fork in background

    Previously we relied on implicit socket shutdown() from the
    close() syscall. However, some Rack applications fork()
    (without calling exec()), creating a potentially long-lived
    reference to the underlying socket in a child process. This
    ends up causing nginx to wait on the socket shutdown when the
    child process exits.

    Calling shutdown() explicitly signals nginx (or whatever
    client) that the unicorn worker is done with the socket,
    regardless of the number of FD references to the underlying
    socket in existence.

    This was not an issue for applications which exec() since
    FD_CLOEXEC is always set on the client socket.

    Thanks to Patrick Wenger for discovering this.
    Thanks to Hongli Lai for the tip on using shutdown() as is
    done in Passenger.

    ref: http://mid.gmane.org/CAOG6bOTseAPbjU5LYchODqjdF3-Ez4+M8jo-D_D2Wq0jkdc4Rw@mail.gmail.com

commit d258653745e1c8e8fa13b95b1944729294804946
Author: Eric Wong
Date:   Thu Apr 12 18:35:03 2012 -0700

    t/sslgen.sh: use larger keys for tests

    This seems required for TLSv1.2 under OpenSSL 1.0.1

commit 12cd717d612fe8170f53f5f8377137e1b41db015
Author: Eric Wong
Date:   Wed Apr 11 21:38:10 2012 +0000

    misc documentation spelling fixes

    Found via rdoc-spellcheck

commit 4757aa70c3b3ab953255f74831b6f98e6f32fb72
Author: Eric Wong
Date:   Mon Mar 26 21:35:10 2012 +0000

    unicorn 4.2.1 - minor fix and doc updates

    * Stale pid files are detected if a pid is recycled by
      processes belonging to another user, thanks to Graham
      Bleach.

    * nginx example config updates thanks to Eike Herzbach.

    * KNOWN_ISSUES now documents issues with apps/libs that
      install conflicting signal handlers.
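The shutdown(2) behavior described above can be demonstrated with plain Ruby sockets: shutdown() acts on the underlying socket rather than one process's descriptor, so the peer sees EOF immediately even while a forked child still holds an open FD referencing the connection (close() alone would leave the peer waiting for the child to exit). A minimal sketch:

```ruby
require "socket"

worker, client = UNIXSocket.pair

pid = fork do
  sleep 1 # background child inherits a reference to "worker", keeping its FD open
end

# shutdown(2) affects the socket itself, not just this process's FD,
# so the peer gets EOF despite the child's open descriptor
worker.shutdown(Socket::SHUT_WR)

p client.read  # => "" -- EOF arrives right away, before the child exits
Process.wait(pid)
```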
commit 84e92a9d301f3f42d1d1e4430db33dfb43d54818
Author: Eric Wong
Date:   Sat Mar 24 07:45:44 2012 +0000

    tests: depend on kgio 2.7.4

    This latest version of kgio improves portability to
    FreeBSD-based systems.

commit d0e7d8d770275654024887a05d9e986589ba358c
Author: Eric Wong
Date:   Tue Mar 20 20:05:59 2012 +0000

    log EPERM errors from invalid pid files

    In some cases, EPERM may indicate a real configuration
    problem, but it can also just mean the pid file is stale.

commit 1e13ffee3469997286e65e0563b6433e7744388a
Author: Eric Wong
Date:   Tue Mar 20 19:51:35 2012 +0000

    KNOWN_ISSUES: document signal conflicts in libs/apps

    Jeffrey Yeung confirmed this issue on the mailing list.

    ref:

commit 9fc5c24920726d3c10bc9f39d8e97686b93cbbe0
Author: Eric Wong
Date:   Tue Mar 20 19:49:56 2012 +0000

    examples/nginx.conf: use $scheme instead of hard-coded "https"

    This adds a little more flexibility to the nginx config,
    especially as protocols (e.g. SPDY) become more prevalent.

    Suggested-by: Eike Herzbach

commit 0daedd92d3e896a9fcd301bbb58e85bb54a939ee
Author: Eric Wong
Date:   Tue Mar 20 19:27:08 2012 +0000

    examples/nginx.conf: remove redundant word

    From: Eike Herzbach

commit 2ce57950e0f61eb6f325a93cef9b7e0e598fc109
Author: Graham Bleach
Date:   Wed Feb 29 14:34:44 2012 +0000

    Start the server if another user has a PID matching our stale pidfile.

    If unicorn doesn't get terminated cleanly (for example if the
    machine has its power interrupted) and the pid in the pidfile
    gets used by another process, the current unicorn code will
    exit and not start a server. This tiny patch fixes that
    behaviour.

    Acked-by: Eric Wong

commit b6a154eba6d79fd1572f61290e55f4d05df86730
Author: Eric Wong
Date:   Sat Jan 28 09:05:07 2012 +0000

    unicorn 4.2.0

    The GPLv3 is now an option to the Unicorn license. The
    existing GPLv2 and Ruby-only terms will always remain options,
    but the GPLv3 is preferred.

    Daemonization is correctly detected on all terminals for
    development use (Brian P O'Rourke).
    Unicorn::OobGC respects applications that disable GC entirely
    during application dispatch (Yuichi Tateno).

    Many test fixes for OpenBSD, which may help other *BSDs, too.
    (Jeremy Evans).

    There is now _optional_ SSL support (via the "kgio-monkey"
    RubyGem). On fast, secure LANs, SSL is only intended for
    detecting data corruption that weak TCP checksums cannot
    detect. Our SSL support remains unaudited by security experts.

    There are also some minor bugfixes and documentation
    improvements.

    Ruby 2.0.0dev also has a copy-on-write friendly GC which can
    save memory when combined with "preload_app true", so if
    you're in the mood, start testing Unicorn with the latest
    Ruby!

commit 8478a54008ea64bf734b9dfc78d940ed69bc00ff
Author: Eric Wong
Date:   Sat Jan 28 09:03:57 2012 +0000

    doc: update doc for Ruby 2.0.0dev CoW-friendliness

    Ruby 2.0.0dev is the future and includes a CoW-friendly GC, so
    we shall encourage folks to give Ruby 2.0.0dev a spin.

commit 49c70ae741b96588021eb1bb6327da4cf78f8ec0
Author: Eric Wong
Date:   Fri Jan 27 19:55:28 2012 +0000

    script/isolate_for_tests: disable sqlite3-ruby for Ruby 2.0.0dev

    We don't need it because we don't test old Rails with bleeding
    edge Ruby.

commit c8abf6a06c0bd7eb1dfc8457ef1c31de31e7715b
Author: Eric Wong
Date:   Fri Jan 27 19:54:41 2012 +0000

    disable old Rails tests for Ruby 2.0.0

    I doubt anybody would attempt to run ancient, unsupported
    versions of Rails on the latest (unreleased, even) versions of
    Ruby...

commit 79ae7110b37f9b82151cc61960d93a33bb543669
Author: Eric Wong
Date:   Fri Jan 27 19:27:43 2012 +0000

    script/isolate_for_tests: update to kgio 2.7.2

    Again, we test with the latest version.

commit d6d9178f5dc40cf5cb4c5ef61094d4103f23dce5
Author: Eric Wong
Date:   Tue Jan 24 21:48:35 2012 +0000

    update tests for Rack 1.4.1

    Trying to ensure things always work with the latest version.
commit a7b286273690f801c61a1db9475f74299ffaef6c
Author: Eric Wong
Date:   Sun Jan 8 02:01:53 2012 +0000

    Rakefile: swap freshmeat.net URL for freecode.com

    :<

commit 0782f9fb69993b62dc0c3a90f900c4d8cf5745e6
Author: Eric Wong
Date:   Wed Dec 28 06:03:00 2011 +0000

    update tests for rack 1.4.0

    It's the latest and greatest version, so ensure everything
    works with it.

commit cda82b5ff44c8fcfb61315f822bbaefa3471d4fe
Author: Eric Wong
Date:   Sat Dec 17 06:51:58 2011 +0000

    http: test case for "Connection: TE"

    We need to be sure we don't barf on this header.

commit 68e8d3726542c549f291f82bdcb751d372c34597
Author: Eric Wong
Date:   Tue Dec 13 15:04:59 2011 -0800

    cleanup exception handling on SIGUSR1

    No need to duplicate logic here

commit 7688fe59a8a80f473b276aa1ab01ff24cab6a653
Author: Eric Wong
Date:   Tue Dec 13 06:04:51 2011 +0000

    quiet possible IOError from SIGUSR1 (reopen logs)

    It's possible for a SIGUSR1 signal to be received in the
    worker immediately before calling IO.select. In that case, do
    not clutter logging with IOError and just process the reopen
    log request.

commit 2cc0db7761ee4286c5ccbc48395c70c41d402119
Author: Eric Wong
Date:   Mon Dec 5 02:27:14 2011 +0000

    socket_helper: fix grammerr fail

    Oops :x

commit ee6ffca0a8d129dd930f4c63d0c4c9ef034b245f
Author: Eric Wong
Date:   Mon Dec 5 01:33:41 2011 +0000

    socket_helper: set SO_KEEPALIVE on TCP sockets

    Even LANs can break or be unreliable sometimes and socket
    disconnect messages get lost, which means we fall back to the
    global (kill -9) timeout in Unicorn. While the default global
    timeout is much shorter (60s) than typical TCP timeouts, some
    HTTP application dispatches take much I/O or computational
    time (streaming many gigabytes), so the global timeout becomes
    ineffective.

    Under Linux, sysadmins are encouraged to lower the default
    net.ipv4.tcp_keepalive_* knobs in sysctl. There should be
    similar knobs in other operating systems (the default
    keepalive intervals are usually ridiculously high, too high
    for anything).
    When the listen socket has SO_KEEPALIVE set, the flag should
    be inherited by accept()-ed sockets.

commit 27f666a973a59c8c6738a65b69f9060c41e6958c
Author: Eric Wong
Date:   Mon Dec 5 01:28:33 2011 +0000

    socket_helper: remove out-of-date comment for TCP_NODELAY

    We favor low latency and consistency with the Unix socket
    behavior even with TCP.

commit 5f8ea2614f92172c7b214441aa3c09a6054c3aa8
Author: Eric Wong
Date:   Mon Dec 5 01:26:39 2011 +0000

    bump dependencies

    We should always be testing with the newest available versions
    to watch for incompatibilities, even if we don't /require/ the
    latest ones to run.

commit fbcf6aa641e5827da48a3b6776c9897de123b405
Author: Eric Wong
Date:   Tue Nov 15 16:32:12 2011 -0800

    tests: try to set a shorter path for Unix domain sockets

    We're only allowed 108 bytes for Unix domain sockets.
    mktemp(1) usually generates path names of reasonable length
    and we rely on it anyways.

commit c4c880c5a2ac521d4a6d0bad132d38dfff375a6c
Author: Eric Wong
Date:   Tue Nov 15 15:28:44 2011 -0800

    tests: just use the sha1sum implemented in Ruby

    The output of SHA1 command-line tools is too unstable and I'm
    more comfortable with Ruby 1.9 encoding support than I was in
    2009. Jeremy Evans noted the output of "openssl sha1" has
    changed since I last used it.

commit 2fd5910969419c17aa6a31fb2119eb47a121d497
Author: Jeremy Evans
Date:   Tue Nov 15 15:26:36 2011 -0800

    test_helper: ensure test client connects to valid address

    You can listen on 0.0.0.0, but trying to connect to it doesn't
    work well on OpenBSD.

    Acked-by: Eric Wong

commit 66c706acfb3cda802bac4629219e3c3e064352ed
Author: Jeremy Evans
Date:   Tue Nov 15 15:21:58 2011 -0800

    t0011: fix test under OpenBSD

    expr on OpenBSD uses a basic regular expression (according to
    re_format(7)), which doesn't support +, only *.
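The SO_KEEPALIVE inheritance described above is easy to check from Ruby; the behavior shown is what Linux does, and other kernels may differ:

```ruby
require "socket"

srv = TCPServer.new("127.0.0.1", 0)
srv.setsockopt(:SOCKET, :KEEPALIVE, true)  # set once on the listen socket

client = TCPSocket.new("127.0.0.1", srv.addr[1])
conn = srv.accept

# accept()-ed sockets inherit SO_KEEPALIVE from the listener
p conn.getsockopt(:SOCKET, :KEEPALIVE).bool  # => true on Linux
client.close; conn.close; srv.close
```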
    Acked-by: Eric Wong

commit 9e62bc10294f0b6344b47cd596a93ae457d546fb
Author: Eric Wong
Date:   Tue Nov 15 15:13:15 2011 -0800

    configurator: limit timeout to 30 days

    There's no practical difference between a timeout of 30 days
    and 68 years from an HTTP server standpoint. POSIX limits us
    to 31 days, actually, but there could be rounding error with
    floats used in Ruby time calculations and there's no real
    difference between 30 and 31 days, either...

    Thanks to Jeremy Evans for pointing out large values will
    throw EINVAL on select(2) under OpenBSD with Ruby 1.9.3, and
    RangeError on older Rubies.

commit aab850780f9ff0d74c346d7fd62ac588f4d5879b
Author: Eric Wong
Date:   Tue Nov 15 15:09:21 2011 -0800

    t: ensure SSL certificates exist on fresh test

    We throw up some fake SSL certs for testing

commit c7ba76a21c5d00fb5c173cd6aa847442bbc652cb
Author: Yuichi Tateno
Date:   Mon Oct 3 16:51:19 2011 +0900

    OobGC: force GC.start

    [ew: we need to explicitly enable GC if it is disabled and
    respect applications that disable GC]

    Acked-by: Eric Wong

commit ac346b5abcfa6253bd792091e5fb011774c40d49
Author: Eric Wong
Date:   Wed Sep 7 00:36:58 2011 +0000

    add preliminary SSL support

    This will also be the foundation of SSL support in Rainbows!
    and Zbatery. Some users may also want to use this in Unicorn
    on LANs to meet certain security/auditing requirements. Of
    course, Nightmare! (in whatever form) should also be able to
    use it.

commit b48c6659b294b37f2c6ff3e75c1c9245522d48d1
Author: Brian P O'Rourke
Date:   Wed Sep 14 18:50:29 2011 +0800

    Detect daemonization via configuration.

    This prevents the stopping of all workers by SIGWINCH if
    you're using a windowing system that will 'exec' unicorn from
    a process that's already in a process group.
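The timeout clamping described in commit 9e62bc10 amounts to taking the minimum of the user value and a 30-day cap; the constant and method names below are illustrative, not unicorn's actual internals:

```ruby
# 30 days is plenty for any HTTP server timeout; much larger values can make
# IO.select raise EINVAL (OpenBSD, Ruby 1.9.3) or RangeError (older Rubies)
MAX_TIMEOUT = 30 * 24 * 60 * 60  # 2_592_000 seconds

def clamp_timeout(seconds)
  [seconds, MAX_TIMEOUT].min
end

p clamp_timeout(60)                   # => 60
p clamp_timeout(68 * 365 * 24 * 3600) # => 2592000 ("68 years" gets capped)
```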
    Acked-by: Eric Wong

commit db2cba26acc5748bcf9919e3184a667c46911f8c
Author: Eric Wong
Date:   Fri Sep 9 16:10:55 2011 -0700

    Links: add a link to the UnXF middleware

    Since unicorn is designed to be deployed behind nginx (or
    similar), X-Forwarded-* headers are common and Rack
    applications may blindly trust spoofed X-Forwarded-* headers.
    UnXF provides a central place for managing that trust by using
    rpatricia.

commit d209910e29d4983f8346233262a49541464252c1
Author: Eric Wong
Date:   Fri Sep 9 15:48:53 2011 -0700

    http_server: update comment on tick == 0

    The old comment was confusing. We only zero the tick counter
    when forking because application loading can take a long time.
    Otherwise, it's always updated.

    ref: http://mid.gmane.org/20110908191352.GA25251@dcvr.yhbt.net

commit 0113de29108fb669a43d4d7f5528c77a2f96db57
Author: Eric Wong
Date:   Fri Sep 2 16:17:57 2011 -0700

    http_server: a few more things eligible for GC in worker

    There is no need to keep extra hashes or Proc objects around
    in the heap.

commit cd22c595633ec36b69c60f27f2c3841ae0f6faca
Author: Eric Wong
Date:   Mon Aug 29 19:54:32 2011 +0000

    add GPLv3 option to the license

    Existing license terms (Ruby-specific) and GPLv2 remain in
    place, but GPLv3 is preferred as it helps with distribution of
    AGPLv3 code and is explicitly compatible with Apache License
    (v2.0). Many more reasons are documented by the FSF:

    https://www.gnu.org/licenses/quick-guide-gplv3.html
    http://gplv3.fsf.org/rms-why.html

    ref: http://thread.gmane.org/gmane.comp.lang.ruby.unicorn.general/933

commit 8bed251777e9850b04f52f4c520e8b173bd1d756
Author: Eric Wong
Date:   Thu Aug 25 14:24:23 2011 -0700

    unicorn 4.1.1 - fix last-resort timeout accuracy

    The last-resort timeout mechanism was inaccurate and often
    delayed in activation since the 2.0.0 release. It is now fixed
    and remains power-efficient in idle situations, especially
    with the wakeup reduction in MRI 1.9.3+.
    There is also a new document on application timeouts intended
    to discourage the reliance on this last-resort mechanism. It
    is visible on the web at:

    http://unicorn.bogomips.org/Application_Timeouts.html

commit 34b400cbec2a05e9a1d9fad2d6bd34f54620fdcb
Author: Eric Wong
Date:   Wed Aug 24 17:59:55 2011 -0700

    doc: add Application Timeouts document

    Hopefully this leads to fewer worker processes being killed.

commit b781e5b1a9b652ee3da73e16851e1f17f0cecd88
Author: Eric Wong
Date:   Tue Aug 23 19:50:03 2011 -0700

    test_helper: remove needless LOAD_PATH mangling

    We do it in the Ruby invocation or RUBYLIB.

commit e9da4ce4c8917934242037db0c2735bd7dab1586
Author: Eric Wong
Date:   Tue Aug 23 17:39:53 2011 -0700

    fix sleep/timeout activation accuracy

    I've noticed in stderr logs from some folks that (last resort)
    timeouts from the master process are taking too long to
    activate due to the workarounds for suspend/hibernation.

commit 8d8b500816371fb8f8fce5e9f21cf235ee8d26ae
Author: Eric Wong
Date:   Mon Aug 22 20:04:47 2011 +0000

    .document: re-add OobGC documentation

    Oops!

commit 4f33a71dc2e24f0cc59315b49e7a7ffe71f368d3
Author: Eric Wong
Date:   Fri Aug 19 23:04:30 2011 +0000

    unicorn 4.1.0 - small updates and fixes

    * Rack::Chunked and Rack::ContentLength middlewares are loaded
      by default for RACK_ENV=(development|deployment) users to
      match Rack::Server behavior. As before, use RACK_ENV=none if
      you want fine-grained control of your middleware. This
      should also help users of Rainbows! and Zbatery.

    * CTL characters are now rejected from HTTP header values

    * Exception messages are now filtered for [:cntrl:] characters
      since application/middleware authors may forget to do so

    * Workers will now terminate properly if a
      SIGQUIT/SIGTERM/SIGINT is received during worker process
      initialization.
    * close-on-exec is explicitly disabled to future-proof against
      Ruby 2.0 changes [ruby-core:38140]

commit 5a6d4ddd8ea2df799654abadb1e25f3def9d478b
Author: Eric Wong
Date:   Sat Aug 20 00:28:39 2011 +0000

    rdoc cleanups

commit 8de6ab371c1623669b86a5dfa8703c8fd539011f
Author: Eric Wong
Date:   Fri Aug 19 22:13:04 2011 +0000

    close race if an exit signal hits the worker before trap

    The signal handler from the master is still active and will
    push the pending signal to SIG_QUEUE if a worker receives a
    signal immediately after forking.

commit f8b22397ca395a9173d391e8699d539503707792
Author: Eric Wong
Date:   Fri Aug 19 21:55:35 2011 +0000

    gemspec: bump wrongdoc dependency for dev

    Hopefully it points people towards the mailing list

commit 86bbb84231a8a16ec54a621c66843b103b5a8610
Author: Eric Wong
Date:   Fri Aug 19 21:54:37 2011 +0000

    tests: bump test deps to the latest versions

    Nothing appears broken :)

commit 1077961a3f8933c65d39c7e6c9ed6ff3b6b53647
Author: Eric Wong
Date:   Fri Aug 19 20:47:29 2011 +0000

    Rack::Chunked and ContentLength middlewares by default

    This is needed to match the behavior of Rack::Server for
    RACK_ENV=(deployment|development), actually. This won't affect
    users of other RACK_ENV values.

    This change has minor performance consequences, so users
    negatively affected should set RACK_ENV to "none" instead for
    full control of their middleware stack.

    This mainly affects Rainbows!/Zbatery users since they have
    persistent connections and /need/ Content-Length or
    Transfer-Encoding:chunked headers.

commit 7fe08addefb12bd2f4c63901e8cf631e9162ca51
Author: Eric Wong
Date:   Tue Aug 16 19:44:04 2011 -0700

    filter exception messages with control characters

    We do not want to affect terminals of users who view our log
    files.

commit b1f328b0dd3647168fcc8b1ad9b09284707ad929
Author: Eric Wong
Date:   Thu Aug 11 17:28:47 2011 -0700

    http_server: small simplification for redirects

    We only need the fileno in the key which we use to generate
    the UNICORN_FD env.
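The [:cntrl:] filtering mentioned in commit 7fe08add can be approximated with a plain String#gsub; scrub_message is a hypothetical helper name, not unicorn's actual method:

```ruby
# replace control bytes (e.g. terminal escape sequences) before logging
def scrub_message(exc)
  exc.message.gsub(/[[:cntrl:]]/, "?")
end

begin
  raise "boom\e[2J"  # "\e" could clear the terminal of anyone tailing the log
rescue => e
  p scrub_message(e)  # => "boom?[2J"
end
```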
    Otherwise the IO object is accepted and understood by Ruby.

commit 6ab27beeda3b0aaaa66f7cc4f734944a7aa84385
Author: Eric Wong
Date:   Thu Aug 11 12:59:09 2011 -0700

    future-proof against close-on-exec by default

    Setting the close-on-exec flag by default and closing
    non-standard descriptors is proposed for Ruby 1.9.4/2.0.0.
    Since Unicorn is one of the few apps to rely on FD inheritance
    across exec(), we need to workaround this by redirecting each
    listener FD to itself for Kernel#exec.

    Ruby supports a hash as the final argument to Kernel#exec
    since at least 1.9.1 (nobody cares for 1.9.0 anymore). This
    allows users to backport close-on-exec by default patches to
    older 1.9.x installs without breaking anything.

    ref: http://redmine.ruby-lang.org/issues/5041

commit 60d60a6fa716e91651997d86e3cb9cda41475975
Author: Eric Wong
Date:   Thu Aug 11 12:46:27 2011 -0700

    test_socket_helper: Socket#bind may fail with EINVAL if IPv6 is missing

    I don't build IPv6 into all my kernels; maybe other testers do
    not, either.

commit ec8a8f32d257290aac377f1c7b1c496e1df75f73
Author: Eric Wong
Date:   Wed Aug 3 11:00:28 2011 -0700

    KNOWN_ISSUES: add link to FreeBSD jail workaround notes

    Thanks to Tatsuya Ono on the unicorn mailing list.

commit 406b8b0e2ed6e5be34d8ec3cd4b16048233c2856
Author: Eric Wong
Date:   Tue Aug 2 23:52:14 2011 +0000

    trap death signals in the worker sooner

    This helps close a race condition preventing shutdown if
    loading the application (preload_app=false) takes a long time
    and the user decides to kill workers instead.

commit 6d56d7ab891d2cb6127b4cba428a0f7c13b9d2ce
Author: Eric Wong
Date:   Wed Jul 20 22:42:16 2011 +0000

    http_server: explicitly disable close-on-exec for listeners

    Future versions of Ruby may change this from the default *nix
    behavior, so we need to explicitly allow FD passing via
    exec().
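The self-redirect trick from commit 6ab27bee (mapping each listener FD to itself in the trailing options hash) also works with Process.spawn, which takes the same options as Kernel#exec. UNICORN_FD matches the env variable unicorn uses; the inline child script is purely for demonstration:

```ruby
require "socket"
require "rbconfig"

srv = TCPServer.new("127.0.0.1", 0)
r, w = IO.pipe

# "srv.fileno => srv" clears close-on-exec for that FD in the child,
# so the listener survives exec() even where CLOEXEC is the default
pid = Process.spawn(
  { "UNICORN_FD" => srv.fileno.to_s },
  RbConfig.ruby, "-rsocket", "-e",
  'print Socket.for_fd(Integer(ENV["UNICORN_FD"])).local_address.ip_port',
  srv.fileno => srv, :out => w
)
w.close
Process.wait(pid)
p r.read.to_i == srv.addr[1]  # => true (child saw the same bound port)
```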
    ref: http://redmine.ruby-lang.org/issues/5041

commit 83f72773b7242d86263a18950fca7c8101d7038d
Author: Eric Wong
Date:   Tue Jul 12 23:52:33 2011 +0000

    http: reject non-LWS CTL chars (0..31 + 127) in field values

    RFC 2616 doesn't appear to allow most CTL bytes even though
    Mongrel always did. Rack::Lint disallows 0..31, too, though we
    allow "\t" (HT, 09) since it's LWS and allowed by RFC 2616.

commit cc63e2ee54b4113c40631214618f51c9ef867a91
Author: Eric Wong
Date:   Fri Jul 1 07:52:31 2011 +0000

    socket_helper: fix undefined variable for logging

    I corrupted a Ruby build and SOL_TCP didn't get defined :x

commit 79c646d69822df542aaabe285eac08cdf4111dc0
Author: Eric Wong
Date:   Wed Jun 29 18:49:45 2011 +0000

    unicorn 4.0.1 - regression bugfixes

    This release fixes things for users of per-worker "listen"
    directives in the after_fork hook. Thanks to ghazel@gmail.com
    for reporting the bug.

    The "timeout" configurator directive is now truncated to
    0x7ffffffe seconds to prevent overflow when calling IO.select.

commit cdb9bc905cf8e15e8a7d0900f57409f54a7b80ac
Author: Eric Wong
Date:   Wed Jun 29 18:48:42 2011 +0000

    configurator: limit timeout to 32-bit INT_MAX-1

    Nobody will miss one second if they specify an "infinite"
    timeout of ~68 years. This prevents duplicating this logic in
    Rainbows!

commit 19f798301ac1884f423640efafb277b071bb5439
Author: Eric Wong
Date:   Wed Jun 29 07:19:32 2011 +0000

    fix per-worker listen directive in after_fork hook

    The testcase for this was broken, too, so we didn't notice
    this :<

    Reported-by: ghazel@gmail.com on the Rainbows! mailing list,
    http://mid.gmane.org/BANLkTi=oQXK5Casq9SuGD3edeUrDPvRm3A@mail.gmail.com

commit 38672501206c9e64d241e3d8571f70b198f0c1e5
Author: Eric Wong
Date:   Mon Jun 27 20:51:16 2011 +0000

    configurator: truncate timeouts to 32-bit LONG_MAX

    IO.select in Ruby can't wait longer than this.
    This means Unicorn can't support applications that take longer
    than 68 years to respond :(

commit fb8bb4469849fa2b2241152aea7e9e82bd3cbcc8
Author: Eric Wong
Date:   Mon Jun 27 08:12:58 2011 +0000

    unicorn 4.0.0 - for mythical hardware!

    A single Unicorn instance may manage more than 1024 workers
    without needing privileges to modify resource limits. As a
    result of this, the "raindrops"[1] gem/library is now a
    required dependency.

    TCP socket defaults now favor low latency to mimic UNIX domain
    socket behavior (tcp_nodelay: true, tcp_nopush: false). This
    hurts throughput, users who want to favor throughput should
    specify "tcp_nodelay: false, tcp_nopush: true" in the listen
    directive.

    Error logging is more consistent and all lines should be
    formatted correctly in backtraces. This may break the behavior
    of some log parsers.

    The call stack is smaller and thus easier to examine
    backtraces when debugging Rack applications.

    There are some internal API changes and cleanups, but none
    that affect applications designed for Rack. See "git log
    v3.7.0.." for details.

    For users who cannot install kgio[2] or raindrops, Unicorn
    1.1.x remains supported indefinitely. Unicorn 3.x will remain
    supported if there is demand. We expect raindrops to introduce
    fewer portability problems than kgio did, however.

    [1] http://raindrops.bogomips.org/
    [2] http://bogomips.org/kgio/

commit 4785db8cf19899756c4a79462fed861a1d1bd96c
Author: Eric Wong
Date:   Mon Jun 27 08:46:28 2011 +0000

    slightly faster worker process spawning

    It's still O(n) since we don't maintain a reverse mapping of
    spawned processes, but at least we avoid the extra overhead of
    creating an array every time.

commit 441bb8ab48f15f583b82a3f8520648a4694a198f
Author: Eric Wong
Date:   Sat Jun 25 22:40:20 2011 +0000

    reenable heartbeat checking for idle workers

    Some applications/libraries may launch background threads
    which can lock up the process. So we can't disable heartbeat
    checking just because the main thread is sleeping.
    This also has the side effect of reducing master process
    wakeups when all workers are idle.

commit 63bcecf48994aa9afe6dc2890efe3ba4b0696bbf
Author: Eric Wong
Date:   Fri Jun 24 08:17:02 2011 +0000

    test with latest kgio and rack versions

    We'll continue to support older versions, but make sure things
    on the latest ones work.

commit 079eb70692fcda9b4bcf572319434ffa7f9e9849
Author: Eric Wong
Date:   Fri Jun 24 07:19:22 2011 +0000

    allow multiline comments in config.ru

    This matches the latest Rack behavior. We can't just use
    Rack::Builder.parse_file because our option parser logic is
    slightly different and incompatible.

    ref: rack commit d31cf2b7c0c77c04510c08d95776315ceb24ba54

commit b3b6b0dff19f8a22a96525bba22bf061d03c3fc5
Author: Eric Wong
Date:   Thu Jun 23 05:12:08 2011 +0000

    http_server: avoid race conditions on SIGQUIT

    We don't want the Worker#tick= assignment to trigger after we
    accept a client, since we'd drop that request when we raise
    the exception that breaks us out of the worker loop.

    Also, we don't want to enter IO.select with an empty LISTENERS
    array so we can fail with IOError or Errno::EBADF.

commit fbe48964d79f3d592f4f75960c5940add9ccf22a
Author: Eric Wong
Date:   Wed Jun 22 07:48:36 2011 +0000

    http_server: remove unused variable

    A leftover from the fchmod() days

commit 1a2dc92e7ff92157aa12e2c8a8a09ec0d56e0eb6
Author: Eric Wong
Date:   Wed Jun 22 02:06:46 2011 +0000

    gemspec: fix raindrops dependency

    Oops, I suck at Ruby :x

commit de142bc61f714392b0902b6e66a31c34ba223cdb
Author: Eric Wong
Date:   Wed Jun 22 02:05:20 2011 +0000

    TODO: remove scalability to >= 1024 workers item

    We can do it!

commit b08410facbccf96c67822a92888de0bc1910390e
Author: Eric Wong
Date:   Fri Jun 17 08:59:02 2011 +0000

    test_http_parser: fix for URI too long errors (#3)

    The random garbage generator may occasionally generate URIs
    that are too long and cause the URI-specific error to be
    raised instead of the generic parser error we recently
    introduced.
    Follow-up-to: commit 742c4d77f179a757dbcb1fa350f9d75b757acfc7

commit 5f478f5a9a58f72c0a844258b8ee614bf24ea9f7
Author: Eric Wong
Date:   Fri Jun 17 08:54:37 2011 +0000

    error logging is more consistent

    Backtraces are now formatted properly (with timestamps) and
    exceptions will be logged more consistently and similar to
    Logger defaults:

      "#{exc.message} (#{e.class})"
      backtrace.each { |line| ... }

    This may break some existing monitoring scripts, but errors
    will be more standardized and easier to check moving forward.

commit fa7ce0a6a755cb71a30417478fb797ee7b8d94b5
Author: Eric Wong
Date:   Fri Jun 17 07:32:17 2011 +0000

    add broken app test from Rainbows!

    "app error" is more correct, and consistent with Rainbows!

commit 593deb92e8ebd4e77e482c567d97b6ee496ac378
Author: Eric Wong
Date:   Thu Jun 16 23:57:31 2011 +0000

    ensure at_exit handlers run on graceful shutdown

    rescuing from SystemExit and exit()-ing again is ugly, but
    changes made to lower stack depth positively affect _everyone_
    so we'll tolerate some ugliness here. We'll need to disable
    graceful exit for some tests, too...

commit a0c59adf71506b8808de276b1288a319424ee71a
Author: Eric Wong
Date:   Thu Jun 16 22:54:40 2011 +0000

    replace fchmod()-based heartbeat with raindrops

    This means we no longer waste an extra file descriptor per
    worker process in the master. Now there's no need to set a
    higher file descriptor limit for systems running >= 1024
    workers.

commit 95f543a9583e58c56b1c480df84b4b88e6669403
Author: Eric Wong
Date:   Thu Jun 16 23:11:28 2011 +0000

    add heartbeat timeout test from Rainbows!

    Just in case we break anything

commit 4beeb52b1c52ea4486dea13cebe2a8438a9f2139
Author: Eric Wong
Date:   Wed Jun 15 01:10:07 2011 +0000

    memory reductions in worker process

    There's absolutely no need to keep the OptionParser around in
    worker processes.
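The standardized error format from commit 5f478f5a (message, class in parentheses, then backtrace lines) is easy to reproduce with stdlib Logger; log_exception below is a hypothetical helper, not unicorn's actual method:

```ruby
require "logger"
require "stringio"

# mimic the standardized error format: message, class, then backtrace lines
def log_exception(logger, exc)
  logger.error "#{exc.message} (#{exc.class})"
  (exc.backtrace || []).each { |line| logger.error(line) }
end

out = StringIO.new
begin
  raise ArgumentError, "bad request"
rescue => e
  log_exception(Logger.new(out), e)
end
puts out.string  # first line ends with: bad request (ArgumentError)
```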
commit e9e7a1c7c1778ed7cd7c724b26362d1f89b2801c
Author: Eric Wong
Date:   Wed Jun 15 00:56:47 2011 +0000

    test_http_parser: fix for URI too long errors (again)

    The random garbage generator may occasionally generate URIs
    that are too long and cause the URI-specific error to be
    raised instead of the generic parser error we recently
    introduced.

    Follow-up-to: commit 742c4d77f179a757dbcb1fa350f9d75b757acfc7

commit a7d9eb03bf3ac554854990018a67f34c2221fb20
Author: Eric Wong
Date:   Wed Jun 15 00:53:45 2011 +0000

    http_server: kill another stack frame off

    We always know we have zero workers at startup, so we don't
    need to check before hand. SIGHUP users may suffer a small
    performance decrease as a result, but there's not much we can
    do about it.

commit f8953ce747bd35b2008fc3daa040b89002a3133e
Author: Eric Wong
Date:   Wed Jun 15 00:47:25 2011 +0000

    http_server: factor out inherit_listeners! method

    This should be easier to understand and reduces garbage on
    stack, too.

commit 6aa423454d7c3926297426fc22d23c88531bd15a
Author: Eric Wong
Date:   Wed Jun 15 00:45:37 2011 +0000

    test_response: httpdate is low resolution

    It may return the previous second

commit 63e421d82ac6d838f9b8b02d4a727bf6f783e7b6
Author: Eric Wong
Date:   Wed Jun 15 00:39:37 2011 +0000

    remove BasicSocket.do_not_reverse_lookup setting

    kgio never does reverse lookup

commit 12024a6268d4e96fcf96df33fb7d82eaec9c16b1
Author: Eric Wong
Date:   Wed Jun 15 00:20:26 2011 +0000

    http: delay CoW string invalidations in filter_body

    Not all invocations of filter_body will trigger CoW on the
    given destination string. We can also avoid an unnecessary
    rb_str_set_len() in the non-chunked path, too.

commit d91ca210615432bdad3ee70c08908ea7064c6b95
Author: Eric Wong
Date:   Wed Jun 15 00:15:42 2011 +0000

    http: remove tainting flag

    Needless line noise, kgio doesn't support tainting anyways.
commit c719497c6db220a9f58c71970f2370cb2e6c99c3
Author: Eric Wong
Date:   Wed Jun 15 00:09:32 2011 +0000

    http_server: get rid of EINTR checks

    Ruby IO.select never raises that, actually

commit 742c4d77f179a757dbcb1fa350f9d75b757acfc7
Author: Eric Wong
Date:   Wed Jun 15 00:08:03 2011 +0000

    test_http_parser: fix for URI too long errors

    The random garbage generator may occasionally generate URIs
    that are too long and cause the URI-specific error to be
    raised instead of the generic parser error we recently
    introduced.

commit 20c0f28cf60f164c9788b694625bce22962464f3
Author: Eric Wong
Date:   Wed Jun 15 00:01:32 2011 +0000

    http_server: further reduce stack usage for app.call

    By avoiding Array#each

commit ddcea26976f24dda8a0cd65022065100bb40fbb7
Author: Eric Wong
Date:   Tue Jun 14 23:49:57 2011 +0000

    http_server: small cleanups for attr assignments

    ivar references using @ are slightly faster than calling
    attribute methods.

commit f1d8dd94122395cd7b072aeec8942f2cd6b8ca99
Author: Eric Wong
Date:   Tue Jun 14 23:25:43 2011 +0000

    http_server: do not rescue from proper exits

    Oops, it messes logging up badly.

commit 2f3c135b15e6603e71bb9d6d054e5cd606c7b2b6
Author: Eric Wong
Date:   Tue Jun 14 00:51:01 2011 +0000

    http: fix documentation for dechunk!

    chunk_ready! was my original name for it, but I'm indecisive
    when it comes to naming things.

commit c297fde2000dcc8bdf7cb9f912fb2ea07be1c282
Author: Eric Wong
Date:   Mon Jun 13 23:42:54 2011 +0000

    http: dechunk! method to enter dechunk mode

    This allows one to enter the dechunker without parsing HTTP
    headers beforehand. Since we skipped header parsing, trailer
    parsing is not supported since we don't know what trailers
    might be (to our knowledge, nobody uses trailers anyways)

commit 131c241840990753f7b75344092058ef7434ea8b
Author: Eric Wong
Date:   Mon Jun 13 22:35:18 2011 +0000

    http: document reasoning for memcpy in filter_body

    copy-on-write behavior doesn't help you if your common use
    case triggers copies.
commit 4aa8fd1322ccb46fc58a4f26ca111a03c1720c7d
Author: Eric Wong
Date:   Mon Jun 13 22:18:30 2011 +0000

    http: rename variables in filter_body implementation

    Makes things easier-to-understand since it's based on memcpy()

commit b1d8d3de991ebc5b7d655f2e8a1294129021db8a
Author: Eric Wong
Date:   Mon Jun 13 22:17:14 2011 +0000

    change TCP defaults to favor low latency

    These TCP settings are a closer match to the behavior of Unix
    domain sockets and what users expect for fast streaming responses
    even if nginx can't provide them just now...

commit c1cac62571b543ac8e9f7203f8c315bb75516a20
Author: Eric Wong
Date:   Mon Jun 13 21:44:24 2011 +0000

    gemspec: bump kgio dependency to ~> 2.4

    kgio 2.4.1 portability should be better than 2.3, so less user
    confusion and push them towards 2.4

commit 5d2284afdc2d4f4ff122394ae5fd78a32cb8c09e
Author: Eric Wong
Date:   Fri Jun 10 23:54:47 2011 +0000

    runtime stack size reductions

    This reduces the size of `caller` by 5 frames, which should make
    backtraces easier-to-read, raising exceptions less expensive, and
    reduce GC runtime.

commit 987b9496171b090e62de488ddc7b9a175c4c8d33
Author: Eric Wong
Date:   Fri Jun 10 23:44:10 2011 +0000

    test/benchmark/stack.ru: app for measuring stack depth

    Stack depth affects Ruby GC performance, so lowering it makes sense

commit 1c033dfd66c713afb05911e5e220adb7fc4ddc17
Author: Eric Wong
Date:   Thu Jun 9 13:36:20 2011 -0700

    unicorn 3.7.0 - minor feature update

    * miscellaneous documentation improvements
    * return 414 (instead of 400) for Request-URI Too Long
    * strip leading and trailing linear whitespace in header values

    User-visible improvements meant for Rainbows! users:

    * add :ipv6only "listen" option (same as nginx)

commit c3880bb0cc00821d1715a7dd94b0b76a03a7ace0
Author: Eric Wong
Date:   Tue Jun 7 13:54:18 2011 -0700

    configurator: add :ipv6only directive

    Enabling this flag for an IPv6 TCP listener allows users to
    specify IPv6-only listeners regardless of the OS default.

    This should be of interest to Rainbows! users.
commit 0dc56fd03ea478ae054e3d0398703f43e017723b
Author: Eric Wong
Date:   Tue Jun 7 09:56:30 2011 -0700

    build: ensure gem and tgz targets build manpages

    Original patch by Hongli Lai:

    > >From bfefc2cf0efb0913a42862886363b3140dcdbb2a Mon Sep 17 00:00:00 2001
    > From: Hongli Lai (Phusion)
    > Date: Mon, 6 Jun 2011 13:39:00 +0200
    > Subject: [PATCH] Ensure that 'make gem' builds the documentation too.
    >
    > If autogenerated documentation files, like man pages, don't exist
    > then 'make gem' will fail, complaining that some files are not
    > found.  By depending the 'gem' target on the 'doc' target we ensure
    > that 'make gem' always works.
    >
    > Signed-off-by: Hongli Lai (Phusion)

    ref: http://mid.gmane.org/4DED0EE2.7040400@phusion.nl

commit 6eefc641c84eaa86cb2be4a2b1983b15efcbfae1
Author: Eric Wong
Date:   Tue Jun 7 09:38:34 2011 -0700

    examples/nginx.conf: better wording for ipv6only comment

    Oops.

commit 32b340b88915ec945ebdbfa11b7da242860a6f44
Author: Eric Wong
Date:   Mon Jun 6 19:15:36 2011 -0700

    examples/nginx.conf: add ipv6only comment

    IPv4-mapped-IPv6 addresses are fugly.

commit f4b9c1cb92711a62ae047368d7694c5050d27f2c
Author: Eric Wong
Date:   Mon Jun 6 10:00:36 2011 -0700

    Documentation: remove --sanitize-html for pandoc

    pandoc 1.8 no longer has this.

commit 8e8781aa7002079ad066c11d271b98fc29f225dd
Author: Hongli Lai (Phusion)
Date:   Mon Jun 6 13:36:57 2011 +0200

    Document the method for building the Unicorn gem.

    Signed-off-by: Hongli Lai (Phusion)

commit 6e550cabdafd2cb0fcd1617f8815a732e79af670
Author: Eric Wong
Date:   Mon May 23 23:59:53 2011 +0000

    isolate_for_tests: use rake 0.8.7

    Rails 3.0.0 can't use Rake 0.9.0 it seems.
commit 3e8971f3998249c58c9958815e0f17a04256ef9f
Author: Eric Wong
Date:   Mon May 23 23:59:31 2011 +0000

    gemspec: use latest Isolate (3.1)

    It's required for RubyGems 1.8.x

commit 67e1fa9f9535ad009d538b8189bb3bdec0e5f79c
Author: Eric Wong
Date:   Mon May 23 21:53:19 2011 +0000

    http: call rb_str_modify before rb_str_resize

    Ruby 1.9.3dev (trunk) requires it if the string size is unchanged.

commit 1b31c40997ff8b932a457275e9a2f219de1d32c8
Author: Eric Wong
Date:   Mon May 23 21:04:56 2011 +0000

    strip trailing and leading linear whitespace in headers

    RFC 2616, section 4.2:

    > The field-content does not include any leading or trailing LWS:
    > linear white space occurring before the first non-whitespace
    > character of the field-value or after the last non-whitespace
    > character of the field-value.  Such leading or trailing LWS MAY be
    > removed without changing the semantics of the field value.  Any LWS
    > that occurs between field-content MAY be replaced with a single SP
    > before interpreting the field value or forwarding the message
    > downstream.

commit 947704e3f8e67b8262815838e87b331802c7ba67
Author: Eric Wong
Date:   Mon May 23 18:22:44 2011 +0000

    doc: add Links page to help folks find relevant info

    Older announcements on our mailing list could be harder to find.

commit 66be289901508d5a6ed092db81ec96815c42d21d
Author: Eric Wong
Date:   Mon May 23 18:21:50 2011 +0000

    GNUmakefile: locale-independent grep invocation

    Otherwise it could casefold and we don't want that.

commit c20077db941cc969fb3721c7527d37a99367f220
Author: Eric Wong
Date:   Sun May 8 02:39:42 2011 +0000

    doc: PHILOSOPHY: formatting fixes

    No need to list things inside preformatted text

commit 77a951c5da518dda471282635c98f3b572ca15db
Author: Eric Wong
Date:   Thu May 5 16:42:26 2011 -0700

    http_parser: add max_header_len accessor

    Rainbows! wants to be able to lower this eventually...
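The RFC 2616 LWS stripping described in the commit above amounts to trimming spaces and tabs from both ends of a header field-value; a hypothetical Ruby sketch (`strip_lws` is an illustrative name, not the parser's actual C implementation):

```ruby
# RFC 2616 linear whitespace (LWS) around a field-value is SP and HT;
# per section 4.2 it MAY be removed without changing semantics.
def strip_lws(field_value)
  field_value.gsub(/\A[ \t]+|[ \t]+\z/, '')
end

puts strip_lws("  gzip, deflate \t")  # => "gzip, deflate"
puts strip_lws("no-lws")              # => "no-lws" (unchanged)
```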
commit 733cb68e444a6f324bb1ffda3839da98ef010c74
Author: Eric Wong
Date:   Thu May 5 16:40:42 2011 -0700

    t0002-parser-error: fix race conditions

    "wait" needs to be done in the outside shell because the subshell
    could still be exiting when we grep.

commit 39ffd5590e4b5d2114215854deec848f849e9e87
Author: Eric Wong
Date:   Wed May 4 17:59:48 2011 -0700

    doc: remove redundant "of"

    typo

commit 1b0ee5826ef146a3e2647c40f3bc929d51d1b442
Author: Eric Wong
Date:   Wed May 4 17:04:51 2011 -0700

    http_parser: new add_parse method

    Combines the following sequence:

      http_parser.buf << socket.readpartial(0x4000)
      http_parser.parse

    Into:

      http_parser.add_parse(socket.readpartial(0x4000))

    It was too damn redundant otherwise...

commit f81aa02448b615c4d5fc4f6544c53289dae9d2ec
Author: Eric Wong
Date:   Wed May 4 16:41:36 2011 -0700

    return 414 for URI length violations

    There's an HTTP status code allocated for it in , so return that
    instead of 400.

commit 3a76dc40dda91a3804276fcc73260bb2a529c034
Author: Eric Wong
Date:   Sat Apr 30 11:09:32 2011 -0700

    Sandbox: update doc for latest Bundler versions

    Bundler 1.0.x is much improved :)

commit f848f632a81cf8ebc977592cbf9a45d84a69f306
Author: Eric Wong
Date:   Sat Apr 30 06:34:52 2011 +0000

    unicorn 3.6.2 - fix Unicorn::OobGC module

    The optional Unicorn::OobGC module is reimplemented to fix
    breakage that appeared in v3.3.1.  There are also minor
    documentation updates, but no code changes as of 3.6.1 for
    non-OobGC users.

    There is also a v1.1.7 release to fix the same OobGC breakage that
    appeared for 1.1.x users in the v1.1.6 release.
commit 1588c299703754e52b9f36219c21e13204734e6c
Merge: fe47a17 0874125
Author: Eric Wong
Date:   Sat Apr 30 06:33:53 2011 +0000

    Merge commit 'v1.1.7'

    * commit 'v1.1.7':
      unicorn 1.1.7 - major fixes to minor components
      oob_gc: reimplement to fix breakage and add tests
      exec_cgi: handle Status header in CGI response
      unicorn 1.1.6 - one minor, esoteric bugfix
      close client socket after closing response body

commit fe47a179468799bbbb893b339cbb0d4fedf29c2a
Author: Eric Wong
Date:   Fri Apr 29 23:31:35 2011 -0700

    TUNING: more minor doc updates

commit 0874125ce56d52cee0f634712e69d1387eadfae1
Author: Eric Wong
Date:   Sat Apr 30 04:56:28 2011 +0000

    unicorn 1.1.7 - major fixes to minor components

    No changes to the core code, so this release only affects users of
    the Unicorn::OobGC and Unicorn::ExecCGI modules.

    Unicorn::OobGC was totally broken by the fix in the v1.1.6 release
    and is now reimplemented.

    Unicorn::ExecCGI (which hardly anybody uses) now returns proper
    HTTP status codes.

commit fe0dd93cd9cb97b46f6cfb4b1e370e38717a93f0
Author: Eric Wong
Date:   Fri Apr 29 15:48:35 2011 -0700

    oob_gc: reimplement to fix breakage and add tests

    This was broken since v3.3.1[1] and v1.1.6[2] since nginx relies
    on a closed socket (and not Content-Length/Transfer-Encoding) to
    detect a response completion.  We have to close the client socket
    before invoking GC to ensure the client sees the response in a
    timely manner.

    [1] - commit b72a86f66c722d56a6d77ed1d2779ace6ad103ed
    [2] - commit b7a0074284d33352bb9e732c660b29162f34bf0e

    (cherry picked from commit faeb3223636c39ea8df4017dc9a9d39ac649b26d)

    Conflicts:
        examples/big_app_gc.rb
        lib/unicorn/oob_gc.rb

commit 02a116c0d94a60a64abf8ad2465132e8194dd62a
Author: Eric Wong
Date:   Fri Apr 29 16:01:35 2011 -0700

    TUNING: document worker_processes tuning

    It seems people are still confused about it...
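For readers unfamiliar with the out-of-band GC idea these oob_gc commits keep fixing, the core of it fits in a tiny Rack middleware: run GC every N requests, after the response is done. This is only a hypothetical sketch (`SimpleOobGC` is an illustrative name); the real Unicorn::OobGC additionally ensures the client socket is closed first, so nginx sees the completed response before GC pauses the worker:

```ruby
# Hypothetical out-of-band GC middleware sketch, NOT Unicorn::OobGC.
class SimpleOobGC
  def initialize(app, interval = 5)
    @app = app
    @interval = interval
    @nr = 0 # requests served since last GC
  end

  def call(env)
    status, headers, body = @app.call(env)
    if (@nr += 1) >= @interval
      @nr = 0
      # Unicorn::OobGC closes the client socket before this point so
      # the client is never stuck waiting behind a GC pause.
      GC.start
    end
    [status, headers, body]
  end
end
```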
commit faeb3223636c39ea8df4017dc9a9d39ac649b26d
Author: Eric Wong
Date:   Fri Apr 29 15:48:35 2011 -0700

    oob_gc: reimplement to fix breakage and add tests

    This was broken since v3.3.1[1] since nginx relies on a closed
    socket (and not Content-Length/Transfer-Encoding) to detect a
    response completion.  We have to close the client socket before
    invoking GC to ensure the client sees the response in a timely
    manner.

    [1] - commit b72a86f66c722d56a6d77ed1d2779ace6ad103ed

commit ce4995a4daf1e4da7034dc87fd218a283c405410
Author: Eric Wong
Date:   Fri Apr 29 15:30:07 2011 -0700

    TUNING: original sentence was incomplete

commit 843d30120139dc372aca6c1773ac7699b6ee6345
Author: Eric Wong
Date:   Fri Apr 29 12:21:38 2011 -0700

    examples/big_app_gc: fix comment

    Oops, comments should match the latest code

commit d4f70c45029ab1c6aba4bc2d69283ae43e46d9ff
Author: Eric Wong
Date:   Fri Apr 29 12:18:59 2011 -0700

    examples/big_app_gc: update this example

    OobGC is actually broken with nginx these days since we needed to
    preserve the env for body.close...

commit eaf72275e36560e567efc9597d929e02dc2f577d
Author: Eric Wong
Date:   Wed Apr 27 13:49:14 2011 -0700

    configurator: attempt to clarify :tcp_nopush/:tcp_nodelay

    These options will probably be more important as interest in
    streaming responses in Rails 3.1 develops.  I consider the
    respective defaults for Unicorn (designed to run behind nginx) and
    Rainbows! (designed to run standalone) to be the best choices in
    their respective environments.

commit 37c491dcc23d445521229dbe902f02833f2a0f4c
Author: Eric Wong
Date:   Wed Apr 27 13:13:24 2011 -0700

    examples/nginx.conf: clarify proxy_buffering for Rails 3.1

    I've tested with nginx 1.0.0 and confirmed "proxy_buffering off;"
    can cause Unicorn to block on a slow client reading a large
    response.  While there's a potential (client-visible) performance
    improvement with Rails 3.1 streaming responses, it can also hurt
    the server with slow clients.  Rainbows! with (ThreadSpawn or
    ThreadPool) is probably the best way to do streaming responses
    efficiently from all angles (from a server, client and programmer
    time perspective).

commit 1b3befbadb99c83c24109f68b719276f0051c7fb
Author: Eric Wong
Date:   Tue Apr 26 16:04:19 2011 -0700

    unicorn 3.6.1 - fix OpenSSL PRNG workaround

    Our attempt in 3.6.0 to workaround a problem with the OpenSSL PRNG
    actually made the problem worse.  This release corrects the
    workaround to properly reseed the OpenSSL PRNG after forking.

commit 34f7dbd1b7e087bc8c86029496fd8daa7dc58441
Author: Eric Wong
Date:   Tue Apr 26 16:01:31 2011 -0700

    properly reseed OpenSSL::Random after forking

    Using the return value of Kernel#srand actually made the problem
    worse.  Using the value of Kernel#rand is required to actually get
    a random value to seed the OpenSSL PRNG.

    Thanks to ghazel for the bug report!

commit 2aabf90ca53b31edef6c2b63006c33374840c816
Author: Eric Wong
Date:   Thu Apr 21 06:16:27 2011 +0000

    unicorn 3.6.0 - small fixes, PRNG workarounds

    Mainly small fixes, improvements, and workarounds for fork()
    issues with pseudo-random number generators shipped with Ruby:
    Kernel#rand and OpenSSL::Random (used by SecureRandom and also by
    Rails).

    The PRNG issues are documented in depth here (and links to Ruby
    Redmine):

      http://bogomips.org/unicorn.git/commit?id=1107ede7
      http://bogomips.org/unicorn.git/commit?id=b3241621

    If you're too lazy to upgrade, you can just do this in your
    after_fork hooks:

      after_fork do |server,worker|
        tmp = srand
        OpenSSL::Random.seed(tmp.to_s) if defined?(OpenSSL::Random)
      end

    There are also small log reopening (SIGUSR1) improvements:

    * relative paths may also be reopened; there's a small chance this
      will break with a handful of setups, but unlikely.  This should
      make configuration easier especially since the
      "working_directory" configurator directive exists.
      Brought up by Matthew Kocher:
      http://thread.gmane.org/gmane.comp.lang.ruby.unicorn.general/900

    * workers will just die (and restart) if log reopening fails for
      any reason (including user error).  This is to workaround the
      issue reported by Emmanuel Gomez:
      http://thread.gmane.org/gmane.comp.lang.ruby.unicorn.general/906

commit 4f7f3bbb973c8f2bb4b189592158a0682ea2a625
Author: Eric Wong
Date:   Thu Apr 21 06:23:21 2011 +0000

    http_server: fix Rainbows! compatibility

    Older Rainbows! redefines the ready_pipe= accessor method to call
    internal after_fork hooks.

commit c6c9cae960bd8cbfa2feb801ca7079f6626b436b
Author: Eric Wong
Date:   Wed Apr 20 16:02:51 2011 +0000

    KNOWN_ISSUES: document PRNG changes in 3.6.0

commit 6411add3f1a5aae5f2e0dcd73cd842500d21e9fd
Author: Eric Wong
Date:   Mon Apr 18 15:53:08 2011 -0700

    documentation cleanup/reduction

    Don't clutter up our RDoc/website with things that users of
    Unicorn don't need to see.  This should make user-relevant
    documentation easier to find, especially since Unicorn is NOT
    intended to be an API.

commit 1107ede716461049033d6a5b311e14c742c9363a
Author: Eric Wong
Date:   Mon Apr 18 15:34:29 2011 -0700

    reseed OpenSSL PRNG upon fork() of workers

    OpenSSL seeds its PRNG with the process ID, so if a process ID is
    recycled, there's a chance of independent workers getting repeated
    PRNG sequences over a long time period if the same PID is used.

    This only affects deployments that meet both of the following
    conditions:

      1) OpenSSL::Random.random_bytes is called before forking
      2) worker (but not master) processes die unexpectedly

    The SecureRandom module in Ruby (and Rails) uses the OpenSSL PRNG
    if available.  SecureRandom is used by Rails and called when the
    application is loaded, so most Rails apps with frequently dying
    worker processes are affected.

    Of course dying worker processes are bad and entirely the fault of
    bad application/library code, not the fault of Unicorn.

    Thanks to Alexander Dymo for reporting this.

    ref: http://redmine.ruby-lang.org/issues/4579

commit b32416211ef30e958ec38c8c99833161cd476dd4
Author: Eric Wong
Date:   Mon Apr 18 22:21:58 2011 +0000

    reinitialize PRNG for latest Ruby 1.8.7 releases

    The current versions of Ruby 1.8 do not reseed the PRNG after
    forking, so we'll work around that by calling Kernel#srand.

    ref: http://redmine.ruby-lang.org/issues/show/4338

commit 3c8f21a4257578e9cdc4781dd21a6a572e25ca54
Author: Eric Wong
Date:   Wed Apr 13 08:05:51 2011 +0000

    fix some 1.9.3dev warnings

commit 1355d262288352c2ced67cefc2301cee79bec0dd
Author: Eric Wong
Date:   Wed Apr 13 07:55:11 2011 +0000

    configurator: fix broken local variable

    Oops, changing a method definition for RDoc means code needs to be
    updated, too :x

commit 30ece1c7cc66b2fc816b1361e498ca0d4a554a78
Author: Eric Wong
Date:   Wed Apr 13 07:43:05 2011 +0000

    GNUmakefile: s/Config/RbConfig/

    "Config" is deprecated and warns under 1.9.3dev

commit cabbc6ce06487619431af102378aefa08d55f9f1
Author: Eric Wong
Date:   Wed Apr 13 07:34:31 2011 +0000

    http_server: workers die on log reopen failures

    They should then recover and inherit writable descriptors from
    the master when it respawns.

commit c1322a721d9039f54da97cf50de49f2affbfff37
Author: Eric Wong
Date:   Wed Apr 13 05:41:07 2011 +0000

    http_parser: remove RDoc

    It's not needed for users, so avoid confusing them.  Unicorn
    itself is not intended to be an API, it just hosts Rack
    applications.

commit 8c359f50ce8b20dc3d72fe655db9d93c4a8ee7d5
Author: Eric Wong
Date:   Wed Apr 13 01:43:31 2011 +0000

    configurator: miscellaneous RDoc improvements

    Mainly formatting and such, but some wording changes.

commit 2d1a4fbe37ebb0f229edbaefd392bdd8b6865590
Author: Eric Wong
Date:   Wed Apr 13 01:11:29 2011 +0000

    worker: improve RDoc, point users to Configurator#user

commit 46cc05089ea34b823454f790092f386f22d3adb1
Author: Eric Wong
Date:   Wed Apr 13 01:04:19 2011 +0000

    configurator: remove outdated user example in after_fork

    Configurator itself supports user at the top-level.
commit c4d3cd7d7b32ed133e25e3740c8e7a3493592eec
Author: Emmanuel Gomez
Date:   Tue Apr 12 15:36:36 2011 -0700

    Document "user" directive in example unicorn conf

commit 6647dcb3afa4c0b16c5fef5bfdf88292e6adf6ca
Author: Eric Wong
Date:   Fri Apr 1 16:09:03 2011 -0700

    util: allow relative paths to be rotated

    Users keep both pieces if it's broken :)

commit ebcc5b45adfb1d04af98356d867e9221ecdc9b70
Author: Eric Wong
Date:   Fri Apr 1 15:48:30 2011 -0700

    bump dependencies for testing

    No need to use an ancient Rack now that we've dropped Rails 2.3.x
    tests.  We need to remember that Rack 1.1.0 doesn't support
    input#size.

commit e5bf7b7207d69daf1c3537797aeeab2642f19514
Author: Eric Wong
Date:   Fri Apr 1 15:44:22 2011 -0700

    drop Rails 2.3.x tests

    They were transitionary releases and the logic to deal with them
    and Rack versioning was too much overhead.

commit c1ebb313735a280582d87c1ba44619aa47e00b06
Author: Eric Wong
Date:   Tue Mar 29 09:47:26 2011 -0700

    add examples/logrotate.conf

    logrotate is the de facto tool for logrotation, so an example
    config highlighting important parts is in order.

    Since our USR1 signal handling is part of the crusade against the
    slow and lossy "copytruncate" option, be sure to emphasize that :)

commit ede28dc59562c862ff4641ed42a0ef357880d0f5
Author: Eric Wong
Date:   Sun Mar 27 20:35:16 2011 -0700

    tmpio: do not redefine size method under 1.9.2+

    File#size is available in 1.9.2

commit 9de69c47e0a261bc88ca40e03562b7324baaf0cf
Author: Eric Wong
Date:   Tue Mar 22 17:57:03 2011 -0700

    DESIGN: fix redundant wording

    "P" in HTTP is already "protocol"

commit 5da78214be9518879ee96345d8184913853fe890
Author: Eric Wong
Date:   Tue Mar 22 17:48:30 2011 -0700

    README: s/Gemcutter/RubyGems.org/

    Gemcutter is the old name

commit d1c9aa300c0cbda272f197b734b3e895959ae3e3
Author: Eric Wong
Date:   Tue Mar 15 12:19:30 2011 +0000

    unicorn 3.5.0 - very minor improvements

    A small set of small changes, but it's been more than a month
    since our last release.

    There are minor memory usage and efficiency improvements (for
    graceful shutdowns).  MRI 1.8.7 users on *BSD should be sure
    they're using the latest patchlevel (or upgrade to 1.9.x) because
    we no longer workaround their broken stdio (that's MRI's job :)

commit e6b6782030d8593006b4b7cace866cf42dd38d51
Author: Eric Wong
Date:   Tue Mar 8 06:59:53 2011 +0000

    gemspec: update kgio dependency to 2.3.2

    People reinstalling would've pulled it in anyways, but 2.3.2 is
    the latest and has no known issues.

commit 1594937132a5d9b7f1dc24cc47e3a27679ac9950
Author: Eric Wong
Date:   Tue Mar 8 06:59:08 2011 +0000

    gemspec: no need for require_paths

commit cc7e65a1aa1bacc9658a687140011e999be6e3e7
Author: Eric Wong
Date:   Fri Feb 25 17:54:24 2011 +0000

    tee_input: remove old *BSD stdio workaround

    Ruby 1.8.* users should get the latest Ruby 1.8.7 anyways since
    they contain critical bugfixes.  We don't keep workarounds forever
    since the root problem is fixed/worked-around in upstream and
    people have had more than a year to upgrade Ruby.

commit 2b6dd7653211d3d6b4cb6a46eec11bbde8cab789
Author: Eric Wong
Date:   Fri Feb 18 17:02:08 2011 -0800

    clear listeners array on SIGQUIT

    We don't want to repeatedly reclose the same IOs and keep raising
    exceptions this way.

commit d3ebd339990b0586a5993232302235c26cdb33d9
Author: Eric Wong
Date:   Wed Feb 16 10:33:20 2011 -0800

    README: clarify the versions of "Ruby license"

    Ruby 1.9.3dev is now using the 2-clause BSD License, not the
    GPLv2.  Do not mislead people into thinking we will switch to any
    BSD License, we won't.

commit 4cfb64f10784498b9625bbbd3364231710bc7c36
Author: Eric Wong
Date:   Thu Feb 10 13:41:32 2011 -0800

    Revert "test_helper: simplify random port binding"

    This causes conflicts with ports clients may use in the ephemeral
    range since those do not hold FS locks.

    This reverts commit e597e594ad88dc02d70f7d3521d0d3bdc23739bb.

    Conflicts:
        test/test_helper.rb

commit 6dd90cb902f43b32b0db204484d5e3df79ec0d0c
Author: Eric Wong
Date:   Thu Feb 10 13:34:58 2011 -0800

    remove unnecessary &block usage

    They needlessly allocate Proc objects

commit 1fd1234ca5ba3d84d2182c38b37322bd55f08882
Author: Eric Wong
Date:   Mon Feb 7 16:09:53 2011 -0800

    test_helper: avoid FD leakage/waste

    No need to unnecessarily leave file descriptors open.

commit 6ffc294aac4735127ac9455266623aaa3603e9c1
Author: Eric Wong
Date:   Fri Feb 4 13:06:30 2011 -0800

    unicorn 3.4.0 - for people with very big LANs

    * IPv6 support in the HTTP hostname parser and configuration
      language.  Configurator syntax for "listen" addresses should be
      the same as nginx.  Even though we support IPv6, we will never
      support non-LAN/localhost clients connecting to Unicorn.

    * TCP_NOPUSH/TCP_CORK is enabled by default to optimize for
      bandwidth usage and avoid unnecessary wakeups in nginx.

    * Updated KNOWN_ISSUES document for bugs in recent Ruby 1.8.7
      (RNG needs reset after fork) and nginx+sendfile()+FreeBSD 8.

    * examples/nginx.conf updated for modern stable versions of nginx.

    * "Status" in headers no longer ignored in the response,
      Rack::Lint already enforces this so we don't duplicate the work.

    * All tests pass under Ruby 1.9.3dev

    * various bugfixes in the (mostly unused) ExecCGI class that
      powers http://bogomips.org/unicorn.git

commit 3df8a197320b8a9e8a6413dcd04613db0558d90a
Author: Eric Wong
Date:   Fri Feb 4 13:04:39 2011 -0800

    bump dependency on kgio

    This is needed for IPv6 support, and 2.2.0 is nicer all around for
    Rainbows! users.  Updates wrongdoc while we're at it, too.

commit 1045faa0f9e94b13ee0281b7968b72d6f50dd5bf
Author: Eric Wong
Date:   Thu Feb 3 13:53:18 2011 -0800

    test/unit: fix tests under Ruby 1.9.3dev

    Ugh, one day I'll clean them up, one day...
commit 9e7a8114fb0fcc56b475d17f158eaa5b7f1f7bdd
Author: Eric Wong
Date:   Wed Feb 2 17:37:22 2011 -0800

    Fix Ruby 1.9.3dev warnings

      for i in `git ls-files '*.rb'`; do ruby -w -c $i; done

commit e597e594ad88dc02d70f7d3521d0d3bdc23739bb
Author: Eric Wong
Date:   Wed Feb 2 16:54:07 2011 -0800

    test_helper: simplify random port binding

    Duh...

commit 314680327b95c0dc5e11be45a6343ca2a18ee447
Author: Eric Wong
Date:   Wed Feb 2 16:27:30 2011 -0800

    socket_helper: cleanup leftover debugging statement

    Oops!  Ugh, not my day...

commit e0160a18ef5c4592d1ac5ff24ba8ae0fd703057c
Author: Eric Wong
Date:   Wed Feb 2 15:34:33 2011 -0800

    socket_helper: export tcp_name as a module_function

    Oops!

commit 87fd86ef22b6b80fa75dd8e50f53a4e62e8339f7
Author: Eric Wong
Date:   Wed Feb 2 15:22:02 2011 -0800

    allow binding on IPv6 sockets with listen "[#{addr}]:#{port}"

    This is much like how nginx does it, except we always require a
    port when explicitly binding to IPv6 using the "listen" directive.

    This also adds support to listen with an address-only, which can
    be useful to Rainbows! users.

commit d140e7b1ff44b06bc54c2b790d06e9c7325503fe
Author: Eric Wong
Date:   Wed Feb 2 14:45:57 2011 -0800

    http: parser handles IPv6 bracketed IP hostnames

    Just in case we have people that don't use DNS, we can support
    folks who enter ugly IPv6 addresses...

    IPv6 uses brackets around the address to avoid confusing the
    colons used in the address with the colon used to denote the TCP
    port number in URIs.

commit 24f8ef5f385e38954a5582fb2e8cd9d12fbf7d20
Author: Eric Wong
Date:   Mon Jan 31 16:14:46 2011 -0800

    force socket options to defaults if unspecified

    This reduces surprise when people (correctly) believe removing an
    option from the config file will return things back to our
    internal defaults.
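The bracketed-IPv6 `listen` syntax described above wraps the address in `[]` so the colons inside an IPv6 address are not confused with the colon that separates host from port. A hypothetical parser sketch (`split_listen` is an illustrative helper, not Unicorn's real implementation):

```ruby
# Split an nginx/unicorn-style listen address into [host, port].
# Hypothetical helper for illustration only.
def split_listen(addr)
  case addr
  when /\A\[([0-9a-fA-F:]+)\]:(\d+)\z/   # "[::1]:8080" (bracketed IPv6)
    [$1, $2.to_i]
  when /\A([^:]+):(\d+)\z/               # "127.0.0.1:8080" or "host:8080"
    [$1, $2.to_i]
  else
    raise ArgumentError, "invalid listen address: #{addr.inspect}"
  end
end

p split_listen("[::1]:8080")      # ["::1", 8080]
p split_listen("127.0.0.1:8080")  # ["127.0.0.1", 8080]
```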
commit c28e2610cfc70e89a0ffabe18356d148afe98bfc
Author: Eric Wong
Date:   Mon Jan 31 15:51:30 2011 -0800

    enable TCP_NOPUSH/TCP_CORK by default

    It's actually harmless since Unicorn only supports "fast"
    applications that do not trickle, and we don't do keepalive so
    we'll always flush-on-close.  This should reduce wakeups on the
    nginx proxy server if nginx is over TCP.

    Mongrel 1.x had TCP_CORK enabled by default, too.

commit e3420e0ae1f3c38f125010134d2cdeb22c6fa64e
Author: Eric Wong
Date:   Mon Jan 31 15:50:37 2011 -0800

    test_upload: check size in server

    The client may not get a proper response with TCP_CORK enabled

commit f4caf6b6bdea902abaadd3c04b2af94f056c4ff1
Author: Eric Wong
Date:   Fri Jan 28 18:11:26 2011 +0000

    KNOWN_ISSUES: document broken RNG+fork in newer Ruby 1.8

    Reported by: ghazel@gmail.com
    ref:

commit 09afcf2ce9fc89d77b6b282bbf00a78c73741a4b
Author: Eric Wong
Date:   Tue Jan 25 13:58:29 2011 -0800

    examples/nginx.conf: use try_files directive

    This feature is in nginx 0.7.x and 0.8.x and optimized better than
    the "if" directive in nginx.conf

    ref: http://wiki.nginx.org/Pitfalls
    ref: http://wiki.nginx.org/IfIsEvil

commit 1ca83b055375ab7e72d383ffd0f36f70c07d9e92
Author: Eric Wong
Date:   Tue Jan 25 13:56:39 2011 -0800

    examples/nginx: avoid unnecessary listen directive

    There's no need to use listen unless you use a non-default port or
    can enable "deferred" or "httpready" (which you usually want).

commit fb1f33aecc7102fb5c10e27c65b9b27cf249415f
Author: Eric Wong
Date:   Tue Jan 25 13:42:53 2011 -0800

    KNOWN_ISSUES: split old stuff into its own section

    Ruby 1.9.1, Sinatra 0.3.x, and Rails 2.3.2 are not in common use
    anymore (at least we don't think).

commit 8ac0ae45a04f5f121f323c182403ef6eb0d8aa18
Author: Eric Wong
Date:   Tue Jan 25 13:30:21 2011 -0800

    KNOWN_ISSUES: FreeBSD 8 and sendfile can be buggy

    Reported by Alexey Bondar.
commit d770d09dfd9e5d7148379c58cdf9a020cbdc63b6
Author: Eric Wong
Date:   Fri Jan 21 12:28:39 2011 -0800

    git.bogomips.org => bogomips.org

    bogomips.org is slimming down and losing URL weight :)

commit d385bc4f3ed7b783b7414f5d34299bd2bf242fe6
Author: Eric Wong
Date:   Fri Jan 21 04:01:01 2011 +0000

    exec_cgi: handle Status header in CGI response

    We no longer blindly return 200 if the CGI returned another error
    code.  We also don't want two Status headers in our output since
    we no longer filter it out.

    (cherry picked from commit 6cca8e61c66c1c2a8ebe260813fa83e44530a768)

commit 6cca8e61c66c1c2a8ebe260813fa83e44530a768
Author: Eric Wong
Date:   Fri Jan 21 04:01:01 2011 +0000

    exec_cgi: handle Status header in CGI response

    We no longer blindly return 200 if the CGI returned another error
    code.  We also don't want two Status headers in our output since
    we no longer filter it out.

commit c4d77de381c40cf315e6f84791e3fb634bc10675
Author: Eric Wong
Date:   Fri Jan 21 04:01:02 2011 +0000

    exec_cgi: make output compatible with IO.copy_stream

    Rainbows! can then use this to bypass luserspace given the correct
    offset is set beforehand and the file is unlinked.

commit 4150a398a48b9bca96aa623380161229ac0f8622
Author: Eric Wong
Date:   Wed Jan 19 19:10:25 2011 -0800

    configurator: undocument trust_x_forwarded_for

    This may not be supported in the future...

commit ec400a537a0947796e108f3593721289661b49dc
Author: Eric Wong
Date:   Fri Jan 7 10:14:46 2011 -0800

    http_response: do not skip Status header set by app

    Rack::Lint already stops apps from using it.  If a developer
    insists on it, then users who inspect their HTTP headers can point
    and laugh at them for not using Rack::Lint!

commit 5ebd22a9d28fc96c69c09b695d99c1f173ce5a67
Author: Eric Wong
Date:   Thu Jan 6 15:46:56 2011 -0800

    unicorn 3.3.1 - one minor, esoteric bugfix

    We now close the client socket after closing the response body.
    This does not affect most applications that run under Unicorn; in
    fact, it may not affect any.

    There is also a new v1.1.6 release for users who do not use kgio.

commit 3587edb6e88ebe5c24cdde090ba8dd98de493d63
Author: Eric Wong
Date:   Thu Jan 6 15:40:54 2011 -0800

    unicorn 1.1.6 - one minor, esoteric bugfix

    We now close the client socket after closing the response body.
    This does not affect most applications that run under Unicorn; in
    fact, it may not affect any.

commit b7a0074284d33352bb9e732c660b29162f34bf0e
Author: Eric Wong
Date:   Wed Jan 5 23:05:05 2011 -0800

    close client socket after closing response body

    Response bodies may capture the block passed to each and save it
    for body.close, so don't close the socket before we have a chance
    to call body.close

    (cherry picked from commit b72a86f66c722d56a6d77ed1d2779ace6ad103ed)

    Conflicts:
        lib/unicorn/http_server.rb
        test/unit/test_response.rb

commit b72a86f66c722d56a6d77ed1d2779ace6ad103ed
Author: Eric Wong
Date:   Wed Jan 5 22:39:03 2011 -0800

    close client socket after closing response body

    Response bodies may capture the block passed to each and save it
    for body.close, so don't close the socket before we have a chance
    to call body.close

commit 1b69686fd28347eb5c071a9b76e2939bca424f04
Author: Eric Wong
Date:   Wed Jan 5 15:26:17 2011 -0800

    unicorn 3.3.0 - minor optimizations

    Certain applications that already serve hundreds/thousands of
    requests a second should experience performance improvements due
    to Time.now.httpdate usage being removed and reimplemented in C.

    There are also minor internal changes and cleanups for Rainbows!

commit 62c844e343978f233e4f2567fb344411c39e263c
Author: Eric Wong
Date:   Wed Jan 5 14:06:00 2011 -0800

    http_parser: add clear method, deprecate reset

    This allows small optimizations to be made to avoid
    constant/instance variable lookups later :)

commit bd397ee11b60243ef15c5558c4309e46e27e6192
Author: Eric Wong
Date:   Wed Jan 5 11:41:36 2011 -0800

    http_response: simplify the status == 100 comparison

    No need to preserve the response tuplet if we're just going to
    unpack it eventually.
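The "close client socket after closing response body" fix above matters because a Rack body may capture the block passed to `each` and still need it inside `close`; closing the socket first would break such bodies. A minimal illustration with hypothetical classes (not Unicorn internals), using an array to stand in for the socket:

```ruby
# A body that saves the writer block handed to it and still uses it
# at close time -- so the "socket" must remain usable until close.
class TrailerBody
  def initialize(&writer)
    @writer = writer # captured block, kept past #each
  end

  def each
    yield "payload"
  end

  def close
    @writer.call("trailer") # still writes during close
  end
end

out = [] # stands in for the client socket
body = TrailerBody.new { |chunk| out << chunk }
body.each { |chunk| out << chunk }
body.close          # must run while the "socket" is still open
p out               # ["payload", "trailer"]
```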
commit 062227e00f7ec589c3906a8bcd22dd7194268266
Author: Eric Wong
Date:   Wed Jan 5 11:32:44 2011 -0800

    http_server: remove unnecessary 'nil'

commit 3f5abce2b1c071f9aed4cdd0951331d7f037c4b1
Author: Eric Wong
Date:   Wed Jan 5 11:16:21 2011 -0800

    socket_helper: expose more defaults in DEFAULTS hash

    This will allow Rainbows! to set :tcp_nodelay=>true and possibly
    other things in the future.

commit d100025759450dd1cbeccd1a3e44c46921bba26b
Author: Eric Wong
Date:   Tue Jan 4 17:50:51 2011 -0800

    http_response: implement httpdate in C

    This can return a static string and be significantly faster as it
    reduces object allocations and Ruby method calls for the fastest
    websites that serve thousands of requests a second.

    It assumes the Ruby runtime is single-threaded, but that is the
    case of Ruby 1.8 and 1.9 and also what Unicorn is all about.  This
    change is safe for Rainbows! under 1.8 and 1.9.

commit 6183611108c571dbed29dfe2854b9f06757fd27f
Author: Eric Wong
Date:   Thu Dec 30 02:32:41 2010 +0000

    http_response: do not account for $, being set

    It's a minor garbage reduction, but nobody uses "$,", and if they
    did, they'd break things in the Ruby standard library as well as
    Rack, so let anybody who uses "$," shoot themselves in the foot.

commit 3a2634f3f68f6b8ea1aa7b2bb5944884bbfa8017
Author: Eric Wong
Date:   Thu Dec 30 02:30:19 2010 +0000

    tests: test parser works with keepalive_requests=0

    We use this in Rainbows! to disable keepalive in certain
    configurations.

commit 2c57f59172c45a3ca52dbddfb3f12c1bc70cbfd6
Author: Eric Wong
Date:   Wed Dec 29 16:45:13 2010 +0000

    http: remove unnecessary dir_config statement

    We do not link against any external libraries

commit 2eb2c74aeb0da1d3f6f575ff8e05715e8c5ed85e
Author: Eric Wong
Date:   Sun Dec 26 08:10:35 2010 +0000

    Rakefile: fix fm_update task

    Oops!

commit 6f7a3958c1544c1034ecf8b1ccfdd9dabd171fd2
Author: Eric Wong
Date:   Sun Dec 26 08:03:23 2010 +0000

    unicorn 3.2.1 - parser improvements for Rainbows!

    There are numerous improvements in the HTTP parser for Rainbows!,
    none of which affect Unicorn-only users.

    The kgio dependency is incremented to 2.1: this should avoid
    ENOSYS errors for folks building binaries on newer Linux kernels
    and then deploying to older ones.

    There are also minor documentation improvements; the website is
    now JavaScript-free!

    (Ignore the 3.2.0 release, I fat-fingered some packaging things)

commit dece59f577d04f3735ccbeb190d26ce2c371d5f9
Author: Eric Wong
Date:   Sun Dec 26 07:58:38 2010 +0000

    gemspec: fix gemspec build

    Oops

commit 03a43d9dc23c21f1c1a1baa2f29eab1157f4a076
Author: Eric Wong
Date:   Sun Dec 26 07:44:54 2010 +0000

    unicorn 3.2.0 - parser improvements for Rainbows!

    There are numerous improvements in the HTTP parser for Rainbows!,
    none of which affect Unicorn-only users.

    The kgio dependency is incremented to 2.1: this should avoid
    ENOSYS errors for folks building binaries on newer Linux kernels
    and then deploying to older ones.

    There are also minor documentation improvements; the website is
    now JavaScript-free!

commit 51f30bf454e82f33443fe4a7f2e0496103c5ec6f
Author: Eric Wong
Date:   Sun Dec 26 07:29:38 2010 +0000

    http_server: remove needless lambda

    We can just use a begin block at startup, this also makes life
    easier on RDoc.

commit 45f0220ab13ec67150b3226a83437356f141eefd
Author: Eric Wong
Date:   Sun Dec 26 07:21:34 2010 +0000

    http_response: remove TODO item

    An unconfigured Rainbows! (e.g. Rainbows! { use :Base }) already
    does keepalive and supports only a single client per-process.

commit 87b1cf4eef3d717d345d730f28ddaad319f2fb2f
Author: Eric Wong
Date:   Sun Dec 26 06:23:28 2010 +0000

    http: #keepalive? and #headers? work after #next?

    We need to preserve our internal flags and only clear them on
    HttpParser#parse.  This allows the async concurrency models in
    Rainbows! to work properly.
commit c348223a045abb295b8c9d7dbf189264bc3a17c3 Author: Eric Wong Date: Sun Dec 26 03:38:13 2010 +0000 bump kgio dependency to ~> 2.1 The kgio 2.x series will maintain API compatibility until 3.x, so it's safe to use any 2.x release. commit f970d87f9c0a4479a59685920a96c4d2fb2315e1 Author: Eric Wong Date: Sat Dec 25 19:30:12 2010 +0000 http: fix typo in xftrust unit test Oops commit f62ef19a4aa3d3e4ce1aa37a499907ff776a8964 Author: Eric Wong Date: Fri Dec 24 08:37:22 2010 +0000 doc: use wrongdoc for documentation wrongdoc factors out a bunch of common code from this project into its own and removes JavaScript from RDoc to boot. commit 210e5cc3109af248d29f1d722076ff8ecd1fde2d Author: Eric Wong Date: Thu Dec 23 18:10:00 2010 +0000 TODO: remove item for TeeInput performance Disabling TeeInput is possible now, so the filesystem is no longer a bottleneck :> commit 5ffaf7df44425766a60d632881a2debd83605b52 Author: Eric Wong Date: Tue Dec 21 04:45:30 2010 +0000 rdoc: include tag subject in NEWS file It's more useful this way commit 3a67490b10ca38d7d3d30c6917d75ce0e093706b Author: Eric Wong Date: Tue Dec 21 01:58:32 2010 +0000 rdoc: enable webcvs feature for cgit links Hopefully this gets more people reading our source. commit ee29a14cb383839cf5dcef6fe442558f46a1615b Author: Eric Wong Date: Tue Dec 21 01:30:35 2010 +0000 configurator: RDoc cleanups and improvements This is the most important part of Unicorn documentation for end users. commit 1f5bac15cd8e4393c6da98eb7bb4532133dc6259 Author: Eric Wong Date: Tue Dec 21 01:28:23 2010 +0000 http: hook up "trust_x_forwarded" to configurator More config bloat, sadly this is necessary for Rainbows! 
:< commit bf64b9aa855cf3590a4d5b4eca853aef33ba90cc Author: Eric Wong Date: Mon Dec 20 22:05:50 2010 +0000 http: allow ignoring X-Forwarded-* for url_scheme Evil clients may be exposed to the Unicorn parser via Rainbows!, so we'll allow people to turn off blindly trusting certain X-Forwarded* headers for "rack.url_scheme" and rely on middleware to handle it. commit 8be3668c11cf721960581e325b481c105e8f3c89 Author: Eric Wong Date: Mon Dec 20 20:49:21 2010 +0000 http: refactor finalize_header function rack.url_scheme handling and SERVER_{NAME,PORT} handling each deserve their own functions. commit b740269f121167c4f93e3a0e155e05422f6e80ff Author: Eric Wong Date: Mon Dec 20 19:40:57 2010 +0000 http: update setting of "https" for rack.url_scheme The first value of X-Forwarded-Proto in rack.url_scheme should be used as it can be chained. This header can be set multiple times via different proxies in the chain, but consider the first one to be valid. Additionally, respect X-Forwarded-SSL as it may be passed with the "on" flag instead of X-Forwarded-Proto. ref: rack commit 85ca454e6143a3081d90e4546ccad602a4c3ad2e and 35bb5ba6746b5d346de9202c004cc926039650c7 commit 7ad59e0c48e12febae2a2fe86b76116c05977c6f Author: Eric Wong Date: Mon Dec 20 00:14:52 2010 +0000 http: support keepalive_requests directive This limits the number of keepalive requests of a single connection to prevent a single client from monopolizing server resources. On multi-process servers (e.g. Rainbows!) with many keepalive clients per worker process, this can force a client to reconnect and increase its chances of being accepted on a less-busy worker process. This directive is named after the nginx directive which is identical in function. commit 82ea9b442a9edaae6dc3b06a5c61035b2c2924c9 Author: Eric Wong Date: Sun Dec 19 18:47:23 2010 +0000 http: delay clearing env on HttpParser#next? This allows apps/middlewares on Rainbows! that rely on env in the response_body#close to hold onto the env. 
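The X-Forwarded handling described in the commits above (honor X-Forwarded-SSL "on", and trust only the first value of a possibly proxy-chained X-Forwarded-Proto) can be condensed into a small sketch. The helper below is illustrative only, not unicorn's actual C parser logic.

```ruby
# Hedged sketch of the rack.url_scheme rules described above:
# X-Forwarded-SSL may carry "on", and X-Forwarded-Proto may be a
# comma-chained list set by multiple proxies, of which only the
# *first* value is considered valid.
def url_scheme(env)
  return 'https' if env['HTTP_X_FORWARDED_SSL'] == 'on'
  proto = env['HTTP_X_FORWARDED_PROTO'].to_s.split(',').first
  proto && proto.strip == 'https' ? 'https' : 'http'
end
```

This is also the behavior the "trust_x_forwarded" directive exists to disable: when the parser sits behind untrusted clients (as with Rainbows!), middleware can take over this decision instead.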
commit 39f264173717287eda70910e7a24fbafd21a4a7e Author: Eric Wong Date: Fri Dec 10 05:45:14 2010 +0800 unicorn 3.1.0 - client_buffer_body_size tuning This release enables tuning the client_buffer_body_size to raise or lower the threshold for buffering request bodies to disk. This only applies to users who have not disabled rewindable input. There is also a TeeInput bugfix for uncommon usage patterns and Configurator examples in the FAQ should be fixed commit 71716672752e573ff15002aaefd6e8ba8c6b6cb6 Author: Eric Wong Date: Thu Dec 9 03:39:03 2010 +0000 allow client_buffer_body_size to be tuned Since modern machines have more memory these days and clients are sending more data, avoiding potentially slow filesystem operations for larger uploads can be useful for some applications. commit 9d80b009a3cb795530ad23263f4eb525880e79dc Author: Eric Wong Date: Wed Dec 8 23:53:25 2010 +0000 configurator: ensure examples in FAQ still work This has been broken since 2.0.x Internal cleanups sometimes have unintended consequences :< commit 3b2fc62dadd3c90038c168849b33c4ca6df058da Author: Eric Wong Date: Wed Dec 8 22:02:45 2010 +0000 tee_input: fix accounting error on corked requests In case a request sends the header and buffer as one packet, TeeInput relying on accounting info from StreamInput is harmful as StreamInput will buffer in memory outside of TeeInput's control. This bug is triggered by calling env["rack.input"].size or env["rack.input"].rewind before a read. commit 52f55529293e466a77090691d1fe06a7933c74a1 Author: Eric Wong Date: Fri Dec 3 00:31:15 2010 +0000 unicorn 3.0.1 - one bugfix for Rainbows! ...and only Rainbows! This release fixes HTTP pipelining for requests with bodies for users of synchronous Rainbows! concurrency models. Since Unicorn itself does not support keepalive nor pipelining, Unicorn-only users need not upgrade.
commit c32488dcc69181d2e10b82645ef87c8b8b88b8e1 Author: Eric Wong Date: Thu Dec 2 05:30:39 2010 +0000 stream_input: avoid trailer parsing on unchunked requests It screws up keepalive for Rainbows! requests with a body. commit dee9e6432c8eb5269a19c4c6b66ab932fdeda34f Author: Eric Wong Date: Sat Nov 20 10:14:19 2010 +0800 unicorn 3.0.0 - disable rewindable input! Rewindable "rack.input" may be disabled via the "rewindable_input false" directive in the configuration file. This will violate Rack::Lint for Rack 1.x applications, but can reduce I/O for applications that do not need a rewindable input. This release updates us to the Kgio 2.x series which should play more nicely with other libraries and applications. There are also internal cleanups and improvements for future versions of Rainbows! The Unicorn 3.x series supersedes the 2.x series while the 1.x series will remain supported indefinitely. commit ad268cea66c2b91538dd60fc7f945348bb24214d Author: Eric Wong Date: Sat Nov 20 08:07:12 2010 +0800 tests: stream_input tests for mixed gets/read calls Some apps may do them, so make sure we do them correctly. commit cd315e5a20b17d29679fb22b4e2ab44cd6d0edeb Author: Eric Wong Date: Sat Nov 20 07:45:57 2010 +0800 stream_input: use String#sub! instead of gsub! There's no difference because of the \A anchor, but sub! doesn't loop, so it's simpler. commit 5bc239fd154a7eaebeb024394f8e0b507bbf4c5a Author: Eric Wong Date: Fri Nov 19 20:51:57 2010 +0000 stream_input: small cleanups and fixes No need to accept any number of args, that could hide bugs in applications that could give three or more arguments. We also raise ArgumentError when given a negative length argument to read. commit d12e10ea88c7adeb97094e4b835201e4c2ce52ab Author: Eric Wong Date: Fri Nov 19 01:55:07 2010 +0000 tests: isolate kgio 2.0.0 instead of the prerelease Same thing, but might as well make it more obvious.
commit 507f228864574437e610e57d20d3b77c1e6d0e41 Author: Eric Wong Date: Fri Nov 19 08:04:14 2010 +0800 unicorn 3.0.0pre2 - less bad than 2.x or 3.0.0pre1! This release updates us to the Kgio 2.x series which should play more nicely with other applications. There are also bugfixes from the 2.0.1 release and a small bugfix to the new StreamInput class. The Unicorn 3.x series will supersede the 2.x series while the 1.x series will remain supported indefinitely. commit 238c98ec4c353bb14671ab543c21baa068b7e3f2 Author: Eric Wong Date: Fri Nov 19 08:02:45 2010 +0800 update to kgio 2.x series The Kgio 2.x API is less brain-damaged than the 1.3.x series was, and should solve API-compatibility problems with dalli 0.11.1. commit 86d2a22ffdc4bf9f16e1870f9db9a2ff84760c7c Merge: eda4086 268c2ec Author: Eric Wong Date: Thu Nov 18 07:48:12 2010 +0800 Merge branch '2.0.x-stable' * 2.0.x-stable: unicorn 2.0.1 - fix errors in error handling tests: add parser error test from Rainbows! http_server: fix HttpParserError constant resolution t0012: fix race condition in reload commit 268c2ec5fef2630b0626b848be9d6ec46d360ddb Author: Eric Wong Date: Thu Nov 18 07:42:40 2010 +0800 unicorn 2.0.1 - fix errors in error handling This release fixes errors in our own error handling, causing certain errors to not be logged nor responded to correctly. Eric Wong (3): t0012: fix race condition in reload http_server: fix HttpParserError constant resolution tests: add parser error test from Rainbows! commit 859593b418db7e5fd93295a7a8b15de56cc4f6dd Author: Eric Wong Date: Thu Nov 18 07:44:47 2010 +0800 tests: add parser error test from Rainbows! This will help ensure we trap our own errors properly in the future. (cherry picked from commit eda408603edc51f10f17217c767b31a45eb6c627) commit eda408603edc51f10f17217c767b31a45eb6c627 Author: Eric Wong Date: Thu Nov 18 07:44:47 2010 +0800 tests: add parser error test from Rainbows! This will help ensure we trap our own errors properly in the future.
commit 3362dc51934c15fd944748e55ba4a470cc60d27d Author: Eric Wong Date: Thu Nov 18 07:36:27 2010 +0800 stream_input: read with zero length returns '' Any calls to read with an explicit zero length now return an empty string. While not explicitly specified by Rack::Lint, this is for compatibility with StringIO and IO methods which are common in other web servers. commit a6d96b61c2d81af077d55f43121c8472aa095447 Author: Eric Wong Date: Wed Nov 17 11:20:02 2010 -0800 http_server: fix HttpParserError constant resolution "Unicorn" is no longer in the default constant resolution namespace. (cherry picked from commit 390e351dd1283d4c80a12b744b1327fff091a141) commit 390e351dd1283d4c80a12b744b1327fff091a141 Author: Eric Wong Date: Wed Nov 17 11:20:02 2010 -0800 http_server: fix HttpParserError constant resolution "Unicorn" is no longer in the default constant resolution namespace. commit 01ae51fa5fda40a63277b0d1189925fb209c75a9 Author: Eric Wong Date: Thu Nov 18 02:48:41 2010 +0800 add missing test files oops :x commit 958c1f81a2c570f4027d8fe2dd4f5c40ac7ed430 Author: Eric Wong Date: Tue Nov 16 16:00:07 2010 -0800 unicorn 3.0.0pre1 Rewindable "rack.input" may be disabled via the "rewindable_input false" directive in the configuration file. This will violate Rack::Lint for Rack 1.x applications, but can reduce I/O for applications that do not need it. There are also internal cleanups and enhancements for future versions of Rainbows! Eric Wong (11): t0012: fix race condition in reload enable HTTP keepalive support for all methods http_parser: add HttpParser#next?
method tee_input: switch to simpler API for parsing trailers switch versions to 3.0.0pre add stream_input class and build tee_input on it configurator: enable "rewindable_input" directive http_parser: ensure keepalive is disabled when reset *_input: make life easier for subclasses/modules tee_input: restore read position after #size preread_input: no-op for non-rewindable "rack.input" commit 431de671a29b312bd19e615bd4bd99228b0c8b13 Author: Eric Wong Date: Tue Nov 16 13:51:24 2010 -0800 preread_input: no-op for non-rewindable "rack.input" We may get "rack.input" objects that are not rewindable in the future, so be prepared for those and do no harm. commit d41e5364bde413e195df8803845f7232718325a6 Author: Eric Wong Date: Thu Oct 28 09:03:21 2010 +0000 t0012: fix race condition in reload We need to ensure the old worker is reaped before sending new requests intended for the new worker. (cherry picked from commit b45bf946545496cf8d69037113533d7a58ce7e20) commit 17a734a9f6ccea8c969a574f09b5d8dd3d568a9c Author: Eric Wong Date: Sat Nov 13 16:41:10 2010 +0800 tee_input: restore read position after #size It's possible for an application to call size after it has read a few bytes/lines, so do not screw up a user's read offset when consuming input. commit 855c02a9720a17854a2f1c715efbe502cdba54e2 Author: Eric Wong Date: Fri Nov 12 10:59:14 2010 +0800 *_input: make life easier for subclasses/modules Avoid having specific knowledge of internals in TeeInput and instead move that to StreamInput when dealing with byte counts. This makes things easier for Rainbows! which will need to extend these classes. commit 3b544fb2c0e4a1e14a7bcb752a8af9819b5aaeb2 Author: Eric Wong Date: Thu Nov 11 07:31:01 2010 +0800 http_parser: ensure keepalive is disabled when reset We'll need this in Rainbows!
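The "read with zero length returns ''" change noted a little earlier aligns stream_input with the semantics of Ruby's own IO and StringIO, which is easy to confirm:

```ruby
# StringIO already returns an empty string for an explicit
# zero-length read, regardless of position; the stream_input
# commit adopts the same convention for "rack.input".
require 'stringio'

io = StringIO.new('hello')
empty = io.read(0) # explicit zero length => always ''
rest  = io.read    # no length argument  => remaining bytes
```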
commit a89ccf321224f3248ddd00bb0edb320311604e4e Author: Eric Wong Date: Thu Nov 11 02:16:50 2010 +0800 configurator: enable "rewindable_input" directive This allows users to override the current Rack spec and disable the rewindable input requirement. This can allow applications to use less I/O to minimize the performance impact when processing uploads. commit 7d44b5384758aeddcb49d7606a9908308df7c698 Author: Eric Wong Date: Thu Nov 11 01:13:12 2010 +0800 add stream_input class and build tee_input on it We will eventually expose a Unicorn::StreamInput object as "rack.input" for Rack 2.x applications. StreamInput allows applications to avoid buffering input to disk, removing the (potentially expensive) rewindability requirement of Rack 1.x. TeeInput is also rewritten to build off StreamInput for simplicity. The only regression is that TeeInput#rewind forces us to consume an unconsumed stream before returning, a negligible price to pay for decreased complexity. commit 1493af7cc23afecc8592ce44f5226476afccd212 Author: Eric Wong Date: Thu Nov 11 07:17:19 2010 +0800 switch versions to 3.0.0pre There are major, incompatible internal API changes. commit 8edcc3f9e1be9113685e61b9a83994a02d37c768 Author: Eric Wong Date: Sun Nov 7 10:21:43 2010 +0800 tee_input: switch to simpler API for parsing trailers Not that anybody uses trailers extensively, but it's good to know it's there. commit 60a9ec94f1f738f881e67f0a881c44c104f07c04 Author: Eric Wong Date: Sat Nov 6 10:30:44 2010 +0800 http_parser: add HttpParser#next? method An easy combination of the existing HttpParser#keepalive? and HttpParser#reset methods, this makes it easier to implement persistence. commit 7987e1a4001491f8a494f3926037f8cbee713263 Author: Eric Wong Date: Fri Sep 3 01:48:24 2010 +0000 enable HTTP keepalive support for all methods Yes, this means even POST/PUT bodies may be kept alive, but only if the body (and trailers) are fully-consumed.
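HttpParser#next?, added above, is described as a combination of the existing keepalive? and reset methods. A toy Ruby model of that contract follows, with a simple request counter standing in for the keepalive_requests limit; the names here are illustrative, not unicorn's actual C extension.

```ruby
# Toy model of the next? contract: a connection may serve another
# request only while keepalive remains enabled, and moving on to
# the next request resets per-request state.
class ToyParser # hypothetical stand-in for Unicorn::HttpParser
  def initialize(keepalive_requests)
    @remaining = keepalive_requests
    @env = {}
  end

  def keepalive?
    @remaining > 0
  end

  def reset
    @env.clear    # per-request state is discarded
    @remaining -= 1
  end

  # true if the connection may be kept open for another request
  def next?
    return false unless keepalive?
    reset
    true
  end
end
```

This also models the keepalive_requests=0 case tested earlier in this log: with a zero limit, next? is immediately false and keepalive is effectively disabled.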
commit b45bf946545496cf8d69037113533d7a58ce7e20 Author: Eric Wong Date: Thu Oct 28 09:03:21 2010 +0000 t0012: fix race condition in reload We need to ensure the old worker is reaped before sending new requests intended for the new worker. commit 5ffc1f81c3f56d17ff3369f7514e978754840c29 Author: Eric Wong Date: Wed Oct 27 23:32:24 2010 +0000 unicorn 2.0.0 - mostly internal cleanups Despite the version number, this release mostly features internal cleanups for future versions of Rainbows!. User visible changes include reductions in CPU wakeups on idle sites using high timeouts. Barring possible portability issues due to the introduction of the kgio library, this release should be ready for all to use. However, 1.1.x (and possibly 1.0.x) will continue to be maintained. Unicorn 1.1.5 and 1.0.2 have also been released with bugfixes found during development of 2.0.0. commit a3b08e9411f1d958e2264329c67972541424ac35 Merge: 0692e8c 7f3ebe9 Author: Eric Wong Date: Wed Oct 27 23:31:41 2010 +0000 Merge branch '1.1.x-stable' * 1.1.x-stable: unicorn 1.1.5 doc: stop using deprecated rdoc CLI options gemspec: depend on Isolate 3.0.0 for dev configurator: reloading with unset values restores default configurator: use "__send__" instead of "send" Rakefile: capture prerelease tags Rakefile: don't post freshmeat on empty changelogs fix delays in signal handling commit 0692e8cb10dd27275f2de794ed6eba62e9918431 Merge: 4d493d8 ea975cc Author: Eric Wong Date: Wed Oct 27 23:31:38 2010 +0000 Merge branch 'maint' * maint: unicorn 1.0.2 doc: stop using deprecated rdoc CLI options gemspec: depend on Isolate 3.0.0 for dev configurator: reloading with unset values restores default configurator: use "__send__" instead of "send" Rakefile: capture prerelease tags Rakefile: don't post freshmeat on empty changelogs fix delays in signal handling SIGTTIN works after SIGWINCH commit 4d493d8ad203d7f13ac56b7d6ba2b3aaa481cbd2 Author: Eric Wong Date: Wed Oct 27 16:26:28 2010 -0700 examples/unicorn.conf: add a 
note about throttling signals Sending the same signal faster than the receiver can process means signals can get lost. commit ea975cc3e6d2e6ac9c971c8cbda712486ec63c2a Author: Eric Wong Date: Wed Oct 27 23:11:09 2010 +0000 unicorn 1.0.2 This is the latest maintenance release of the 1.0.x series. All users are encouraged to upgrade to 1.1.x stable series and report bugs there. Shortlog of changes since 1.0.1: Eric Wong (8): SIGTTIN works after SIGWINCH fix delays in signal handling Rakefile: don't post freshmeat on empty changelogs Rakefile: capture prerelease tags configurator: use "__send__" instead of "send" configurator: reloading with unset values restores default gemspec: depend on Isolate 3.0.0 for dev doc: stop using deprecated rdoc CLI options commit 856959cc0b2dbc96f115d26672d0f5b73ae79914 Author: Eric Wong Date: Wed Oct 27 23:07:42 2010 +0000 doc: stop using deprecated rdoc CLI options -N and -a switches no longer exist in rdoc 2.5 (cherry picked from commit 054c7df93db61839648925cfd881ae880709a210) commit 04f0f44f9bd0907fcff1e2cdc59f7e84d4110539 Author: Eric Wong Date: Wed Oct 27 23:08:51 2010 +0000 gemspec: depend on Isolate 3.0.0 for dev No reason to not use the latest and greatest! (cherry picked from commit 570a57c07fd8c3d24b7337637e0dd30136b3a11a) Conflicts: unicorn.gemspec commit 054c7df93db61839648925cfd881ae880709a210 Author: Eric Wong Date: Wed Oct 27 23:07:42 2010 +0000 doc: stop using deprecated rdoc CLI options -N and -a switches no longer exist in rdoc 2.5 commit 570a57c07fd8c3d24b7337637e0dd30136b3a11a Author: Eric Wong Date: Wed Oct 27 23:06:45 2010 +0000 gemspec: depend on Isolate 3.0.0 for dev No reason to not use the latest and greatest! 
commit 2dd4a89d5726e13b962c1e287d84a6c30f5dd46c Author: Eric Wong Date: Wed Oct 27 13:51:12 2010 -0700 configurator: reloading with unset values restores default If a configuration directive is set at startup and later unset, it correctly restores the original default value as if it had never been set in the first place. This applies to the majority of the configuration values with a few exceptions: * This only applies to stderr_path and stdout_path when daemonized (the usual case, they'll be redirected to "/dev/null"). When NOT daemonized, we cannot easily redirect back to the original stdout/stderr destinations. * Unsetting working_directory does not restore the original working directory where Unicorn was started. As far as we can tell unsetting this after setting it is rarely desirable and greatly increases the probability of user error. (cherry picked from commit 51b2b90284000aee8d79b37a5406173c45ae212d) commit 5e672c48d8a3555e4a01f653fb2e0b3556087737 Author: Eric Wong Date: Wed Oct 27 12:46:46 2010 -0700 configurator: use "__send__" instead of "send" It's less ambiguous since this is a network server after all. (cherry picked from commit f62c5850d7d17d7b5e301a494f8bdf5be3674411) commit 51b2b90284000aee8d79b37a5406173c45ae212d Author: Eric Wong Date: Wed Oct 27 13:51:12 2010 -0700 configurator: reloading with unset values restores default If a configuration directive is set at startup and later unset, it correctly restores the original default value as if it had never been set in the first place. This applies to the majority of the configuration values with a few exceptions: * This only applies to stderr_path and stdout_path when daemonized (the usual case, they'll be redirected to "/dev/null"). When NOT daemonized, we cannot easily redirect back to the original stdout/stderr destinations. * Unsetting working_directory does not restore the original working directory where Unicorn was started. 
As far as we can tell unsetting this after setting it is rarely desirable and greatly increases the probability of user error. commit f62c5850d7d17d7b5e301a494f8bdf5be3674411 Author: Eric Wong Date: Wed Oct 27 12:46:46 2010 -0700 configurator: use "__send__" instead of "send" It's less ambiguous since this is a network server after all. commit 928a88d5419210380078a2e141cb64d308719295 Author: Eric Wong Date: Wed Oct 6 01:27:45 2010 +0000 Rakefile: capture prerelease tags Since we do those, now. (cherry picked from commit 1d1a2b1bd5bdd89f774f19bf8ad24c2f5f8a2d4c) commit 74dec350d93b88c0a5bd792239671097901e2393 Author: Eric Wong Date: Wed Oct 27 19:32:55 2010 +0000 Rakefile: don't post freshmeat on empty changelogs We don't want to flood or monopolize freshmeat. (cherry picked from commit 1ad510d645e0c84c8d352ac0deaeefa75240ea94) commit c7feb7e10a937df2dc72f53aa6cc1ebda4c1cd3b Author: Eric Wong Date: Wed Oct 27 12:43:14 2010 -0700 configurator: switch to normal class No point in using a Struct for (1.8) space-efficiency if there's only one of them. commit 10037f2aabb3fab4296fc90c615e7caa9f4a9b53 Author: Eric Wong Date: Wed Oct 27 01:44:33 2010 +0000 fix delays in signal handling There is no need to loop in the master_sleep method at all, as the rest of the code is designed to function even on interrupted sleeps. This change is included as part of a larger cleanup in master. (commit bdc79712e5ac53d39c51e80dfe50aff950e5053f) commit 514af94321ef0fab74894e517792c4a9709d76f5 Author: Eric Wong Date: Wed Oct 27 00:36:25 2010 +0000 reduce master process wakeups To reduce CPU wakeups and save power during off hours, we can precalculate a safe amount to sleep before killing off idle workers. commit 7ef05ec23b06f06e9d4bb1cf45d1907b4eeacb80 Author: Eric Wong Date: Tue Oct 26 23:19:09 2010 +0000 master: remove limit on queued signals If a moronic sysadmin is sending too many signals, just let them do it. 
It's likely something is terribly wrong when the server is overloaded with signals, so don't try to protect users from it. This will also help in cases where TTOU signals are sent too quickly during shutdown, although sleeping between kill(2) syscalls is always a good idea because of how non-real-time signals are delivered. commit 2243c97edf80d635871bc678794f07d6c1d033c2 Author: Eric Wong Date: Sat Oct 9 00:03:43 2010 +0000 unicorn 2.0.0pre3 - more small fixes There is a new Unicorn::PrereadInput middleware which allows input bodies to be drained off the socket and buffered to disk (or memory) before dispatching the application. HTTP Pipelining behavior is fixed for Rainbows! There are some small Kgio fixes and updates for Rainbows! users as well. commit 6eb46e422f4b2ba98c795fca5e18e7262c0c688e Author: Eric Wong Date: Fri Oct 8 23:44:23 2010 +0000 add PrereadInput middleware to get around TeeInput This may be useful for some apps that wish to drain the body before acquiring an app-wide lock. Maybe it's more useful with Rainbows!... commit 9be78606355d4a0ad4ea59316ab2ce998c5b9a12 Author: Eric Wong Date: Fri Oct 8 22:58:59 2010 +0000 bump kgio dependency kgio 1.3.1 fixes some cases for zero-length reads. commit f20274e84169e18a73a5cd341b6bc31b625b83ce Author: Eric Wong Date: Fri Oct 8 08:49:22 2010 +0000 build: automatically call isolate on updates Automation is nice, the makefile needs some cleanup commit 861481436b933bf4b8d647c43191c701651f16e4 Author: Eric Wong Date: Fri Oct 8 01:34:37 2010 -0700 bump kgio dependency to 1.3.0 There was a backwards-incompatible API change, but that didn't even affect us. commit c9950692f44bd91af089794664dc56a446668004 Author: Eric Wong Date: Thu Oct 7 18:42:15 2010 -0700 gemspec: bump kgio version kgio 1.2.1 works around a bug for some *BSDs, some of which are popular platforms for developers.
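The "reduce master process wakeups" commit a few entries up precalculates a safe amount for the master to sleep before it needs to check for timed-out workers. A sketch of that calculation follows; the helper name and heartbeat representation are invented for illustration, not unicorn's actual master loop.

```ruby
# Given each worker's last-heartbeat time (as epoch floats), the
# master may safely sleep until the earliest (heartbeat + timeout)
# deadline instead of waking on a fixed short interval.
def safe_master_sleep(last_heartbeat, timeout, now = Time.now.to_f)
  return timeout if last_heartbeat.empty?
  deadline = last_heartbeat.values.min + timeout
  [deadline - now, 0].max # never a negative sleep
end
```

On an idle site with a high timeout, this lets the master sleep close to the full timeout rather than waking repeatedly, which is where the power savings come from.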
commit e99178ef89eca9e46b73484aaf9733259dac9dca Author: Eric Wong Date: Thu Oct 7 08:12:36 2010 +0000 http: fix behavior with pipelined requests We cannot clear the buffer between requests because clients may send multiple requests that get taken in one read()/recv() call. commit eb5ba488422020568e5ccf650891d7fccce7238f Author: Eric Wong Date: Thu Oct 7 07:22:58 2010 +0000 unicorn 2.0.0pre2 - releases are cheap Internal changes/cleanups for Rainbows! commit 4c48b520786807487f7f76d709b0dbcee63c4d0c Author: Eric Wong Date: Thu Oct 7 06:59:05 2010 +0000 http: remove unnecessary rb_str_update() calls Rubinius no longer uses it, and it conflicts with a public method in MRI. commit 8daf254356241c135ad2c843de567910528a10a7 Author: Eric Wong Date: Thu Oct 7 06:55:22 2010 +0000 start using more compact parser API This should be easier for Rainbows! to use commit 090f56bb79a8ec734719d9be90daa3cd01d29871 Author: Eric Wong Date: Thu Oct 7 06:33:03 2010 +0000 http_server: avoid method redefinition warnings We clobber the accessor methods. commit 5df8f15c32420c03b2e763a649e6d829ede52113 Author: Eric Wong Date: Thu Oct 7 05:32:38 2010 +0000 http: allow this to be used as a request object The parser and request object become one and the same, since the parser lives for the lifetime of the request. commit 629107d749748f661ddb73f146ab35836874cc9e Author: Eric Wong Date: Wed Oct 6 17:16:49 2010 -0700 bin/unicorn: show "RACK_ENV" in --help It's more descriptive as to what environment we're setting than "ENVIRONMENT". commit 1d1a2b1bd5bdd89f774f19bf8ad24c2f5f8a2d4c Author: Eric Wong Date: Wed Oct 6 01:27:45 2010 +0000 Rakefile: capture prerelease tags Since we do those, now. commit cb48b1bc7231db7f53bec6e88e696dc53153750d Author: Eric Wong Date: Wed Oct 6 01:08:36 2010 +0000 unicorn 2.0.0pre1 - a boring "major" release Mostly internal cleanups for future versions of Rainbows! and people trying out Rubinius.
There are tiny performance improvements for Ruby 1.9.2 users which may only be noticeable with Rainbows! Unicorn 1.1.x users are NOT required to upgrade. commit 4c59a4861bf3f8d25335696c1f8cbce3cd5db902 Author: Eric Wong Date: Wed Oct 6 01:07:49 2010 +0000 gemspec: depend on newer isolate We use the latest and greatest whenever possible. commit cb233696be73873f6f8c367f4b977ade1815b265 Author: Eric Wong Date: Tue Oct 5 23:59:45 2010 +0000 various cleanups and reduce indentation This also affects some constant scoping rules, but hopefully makes things easier to follow. Accessing ivars (not via accessor methods) is also slightly faster, so use them in the critical process_client code path. commit d4c898a4adc6cb6c3a20a648ae6b9b6a226066a6 Author: Eric Wong Date: Tue Oct 5 23:34:39 2010 +0000 upgrade to kgio 1.2.0 This provides the kgio_read! method which is like readpartial, only significantly cheaper when a client disconnects on us. commit 80f9987581014d694b8eb67bba0d5c408b7d0f98 Author: Eric Wong Date: Tue Oct 5 23:34:19 2010 +0000 GNUmakefile: fix isolate invocation again :x commit fd6b47cf1690cb45f2144cd92e0fe1f301c7c37b Author: Eric Wong Date: Tue Oct 5 22:09:20 2010 +0000 tee_input: use kgio to avoid stack traces on EOF TeeInput methods may be invoked deep in the stack, so avoid giving them more work to do if a client disconnects due to a bad upload. commit 350e8fa3a94838bcc936782315b3472615fe6517 Author: Eric Wong Date: Tue Oct 5 22:01:19 2010 +0000 http: raise empty backtrace for HttpParserError It's expensive to generate a backtrace and this exception is only triggered by bad clients. So make it harder for them to DoS us by sending bad requests. commit c2975b85b9378797631d3ab133cac371f9fadf54 Author: Eric Wong Date: Tue Oct 5 21:38:47 2010 +0000 tests: do not invoke isolate in test install dest We don't want to waste time and bandwidth.
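The empty-backtrace trick in the HttpParserError commit above relies on Ruby's three-argument form of raise: passing an explicit empty array as the backtrace skips stack-trace generation for exceptions triggered only by misbehaving clients. The class and method names below are stand-ins for illustration.

```ruby
# Stand-in for Unicorn::HttpParserError; the point is the raise form.
class ParserError < StandardError; end

# Raising with an explicit empty backtrace avoids the cost of
# building a stack trace for errors caused purely by bad requests.
def reject_bad_request
  raise ParserError, 'invalid HTTP request', []
end
```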
commit ec1315c9e9175d755dfd7b4acb8398fa7c7a924e Author: Eric Wong Date: Tue Oct 5 21:29:51 2010 +0000 test_tee_input: use a socketpair() It's a much closer representation of what we'd expect in the real server than a mono-directional UNIX pipe. commit c639eef6b9c8d793c7f72fa5ac03adb5cf4d1e14 Author: Eric Wong Date: Tue Oct 5 19:22:09 2010 +0000 test_signals: enable test under Rubinius The bugs from signal handling were fixed in the Rubinius 1.1.0 release. commit 72dee9e4a8234af762b058a38132268d202c17bf Author: Eric Wong Date: Tue Oct 5 19:20:39 2010 +0000 tmpio: use super instead of an explicit method This is for compatibility with Ruby implementations such as Rubinius that use "IO.new" internally inside "IO.open" commit 7ca92025ececb4b71ec4420e03d5725f13c39cc4 Author: Eric Wong Date: Tue Oct 5 18:48:53 2010 +0000 update comment about non-blocking accept() Thanks to kgio, we no longer use accept_nonblock. commit fc820598da30509269ec84eeca598085ca296e38 Author: Eric Wong Date: Tue Oct 5 08:00:34 2010 +0000 util: unindent, use less ambiguous constant scoping This hopefully makes things easier to read and follow. commit 3d147e9bcd8f99c94900a00181692c2a09c3c3c9 Author: Eric Wong Date: Tue Oct 5 07:54:13 2010 +0000 Unicorn::Util.tmpio => Unicorn::TmpIO.new This is slightly shorter and hopefully easier to find. commit e184b9d0fb45b31d80645475e22f0bbbecd195f9 Author: Eric Wong Date: Tue Oct 5 01:27:00 2010 +0000 doc: update TODO This gives us some things to think about. commit 29946368c45dce5da116adb426362ee93c507c4e Author: Eric Wong Date: Tue Oct 5 00:13:02 2010 +0000 start using kgio, the kinder, gentler I/O library This should hopefully make the non-blocking accept() situation more tolerable under Ruby 1.9.2. commit 9ef6b6f551a34922cfd831e2521495e89afe2f94 Author: Eric Wong Date: Mon Oct 4 23:55:31 2010 +0000 split out isolate usage/logic We'll be using more of Isolate in development.
commit 018a9deff4bd9273e053f369d746256e5b3ac99b Author: Eric Wong Date: Mon Oct 4 21:06:41 2010 +0000 http_request: reformat and small reorg This hides more HTTP request logic inside our object. commit dfc5f5a5e4aec4578b79de68c91906da75472a5a Author: Eric Wong Date: Wed Sep 29 23:57:57 2010 -0700 tee_input: update interface to use HttpRequest This should ensure we have less typing to do. commit fe94d80cb37ee441762ad2a8f5c25092f8eb57a8 Author: Eric Wong Date: Mon Sep 27 22:39:02 2010 -0700 http_request: avoid globals Rainbows! will be able to reuse this. commit 5b6a97ff54d029d433b79eee1549e6f99464c48b Author: Eric Wong Date: Fri Aug 27 21:45:33 2010 +0000 split out worker to a separate file This hopefully makes things easier to read, follow, and find since it's mostly documentation... commit 50c11036dd4898ccfed8b3e0552e88c67b6c63a9 Author: Eric Wong Date: Fri Aug 27 20:29:55 2010 +0000 http_response: avoid singleton method There's no need for a response class or object since Rack just uses an array as the response. So use a procedural style which allows for easier understanding. We shall also support keepalive/pipelining in the future, too. commit 7a3efe8a03f85c1f2957130986c24ef7931ff44a Merge: 1a2363b 6151686 Author: Eric Wong Date: Mon Oct 4 20:34:29 2010 +0000 Merge commit 'v1.1.4' * commit 'v1.1.4': unicorn 1.1.4 - small bug fix and doc updates update Rails 3 tests to use Rails 3 final avoid unlinking actively listening sockets doc: update HACKING for documentation contributions doc: update Sandbox document for Bundler TUNING: more on socket buffer sizes commit 1a2363b17b1d06be6b35d347ebcaed6a0c940200 Author: Eric Wong Date: Mon Oct 4 04:17:31 2010 +0000 avoid unlinking actively listening sockets While we've always unlinked dead sockets from nuked/leftover processes, blindly unlinking them can cause unnecessary failures when an active process is already listening on them. 
We now make a simple connect(2) check to ensure the socket is not in use before unlinking it. Thanks to Jordan Ritter for the detailed bug report leading to this fix. ref: http://mid.gmane.org/8D95A44B-A098-43BE-B532-7D74BD957F31@darkridge.com commit 505a9e72d320fe3ae521ceb0f381c1c0f5ae4389 Author: Eric Wong Date: Wed Sep 15 14:57:27 2010 -0700 doc: update HACKING for documentation contributions We switched to RDoc 2.5.x long ago and this should clarify some documentation preferences I have. commit 1a75966a5d1a1f6307ed3386e2f91a28bbb72ca0 Author: Eric Wong Date: Wed Sep 15 14:42:54 2010 -0700 doc: update Sandbox document for Bundler Thanks to Lawrence Pit, Jamie Wilkinson, and Eirik Dentz Sinclair. ref: mid.gmane.org/4C8986DA.7090603@gmail.com ref: mid.gmane.org/5F1A02DB-CBDA-4302-9E26-8050C2D72433@efficiency20.com commit f9a7a19a361fd674bab4e2df7e0897015528bba7 Author: Eric Wong Date: Mon Aug 30 23:25:59 2010 -0700 TUNING: more on socket buffer sizes Large buffers can hurt as well as help. And the difference in real apps that do a lot of things other than I/O often makes it not worth it. commit da272fc48ffaa808456fe94dd7a3e01bc9799832 Author: Eric Wong Date: Mon Aug 30 08:11:44 2010 +0000 update Rails 3 tests to use Rails 3 final Rails 3 is out, and requires no code changes on our end to work (as far as our tests show :) commit 0aaa0afa49a2953b7c26c1596a284621e23d5fc4 Author: Eric Wong Date: Mon Aug 30 07:59:01 2010 +0000 remove nasty ugly hacks at startup These nasty hacks were breaking Rubinius compatibility. This can be further cleaned up, too. 
commit f3e1653b900596e054297675becd01d9985ad482 Merge: feab35f d634b06 Author: Eric Wong Date: Sun Aug 29 23:38:13 2010 +0000 Merge branch '1.1.x-stable' * 1.1.x-stable: unicorn 1.1.3 - small bug fixes make log reopens even more robust in threaded apps update Rails3 tests to use 3.0.0rc2 make log reopens more robust in multithreaded apps bin/*: more consistent --help output SIGTTIN works after SIGWINCH commit feab35fe531843066db3418598874cf9f9419614 Author: Eric Wong Date: Sat Aug 28 18:52:48 2010 +0000 make log reopens even more robust in threaded apps A follow-up to 4b23693b9082a84433a9e6c1f358b58420176b27 If multithreaded programming can be compared to juggling chainsaws, then multithreaded programming with signal handlers in play is akin to juggling chainsaws on a tightrope over shark-infested waters. commit 18968f6aff2fa5ba5a7e3e3d47c9cc05cd6c260d Author: Eric Wong Date: Sat Aug 28 07:07:14 2010 +0000 update Rails3 tests to use 3.0.0rc2 No code changes needed, thankfully. commit 4b23693b9082a84433a9e6c1f358b58420176b27 Author: Eric Wong Date: Sat Aug 28 05:30:46 2010 +0000 make log reopens more robust in multithreaded apps IOError may occur due to race conditions as another thread may close the file immediately after we call File#closed? to check. Errno::EBADF may occur in some applications that close a file descriptor without notifying Ruby (or if two IO objects refer to the same descriptor, possibly one of them using IO#for_fd). commit 096afc1a8e958cc09b4ce8b3bfe76ce056c7ed69 Author: Eric Wong Date: Tue Aug 24 06:21:00 2010 +0000 bin/*: more consistent --help output This fixes a long-standing bug in the output of "unicorn_rails" where the program name was missing. commit bdc79712e5ac53d39c51e80dfe50aff950e5053f Author: Eric Wong Date: Sat Aug 7 03:27:50 2010 +0000 miscellaneous loop and begin cleanups These are minor changes to remove unnecessary loop nesting and begin usage to reduce our code size and hopefully simplify flow for readers. 
commit e4d0b226391948ef433f1d0135814315e4c48535 Author: Eric Wong Date: Sat Aug 7 04:25:51 2010 +0000 log ERROR messages if workers exit with failure Something is wrong if workers exit with a non-zero status, so we'll increase the log level to help prevent people from missing it. commit f1d33c80dd6c5650f960f7087f4e08f809754d34 Author: Eric Wong Date: Fri Jul 16 08:25:32 2010 +0000 SIGTTIN works after SIGWINCH In addition to SIGHUP, it should be possible to gradually bring workers back up (to avoid overloading the machine) when rolling back upgrades after SIGWINCH. Noticed-by: Lawrence Pit ref: http://mid.gmane.org/4C3F8C9F.2090903@gmail.com commit 5a0506c2affd2f5abe6e7315121e67aa3e32b253 Author: Eric Wong Date: Fri Jul 16 08:25:32 2010 +0000 SIGTTIN works after SIGWINCH In addition to SIGHUP, it should be possible to gradually bring workers back up (to avoid overloading the machine) when rolling back upgrades after SIGWINCH. Noticed-by: Lawrence Pit ref: http://mid.gmane.org/4C3F8C9F.2090903@gmail.com (cherry picked from commit e75ee7615f9875db314a6403964e7b69a68b0521) commit 78ba3899eb24d6893e34984b9f1c479c7e6c9be3 Merge: c13bec3 d1818d2 Author: Eric Wong Date: Tue Jul 13 13:04:53 2010 -0700 Merge branch '1.1.x-stable' * 1.1.x-stable: (27 commits) unicorn 1.1.2 - fixing upgrade rollbacks unicorn 1.0.1 - bugfixes only SIGHUP deals w/ dual master pid path scenario launcher: do not re-daemonize when USR2 upgrading SIGHUP deals w/ dual master pid path scenario launcher: do not re-daemonize when USR2 upgrading unicorn 1.1.1 - fixing cleanups gone bad :x tee_input: fix constant resolution for client EOF unicorn 1.1.0 - small changes and cleanups cleanup "stringio" require tee_input: safer record separator ($/) handling prefer "[]" to "first"/"last" where possible tee_input: safer record separator ($/) handling socket_helper: disable documentation socket_helper: cleanup FreeBSD accf_* detection socket_helper: no reason to check for logger method configurator: cleanup 
RDoc, un-indent configurator: documentation for new accept options socket_helper: move defaults to the DEFAULTS constant doc: recommend absolute paths for -c/--config-file ... commit c13bec3449396b21795966101367838161612d61 Author: Eric Wong Date: Tue Jul 13 08:57:37 2010 +0000 SIGHUP deals w/ dual master pid path scenario As described in our SIGNALS documentation, sending SIGHUP to the old master (to respawn SIGWINCH-ed children) while the new master (spawned from SIGUSR2) is active is useful for backing out of an upgrade before sending SIGQUIT to the new master. Unfortunately, the SIGHUP signal to the old master will cause the ".oldbin" pid file to be reset to the non-".oldbin" version and thus attempt to clobber the pid file in use by the to-be-terminated new master process. Thanks to the previous commit to prevent redaemonization in the new master, the old master can reliably detect if the new master is active while it is reloading the config file. Thanks to Lawrence Pit for discovering this bug. ref: http://mid.gmane.org/4C3BEACF.7040301@gmail.com commit 3f0f9d6d72cf17b34c130b86eb933bbc513b24b3 Author: Eric Wong Date: Tue Jul 13 08:53:48 2010 +0000 launcher: do not re-daemonize when USR2 upgrading This was accidentally enabled when ready_pipe was developed. While re-daemonizing appears harmless in most cases this makes detecting backed-out upgrades from the original master process impossible. commit ac15513bb81a345cd12c67702a81a585b8b0514e Author: Eric Wong Date: Sun Jul 11 02:05:01 2010 +0000 tee_input: fix constant resolution for client EOF Noticed while hacking on a Zbatery-using application commit 0fea004ab093ec4f59d919915a505a136326bd8a Author: Eric Wong Date: Thu Jul 8 05:54:25 2010 +0000 cleanup "stringio" require "stringio" is part of the Ruby distro and we use it in multiple places, so avoid re-requiring it. 
commit 5ece8c1c33f10e6496dfe5ae1d0d368293278d2d Author: Eric Wong Date: Thu Jul 8 05:33:49 2010 +0000 prefer "[]" to "first"/"last" where possible "[]" is slightly faster under Ruby 1.9 (but slightly slower under 1.8). commit 1cd698f8c7938b1f19e9ba091708cb4515187939 Author: Eric Wong Date: Thu Jul 8 05:14:55 2010 +0000 tee_input: safer record separator ($/) handling Different threads may change $/ during execution, so cache it at function entry to a local variable for safety. $/ may also be of a non-binary encoding, so rely on Rack::Utils.bytesize to portably capture the correct size. Our string slicing is always safe from 1.9 encoding: both our socket and backing temporary file are opened in binary mode, so we'll always be dealing with binary strings in this class (in accordance to the Rack spec). commit 98c51edf8b6f031a655a93b52808c9f9b78fb6fa Author: Eric Wong Date: Tue Jul 6 14:17:02 2010 -0700 socket_helper: disable documentation for internals commit 2b4b15cf513f66dc7a5aabaae4491c17895c288c Author: Eric Wong Date: Tue Jul 6 12:59:45 2010 -0700 socket_helper: cleanup FreeBSD accf_* detection Instead of detecting at startup if filters may be used, just try anyways and log the error. It is better to ask for forgiveness than permission :) commit e0ea1e1548a807d152c0ffc175915e98addfe1f2 Author: Eric Wong Date: Tue Jul 6 12:51:24 2010 -0700 socket_helper: no reason to check for logger method We only use this module in HttpServer and our unit test mocks it properly. commit e4d2c7c302e96ee504d82376885ac6b1897c666a Author: Eric Wong Date: Tue Jul 6 12:49:48 2010 -0700 configurator: cleanup RDoc, un-indent No point in redeclaring the Unicorn module in here. commit 686281a90a9b47bac4dfd32a72a97e6e8d26afa1 Author: Eric Wong Date: Tue Jul 6 12:39:36 2010 -0700 configurator: documentation for new accept options The defaults should be reasonable, but there may be folks who want to experiment. 
commit ef8f888ba1bacc759156f7336d39ba9b947e3f9d Author: Eric Wong Date: Tue Jul 6 12:35:45 2010 -0700 socket_helper: move defaults to the DEFAULTS constant This is to allow Rainbows! to override the defaults. commit d7695c25c5e3b1c90e63bf15a5c5fdf68bfd0c34 Author: Eric Wong Date: Mon Jul 5 23:14:40 2010 +0000 doc: recommend absolute paths for -c/--config-file Suggested-by: Jeremy Evans ref: http://mid.gmane.org/AANLkTintT4vHGEdueuG45_RwJqFCToHi5pm2-WKDSUMz@mail.gmail.com commit 646cc762cc9297510102fc094f3af8a5a9e296c7 Author: Eric Wong Date: Sat Jul 3 09:30:57 2010 +0000 socket_helper: tunables for tcp_defer_accept/accept_filter Under Linux, this allows users to tune the time (in seconds) to defer connections before allowing them to be accepted. The behavior of TCP_DEFER_ACCEPT changed with Linux 2.6.32 and idle connections may still be accept()-ed after the specified value in seconds. A small value of '1' remains the default for Unicorn as Unicorn does not worry about slow clients. Higher values provide better DoS protection for Rainbows! but also increases kernel memory usage. Allowing "dataready" for FreeBSD accept filters will allow SSL sockets to be used in the future for HTTPS, too. commit 5769f313793ca84100f089b1911f2e22d0a31e9d Author: Eric Wong Date: Mon Jun 28 04:45:16 2010 +0000 http_response: this should be a module, not a class This affects Rainbows!, but Rainbows! is still using the Unicorn 1.x branch. While we're at it, avoid redeclaring the "Unicorn" module, it makes documentation noisier. commit cf63db66bca9acfd3416ab8fc8a7fd4f07927342 Author: Eric Wong Date: Fri Jun 25 11:29:13 2010 -0700 test-exec: prefer ENV['PWD'] in working_directory tests We do an extra check in the application dispatch to ensure ENV['PWD'] is set correctly to match Dir.pwd (even if the string path is different) as this is required for Capistrano deployments. These tests should now pass under OSX where /var is apparently a symlink to /private/var. 
commit e2503a78150f4be113ee2a19404ba6aec401c696 Author: Eric Wong Date: Thu Jun 24 05:47:27 2010 +0000 const: bump UNICORN_VERSION to 2.0.0pre commit b8b979d75519be1c84818f32b83d85f8ec5f6072 Author: Eric Wong Date: Thu Jun 24 04:31:37 2010 +0000 http: avoid (re-)declaring the Unicorn module It makes for messy documentation. commit 6f720afd95d8131a2657c643b97cb18c750ed9f8 Author: Eric Wong Date: Thu Jun 24 04:24:34 2010 +0000 tee_input: undent, avoid (re)-declaring "module Unicorn" It makes RDoc look better and cleaner, since we don't do anything in the Unicorn namespace. commit 9f48be69bfe579dab02b5fe8d6e728ae63fd24fc Author: Eric Wong Date: Thu Jun 24 04:11:35 2010 +0000 tee_input: allow tuning of client_body_buffer_size/io_size Some folks may require more fine-grained control of buffering and I/O chunk sizes, so we'll support them (unofficially, for now). commit 1a49a8295054a2e931f5288540acb858be8edcc8 Author: Eric Wong Date: Thu Jun 24 03:54:40 2010 +0000 tee_input: (nitpick) use IO#rewind instead of IO#seek(0) no need to pass an extra argument unicorn-4.7.0/KNOWN_ISSUES0000644000004100000410000000740012236653132015145 0ustar www-datawww-data= Known Issues Occasionally odd {issues}[link:ISSUES.html] arise without a transparent or acceptable solution. Those issues are documented here. * Some libraries/applications may install signal handlers which conflict with signal handlers unicorn uses. Leaving "preload_app false" (the default) will allow unicorn to always override existing signal handlers. * Issues with FreeBSD jails can be worked around as documented by Tatsuya Ono: http://mid.gmane.org/CAHBuKRj09FdxAgzsefJWotexw-7JYZGJMtgUp_dhjPz9VbKD6Q@mail.gmail.com * PRNGs (pseudo-random number generators) loaded before forking (e.g. "preload_app true") may need to have their internal state reset in the after_fork hook. Starting with \Unicorn 3.6.1, we have builtin workarounds for Kernel#rand and OpenSSL::Random users, but applications may use other PRNGs. 
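The PRNG caveat above is easy to reproduce with a plain Random instance standing in for any application-level PRNG. This is an illustrative sketch only — the STATE container and draw_in_child helper are hypothetical, not part of \Unicorn:

```ruby
# Children forked from one master inherit identical PRNG state, so an
# after_fork-style reseed is needed for PRNGs unicorn does not know about.
STATE = { rng: Random.new(42) } # PRNG created in the master, pre-fork

def draw_in_child(reseed)
  r, w = IO.pipe
  pid = fork do
    r.close
    STATE[:rng] = Random.new if reseed # what an after_fork hook would do
    w.write(STATE[:rng].rand(1_000_000_000).to_s)
    w.close
    exit!(0)
  end
  w.close
  val = Integer(r.read)
  r.close
  Process.wait(pid)
  val
end

a = draw_in_child(false)
b = draw_in_child(false) # same value as `a`: both children replayed the
                         # master's PRNG stream
```

With reseed set to true, each child draws from fresh entropy instead, which is exactly what reinitializing an application PRNG in an after_fork hook restores.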
* Under some versions of Ruby 1.8, it is necessary to call +srand+ in an after_fork hook to get correct random number generation. We have a builtin workaround for this starting with \Unicorn 3.6.1. See http://redmine.ruby-lang.org/issues/show/4338 * On Ruby 1.8 prior to Ruby 1.8.7-p248, *BSD platforms have a broken stdio that causes failure for file uploads larger than 112K. Upgrade your version of Ruby or continue using Unicorn 1.x/3.4.x. * For notes on sandboxing tools such as Bundler or Isolate, see the {Sandbox}[link:Sandbox.html] page. * nginx with "sendfile on" under FreeBSD 8 is broken when uploads are buffered to disk. Disabling sendfile is required to work around this bug, which should be fixed in newer versions of FreeBSD. * When using "preload_app true", apps using background threads need to restart them in the after_fork hook because threads are never shared with child processes. Additionally, any synchronization primitives (Mutexes, Monitors, ConditionVariables) should be reinitialized in case they are held during fork time to avoid deadlocks. The core Ruby Logger class needlessly uses a MonitorMutex, which can be disabled with a {monkey patch}[link:examples/logger_mp_safe.rb]. == Known Issues (Old) * Under Ruby 1.9.1, methods like Array#shuffle and Array#sample will segfault if called after forking. Upgrade to Ruby 1.9.2 or call "Kernel.rand" in your after_fork hook to reinitialize the random number generator. See http://redmine.ruby-lang.org/issues/show/2962 for more details. * Rails 2.3.2 bundles its own version of Rack. This may cause subtle bugs when simultaneously loaded with the system-wide Rack Rubygem, which Unicorn depends on. Upgrading to Rails 2.3.4 (or later) is strongly recommended for all Rails 2.3.x users for this (and for security reasons). The Rails 2.2.x series (and earlier) did not bundle Rack and should be unaffected.
If there is any reason which forces your application to use Rails 2.3.2 and you have no other choice, then you may edit your Unicorn gemspec and remove the Rack dependency. ref: http://mid.gmane.org/20091014221552.GA30624@dcvr.yhbt.net Note: the workaround described in the article above only made the issue more subtle and we didn't notice it immediately. * WONTFIX: code reloading and restarts with Sinatra 0.3.x (and likely older versions) apps are broken. The workaround is to force production mode to disable code reloading as well as disabling "run" in your Sinatra application: set :env, :production set :run, false Since this is no longer an issue with Sinatra 0.9.x apps, this will not be fixed on our end. Since Unicorn is itself the application launcher, the at_exit handler used in old Sinatra always caused Mongrel to be launched whenever a Unicorn worker was about to exit. Also remember we're capable of replacing the running binary without dropping any connections regardless of framework :) unicorn-4.7.0/Documentation/0000755000004100000410000000000012236653132016043 5ustar www-datawww-dataunicorn-4.7.0/Documentation/GNUmakefile0000644000004100000410000000111512236653132020113 0ustar www-datawww-dataall:: PANDOC = pandoc PANDOC_OPTS = -f markdown --email-obfuscation=none pandoc = $(PANDOC) $(PANDOC_OPTS) pandoc_html = $(pandoc) --toc -t html --no-wrap man1 := $(addsuffix .1,unicorn unicorn_rails) html1 := $(addsuffix .html,$(man1)) all:: html man html: $(html1) man: $(man1) install-html: html mkdir -p ../doc/man1 install -m 644 $(html1) ../doc/man1 install-man: man mkdir -p ../man/man1 install -m 644 $(man1) ../man/man1 %.1: %.1.txt $(pandoc) -s -t man < $< > $@+ && mv $@+ $@ %.1.html: %.1.txt $(pandoc_html) < $< > $@+ && mv $@+ $@ clean:: $(RM) $(man1) $(html1) unicorn-4.7.0/Documentation/unicorn_rails.1.txt0000644000004100000410000001572512236653132021624 0ustar www-datawww-data% UNICORN_RAILS(1) Unicorn User Manual % The Unicorn Community % September 17, 2009
# NAME unicorn_rails - a script/server-like command to launch the Unicorn HTTP server # SYNOPSIS unicorn_rails [-c CONFIG_FILE] [-E RAILS_ENV] [-D] [RACKUP_FILE] # DESCRIPTION A rackup(1)-like command to launch Rails applications using Unicorn. It is expected to be started in your Rails application root (RAILS_ROOT), but the "working_directory" directive may be used in the CONFIG_FILE. It is designed to help Rails 1.x and 2.y users transition to Rack, but it is NOT needed for Rails 3 applications. Rails 3 users are encouraged to use unicorn(1) instead of unicorn_rails(1). Users of Rails 1.x/2.y may also use unicorn(1) instead of unicorn_rails(1). The outward interface resembles rackup(1); the internals and default middleware loading are designed like the `script/server` command distributed with Rails. While Unicorn takes a myriad of command-line options for compatibility with ruby(1) and rackup(1), it is recommended to stick to the few command-line options specified in the SYNOPSIS and use the CONFIG_FILE as much as possible. # UNICORN OPTIONS -c, \--config-file CONFIG_FILE : Path to the Unicorn-specific config file. The config file is implemented as a Ruby DSL, so Ruby code may be executed. See the RDoc/ri for the *Unicorn::Configurator* class for the full list of directives available from the DSL. Using an absolute path for CONFIG_FILE is recommended as it makes multiple instances of Unicorn easily distinguishable when viewing ps(1) output. -D, \--daemonize : Run daemonized in the background. The process is detached from the controlling terminal and stdin is redirected to "/dev/null". Unlike many common UNIX daemons, we do not chdir to \"/\" upon daemonization to allow more control over the startup/upgrade process. Unless specified in the CONFIG_FILE, stderr and stdout will also be redirected to "/dev/null". Daemonization will _skip_ loading of the *Rails::Rack::LogTailer* middleware under Rails \>\= 2.3.x.
By default, unicorn\_rails(1) will create a PID file in _\"RAILS\_ROOT/tmp/pids/unicorn.pid\"_. You may override this by specifying the "pid" directive in the Unicorn config file. -E, \--env RAILS_ENV : Run under the given RAILS_ENV. This sets the RAILS_ENV environment variable. Acceptable values are exactly those you expect in your Rails application, typically "development" or "production". -l, \--listen ADDRESS : Listens on a given ADDRESS. ADDRESS may be in the form of HOST:PORT or PATH; HOST:PORT is taken to mean a TCP socket and PATH is meant to be a path to a UNIX domain socket. Defaults to "0.0.0.0:8080" (all addresses on TCP port 8080). For production deployments, specifying the "listen" directive in CONFIG_FILE is recommended as it allows fine-tuning of socket options. # RACKUP COMPATIBILITY OPTIONS -o, \--host HOST : Listen on a TCP socket belonging to HOST, default is "0.0.0.0" (all addresses). If specified multiple times on the command-line, only the last-specified value takes effect. This option only exists for compatibility with the rackup(1) command; use of the "-l"/"\--listen" switch is recommended instead. -p, \--port PORT : Listen on the specified TCP PORT, default is 8080. If specified multiple times on the command-line, only the last-specified value takes effect. This option only exists for compatibility with the rackup(1) command; use of the "-l"/"\--listen" switch is recommended instead. \--path PATH : Mounts the Rails application at the given PATH (instead of "/"). This is equivalent to setting the RAILS_RELATIVE_URL_ROOT environment variable. This is only supported under Rails 2.3 or later at the moment. # RUBY OPTIONS -e, \--eval LINE : Evaluate a LINE of Ruby code. This evaluation happens immediately as the command-line is being parsed. -d, \--debug : Turn on debug mode; the $DEBUG variable is set to true. For Rails \>\= 2.3.x, this loads the *Rails::Rack::Debugger* middleware.
-w, \--warn : Turn on verbose warnings; the $VERBOSE variable is set to true. -I, \--include PATH : specify $LOAD_PATH. PATH will be prepended to $LOAD_PATH. The \':\' character may be used to delimit multiple directories. This directive may be used more than once. Modifications to $LOAD_PATH take place immediately and in the order they were specified on the command-line. -r, \--require LIBRARY : require a specified LIBRARY before executing the application. The \"require\" statements are executed immediately and in the order they were specified on the command-line. # RACKUP FILE This defaults to \"config.ru\" in RAILS_ROOT. It should be the same file used by rackup(1) and other Rack launchers; it uses the *Rack::Builder* DSL. Unlike many other Rack applications, RACKUP_FILE is completely _optional_ for Rails, but may be used to disable some of the default middleware for performance. Embedded command-line options are mostly parsed for compatibility with rackup(1) but strongly discouraged. # ENVIRONMENT VARIABLES The RAILS_ENV variable is set by the aforementioned \-E switch. The RAILS_RELATIVE_URL_ROOT is set by the aforementioned \--path switch. Either of these variables may also be set in the shell or the Unicorn CONFIG_FILE. All application or library-specific environment variables (e.g. TMPDIR, RAILS_ASSET_ID) may always be set in the Unicorn CONFIG_FILE in addition to the spawning shell. When transparently upgrading Unicorn, all environment variables set in the old master process are inherited by the new master process. Unicorn only uses (and will overwrite) the UNICORN_FD environment variable internally when doing transparent upgrades. # SIGNALS The following UNIX signals may be sent to the master process: * HUP - reload config file, app, and gracefully restart all workers * INT/TERM - quick shutdown, kills all workers immediately * QUIT - graceful shutdown, waits for workers to finish their current request before finishing.
* USR1 - reopen all logs owned by the master and all workers. See Unicorn::Util.reopen_logs for what is considered a log. * USR2 - reexecute the running binary. A separate QUIT should be sent to the original process once the child is verified to be up and running. * WINCH - gracefully stops workers but keeps the master running. This will only work for daemonized processes. * TTIN - increment the number of worker processes by one * TTOU - decrement the number of worker processes by one See the [SIGNALS][4] document for a full description of all signals used by Unicorn. # SEE ALSO * unicorn(1) * *Rack::Builder* ri/RDoc * *Unicorn::Configurator* ri/RDoc * [Unicorn RDoc][1] * [Rack RDoc][2] * [Rackup HowTo][3] [1]: http://unicorn.bogomips.org/ [2]: http://rack.rubyforge.org/doc/ [3]: http://wiki.github.com/rack/rack/tutorial-rackup-howto [4]: http://unicorn.bogomips.org/SIGNALS.html unicorn-4.7.0/Documentation/unicorn.1.txt0000644000004100000410000001544512236653132020431 0ustar www-datawww-data% UNICORN(1) Unicorn User Manual % The Unicorn Community % September 15, 2009 # NAME unicorn - a rackup-like command to launch the Unicorn HTTP server # SYNOPSIS unicorn [-c CONFIG_FILE] [-E RACK_ENV] [-D] [RACKUP_FILE] # DESCRIPTION A rackup(1)-like command to launch Rack applications using Unicorn. It is expected to be started in your application root (APP_ROOT), but the "working_directory" directive may be used in the CONFIG_FILE. While unicorn takes a myriad of command-line options for compatibility with ruby(1) and rackup(1), it is recommended to stick to the few command-line options specified in the SYNOPSIS and use the CONFIG_FILE as much as possible. # RACKUP FILE This defaults to \"config.ru\" in APP_ROOT. It should be the same file used by rackup(1) and other Rack launchers; it uses the *Rack::Builder* DSL. Embedded command-line options are mostly parsed for compatibility with rackup(1) but strongly discouraged.
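For reference, a minimal Rack application of the kind a RACKUP_FILE holds. This is an illustrative sketch: in a real config.ru you would end with `run app`, while here the app is called directly with a hand-built env hash to show the Rack contract:

```ruby
# A Rack application is any object responding to #call(env) and
# returning [status, headers, body] where body responds to #each.
app = lambda do |env|
  text = "hello from #{env['PATH_INFO']}\n"
  [200,
   { 'Content-Type' => 'text/plain', 'Content-Length' => text.bytesize.to_s },
   [text]]
end

# In config.ru: run app
# Direct invocation with a (partial) env hash for illustration:
status, headers, body = app.call('REQUEST_METHOD' => 'GET', 'PATH_INFO' => '/')
```

A full env hash from unicorn carries many more keys (rack.input, SERVER_NAME, etc.); only the ones the lambda reads are shown here.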
# UNICORN OPTIONS -c, \--config-file CONFIG_FILE : Path to the Unicorn-specific config file. The config file is implemented as a Ruby DSL, so Ruby code may be executed. See the RDoc/ri for the *Unicorn::Configurator* class for the full list of directives available from the DSL. Using an absolute path for CONFIG_FILE is recommended as it makes multiple instances of Unicorn easily distinguishable when viewing ps(1) output. -D, \--daemonize : Run daemonized in the background. The process is detached from the controlling terminal and stdin is redirected to "/dev/null". Unlike many common UNIX daemons, we do not chdir to \"/\" upon daemonization to allow more control over the startup/upgrade process. Unless specified in the CONFIG_FILE, stderr and stdout will also be redirected to "/dev/null". -E, \--env RACK_ENV : Run under the given RACK_ENV. See the RACK ENVIRONMENT section for more details. -l, \--listen ADDRESS : Listens on a given ADDRESS. ADDRESS may be in the form of HOST:PORT or PATH; HOST:PORT is taken to mean a TCP socket and PATH is meant to be a path to a UNIX domain socket. Defaults to "0.0.0.0:8080" (all addresses on TCP port 8080). For production deployments, specifying the "listen" directive in CONFIG_FILE is recommended as it allows fine-tuning of socket options. -N, \--no-default-middleware : Disables loading middleware implied by RACK_ENV. This bypasses the configuration documented in the RACK ENVIRONMENT section, but still allows RACK_ENV to be used for application/framework-specific purposes. # RACKUP COMPATIBILITY OPTIONS -o, \--host HOST : Listen on a TCP socket belonging to HOST, default is "0.0.0.0" (all addresses). If specified multiple times on the command-line, only the last-specified value takes effect. This option only exists for compatibility with the rackup(1) command; use of the "-l"/"\--listen" switch is recommended instead. -p, \--port PORT : Listen on the specified TCP PORT, default is 8080.
If specified multiple times on the command-line, only the last-specified value takes effect. This option only exists for compatibility with the rackup(1) command; use of the "-l"/"\--listen" switch is recommended instead. -s, \--server SERVER : No-op; this exists only for compatibility with rackup(1). # RUBY OPTIONS -e, \--eval LINE : Evaluate a LINE of Ruby code. This evaluation happens immediately as the command-line is being parsed. -d, \--debug : Turn on debug mode; the $DEBUG variable is set to true. -w, \--warn : Turn on verbose warnings; the $VERBOSE variable is set to true. -I, \--include PATH : specify $LOAD_PATH. PATH will be prepended to $LOAD_PATH. The \':\' character may be used to delimit multiple directories. This directive may be used more than once. Modifications to $LOAD_PATH take place immediately and in the order they were specified on the command-line. -r, \--require LIBRARY : require a specified LIBRARY before executing the application. The \"require\" statements are executed immediately and in the order they were specified on the command-line. # SIGNALS The following UNIX signals may be sent to the master process: * HUP - reload config file, app, and gracefully restart all workers * INT/TERM - quick shutdown, kills all workers immediately * QUIT - graceful shutdown, waits for workers to finish their current request before finishing. * USR1 - reopen all logs owned by the master and all workers. See Unicorn::Util.reopen_logs for what is considered a log. * USR2 - reexecute the running binary. A separate QUIT should be sent to the original process once the child is verified to be up and running. * WINCH - gracefully stops workers but keeps the master running. This will only work for daemonized processes. * TTIN - increment the number of worker processes by one * TTOU - decrement the number of worker processes by one See the [SIGNALS][4] document for a full description of all signals used by Unicorn.
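The TTIN/TTOU accounting above can be sketched in a few lines. This is a toy illustration, not unicorn's implementation — the real master defers signal handling through a self-pipe; here we only collect signal names in an Array, which is safe to mutate inside a trap handler:

```ruby
# Trap TTIN/TTOU and adjust a worker count the way the SIGNALS list
# describes: TTIN asks for one more worker, TTOU for one fewer.
pending = []
%w[TTIN TTOU].each { |sig| Signal.trap(sig) { pending << sig } }

Process.kill("TTIN", Process.pid) # one more worker, please
Process.kill("TTOU", Process.pid) # ...and then one fewer
100.times { pending.size >= 2 ? break : sleep(0.01) } # let MRI run handlers

workers = 2
pending.each do |sig|
  workers += 1 if sig == "TTIN"
  workers -= 1 if sig == "TTOU" && workers > 0
end
```

Whichever order the two signals are delivered in, the count ends back at its starting value, which is why operators can use the pair to experiment safely.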
# RACK ENVIRONMENT Accepted values of RACK_ENV and the middleware they automatically load (outside of RACKUP_FILE) are exactly as in rackup(1): * development - loads Rack::CommonLogger, Rack::ShowExceptions, and Rack::Lint middleware * deployment - loads Rack::CommonLogger middleware * none - loads no middleware at all, relying entirely on RACKUP_FILE. All unrecognized values for RACK_ENV are assumed to be "none". Production deployments are strongly encouraged to use "deployment" or "none" for maximum performance. As of Unicorn 0.94.0, RACK_ENV is exported as a process-wide environment variable as well. While not currently part of the Rack specification as of Rack 1.0.1, this has become a de facto standard in the Rack world. Note the Rack::ContentLength and Rack::Chunked middlewares are also loaded by "deployment" and "development", but by no other values of RACK_ENV. If needed, they must be individually specified in the RACKUP_FILE; some frameworks do not require them. # ENVIRONMENT VARIABLES The RACK_ENV variable is set by the aforementioned \-E switch. All application or library-specific environment variables (e.g. TMPDIR) may always be set in the Unicorn CONFIG_FILE in addition to the spawning shell. When transparently upgrading Unicorn, all environment variables set in the old master process are inherited by the new master process. Unicorn only uses (and will overwrite) the UNICORN_FD environment variable internally when doing transparent upgrades.
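The RACK ENVIRONMENT table above can be restated as a simple lookup. The helper name and the exact wrapping order are ours (unicorn wires this up internally at startup); the middleware names are the real Rack ones:

```ruby
# Maps a RACK_ENV value to the default middleware the man page lists.
# "none" and every unrecognized value load nothing.
def default_middleware(rack_env)
  case rack_env
  when "development"
    %w[Rack::ContentLength Rack::Chunked Rack::CommonLogger
       Rack::ShowExceptions Rack::Lint]
  when "deployment"
    %w[Rack::ContentLength Rack::Chunked Rack::CommonLogger]
  else
    [] # relies entirely on RACKUP_FILE
  end
end
```

Passing -N/--no-default-middleware is equivalent to forcing the empty branch while leaving RACK_ENV itself untouched for the application.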
# SEE ALSO * unicorn_rails(1) * *Rack::Builder* ri/RDoc * *Unicorn::Configurator* ri/RDoc * [Unicorn RDoc][1] * [Rack RDoc][2] * [Rackup HowTo][3] [1]: http://unicorn.bogomips.org/ [2]: http://rack.rubyforge.org/doc/ [3]: http://wiki.github.com/rack/rack/tutorial-rackup-howto [4]: http://unicorn.bogomips.org/SIGNALS.html unicorn-4.7.0/Documentation/.gitignore0000644000004100000410000000003012236653132020024 0ustar www-datawww-data*.1 *.5 *.7 *.gz *.html unicorn-4.7.0/TODO0000644000004100000410000000013612236653132013722 0ustar www-datawww-data* Documentation improvements * improve test suite * Rack 2.x support (when Rack 2.x exists) unicorn-4.7.0/GNUmakefile0000644000004100000410000002020512236653132015303 0ustar www-datawww-data# use GNU Make to run tests in parallel, and without depending on RubyGems all:: test RLFLAGS = -G2 MRI = ruby RUBY = ruby RAKE = rake RAGEL = ragel RSYNC = rsync GIT-VERSION-FILE: .FORCE-GIT-VERSION-FILE @./GIT-VERSION-GEN -include GIT-VERSION-FILE -include local.mk ruby_bin := $(shell which $(RUBY)) ifeq ($(DLEXT),) # "so" for Linux DLEXT := $(shell $(RUBY) -rrbconfig -e 'puts RbConfig::CONFIG["DLEXT"]') endif ifeq ($(RUBY_VERSION),) RUBY_VERSION := $(shell $(RUBY) -e 'puts RUBY_VERSION') endif RUBY_ENGINE := $(shell $(RUBY) -e 'puts((RUBY_ENGINE rescue "ruby"))') isolate_libs := tmp/isolate/$(RUBY_ENGINE)-$(RUBY_VERSION).mk $(isolate_libs): script/isolate_for_tests @$(RUBY) script/isolate_for_tests -include $(isolate_libs) MYLIBS = $(RUBYLIB):$(ISOLATE_LIBS) # dunno how to implement this as concisely in Ruby, and hell, I love awk awk_slow := awk '/def test_/{print FILENAME"--"$$2".n"}' 2>/dev/null slow_tests := test/unit/test_server.rb test/exec/test_exec.rb \ test/unit/test_signals.rb test/unit/test_upload.rb log_suffix = .$(RUBY_ENGINE).$(RUBY_VERSION).log T := $(filter-out $(slow_tests), $(wildcard test/*/test*.rb)) T_n := $(shell $(awk_slow) $(slow_tests)) T_log := $(subst .rb,$(log_suffix),$(T)) T_n_log := $(subst 
.n,$(log_suffix),$(T_n)) test_prefix = $(CURDIR)/test/$(RUBY_ENGINE)-$(RUBY_VERSION) ext := ext/unicorn_http c_files := $(ext)/unicorn_http.c $(ext)/httpdate.c $(wildcard $(ext)/*.h) rl_files := $(wildcard $(ext)/*.rl) base_bins := unicorn unicorn_rails bins := $(addprefix bin/, $(base_bins)) man1_rdoc := $(addsuffix _1, $(base_bins)) man1_bins := $(addsuffix .1, $(base_bins)) man1_paths := $(addprefix man/man1/, $(man1_bins)) rb_files := $(bins) $(shell find lib ext -type f -name '*.rb') inst_deps := $(c_files) $(rb_files) GNUmakefile test/test_helper.rb ragel: $(ext)/unicorn_http.c $(ext)/unicorn_http.c: $(rl_files) cd $(@D) && $(RAGEL) unicorn_http.rl -C $(RLFLAGS) -o $(@F) $(ext)/Makefile: $(ext)/extconf.rb $(c_files) cd $(@D) && $(RUBY) extconf.rb $(ext)/unicorn_http.$(DLEXT): $(ext)/Makefile $(MAKE) -C $(@D) lib/unicorn_http.$(DLEXT): $(ext)/unicorn_http.$(DLEXT) @mkdir -p lib install -m644 $< $@ http: lib/unicorn_http.$(DLEXT) test-install: $(test_prefix)/.stamp $(test_prefix)/.stamp: $(inst_deps) mkdir -p $(test_prefix)/.ccache tar cf - $(inst_deps) GIT-VERSION-GEN | \ (cd $(test_prefix) && tar xf -) $(MAKE) -C $(test_prefix) clean $(MAKE) -C $(test_prefix) http shebang RUBY="$(RUBY)" > $@ # this is only intended to be run within $(test_prefix) shebang: $(bins) $(MRI) -i -p -e '$$_.gsub!(%r{^#!.*$$},"#!$(ruby_bin)")' $^ t_log := $(T_log) $(T_n_log) test: $(T) $(T_n) @cat $(t_log) | $(MRI) test/aggregate.rb @$(RM) $(t_log) test-exec: $(wildcard test/exec/test_*.rb) test-unit: $(wildcard test/unit/test_*.rb) $(slow_tests): $(test_prefix)/.stamp @$(MAKE) $(shell $(awk_slow) $@) test-integration: $(test_prefix)/.stamp $(MAKE) -C t test-all: test test-integration TEST_OPTS = -v check_test = grep '0 failures, 0 errors' $(t) >/dev/null ifndef V quiet_pre = @echo '* $(arg)$(extra)'; quiet_post = >$(t) 2>&1 && $(check_test) else # we can't rely on -o pipefail outside of bash 3+, # so we use a stamp file to indicate success and # have rm fail if the stamp didn't get 
created stamp = $@$(log_suffix).ok quiet_pre = @echo $(RUBY) $(arg) $(TEST_OPTS); ! test -f $(stamp) && ( quiet_post = && > $(stamp) )2>&1 | tee $(t); \ rm $(stamp) 2>/dev/null && $(check_test) endif # not all systems have setsid(8), we need it because we spam signals # stupidly in some tests... rb_setsid := $(RUBY) -e 'Process.setsid' -e 'exec *ARGV' # TRACER='strace -f -o $(t).strace -s 100000' run_test = $(quiet_pre) \ $(rb_setsid) $(TRACER) $(RUBY) -w $(arg) $(TEST_OPTS) $(quiet_post) || \ (sed "s,^,$(extra): ," >&2 < $(t); exit 1) %.n: arg = $(subst .n,,$(subst --, -n ,$@)) %.n: t = $(subst .n,$(log_suffix),$@) %.n: export PATH := $(test_prefix)/bin:$(PATH) %.n: export RUBYLIB := $(test_prefix):$(test_prefix)/lib:$(MYLIBS) %.n: $(test_prefix)/.stamp $(run_test) $(T): arg = $@ $(T): t = $(subst .rb,$(log_suffix),$@) $(T): export PATH := $(test_prefix)/bin:$(PATH) $(T): export RUBYLIB := $(test_prefix):$(test_prefix)/lib:$(MYLIBS) $(T): $(test_prefix)/.stamp $(run_test) install: $(bins) $(ext)/unicorn_http.c $(prep_setup_rb) $(RM) lib/unicorn_http.$(DLEXT) $(RM) -r .install-tmp mkdir .install-tmp cp -p bin/* .install-tmp $(RUBY) setup.rb all $(RM) $^ mv .install-tmp/* bin/ $(RM) -r .install-tmp $(prep_setup_rb) setup_rb_files := .config InstalledFiles prep_setup_rb := @-$(RM) $(setup_rb_files);$(MAKE) -C $(ext) clean clean: -$(MAKE) -C $(ext) clean -$(MAKE) -C Documentation clean $(RM) $(ext)/Makefile lib/unicorn_http.$(DLEXT) $(RM) $(setup_rb_files) $(t_log) $(RM) -r $(test_prefix) man man html: $(MAKE) -C Documentation install-$@ pkg_extra := GIT-VERSION-FILE lib/unicorn/version.rb ChangeLog LATEST NEWS \ $(ext)/unicorn_http.c $(man1_paths) ChangeLog: GIT-VERSION-FILE .wrongdoc.yml wrongdoc prepare .manifest: ChangeLog $(ext)/unicorn_http.c man (git ls-files && for i in $@ $(pkg_extra); do echo $$i; done) | \ LC_ALL=C sort > $@+ cmp $@+ $@ || mv $@+ $@ $(RM) $@+ doc: .document $(ext)/unicorn_http.c man html .wrongdoc.yml for i in $(man1_rdoc); do echo > $$i; 
done find bin lib -type f -name '*.rbc' -exec rm -f '{}' ';' $(RM) -r doc wrongdoc all install -m644 COPYING doc/COPYING install -m644 $(shell LC_ALL=C grep '^[A-Z]' .document) doc/ install -m644 $(man1_paths) doc/ tar cf - $$(git ls-files examples/) | (cd doc && tar xf -) $(RM) $(man1_rdoc) # publishes docs to http://unicorn.bogomips.org publish_doc: -git set-file-times $(MAKE) doc find doc/images -type f | \ TZ=UTC xargs touch -d '1970-01-01 00:00:02' doc/rdoc.css $(MAKE) doc_gz chmod 644 $$(find doc -type f) $(RSYNC) -av doc/ unicorn.bogomips.org:/srv/unicorn/ git ls-files | xargs touch # Create gzip variants of the same timestamp as the original so nginx # "gzip_static on" can serve the gzipped versions directly. doc_gz: docs = $(shell find doc -type f ! -regex '^.*\.\(gif\|jpg\|png\|gz\)$$') doc_gz: for i in $(docs); do \ gzip --rsyncable -9 < $$i > $$i.gz; touch -r $$i $$i.gz; done ifneq ($(VERSION),) rfproject := mongrel rfpackage := unicorn pkggem := pkg/$(rfpackage)-$(VERSION).gem pkgtgz := pkg/$(rfpackage)-$(VERSION).tgz release_notes := release_notes-$(VERSION) release_changes := release_changes-$(VERSION) release-notes: $(release_notes) release-changes: $(release_changes) $(release_changes): wrongdoc release_changes > $@+ $(VISUAL) $@+ && test -s $@+ && mv $@+ $@ $(release_notes): wrongdoc release_notes > $@+ $(VISUAL) $@+ && test -s $@+ && mv $@+ $@ # ensures we're actually on the tagged $(VERSION), only used for release verify: test x"$(shell umask)" = x0022 git rev-parse --verify refs/tags/v$(VERSION)^{} git diff-index --quiet HEAD^0 test `git rev-parse --verify HEAD^0` = \ `git rev-parse --verify refs/tags/v$(VERSION)^{}` fix-perms: git ls-tree -r HEAD | awk '/^100644 / {print $$NF}' | xargs chmod 644 git ls-tree -r HEAD | awk '/^100755 / {print $$NF}' | xargs chmod 755 gem: $(pkggem) install-gem: $(pkggem) gem install $(CURDIR)/$< $(pkggem): .manifest fix-perms gem build $(rfpackage).gemspec mkdir -p pkg mv $(@F) $@ $(pkgtgz): distdir = $(basename 
$@) $(pkgtgz): HEAD = v$(VERSION) $(pkgtgz): .manifest fix-perms @test -n "$(distdir)" $(RM) -r $(distdir) mkdir -p $(distdir) tar cf - $$(cat .manifest) | (cd $(distdir) && tar xf -) cd pkg && tar cf - $(basename $(@F)) | gzip -9 > $(@F)+ mv $@+ $@ package: $(pkgtgz) $(pkggem) release: verify package $(release_notes) $(release_changes) # make tgz release on RubyForge rubyforge add_release -f -n $(release_notes) -a $(release_changes) \ $(rfproject) $(rfpackage) $(VERSION) $(pkgtgz) # push gem to Gemcutter gem push $(pkggem) # in case of gem downloads from RubyForge releases page -rubyforge add_file \ $(rfproject) $(rfpackage) $(VERSION) $(pkggem) $(RAKE) fm_update VERSION=$(VERSION) else gem install-gem: GIT-VERSION-FILE $(MAKE) $@ VERSION=$(GIT_VERSION) endif .PHONY: .FORCE-GIT-VERSION-FILE doc $(T) $(slow_tests) man .PHONY: test-install unicorn-4.7.0/Application_Timeouts0000644000004100000410000000571412236653132017320 0ustar www-datawww-data= Application Timeouts This article focuses on _application_ setup for Rack applications, but can be expanded to all applications that connect to external resources and expect short response times. This article is not specific to \Unicorn, but exists to discourage the overuse of the built-in {timeout}[link:Unicorn/Configurator.html#method-i-timeout] directive in \Unicorn. == ALL External Resources Are Considered Unreliable Network reliability can _never_ be guaranteed. Network failures cannot be detected reliably by the client (Rack application) in a reasonable timeframe, not even on a LAN. Thus, application authors must configure timeouts when interacting with external resources. Most database adapters allow configurable timeouts. Net::HTTP and Net::SMTP in the Ruby standard library allow configurable timeouts. 
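For example, the standard-library clients expose these knobs directly.
A minimal sketch (the hostnames are placeholders; no connection is
attempted until a request is actually issued):

```ruby
require 'net/http'
require 'net/smtp'

# Hypothetical endpoints used purely for illustration
http = Net::HTTP.new('api.example.com', 443)
http.open_timeout = 2   # seconds to wait for the TCP connection
http.read_timeout = 5   # seconds to wait for each read to complete

smtp = Net::SMTP.new('mail.example.com')
smtp.open_timeout = 2
smtp.read_timeout = 5
```

Without these, a hung remote host can stall a worker indefinitely,
which is exactly the situation this article warns against.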
Even for things as fast as {memcached}[http://memcached.org/],
{dalli}[http://rubygems.org/gems/dalli],
{memcached}[http://rubygems.org/gems/memcached] and
{memcache-client}[http://rubygems.org/gems/memcache-client] RubyGems all
offer configurable timeouts.

Consult the relevant documentation for the libraries you use on how to
configure these timeouts.

== Rolling Your Own Socket Code

Use non-blocking I/O and IO.select with a timeout to wait on sockets.

== Timeout module in the Ruby standard library

Ruby offers a Timeout module in its standard library.  It has several
caveats and is not always reliable:

* /Some/ Ruby C extensions are not interrupted/timed-out gracefully by
  this module (report these bugs to extension authors, please) but
  pure-Ruby components should be.

* Long-running tasks may run inside `ensure' clauses after timeout
  fires, causing the timeout to be ineffective.

The Timeout module is a second-to-last-resort solution; timeouts using
IO.select (or similar) are more reliable.

If you depend on libraries that do not offer timeouts when connecting to
external resources, kindly ask those library authors to provide
configurable timeouts.

=== A Note About Filesystems

Most operations to regular files on POSIX filesystems are NOT
interruptible.  Thus, the "timeout" module in the Ruby standard library
cannot reliably time out systems with massive amounts of iowait.

If your app relies on the filesystem, ensure all the data your
application works with is small enough to fit in the kernel page cache.
Otherwise increase the amount of physical memory you have to match, or
employ a fast, low-latency storage system (solid state).

Volumes mounted over NFS (and thus a potentially unreliable network)
must be mounted with timeouts and applications must be prepared to
handle network/server failures.

== The Last Line Of Defense

The {timeout}[link:Unicorn/Configurator.html#method-i-timeout] mechanism
in \Unicorn is an extreme solution that should be avoided whenever
possible.
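The IO.select approach described earlier under "Rolling Your Own Socket
Code" can be sketched as follows (a socketpair stands in for a real
connection to an external service; nothing is ever written to it, so a
blocking read would hang forever):

```ruby
require 'socket'

# Connected pair of UNIX sockets; rd would block forever on read
rd, wr = UNIXSocket.pair

timeout = 0.1 # seconds
if IO.select([rd], nil, nil, timeout)
  # the socket became readable within the timeout
  data = rd.read_nonblock(512)
else
  # IO.select returned nil: nothing was readable in time
  data = :timed_out
end
```

Unlike the Timeout module, this cannot be defeated by `ensure' clauses
or uninterruptible C extensions: the wait itself is bounded.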
It will help catch bugs in your application where and when your application forgets to use timeouts, but it is expensive as it kills and respawns a worker process. unicorn-4.7.0/t/0000755000004100000410000000000012236653132013475 5ustar www-datawww-dataunicorn-4.7.0/t/t0010-reap-logging.sh0000755000004100000410000000177512236653132017163 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 9 "reap worker logging messages" t_begin "setup and start" && { unicorn_setup cat >> $unicorn_config < $r_err } t_begin "kill 2nd worker gracefully" && { pid_2=$(curl http://$listen/) kill -QUIT $pid_2 } t_begin "wait for 3rd worker=0 to start " && { test '.' = $(cat $fifo) } t_begin "ensure log of 2nd reap is a INFO" && { grep 'INFO.*reaped.*worker=0' $r_err | grep $pid_2 > $r_err } t_begin "killing succeeds" && { kill $unicorn_pid wait kill -0 $unicorn_pid && false } t_begin "check stderr" && { check_stderr } t_done unicorn-4.7.0/t/t0002-parser-error.sh0000755000004100000410000000400112236653132017215 0ustar www-datawww-data#!/bin/sh . 
./test-lib.sh t_plan 11 "parser error test" t_begin "setup and startup" && { unicorn_setup unicorn -D env.ru -c $unicorn_config unicorn_wait_start } t_begin "send a bad request" && { ( printf 'GET / HTTP/1/1\r\nHost: example.com\r\n\r\n' cat $fifo > $tmp & wait echo ok > $ok ) | socat - TCP:$listen > $fifo test xok = x$(cat $ok) } dbgcat tmp t_begin "response should be a 400" && { grep -F 'HTTP/1.1 400 Bad Request' $tmp } t_begin "send a huge Request URI (REQUEST_PATH > (12 * 1024))" && { rm -f $tmp cat $fifo > $tmp & ( set -e trap 'echo ok > $ok' EXIT printf 'GET /' for i in $(awk $fifo || : test xok = x$(cat $ok) wait } t_begin "response should be a 414 (REQUEST_PATH)" && { grep -F 'HTTP/1.1 414 Request-URI Too Long' $tmp } t_begin "send a huge Request URI (QUERY_STRING > (10 * 1024))" && { rm -f $tmp cat $fifo > $tmp & ( set -e trap 'echo ok > $ok' EXIT printf 'GET /hello-world?a' for i in $(awk $fifo || : test xok = x$(cat $ok) wait } t_begin "response should be a 414 (QUERY_STRING)" && { grep -F 'HTTP/1.1 414 Request-URI Too Long' $tmp } t_begin "send a huge Request URI (FRAGMENT > 1024)" && { rm -f $tmp cat $fifo > $tmp & ( set -e trap 'echo ok > $ok' EXIT printf 'GET /hello-world#a' for i in $(awk $fifo || : test xok = x$(cat $ok) wait } t_begin "response should be a 414 (FRAGMENT)" && { grep -F 'HTTP/1.1 414 Request-URI Too Long' $tmp } t_begin "server stderr should be clean" && check_stderr t_begin "term signal sent" && kill $unicorn_pid t_done unicorn-4.7.0/t/README0000644000004100000410000000241212236653132014354 0ustar www-datawww-data= Unicorn integration test suite These are all integration tests that start the server on random, unused TCP ports or Unix domain sockets. They're all designed to run concurrently with other tests to minimize test time, but tests may be run independently as well. We write our tests in Bourne shell because that's what we're comfortable writing integration tests with. 
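A script in this suite follows a small common skeleton.  The helper
definitions below are illustrative stubs; real scripts get the actual
implementations by sourcing test-lib.sh instead:

```shell
#!/bin/sh
# Stand-ins for helpers normally loaded with ". ./test-lib.sh"
t_plan () { echo "1..$1 # $2"; }
t_begin () { echo "# begin: $1"; }
t_done () { echo "# done"; }

t_plan 2 "example: two trivial checks"

t_begin "first check" && {
	test 1 -eq 1
}

t_begin "second check" && {
	test x"ok" = x"ok"
}

t_done
```

Each t_begin block exits non-zero on failure, so a failing check aborts
the script under "set -e"-style error handling in the real helpers.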
== Requirements * {Ruby 1.8 or 1.9}[http://www.ruby-lang.org/] (duh!) * {GNU make}[http://www.gnu.org/software/make/] * {socat}[http://www.dest-unreach.org/socat/] * {curl}[http://curl.haxx.se/] * standard UNIX shell utilities (Bourne sh, awk, sed, grep, ...) We do not use bashisms or any non-portable, non-POSIX constructs in our shell code. We use the "pipefail" option if available and mainly test with {ksh}[http://kornshell.com/], but occasionally with {dash}[http://gondor.apana.org.au/~herbert/dash/] and {bash}[http://www.gnu.org/software/bash/], too. == Running Tests To run the entire test suite with 8 tests running at once: make -j8 To run one individual test: make t0000-simple-http.sh You may also increase verbosity by setting the "V" variable for GNU make. To disable trapping of stdout/stderr: make V=1 To enable the "set -x" option in shell scripts to trace execution make V=2 unicorn-4.7.0/t/t0001-reload-bad-config.sh0000755000004100000410000000174412236653132020041 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 7 "reload config.ru error with preload_app true" t_begin "setup and start" && { unicorn_setup rtmpfiles ru cat > $ru <<\EOF use Rack::ContentLength use Rack::ContentType, "text/plain" x = { "hello" => "world" } run lambda { |env| [ 200, {}, [ x.inspect << "\n" ] ] } EOF echo 'preload_app true' >> $unicorn_config unicorn -D -c $unicorn_config $ru unicorn_wait_start } t_begin "hit with curl" && { out=$(curl -sSf http://$listen/) test x"$out" = x'{"hello"=>"world"}' } t_begin "introduce syntax error in rackup file" && { echo '...' >> $ru } t_begin "reload signal succeeds" && { kill -HUP $unicorn_pid while ! 
egrep '(done|error) reloading' $r_err >/dev/null do sleep 1 done grep 'error reloading' $r_err >/dev/null > $r_err } t_begin "hit with curl" && { out=$(curl -sSf http://$listen/) test x"$out" = x'{"hello"=>"world"}' } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "check stderr" && { check_stderr } t_done unicorn-4.7.0/t/t0019-max_header_len.sh0000755000004100000410000000155212236653132017545 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 5 "max_header_len setting (only intended for Rainbows!)" t_begin "setup and start" && { unicorn_setup req='GET / HTTP/1.0\r\n\r\n' len=$(printf "$req" | count_bytes) echo Unicorn::HttpParser.max_header_len = $len >> $unicorn_config unicorn -D -c $unicorn_config env.ru unicorn_wait_start } t_begin "minimal request succeeds" && { rm -f $tmp ( cat $fifo > $tmp & printf "$req" wait echo ok > $ok ) | socat - TCP:$listen > $fifo test xok = x$(cat $ok) fgrep "HTTP/1.1 200 OK" $tmp } t_begin "big request fails" && { rm -f $tmp ( cat $fifo > $tmp & printf 'GET /xxxxxx HTTP/1.0\r\n\r\n' wait echo ok > $ok ) | socat - TCP:$listen > $fifo test xok = x$(cat $ok) fgrep "HTTP/1.1 413" $tmp } dbgcat tmp t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "check stderr" && { check_stderr } t_done unicorn-4.7.0/t/broken-app.ru0000644000004100000410000000042412236653132016103 0ustar www-datawww-data# we do not want Rack::Lint or anything to protect us use Rack::ContentLength use Rack::ContentType, "text/plain" map "/" do run lambda { |env| [ 200, {}, [ "OK\n" ] ] } end map "/raise" do run lambda { |env| raise "BAD" } end map "/nil" do run lambda { |env| nil } end unicorn-4.7.0/t/t0021-process_detach.sh0000755000004100000410000000110112236653132017557 0ustar www-datawww-data#!/bin/sh . 
./test-lib.sh t_plan 5 "Process.detach on forked background process works" t_begin "setup and startup" && { t_fifos process_detach unicorn_setup TEST_FIFO=$process_detach \ unicorn -E none -D detach.ru -c $unicorn_config unicorn_wait_start } t_begin "read detached PID with HTTP/1.0" && { detached_pid=$(curl -0 -sSf http://$listen/) t_info "detached_pid=$detached_pid" } t_begin "read background FIFO" && { test xHIHI = x"$(cat $process_detach)" } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "check stderr" && check_stderr t_done unicorn-4.7.0/t/t0017-trust-x-forwarded-true.sh0000755000004100000410000000125112236653132021162 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 5 "trust_x_forwarded=true configuration test" t_begin "setup and start" && { unicorn_setup echo "trust_x_forwarded true " >> $unicorn_config unicorn -D -c $unicorn_config env.ru unicorn_wait_start } t_begin "spoofed request with X-Forwarded-Proto sets 'https'" && { curl -H 'X-Forwarded-Proto: https' http://$listen/ | \ grep -F '"rack.url_scheme"=>"https"' } t_begin "spoofed request with X-Forwarded-SSL sets 'https'" && { curl -H 'X-Forwarded-SSL: on' http://$listen/ | \ grep -F '"rack.url_scheme"=>"https"' } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "check stderr has no errors" && { check_stderr } t_done unicorn-4.7.0/t/fails-rack-lint.ru0000644000004100000410000000041312236653132017023 0ustar www-datawww-data# This rack app returns an invalid status code, which will cause # Rack::Lint to throw an exception if it is present. This # is used to check whether Rack::Lint is in the stack or not. run lambda {|env| return [42, {}, ["Rack::Lint wasn't there if you see this"]]} unicorn-4.7.0/t/t0015-configurator-internals.sh0000755000004100000410000000102312236653132021276 0ustar www-datawww-data#!/bin/sh . 
./test-lib.sh t_plan 4 "configurator internals tests (from FAQ)" t_begin "setup and start" && { unicorn_setup cat >> $unicorn_config <"https"' } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "no errors" && check_stderr t_done unicorn-4.7.0/t/t0013.ru0000644000004100000410000000033312236653132014613 0ustar www-datawww-data#\ -E none use Rack::ContentLength use Rack::ContentType, 'text/plain' app = lambda do |env| case env['rack.input'] when Unicorn::StreamInput [ 200, {}, %w(OK) ] else [ 500, {}, %w(NO) ] end end run app unicorn-4.7.0/t/write-on-close.ru0000644000004100000410000000037312236653132016717 0ustar www-datawww-dataclass WriteOnClose def each(&block) @callback = block end def close @callback.call "7\r\nGoodbye\r\n0\r\n\r\n" end end use Rack::ContentType, "text/plain" run(lambda { |_| [ 200, [%w(Transfer-Encoding chunked)], WriteOnClose.new ] }) unicorn-4.7.0/t/t0300-no-default-middleware.sh0000644000004100000410000000053412236653132020750 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 2 "test the -N / --no-default-middleware option" t_begin "setup and start" && { unicorn_setup unicorn -N -D -c $unicorn_config fails-rack-lint.ru unicorn_wait_start } t_begin "check exit status with Rack::Lint not present" && { test 42 -eq "$(curl -sf -o/dev/null -w'%{http_code}' http://$listen/)" } t_done unicorn-4.7.0/t/env.ru0000644000004100000410000000016612236653132014640 0ustar www-datawww-datause Rack::ContentLength use Rack::ContentType, "text/plain" run lambda { |env| [ 200, {}, [ env.inspect << "\n" ] ] } unicorn-4.7.0/t/oob_gc_path.ru0000644000004100000410000000070412236653132016312 0ustar www-datawww-data#\-E none require 'unicorn/oob_gc' use Rack::ContentLength use Rack::ContentType, "text/plain" use Unicorn::OobGC, 5, /BAD/ $gc_started = false # Mock GC.start def GC.start ObjectSpace.each_object(Kgio::Socket) do |x| x.closed? 
or abort "not closed #{x}" end $gc_started = true end run lambda { |env| if "/gc_reset" == env["PATH_INFO"] && "POST" == env["REQUEST_METHOD"] $gc_started = false end [ 200, {}, [ "#$gc_started\n" ] ] } unicorn-4.7.0/t/t0002-config-conflict.sh0000755000004100000410000000171612236653132017650 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 6 "config variables conflict with preload_app" t_begin "setup and start" && { unicorn_setup rtmpfiles ru rutmp cat > $ru <<\EOF use Rack::ContentLength use Rack::ContentType, "text/plain" config = ru = { "hello" => "world" } run lambda { |env| [ 200, {}, [ ru.inspect << "\n" ] ] } EOF echo 'preload_app true' >> $unicorn_config unicorn -D -c $unicorn_config $ru unicorn_wait_start } t_begin "hit with curl" && { out=$(curl -sSf http://$listen/) test x"$out" = x'{"hello"=>"world"}' } t_begin "modify rackup file" && { sed -e 's/world/WORLD/' < $ru > $rutmp mv $rutmp $ru } t_begin "reload signal succeeds" && { kill -HUP $unicorn_pid while ! egrep '(done|error) reloading' < $r_err >/dev/null do sleep 1 done grep 'done reloading' $r_err >/dev/null } t_begin "hit with curl" && { out=$(curl -sSf http://$listen/) test x"$out" = x'{"hello"=>"WORLD"}' } t_begin "killing succeeds" && { kill $unicorn_pid } t_done unicorn-4.7.0/t/rack-input-tests.ru0000644000004100000410000000112412236653132017260 0ustar www-datawww-data# SHA1 checksum generator require 'digest/sha1' use Rack::ContentLength cap = 16384 app = lambda do |env| /\A100-continue\z/i =~ env['HTTP_EXPECT'] and return [ 100, {}, [] ] digest = Digest::SHA1.new input = env['rack.input'] input.size if env["PATH_INFO"] == "/size_first" input.rewind if env["PATH_INFO"] == "/rewind_first" if buf = input.read(rand(cap)) begin raise "#{buf.size} > #{cap}" if buf.size > cap digest.update(buf) end while input.read(rand(cap), buf) end [ 200, {'Content-Type' => 'text/plain'}, [ digest.hexdigest << "\n" ] ] end run app 
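The checksum app above reads rack.input in randomly-sized chunks and
feeds each chunk to an incremental digest.  A self-contained sketch of
that pattern, with StringIO standing in for env['rack.input']:

```ruby
require 'digest/sha1'
require 'stringio'

data = 'x' * 100_000
input = StringIO.new(data)   # stands in for env['rack.input']
cap = 16_384

digest = Digest::SHA1.new
if buf = input.read(rand(cap) + 1)   # +1 avoids a zero-length read
  begin
    digest.update(buf)
  end while input.read(rand(cap) + 1, buf)
end

chunked = digest.hexdigest
one_shot = Digest::SHA1.hexdigest(data)
# chunked and one_shot agree no matter how the reads were sized
```

Reusing one buffer via read(length, buffer) avoids allocating a fresh
string per chunk, which matters for the large uploads these tests send.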
unicorn-4.7.0/t/t0116-client_body_buffer_size.sh0000755000004100000410000000403612236653132021466 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 12 "client_body_buffer_size settings" t_begin "setup and start" && { unicorn_setup rtmpfiles unicorn_config_tmp one_meg dd if=/dev/zero bs=1M count=1 of=$one_meg cat >> $unicorn_config < $unicorn_config_tmp echo client_body_buffer_size 0 >> $unicorn_config unicorn -D -c $unicorn_config t0116.ru unicorn_wait_start fs_class=Unicorn::TmpIO mem_class=StringIO test x"$(cat $fifo)" = xSTART } t_begin "class for a zero-byte file should be StringIO" && { > $tmp test xStringIO = x"$(curl -T $tmp -sSf http://$listen/input_class)" } t_begin "class for a 1 byte file should be filesystem-backed" && { echo > $tmp test x$fs_class = x"$(curl -T $tmp -sSf http://$listen/tmp_class)" } t_begin "reload with default client_body_buffer_size" && { mv $unicorn_config_tmp $unicorn_config kill -HUP $unicorn_pid test x"$(cat $fifo)" = xSTART } t_begin "class for a 1 byte file should be memory-backed" && { echo > $tmp test x$mem_class = x"$(curl -T $tmp -sSf http://$listen/tmp_class)" } t_begin "class for a random blob file should be filesystem-backed" && { resp="$(curl -T random_blob -sSf http://$listen/tmp_class)" test x$fs_class = x"$resp" } t_begin "one megabyte file should be filesystem-backed" && { resp="$(curl -T $one_meg -sSf http://$listen/tmp_class)" test x$fs_class = x"$resp" } t_begin "reload with a big client_body_buffer_size" && { echo "client_body_buffer_size(1024 * 1024)" >> $unicorn_config kill -HUP $unicorn_pid test x"$(cat $fifo)" = xSTART } t_begin "one megabyte file should be memory-backed" && { resp="$(curl -T $one_meg -sSf http://$listen/tmp_class)" test x$mem_class = x"$resp" } t_begin "one megabyte + 1 byte file should be filesystem-backed" && { echo >> $one_meg resp="$(curl -T $one_meg -sSf http://$listen/tmp_class)" test x$fs_class = x"$resp" } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "check 
stderr" && { check_stderr } t_done unicorn-4.7.0/t/t0013-rewindable-input-false.sh0000755000004100000410000000067412236653132021151 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 4 "rewindable_input toggled to false" t_begin "setup and start" && { unicorn_setup echo rewindable_input false >> $unicorn_config unicorn -D -c $unicorn_config t0013.ru unicorn_wait_start } t_begin "ensure worker is started" && { test xOK = x$(curl -T t0013.ru -H Expect: -vsSf http://$listen/) } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "check stderr" && { check_stderr } t_done unicorn-4.7.0/t/t0011-active-unix-socket.sh0000755000004100000410000000315512236653132020325 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 11 "existing UNIX domain socket check" read_pid_unix () { x=$(printf 'GET / HTTP/1.0\r\n\r\n' | \ socat - UNIX:$unix_socket | \ tail -1) test -n "$x" y="$(expr "$x" : '\([0-9][0-9]*\)')" test x"$x" = x"$y" test -n "$y" echo "$y" } t_begin "setup and start" && { rtmpfiles unix_socket unix_config rm -f $unix_socket unicorn_setup grep -v ^listen < $unicorn_config > $unix_config echo "listen '$unix_socket'" >> $unix_config unicorn -D -c $unix_config pid.ru unicorn_wait_start orig_master_pid=$unicorn_pid } t_begin "get pid of worker" && { worker_pid=$(read_pid_unix) t_info "worker_pid=$worker_pid" } t_begin "fails to start with existing pid file" && { rm -f $ok unicorn -D -c $unix_config pid.ru || echo ok > $ok test x"$(cat $ok)" = xok } t_begin "worker pid unchanged" && { test x"$(read_pid_unix)" = x$worker_pid > $r_err } t_begin "fails to start with listening UNIX domain socket bound" && { rm $ok $pid unicorn -D -c $unix_config pid.ru || echo ok > $ok test x"$(cat $ok)" = xok > $r_err } t_begin "worker pid unchanged (again)" && { test x"$(read_pid_unix)" = x$worker_pid } t_begin "nuking the existing Unicorn succeeds" && { kill -9 $unicorn_pid $worker_pid while kill -0 $unicorn_pid do sleep 1 done check_stderr } t_begin "succeeds in starting with 
leftover UNIX domain socket bound" && { test -S $unix_socket unicorn -D -c $unix_config pid.ru unicorn_wait_start } t_begin "worker pid changed" && { test x"$(read_pid_unix)" != x$worker_pid } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "no errors" && check_stderr t_done unicorn-4.7.0/t/t0018-write-on-close.sh0000755000004100000410000000063612236653132017462 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 4 "write-on-close tests for funky response-bodies" t_begin "setup and start" && { unicorn_setup unicorn -D -c $unicorn_config write-on-close.ru unicorn_wait_start } t_begin "write-on-close response body succeeds" && { test xGoodbye = x"$(curl -sSf http://$listen/)" } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "check stderr" && { check_stderr } t_done unicorn-4.7.0/t/t0600-https-server-basic.sh0000755000004100000410000000152112236653132020327 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 7 "simple HTTPS connection tests" t_begin "setup and start" && { rtmpfiles curl_err unicorn_setup cat > $unicorn_config <> $curl_err >> $tmp dbgcat curl_err } t_begin "check stderr has no errors" && { check_stderr } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "check stderr has no errors" && { check_stderr } t_done unicorn-4.7.0/t/t0100-rack-input-tests.sh0000755000004100000410000000576512236653132020030 0ustar www-datawww-data#!/bin/sh . 
./test-lib.sh test -r random_blob || die "random_blob required, run with 'make $0'" t_plan 10 "rack.input read tests" t_begin "setup and startup" && { rtmpfiles curl_out curl_err unicorn_setup unicorn -E none -D rack-input-tests.ru -c $unicorn_config blob_sha1=$(rsha1 < random_blob) blob_size=$(count_bytes < random_blob) t_info "blob_sha1=$blob_sha1" unicorn_wait_start } t_begin "corked identity request" && { rm -f $tmp ( cat $fifo > $tmp & printf 'PUT / HTTP/1.0\r\n' printf 'Content-Length: %d\r\n\r\n' $blob_size cat random_blob wait echo ok > $ok ) | ( sleep 1 && socat - TCP4:$listen > $fifo ) test 1 -eq $(grep $blob_sha1 $tmp |count_lines) test x"$(cat $ok)" = xok } t_begin "corked chunked request" && { rm -f $tmp ( cat $fifo > $tmp & content-md5-put < random_blob wait echo ok > $ok ) | ( sleep 1 && socat - TCP4:$listen > $fifo ) test 1 -eq $(grep $blob_sha1 $tmp |count_lines) test x"$(cat $ok)" = xok } t_begin "corked identity request (input#size first)" && { rm -f $tmp ( cat $fifo > $tmp & printf 'PUT /size_first HTTP/1.0\r\n' printf 'Content-Length: %d\r\n\r\n' $blob_size cat random_blob wait echo ok > $ok ) | ( sleep 1 && socat - TCP4:$listen > $fifo ) test 1 -eq $(grep $blob_sha1 $tmp |count_lines) test x"$(cat $ok)" = xok } t_begin "corked identity request (input#rewind first)" && { rm -f $tmp ( cat $fifo > $tmp & printf 'PUT /rewind_first HTTP/1.0\r\n' printf 'Content-Length: %d\r\n\r\n' $blob_size cat random_blob wait echo ok > $ok ) | ( sleep 1 && socat - TCP4:$listen > $fifo ) test 1 -eq $(grep $blob_sha1 $tmp |count_lines) test x"$(cat $ok)" = xok } t_begin "corked chunked request (input#size first)" && { rm -f $tmp ( cat $fifo > $tmp & printf 'PUT /size_first HTTP/1.1\r\n' printf 'Host: example.com\r\n' printf 'Transfer-Encoding: chunked\r\n' printf 'Trailer: Content-MD5\r\n' printf '\r\n' content-md5-put --no-headers < random_blob wait echo ok > $ok ) | ( sleep 1 && socat - TCP4:$listen > $fifo ) test 1 -eq $(grep $blob_sha1 $tmp |count_lines) test 
1 -eq $(grep $blob_sha1 $tmp |count_lines) test x"$(cat $ok)" = xok } t_begin "corked chunked request (input#rewind first)" && { rm -f $tmp ( cat $fifo > $tmp & printf 'PUT /rewind_first HTTP/1.1\r\n' printf 'Host: example.com\r\n' printf 'Transfer-Encoding: chunked\r\n' printf 'Trailer: Content-MD5\r\n' printf '\r\n' content-md5-put --no-headers < random_blob wait echo ok > $ok ) | ( sleep 1 && socat - TCP4:$listen > $fifo ) test 1 -eq $(grep $blob_sha1 $tmp |count_lines) test x"$(cat $ok)" = xok } t_begin "regular request" && { curl -sSf -T random_blob http://$listen/ > $curl_out 2> $curl_err test x$blob_sha1 = x$(cat $curl_out) test ! -s $curl_err } t_begin "chunked request" && { curl -sSf -T- < random_blob http://$listen/ > $curl_out 2> $curl_err test x$blob_sha1 = x$(cat $curl_out) test ! -s $curl_err } dbgcat r_err t_begin "shutdown" && { kill $unicorn_pid } t_done unicorn-4.7.0/t/GNUmakefile0000644000004100000410000000353612236653132015556 0ustar www-datawww-data# we can run tests in parallel with GNU make all:: pid := $(shell echo $$PPID) RUBY = ruby RAKE = rake -include ../local.mk ifeq ($(RUBY_VERSION),) RUBY_VERSION := $(shell $(RUBY) -e 'puts RUBY_VERSION') endif ifeq ($(RUBY_VERSION),) $(error unable to detect RUBY_VERSION) endif RUBY_ENGINE := $(shell $(RUBY) -e 'puts((RUBY_ENGINE rescue "ruby"))') export RUBY_ENGINE isolate_libs := ../tmp/isolate/$(RUBY_ENGINE)-$(RUBY_VERSION).mk $(isolate_libs): ../script/isolate_for_tests @cd .. 
&& $(RUBY) script/isolate_for_tests -include $(isolate_libs) MYLIBS := $(RUBYLIB):$(ISOLATE_LIBS) T = $(wildcard t[0-9][0-9][0-9][0-9]-*.sh) all:: $(T) # can't rely on "set -o pipefail" since we don't require bash or ksh93 :< t_pfx = trash/$@-$(RUBY_ENGINE)-$(RUBY_VERSION) TEST_OPTS = # TRACER = strace -f -o $(t_pfx).strace -s 100000 # TRACER = /usr/bin/time -o $(t_pfx).time ifdef V ifeq ($(V),2) TEST_OPTS += --trace else TEST_OPTS += --verbose endif endif random_blob: dd if=/dev/urandom bs=1M count=30 of=$@.$(pid) mv $@.$(pid) $@ ssl-stamp: ./sslgen.sh > $@ $(T): random_blob ssl-stamp dependencies := socat curl deps := $(addprefix .dep+,$(dependencies)) $(deps): dep_bin = $(lastword $(subst +, ,$@)) $(deps): @which $(dep_bin) > $@.$(pid) 2>/dev/null || : @test -s $@.$(pid) || \ { echo >&2 "E '$(dep_bin)' not found in PATH=$(PATH)"; exit 1; } @mv $@.$(pid) $@ dep: $(deps) test_prefix := $(CURDIR)/../test/$(RUBY_ENGINE)-$(RUBY_VERSION) $(test_prefix)/.stamp: $(MAKE) -C .. test-install $(T): export RUBY := $(RUBY) $(T): export RAKE := $(RAKE) $(T): export PATH := $(test_prefix)/bin:$(PATH) $(T): export RUBYLIB := $(test_prefix)/lib:$(MYLIBS) $(T): dep $(test_prefix)/.stamp trash/.gitignore $(TRACER) $(SHELL) $(SH_TEST_OPTS) $@ $(TEST_OPTS) trash/.gitignore: mkdir -p $(@D) echo '*' > $@ clean: $(RM) -r trash/* .PHONY: $(T) clean unicorn-4.7.0/t/t0012-reload-empty-config.sh0000755000004100000410000000345012236653132020447 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 9 "reloading unset config resets defaults" t_begin "setup and start" && { unicorn_setup rtmpfiles unicorn_config_orig before_reload after_reload cat $unicorn_config > $unicorn_config_orig cat >> $unicorn_config < $tmp } t_begin "replace config file with original(-ish)" && { grep -v ^pid < $unicorn_config_orig > $unicorn_config cat >> $unicorn_config </dev/null do sleep 1 done while ! 
grep reaped < $r_err >/dev/null do sleep 1 done grep 'done reloading' $r_err >/dev/null } t_begin "ensure worker is started" && { curl -sSf http://$listen/ > $tmp } t_begin "pid file no longer exists" && { if test -f $pid then die "pid=$pid should not exist" fi } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "check stderr" && { check_stderr } t_begin "ensure reloading restored settings" && { awk < $after_reload -F'|' ' $1 != "before_fork" && $2 != $3 { print $0; exit(1) } ' } t_done unicorn-4.7.0/t/pid.ru0000644000004100000410000000015212236653132014617 0ustar www-datawww-datause Rack::ContentLength use Rack::ContentType, "text/plain" run lambda { |env| [ 200, {}, [ "#$$\n" ] ] } unicorn-4.7.0/t/listener_names.ru0000644000004100000410000000025112236653132017053 0ustar www-datawww-datause Rack::ContentLength use Rack::ContentType, "text/plain" names = Unicorn.listener_names.inspect # rely on preload_app=true run(lambda { |_| [ 200, {}, [ names ] ] }) unicorn-4.7.0/t/t0022-listener_names-preload_app.sh0000644000004100000410000000123712236653132022075 0ustar www-datawww-data#!/bin/sh . 
./test-lib.sh # Raindrops::Middleware depends on Unicorn.listener_names, # ensure we don't break Raindrops::Middleware when preload_app is true t_plan 4 "Unicorn.listener_names available with preload_app=true" t_begin "setup and startup" && { unicorn_setup echo preload_app true >> $unicorn_config unicorn -E none -D listener_names.ru -c $unicorn_config unicorn_wait_start } t_begin "read listener names includes listener" && { resp=$(curl -sSf http://$listen/) ok=false t_info "resp=$resp" case $resp in *\"$listen\"*) ok=true ;; esac $ok } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "check stderr" && check_stderr t_done unicorn-4.7.0/t/t0116.ru0000644000004100000410000000053412236653132014622 0ustar www-datawww-data#\ -E none use Rack::ContentLength use Rack::ContentType, 'text/plain' app = lambda do |env| input = env['rack.input'] case env["PATH_INFO"] when "/tmp_class" body = input.instance_variable_get(:@tmp).class.name when "/input_class" body = input.class.name else return [ 500, {}, [] ] end [ 200, {}, [ body ] ] end run app unicorn-4.7.0/t/t0014-rewindable-input-true.sh0000755000004100000410000000065612236653132021037 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 4 "rewindable_input toggled to true" t_begin "setup and start" && { unicorn_setup echo rewindable_input true >> $unicorn_config unicorn -D -c $unicorn_config t0014.ru unicorn_wait_start } t_begin "ensure worker is started" && { test xOK = x$(curl -T t0014.ru -sSf http://$listen/) } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "check stderr" && { check_stderr } t_done unicorn-4.7.0/t/t0004-heartbeat-timeout.sh0000755000004100000410000000274412236653132020233 0ustar www-datawww-data#!/bin/sh . 
./test-lib.sh t_plan 11 "heartbeat/timeout test" t_begin "setup and startup" && { unicorn_setup echo timeout 3 >> $unicorn_config echo preload_app true >> $unicorn_config unicorn -D heartbeat-timeout.ru -c $unicorn_config unicorn_wait_start } t_begin "read worker PID" && { worker_pid=$(curl -sSf http://$listen/) t_info "worker_pid=$worker_pid" } t_begin "sleep for a bit, ensure worker PID does not change" && { sleep 4 test $(curl -sSf http://$listen/) -eq $worker_pid } t_begin "block the worker process to force it to die" && { rm $ok t0=$(unix_time) err="$(curl -sSf http://$listen/block-forever 2>&1 || > $ok)" t1=$(unix_time) elapsed=$(($t1 - $t0)) t_info "elapsed=$elapsed err=$err" test x"$err" != x"Should never get here" test x"$err" != x"$worker_pid" } t_begin "ensure worker was killed" && { test -e $ok test 1 -eq $(grep timeout $r_err | grep killing | count_lines) } t_begin "ensure timeout took at least 3 seconds" && { test $elapsed -ge 3 } t_begin "we get a fresh new worker process" && { new_worker_pid=$(curl -sSf http://$listen/) test $new_worker_pid -ne $worker_pid } t_begin "truncate the server error log" && { > $r_err } t_begin "SIGSTOP and SIGCONT on unicorn master does not kill worker" && { kill -STOP $unicorn_pid sleep 4 kill -CONT $unicorn_pid sleep 2 test $new_worker_pid -eq $(curl -sSf http://$listen/) } t_begin "stop server" && { kill -QUIT $unicorn_pid } t_begin "check stderr" && check_stderr dbgcat r_err t_done unicorn-4.7.0/t/detach.ru0000644000004100000410000000040012236653132015267 0ustar www-datawww-datause Rack::ContentType, "text/plain" fifo_path = ENV["TEST_FIFO"] or abort "TEST_FIFO not set" run lambda { |env| pid = fork do File.open(fifo_path, "wb") do |fp| fp.write "HIHI" end end Process.detach(pid) [ 200, {}, [ pid.to_s ] ] } unicorn-4.7.0/t/hijack.ru0000644000004100000410000000174612236653132015306 0ustar www-datawww-datause Rack::Lint use Rack::ContentLength use Rack::ContentType, "text/plain" class DieIfUsed def each abort "body.each 
called after response hijack\n" end def close abort "body.close called after response hijack\n" end end run lambda { |env| case env["PATH_INFO"] when "/hijack_req" if env["rack.hijack?"] io = env["rack.hijack"].call if io.respond_to?(:read_nonblock) && env["rack.hijack_io"].respond_to?(:read_nonblock) # exercise both, since we Rack::Lint may use different objects env["rack.hijack_io"].write("HTTP/1.0 200 OK\r\n\r\n") io.write("request.hijacked") io.close return [ 500, {}, DieIfUsed.new ] end end [ 500, {}, [ "hijack BAD\n" ] ] when "/hijack_res" r = "response.hijacked" [ 200, { "Content-Length" => r.bytesize.to_s, "rack.hijack" => proc do |io| io.write(r) io.close end }, DieIfUsed.new ] end } unicorn-4.7.0/t/sslgen.sh0000755000004100000410000000246512236653132015336 0ustar www-datawww-data#!/bin/sh set -e lock=$0.lock while ! mkdir $lock 2>/dev/null do echo >&2 "PID=$$ waiting for $lock" sleep 1 done pid=$$ trap 'if test $$ -eq $pid; then rmdir $lock; fi' EXIT certinfo() { echo US echo Hell echo A Very Special Place echo Monkeys echo Poo-Flingers echo 127.0.0.1 echo kgio@bogomips.org } certinfo2() { certinfo echo echo } ca_certinfo () { echo US echo Hell echo An Even More Special Place echo Deranged Monkeys echo Poo-Hurlers echo 127.6.6.6 echo unicorn@bogomips.org } openssl genrsa -out ca.key 1024 ca_certinfo | openssl req -new -x509 -days 666 -key ca.key -out ca.crt openssl genrsa -out bad-ca.key 1024 ca_certinfo | openssl req -new -x509 -days 666 -key bad-ca.key -out bad-ca.crt openssl genrsa -out server.key 1024 certinfo2 | openssl req -new -key server.key -out server.csr openssl x509 -req -days 666 \ -in server.csr -CA ca.crt -CAkey ca.key -set_serial 1 -out server.crt n=2 mk_client_cert () { CLIENT=$1 openssl genrsa -out $CLIENT.key 1024 certinfo2 | openssl req -new -key $CLIENT.key -out $CLIENT.csr openssl x509 -req -days 666 \ -in $CLIENT.csr -CA $CA.crt -CAkey $CA.key -set_serial $n \ -out $CLIENT.crt rm -f $CLIENT.csr n=$(($n + 1)) } CA=ca mk_client_cert 
client1 mk_client_cert client2 CA=bad-ca mk_client_cert bad-client rm -f server.csr echo OK unicorn-4.7.0/t/t0004-working_directory_broken.sh0000755000004100000410000000072512236653132021711 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 3 "config.ru is missing inside alt working_directory" t_begin "setup" && { unicorn_setup rtmpfiles unicorn_config_tmp ok rm -rf $t_pfx.app mkdir $t_pfx.app # the whole point of this exercise echo "working_directory '$t_pfx.app'" >> $unicorn_config_tmp } t_begin "fails to start up w/o config.ru" && { unicorn -c $unicorn_config_tmp || echo ok > $ok } t_begin "fallback code was run" && { test x"$(cat $ok)" = xok } t_done unicorn-4.7.0/t/test-lib.sh0000644000004100000410000000472612236653132015565 0ustar www-datawww-data#!/bin/sh # Copyright (c) 2009 Rainbows! hackers # Copyright (c) 2010 Unicorn hackers . ./my-tap-lib.sh set +u # sometimes we rely on http_proxy to avoid wasting bandwidth with Isolate # and multiple Ruby versions NO_PROXY=${UNICORN_TEST_ADDR-127.0.0.1} export NO_PROXY set -e RUBY="${RUBY-ruby}" RUBY_VERSION=${RUBY_VERSION-$($RUBY -e 'puts RUBY_VERSION')} RUBY_ENGINE=${RUBY_ENGINE-$($RUBY -e 'puts((RUBY_ENGINE rescue "ruby"))')} t_pfx=$PWD/trash/$T-$RUBY_ENGINE-$RUBY_VERSION set -u PATH=$PWD/bin:$PATH export PATH test -x $PWD/bin/unused_listen || die "must be run in 't' directory" wait_for_pid () { path="$1" nr=30 while ! 
test -s "$path" && test $nr -gt 0 do nr=$(($nr - 1)) sleep 1 done } # "unix_time" is not in POSIX, but in GNU, and FreeBSD 9.0 (possibly earlier) unix_time () { $RUBY -e 'puts Time.now.to_i' } # "wc -l" outputs leading whitespace on *BSDs, filter it out for portability count_lines () { wc -l | tr -d '[:space:]' } # "wc -c" outputs leading whitespace on *BSDs, filter it out for portability count_bytes () { wc -c | tr -d '[:space:]' } # given a list of variable names, create temporary files and assign # the pathnames to those variables rtmpfiles () { for id in "$@" do name=$id case $name in *fifo) _tmp=$t_pfx.$id eval "$id=$_tmp" rm -f $_tmp mkfifo $_tmp T_RM_LIST="$T_RM_LIST $_tmp" ;; *socket) _tmp="$(mktemp -t $id.$$.XXXXXXXX)" if test $(printf "$_tmp" |count_bytes) -gt 108 then echo >&2 "$_tmp too long, tests may fail" echo >&2 "Try to set TMPDIR to a shorter path" fi eval "$id=$_tmp" rm -f $_tmp T_RM_LIST="$T_RM_LIST $_tmp" ;; *) _tmp=$t_pfx.$id eval "$id=$_tmp" > $_tmp T_OK_RM_LIST="$T_OK_RM_LIST $_tmp" ;; esac done } dbgcat () { id=$1 eval '_file=$'$id echo "==> $id <==" sed -e "s/^/$id:/" < $_file } check_stderr () { set +u _r_err=${1-${r_err}} set -u if grep -v $T $_r_err | grep -i Error then die "Errors found in $_r_err" elif grep SIGKILL $_r_err then die "SIGKILL found in $_r_err" fi } # unicorn_setup unicorn_setup () { eval $(unused_listen) port=$(expr $listen : '[^:]*:\([0-9]\+\)') host=$(expr $listen : '\([^:]*\):[0-9]\+') rtmpfiles unicorn_config pid r_err r_out fifo tmp ok cat > $unicorn_config <= 0 end # since we'll end up closing the random port we just got, there's a race # condition could allow the random port we just chose to reselect itself # when running tests in parallel with gmake. Create a lock file while # we have the port here to ensure that does not happen. 
lock_path = "#{Dir::tmpdir}/unicorn_test.#{addr}:#{port}.lock" lock = File.open(lock_path, File::WRONLY|File::CREAT|File::EXCL, 0600) rescue Errno::EEXIST sock.close rescue nil retry end sock.close rescue nil puts %Q(listen=#{addr}:#{port} T_RM_LIST="$T_RM_LIST #{lock_path}") unicorn-4.7.0/t/t0000-http-basic.sh0000755000004100000410000000152212236653132016633 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 8 "simple HTTP connection tests" t_begin "setup and start" && { unicorn_setup unicorn -D -c $unicorn_config env.ru unicorn_wait_start } t_begin "single request" && { curl -sSfv http://$listen/ } t_begin "check stderr has no errors" && { check_stderr } t_begin "HTTP/0.9 request should not return headers" && { ( printf 'GET /\r\n' cat $fifo > $tmp & wait echo ok > $ok ) | socat - TCP:$listen > $fifo } t_begin "env.inspect should've put everything on one line" && { test 1 -eq $(count_lines < $tmp) } t_begin "no headers in output" && { if grep ^Connection: $tmp then die "Connection header found in $tmp" elif grep ^HTTP/ $tmp then die "HTTP/ found in $tmp" fi } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "check stderr has no errors" && { check_stderr } t_done unicorn-4.7.0/t/preread_input.ru0000644000004100000410000000056712236653132016716 0ustar www-datawww-data#\-E none require 'digest/sha1' require 'unicorn/preread_input' use Rack::ContentLength use Rack::ContentType, "text/plain" use Unicorn::PrereadInput nr = 0 run lambda { |env| $stderr.write "app dispatch: #{nr += 1}\n" input = env["rack.input"] dig = Digest::SHA1.new while buf = input.read(16384) dig.update(buf) end [ 200, {}, [ "#{dig.hexdigest}\n" ] ] } unicorn-4.7.0/t/t0009-broken-app.sh0000755000004100000410000000212112236653132016640 0ustar www-datawww-data#!/bin/sh . 
./test-lib.sh t_plan 9 "graceful handling of broken apps" t_begin "setup and start" && { unicorn_setup unicorn -E none -D broken-app.ru -c $unicorn_config unicorn_wait_start } t_begin "normal response is alright" && { test xOK = x"$(curl -sSf http://$listen/)" } t_begin "app raised exception" && { curl -sSf http://$listen/raise 2> $tmp || : grep -F 500 $tmp > $tmp } t_begin "app exception logged and backtrace not swallowed" && { grep -F 'app error' $r_err grep -A1 -F 'app error' $r_err | tail -1 | grep broken-app.ru: dbgcat r_err > $r_err } t_begin "trigger bad response" && { curl -sSf http://$listen/nil 2> $tmp || : grep -F 500 $tmp > $tmp } t_begin "app exception logged" && { grep -F 'app error' $r_err > $r_err } t_begin "normal responses alright afterwards" && { > $tmp curl -sSf http://$listen/ >> $tmp & curl -sSf http://$listen/ >> $tmp & curl -sSf http://$listen/ >> $tmp & curl -sSf http://$listen/ >> $tmp & wait test xOK = x$(sort < $tmp | uniq) } t_begin "teardown" && { kill $unicorn_pid } t_begin "check stderr" && check_stderr t_done unicorn-4.7.0/t/t0007-working_directory_no_embed_cli.sh0000755000004100000410000000163512236653132023034 0ustar www-datawww-data#!/bin/sh . 
./test-lib.sh t_plan 4 "config.ru inside alt working_directory (no embedded switches)" t_begin "setup and start" && { unicorn_setup rm -rf $t_pfx.app mkdir $t_pfx.app cat > $t_pfx.app/config.ru <> $unicorn_config # allows ppid to be 1 in before_fork echo "preload_app true" >> $unicorn_config cat >> $unicorn_config <<\EOF before_fork do |server,worker| $master_ppid = Process.ppid # should be zero to detect daemonization end EOF cd / unicorn -D -c $unicorn_config unicorn_wait_start } t_begin "hit with curl" && { body=$(curl -sSf http://$listen/) } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "response body ppid == 1 (daemonized)" && { test "$body" -eq 1 } t_done unicorn-4.7.0/t/t0009-winch_ttin.sh0000755000004100000410000000246712236653132016765 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 8 "SIGTTIN succeeds after SIGWINCH" t_begin "setup and start" && { unicorn_setup cat >> $unicorn_config </dev/null do i=$(( $i + 1 )) test $i -lt 600 || die "timed out" sleep 1 done } t_begin "start one worker back up" && { kill -TTIN $unicorn_pid } t_begin "wait for new worker to start" && { test 0 -eq $(cat $fifo) || die "worker.nr != 0" new_worker_pid=$(curl -sSf http://$listen/) test -n "$new_worker_pid" && kill -0 $new_worker_pid test $orig_worker_pid -ne $new_worker_pid || \ die "worker wasn't replaced" } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "check stderr" && check_stderr dbgcat r_err t_done unicorn-4.7.0/t/t0008-back_out_of_upgrade.sh0000755000004100000410000000414412236653132020572 0ustar www-datawww-data#!/bin/sh . 
./test-lib.sh t_plan 13 "backout of USR2 upgrade" worker_wait_start () { test xSTART = x"$(cat $fifo)" unicorn_pid=$(cat $pid) } t_begin "setup and start" && { unicorn_setup rm -f $pid.oldbin cat >> $unicorn_config </dev/null do i=$(( $i + 1 )) test $i -lt 600 || die "timed out" sleep 1 done } t_begin "capture pid of new worker" && { new_worker_pid=$(curl -sSf http://$listen/) } t_begin "reload old master process" && { kill -HUP $orig_master_pid worker_wait_start } t_begin "gracefully kill new master and ensure it dies" && { kill -QUIT $new_master_pid i=0 while kill -0 $new_worker_pid 2>/dev/null do i=$(( $i + 1 )) test $i -lt 600 || die "timed out" sleep 1 done } t_begin "ensure $pid.oldbin does not exist" && { i=0 while test -s $pid.oldbin do i=$(( $i + 1 )) test $i -lt 600 || die "timed out" sleep 1 done while ! test -s $pid do i=$(( $i + 1 )) test $i -lt 600 || die "timed out" sleep 1 done } t_begin "ensure $pid is correct" && { cur_master_pid=$(cat $pid) test $orig_master_pid -eq $cur_master_pid } t_begin "killing succeeds" && { kill $orig_master_pid } dbgcat r_err t_done unicorn-4.7.0/t/t9002-oob_gc-path.sh0000755000004100000410000000463212236653132016777 0ustar www-datawww-data#!/bin/sh . 
./test-lib.sh t_plan 12 "OobGC test with limited path" t_begin "setup and start" && { unicorn_setup unicorn -D -c $unicorn_config oob_gc_path.ru unicorn_wait_start } t_begin "test default is noop" && { test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) } t_begin "4 bad requests to bump counter" && { test xfalse = x$(curl -vsSf http://$listen/BAD 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/BAD 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/BAD 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/BAD 2>> $tmp) } t_begin "GC-starting request returns immediately" && { test xfalse = x$(curl -vsSf http://$listen/BAD 2>> $tmp) } t_begin "GC was started after 5 requests" && { test xtrue = x$(curl -vsSf http://$listen/ 2>> $tmp) } t_begin "reset GC" && { test xfalse = x$(curl -vsSf -X POST http://$listen/gc_reset 2>> $tmp) } t_begin "test default is noop" && { test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) } t_begin "4 bad requests to bump counter" && { test xfalse = x$(curl -vsSf http://$listen/BAD 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/BAD 2>> $tmp) test xfalse = x$(curl -vsSf 
http://$listen/BAD 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/BAD 2>> $tmp) } t_begin "GC-starting request returns immediately" && { test xfalse = x$(curl -vsSf http://$listen/BAD 2>> $tmp) } t_begin "GC was started after 5 requests" && { test xtrue = x$(curl -vsSf http://$listen/ 2>> $tmp) } t_begin "killing succeeds" && { kill -QUIT $unicorn_pid } t_begin "check_stderr" && check_stderr t_done unicorn-4.7.0/t/my-tap-lib.sh0000644000004100000410000001064512236653132016012 0ustar www-datawww-data#!/bin/sh # Copyright (c) 2009, 2010 Eric Wong # # TAP-producing shell library for POSIX-compliant Bourne shells We do # not _rely_ on Bourne Again features, though we will use "set -o # pipefail" from ksh93 or bash 3 if available # # Only generic, non-project/non-language-specific stuff goes here. We # only have POSIX dependencies for the core tests (without --verbose), # though we'll enable useful non-POSIX things if they're available. # # This test library is intentionally unforgiving, it does not support # skipping tests nor continuing after any failure. Any failures # immediately halt execution as do any references to undefined # variables. # # When --verbose is specified, we always prefix stdout/stderr # output with "#" to avoid confusing TAP consumers. Otherwise # the normal stdout/stderr streams are redirected to /dev/null # dup normal stdout(fd=1) and stderr (fd=2) to fd=3 and fd=4 respectively # normal TAP output goes to fd=3, nothing should go to fd=4 exec 3>&1 4>&2 # ensure a sane environment TZ=UTC LC_ALL=C LANG=C export LANG LC_ALL TZ unset CDPATH # pipefail is non-POSIX, but very useful in ksh93/bash ( set -o pipefail 2>/dev/null ) && set -o pipefail SED=${SED-sed} # Unlike other test frameworks, we are unforgiving and bail immediately # on any failures. 
We do this because we're lazy about error handling # and also because we believe anything broken should not be allowed to # propagate throughout the rest of the test set -e set -u # name of our test T=${0##*/} t_expect_nr=-1 t_nr=0 t_current= t_complete=false # list of files to remove unconditionally on exit T_RM_LIST= # list of files to remove only on successful exit T_OK_RM_LIST= # emit output to stdout, it'll be parsed by the TAP consumer # so it must be TAP-compliant output t_echo () { echo >&3 "$@" } # emits non-parsed information to stdout, it will be prefixed with a '#' # to not throw off TAP consumers t_info () { t_echo '#' "$@" } # exit with an error and print a diagnostic die () { echo >&2 "$@" exit 1 } # our at_exit handler, it'll fire for all exits except SIGKILL (unavoidable) t_at_exit () { code=$? set +e if test $code -eq 0 then $t_complete || { t_info "t_done not called" code=1 } elif test -n "$t_current" then t_echo "not ok $t_nr - $t_current" fi if test $t_expect_nr -ne -1 then test $t_expect_nr -eq $t_nr || { t_info "planned $t_expect_nr tests but ran $t_nr" test $code -ne 0 || code=1 } fi $t_complete || { t_info "unexpected test failure" test $code -ne 0 || code=1 } rm -f $T_RM_LIST test $code -eq 0 && rm -f $T_OK_RM_LIST set +x exec >&3 2>&4 t_close_fds exit $code } # close test-specific extra file descriptors t_close_fds () { exec 3>&- 4>&- } # call this at the start of your test to specify the number of tests # you plan to run t_plan () { test "$1" -ge 1 || die "must plan at least one test" test $t_expect_nr -eq -1 || die "tried to plan twice in one test" t_expect_nr=$1 shift t_echo 1..$t_expect_nr "#" "$@" trap t_at_exit EXIT } _t_checkup () { test $t_expect_nr -le 0 && die "no tests planned" test -n "$t_current" && t_echo "ok $t_nr - $t_current" true } # finalizes any previously test and starts a new one t_begin () { _t_checkup t_nr=$(( $t_nr + 1 )) t_current="$1" # just in case somebody wanted to cheat us: set -e } # finalizes the current 
test without starting a new one t_end () { _t_checkup t_current= } # run this to signify the end of your test t_done () { _t_checkup t_current= t_complete=true test $t_expect_nr -eq $t_nr || exit 1 exit 0 } # create and assign named-pipes to variable _names_ passed to this function t_fifos () { for _id in "$@" do _name=$_id _tmp=$(mktemp -t $T.$$.$_id.XXXXXXXX) eval "$_id=$_tmp" rm -f $_tmp mkfifo $_tmp T_RM_LIST="$T_RM_LIST $_tmp" done } t_verbose=false t_trace=false while test "$#" -ne 0 do arg="$1" shift case $arg in -v|--verbose) t_verbose=true ;; --trace) t_trace=true t_verbose=true ;; *) die "Unknown option: $arg" ;; esac done # we always only setup stdout, nothing should end up in the "real" stderr if $t_verbose then if test x"$(which mktemp 2>/dev/null)" = x then die "mktemp(1) not available for --verbose" fi t_fifos t_stdout t_stderr ( # use a subshell so seds are not waitable $SED -e 's/^/#: /' < $t_stdout & $SED -e 's/^/#! /' < $t_stderr & ) & wait exec > $t_stdout 2> $t_stderr else exec > /dev/null 2> /dev/null fi $t_trace && set -x true unicorn-4.7.0/t/t0014.ru0000644000004100000410000000033012236653132014611 0ustar www-datawww-data#\ -E none use Rack::ContentLength use Rack::ContentType, 'text/plain' app = lambda do |env| case env['rack.input'] when Unicorn::TeeInput [ 200, {}, %w(OK) ] else [ 500, {}, %w(NO) ] end end run app unicorn-4.7.0/t/t0006.ru0000644000004100000410000000046412236653132014622 0ustar www-datawww-datause Rack::ContentLength use Rack::ContentType, "text/plain" run lambda { |env| # our File objects for stderr/stdout should always have #path # and be sync=true ok = $stderr.sync && $stdout.sync && String === $stderr.path && String === $stdout.path [ 200, {}, [ "#{ok}\n" ] ] } unicorn-4.7.0/t/t9001-oob_gc.sh0000755000004100000410000000225112236653132016037 0ustar www-datawww-data#!/bin/sh . 
./test-lib.sh t_plan 9 "OobGC test" t_begin "setup and start" && { unicorn_setup unicorn -D -c $unicorn_config oob_gc.ru unicorn_wait_start } t_begin "test default interval (4 requests)" && { test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) } t_begin "GC starting-request returns immediately" && { test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) } t_begin "GC is started after 5 requests" && { test xtrue = x$(curl -vsSf http://$listen/ 2>> $tmp) } t_begin "reset GC" && { test xfalse = x$(curl -vsSf -X POST http://$listen/gc_reset 2>> $tmp) } t_begin "test default interval again (3 requests)" && { test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) test xfalse = x$(curl -vsSf http://$listen/ 2>> $tmp) } t_begin "GC is started after 5 requests" && { test xtrue = x$(curl -vsSf http://$listen/ 2>> $tmp) } t_begin "killing succeeds" && { kill -QUIT $unicorn_pid } t_begin "check_stderr" && check_stderr dbgcat r_err t_done unicorn-4.7.0/t/t9000-preread-input.sh0000755000004100000410000000172412236653132017371 0ustar www-datawww-data#!/bin/sh . 
./test-lib.sh t_plan 9 "PrereadInput middleware tests" t_begin "setup and start" && { random_blob_sha1=$(rsha1 < random_blob) unicorn_setup unicorn -D -c $unicorn_config preread_input.ru unicorn_wait_start } t_begin "single identity request" && { curl -sSf -T random_blob http://$listen/ > $tmp } t_begin "sha1 matches" && { test x"$(cat $tmp)" = x"$random_blob_sha1" } t_begin "single chunked request" && { curl -sSf -T- < random_blob http://$listen/ > $tmp } t_begin "sha1 matches" && { test x"$(cat $tmp)" = x"$random_blob_sha1" } t_begin "app only dispatched twice" && { test 2 -eq "$(grep 'app dispatch:' < $r_err | count_lines )" } t_begin "aborted chunked request" && { rm -f $tmp curl -sSf -T- < $fifo http://$listen/ > $tmp & curl_pid=$! kill -9 $curl_pid wait } t_begin "app only dispatched twice" && { test 2 -eq "$(grep 'app dispatch:' < $r_err | count_lines )" } t_begin "killing succeeds" && { kill -QUIT $unicorn_pid } t_done unicorn-4.7.0/t/t0003-working_directory.sh0000755000004100000410000000221012236653132020337 0ustar www-datawww-data#!/bin/sh . 
./test-lib.sh t_plan 4 "config.ru inside alt working_directory" t_begin "setup and start" && { unicorn_setup rtmpfiles unicorn_config_tmp rm -rf $t_pfx.app mkdir $t_pfx.app cat > $t_pfx.app/config.ru < $unicorn_config_tmp # the whole point of this exercise echo "working_directory '$t_pfx.app'" >> $unicorn_config_tmp # allows ppid to be 1 in before_fork echo "preload_app true" >> $unicorn_config_tmp cat >> $unicorn_config_tmp <<\EOF before_fork do |server,worker| $master_ppid = Process.ppid # should be zero to detect daemonization end EOF mv $unicorn_config_tmp $unicorn_config # rely on --daemonize switch, no & or -D unicorn -c $unicorn_config unicorn_wait_start } t_begin "hit with curl" && { body=$(curl -sSf http://$listen/) } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "response body ppid == 1 (daemonized)" && { test "$body" -eq 1 } t_done unicorn-4.7.0/t/heartbeat-timeout.ru0000644000004100000410000000050112236653132017464 0ustar www-datawww-datause Rack::ContentLength headers = { 'Content-Type' => 'text/plain' } run lambda { |env| case env['PATH_INFO'] when "/block-forever" Process.kill(:STOP, $$) sleep # in case STOP signal is not received in time [ 500, headers, [ "Should never get here\n" ] ] else [ 200, headers, [ "#$$\n" ] ] end } unicorn-4.7.0/t/t0020-at_exit-handler.sh0000755000004100000410000000211312236653132017644 0ustar www-datawww-data#!/bin/sh . 
./test-lib.sh t_plan 5 "at_exit/END handlers work as expected" t_begin "setup and startup" && { unicorn_setup cat >> $unicorn_config </dev/null 2>&1 do sleep 1 done } t_begin "check stderr" && check_stderr dbgcat r_err dbgcat r_out t_begin "all at_exit handlers ran" && { grep "$worker_pid BOTH" $r_out grep "$unicorn_pid BOTH" $r_out grep "$worker_pid END BOTH" $r_out grep "$unicorn_pid END BOTH" $r_out grep "$worker_pid WORKER ONLY" $r_out grep "$worker_pid END WORKER ONLY" $r_out } t_done unicorn-4.7.0/t/t0006-reopen-logs.sh0000755000004100000410000000320612236653132017036 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 15 "reopen rotated logs" t_begin "setup and startup" && { rtmpfiles curl_out curl_err r_rot unicorn_setup unicorn -D t0006.ru -c $unicorn_config unicorn_wait_start } t_begin "ensure server is responsive" && { test xtrue = x$(curl -sSf http://$listen/ 2> $curl_err) } t_begin "ensure stderr log is clean" && check_stderr t_begin "external log rotation" && { rm -f $r_rot mv $r_err $r_rot } t_begin "send reopen log signal (USR1)" && { kill -USR1 $unicorn_pid } t_begin "wait for rotated log to reappear" && { nr=60 while ! test -f $r_err && test $nr -ge 0 do sleep 1 nr=$(( $nr - 1 )) done } t_begin "ensure server is still responsive" && { test xtrue = x$(curl -sSf http://$listen/ 2> $curl_err) } t_begin "wait for worker to reopen logs" && { nr=60 re="worker=.* done reopening logs" while ! grep "$re" < $r_err >/dev/null && test $nr -ge 0 do sleep 1 nr=$(( $nr - 1 )) done } dbgcat r_rot dbgcat r_err t_begin "ensure no errors from curl" && { test ! 
-s $curl_err } t_begin "current server stderr is clean" && check_stderr t_begin "rotated stderr is clean" && { check_stderr $r_rot } t_begin "server is now writing logs to new stderr" && { before_rot=$(count_bytes < $r_rot) before_err=$(count_bytes < $r_err) test xtrue = x$(curl -sSf http://$listen/ 2> $curl_err) after_rot=$(count_bytes < $r_rot) after_err=$(count_bytes < $r_err) test $after_rot -eq $before_rot test $after_err -gt $before_err } t_begin "stop server" && { kill $unicorn_pid } dbgcat r_err t_begin "current server stderr is clean" && check_stderr t_begin "rotated stderr is clean" && check_stderr $r_rot t_done unicorn-4.7.0/t/t0005-working_directory_app.rb.sh0000755000004100000410000000141512236653132021611 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 4 "fooapp.rb inside alt working_directory" t_begin "setup and start" && { unicorn_setup rm -rf $t_pfx.app mkdir $t_pfx.app cat > $t_pfx.app/fooapp.rb <<\EOF class Fooapp def self.call(env) # Rack::Lint in 1.5.0 requires headers to be a hash h = [%w(Content-Type text/plain), %w(Content-Length 2)] h = Rack::Utils::HeaderHash.new(h) [ 200, h, %w(HI) ] end end EOF # the whole point of this exercise echo "working_directory '$t_pfx.app'" >> $unicorn_config cd / unicorn -D -c $unicorn_config -I. fooapp.rb unicorn_wait_start } t_begin "hit with curl" && { body=$(curl -sSf http://$listen/) } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "response body expected" && { test x"$body" = xHI } t_done unicorn-4.7.0/t/t0200-rack-hijack.sh0000755000004100000410000000102412236653132016743 0ustar www-datawww-data#!/bin/sh . 
./test-lib.sh t_plan 5 "rack.hijack tests (Rack 1.5+ (Rack::VERSION >= [ 1,2]))" t_begin "setup and start" && { unicorn_setup unicorn -D -c $unicorn_config hijack.ru unicorn_wait_start } t_begin "check request hijack" && { test "xrequest.hijacked" = x"$(curl -sSfv http://$listen/hijack_req)" } t_begin "check response hijack" && { test "xresponse.hijacked" = x"$(curl -sSfv http://$listen/hijack_res)" } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "check stderr" && { check_stderr } t_done unicorn-4.7.0/t/.gitignore0000644000004100000410000000005612236653132015466 0ustar www-datawww-data/random_blob /.dep+* /*.crt /*.key /ssl-stamp unicorn-4.7.0/t/t0016-trust-x-forwarded-false.sh0000755000004100000410000000126012236653132021274 0ustar www-datawww-data#!/bin/sh . ./test-lib.sh t_plan 5 "trust_x_forwarded=false configuration test" t_begin "setup and start" && { unicorn_setup echo "trust_x_forwarded false" >> $unicorn_config unicorn -D -c $unicorn_config env.ru unicorn_wait_start } t_begin "spoofed request with X-Forwarded-Proto does not trigger" && { curl -H 'X-Forwarded-Proto: https' http://$listen/ | \ grep -F '"rack.url_scheme"=>"http"' } t_begin "spoofed request with X-Forwarded-SSL does not trigger" && { curl -H 'X-Forwarded-SSL: on' http://$listen/ | \ grep -F '"rack.url_scheme"=>"http"' } t_begin "killing succeeds" && { kill $unicorn_pid } t_begin "check stderr has no errors" && { check_stderr } t_done unicorn-4.7.0/t/oob_gc.ru0000644000004100000410000000067212236653132015302 0ustar www-datawww-data#\-E none require 'unicorn/oob_gc' use Rack::ContentLength use Rack::ContentType, "text/plain" use Unicorn::OobGC $gc_started = false # Mock GC.start def GC.start ObjectSpace.each_object(Kgio::Socket) do |x| x.closed? 
or abort "not closed #{x}" end $gc_started = true end run lambda { |env| if "/gc_reset" == env["PATH_INFO"] && "POST" == env["REQUEST_METHOD"] $gc_started = false end [ 200, {}, [ "#$gc_started\n" ] ] } unicorn-4.7.0/.CHANGELOG.old0000644000004100000410000000273312236653132015304 0ustar www-datawww-datav0.91.0 - HTTP/0.9 support, multiline header support, small fixes v0.90.0 - switch chunking+trailer handling to Ragel, v0.8.4 fixes v0.9.2 - Ruby 1.9.2 preview1 compatibility v0.9.1 - FD_CLOEXEC portability fix (v0.8.2 port) v0.9.0 - bodies: "Transfer-Encoding: chunked", rewindable streaming v0.8.4 - pass through unknown HTTP status codes v0.8.3 - Ruby 1.9.2 preview1 compatibility v0.8.2 - socket handling bugfixes and usability tweaks v0.8.1 - safer timeout handling, more consistent reload behavior v0.8.0 - enforce Rack dependency, minor performance improvements and fixes v0.7.1 - minor fixes, cleanups and documentation improvements v0.7.0 - rack.version is 1.0 v0.6.0 - cleanups + optimizations, signals to {in,de}crement processes v0.5.4 - fix data corruption with some small uploads (not curl) v0.5.3 - fix 100% CPU usage when idle, small cleanups v0.5.2 - force Status: header for compat, small cleanups v0.5.1 - exit correctly on INT/TERM, QUIT is still recommended, however v0.5.0 - {after,before}_fork API change, small tweaks/fixes v0.4.2 - fix Rails ARStore, FD leak prevention, descriptive proctitles v0.4.1 - Rails support, per-listener backlog and {snd,rcv}buf v0.2.3 - Unlink Tempfiles after use (they were closed, just not unlinked) v0.2.2 - small bug fixes, fix Rack multi-value headers (Set-Cookie:) v0.2.1 - Fix broken Manifest that cause unicorn_rails to not be bundled v0.2.0 - unicorn_rails launcher script. 
v0.1.0 - Unicorn - UNIX-only fork of Mongrel free of threading
unicorn-4.7.0/unicorn.gemspec0000644000004100000410000000314212236653132016254 0ustar www-datawww-data# -*- encoding: binary -*-
ENV["VERSION"] or abort "VERSION= must be specified"
manifest = File.readlines('.manifest').map! { |x| x.chomp! }
require 'wrongdoc'
extend Wrongdoc::Gemspec
name, summary, title = readme_metadata

# don't bother with tests that fork, not worth our time to get working
# with `gem check -t` ... (of course we care for them when testing with
# GNU make when they can run in parallel)
test_files = manifest.grep(%r{\Atest/unit/test_.*\.rb\z}).map do |f|
  File.readlines(f).grep(/\bfork\b/).empty? ? f : nil
end.compact

Gem::Specification.new do |s|
  s.name = %q{unicorn}
  s.version = ENV["VERSION"].dup
  s.authors = ["#{name} hackers"]
  s.summary = summary
  s.date = Time.now.utc.strftime('%Y-%m-%d')
  s.description = readme_description
  s.email = %q{mongrel-unicorn@rubyforge.org}
  s.executables = %w(unicorn unicorn_rails)
  s.extensions = %w(ext/unicorn_http/extconf.rb)
  s.extra_rdoc_files = extra_rdoc_files(manifest)
  s.files = manifest
  s.homepage = Wrongdoc.config[:rdoc_url]
  s.rdoc_options = rdoc_options
  s.rubyforge_project = %q{mongrel}
  s.test_files = test_files

  # for people that are absolutely stuck on Rails 2.3.2 and can't
  # up/downgrade to any other version, the Rack dependency may be
  # commented out.  Nevertheless, upgrading to Rails 2.3.4 or later is
  # *strongly* recommended for security reasons.
  s.add_dependency(%q<rack>)
  s.add_dependency(%q<kgio>, '~> 2.6')
  s.add_dependency(%q<raindrops>, '~> 0.7')

  s.add_development_dependency('isolate', '~> 3.2')
  s.add_development_dependency('wrongdoc', '~> 1.6.1')

  s.licenses = ["GPLv2+", "Ruby 1.8"]
end
unicorn-4.7.0/TUNING0000644000004100000410000001053612236653132014166 0ustar www-datawww-data= Tuning \Unicorn

\Unicorn performance is generally as good as a (mostly) Ruby web server
can provide.
Most often the performance bottleneck is in the web application running on Unicorn rather than Unicorn itself. == \Unicorn Configuration See Unicorn::Configurator for details on the config file format. +worker_processes+ is the most-commonly needed tuning parameter. === Unicorn::Configurator#worker_processes * worker_processes should be scaled to the number of processes your backend system(s) can support. DO NOT scale it to the number of external network clients your application expects to be serving. \Unicorn is NOT for serving slow clients, that is the job of nginx. * worker_processes should be *at* *least* the number of CPU cores on a dedicated server. If your application has occasionally slow responses that are /not/ CPU-intensive, you may increase this to work around those inefficiencies. * worker_processes may be increased for Unicorn::OobGC users to provide more consistent response times. * Never, ever, increase worker_processes to the point where the system runs out of physical memory and hits swap. Production servers should never see heavy swap activity. === Unicorn::Configurator#listen Options * Setting a very low value for the :backlog parameter in "listen" directives can allow failover to happen more quickly if your cluster is configured for it. * If you're doing extremely simple benchmarks and getting connection errors under high request rates, increasing your :backlog parameter above the already-generous default of 1024 can help avoid connection errors. Keep in mind this is not recommended for real traffic if you have another machine to fail over to (see above). * :rcvbuf and :sndbuf parameters generally do not need to be set for TCP listeners under Linux 2.6 because auto-tuning is enabled. UNIX domain sockets do not have auto-tuning buffer sizes, so increasing those will allow syscalls and task switches to be saved for larger requests and responses.
If your app only generates small responses or expects small requests, you may shrink the buffer sizes to save memory, too. * Having socket buffers too large can also be detrimental or have little effect. Huge buffers can put more pressure on the allocator and may also thrash CPU caches, cancelling out performance gains one would normally expect. * UNIX domain sockets are slightly faster than TCP sockets, but only work if nginx is on the same machine. == Other \Unicorn settings * Setting "preload_app true" can allow copy-on-write-friendly GC to be used to save memory. It will probably not work out of the box with applications that open sockets or perform random I/O on files. Databases like TokyoCabinet use concurrency-safe pread()/pwrite() functions for safe sharing of database file descriptors across processes. * On POSIX-compliant filesystems, it is safe for multiple threads or processes to append to one log file as long as all the processes have them unbuffered (File#sync = true) or they are record(line)-buffered in userspace before any writes. == Kernel Parameters (Linux sysctl) WARNING: Do not change system parameters unless you know what you're doing! * net.core.rmem_max and net.core.wmem_max can increase the allowed size of :rcvbuf and :sndbuf respectively. This is mostly only useful for UNIX domain sockets which do not have auto-tuning buffer sizes. * For load testing/benchmarking with UNIX domain sockets, you should consider increasing net.core.somaxconn or else nginx will start failing to connect under heavy load. You may also consider setting a higher :backlog to listen on as noted earlier. * If you're running out of local ports, consider lowering net.ipv4.tcp_fin_timeout to 20-30 (default: 60 seconds). Also consider widening the usable port range by changing net.ipv4.ip_local_port_range.
* Setting net.ipv4.tcp_timestamps=1 will also allow setting net.ipv4.tcp_tw_reuse=1 and net.ipv4.tcp_tw_recycle=1, which along with the above settings can slow down port exhaustion. Not all networks are compatible with these settings, check with your friendly network administrator before changing these. * Increasing the MTU size can reduce framing overhead for larger transfers. One often-overlooked detail is that the loopback device (usually "lo") can have its MTU increased, too. unicorn-4.7.0/DESIGN0000644000004100000410000001116712236653132014134 0ustar www-datawww-data== Design * Simplicity: Unicorn is a traditional UNIX prefork web server. No threads are used at all, this makes applications easier to debug and fix. When your application goes awry, a BOFH can just "kill -9" the runaway worker process without worrying about tearing all clients down, just one. Only UNIX-like systems supporting fork() and file descriptor inheritance are supported. * The Ragel+C HTTP parser is taken from Mongrel. This is the only non-Ruby part and there are no plans to add any more non-Ruby components. * All HTTP parsing and I/O is done much like Mongrel: 1. read/parse HTTP request headers in full 2. call Rack application 3. write HTTP response back to the client * Like Mongrel, neither keepalive nor pipelining are supported. These aren't needed since Unicorn is only designed to serve fast, low-latency clients directly. Do one thing, do it well; let nginx handle slow clients. * Configuration is purely in Ruby and eval(). Ruby is less ambiguous than YAML and lets lambdas for before_fork/after_fork/before_exec hooks be defined inline. An optional, separate config_file may be used to modify supported configuration changes (and also gives you plenty of rope if you RTFS :>) * One master process spawns and reaps worker processes. The Rack application itself is called only within the worker process (but can be loaded within the master). 
A copy-on-write friendly garbage collector like the one found in Ruby 2.0.0dev or Ruby Enterprise Edition can be used to minimize memory usage along with the "preload_app true" directive (see Unicorn::Configurator). * The number of worker processes should be scaled to the number of CPUs, memory or even spindles you have. If you have an existing Mongrel cluster on a single-threaded app, using the same amount of processes should work. Let a full-HTTP-request-buffering reverse proxy like nginx manage concurrency to thousands of slow clients for you. Unicorn scaling should only be concerned about limits of your backend system(s). * Load balancing between worker processes is done by the OS kernel. All workers share a common set of listener sockets and do non-blocking accept() on them. The kernel will decide which worker process to give a socket to and workers will sleep if there is nothing to accept(). * Since non-blocking accept() is used, there can be a thundering herd when an occasional client connects when the application *is not busy*. The thundering herd problem should not affect applications that are running all the time since worker processes will only select()/accept() outside of the application dispatch. * Additionally, thundering herds are much smaller than with configurations using existing prefork servers. Process counts should only be scaled to backend resources, _never_ to the number of expected clients as is typical with blocking prefork servers. So while we've seen instances of popular prefork servers configured to run many hundreds of worker processes, Unicorn deployments are typically only 2-4 processes per-core. * On-demand scaling of worker processes never happens automatically. Again, Unicorn is concerned about scaling to backend limits and should never be configured in a fashion where it could be waiting on slow clients.
For extremely rare circumstances, we provide TTIN and TTOU signal handlers to increment/decrement your process counts without reloading. Think of it as driving a car with manual transmission: you have a lot more control if you know what you're doing. * Blocking I/O is used for clients. This allows a simpler code path to be followed within the Ruby interpreter and fewer syscalls. Applications that use threads continue to work if Unicorn is only serving LAN or localhost clients. * SIGKILL is used to terminate the timed-out workers from misbehaving apps as reliably as possible on a UNIX system. The default timeout is a generous 60 seconds (same default as in Mongrel). * The poor performance of select() on large FD sets is avoided as few file descriptors are used in each worker. There should be no gain from moving to highly scalable but unportable event notification solutions for watching few file descriptors. * If the master process dies unexpectedly for any reason, workers will notice within :timeout/2 seconds and follow the master to its death. * There is never any explicit real-time dependency or communication between the worker processes nor to the master process. Synchronization is handled entirely by the OS kernel and shared resources are never accessed by the worker when it is servicing a client. unicorn-4.7.0/LATEST0000644000004100000410000000337012236653132014154 0ustar www-datawww-data=== unicorn 4.7.0 - minor updates, license tweak / 2013-11-04 06:59 UTC * support SO_REUSEPORT on new listeners (:reuseport) This allows users to start an independent instance of unicorn on the same port as a running unicorn (as long as both instances use :reuseport). ref: https://lwn.net/Articles/542629/ * unicorn is now GPLv2-or-later and Ruby 1.8-licensed (instead of GPLv2-only, GPLv3-only, and Ruby 1.8-licensed) This changes nothing at the moment.
Once the FSF publishes the next version of the GPL, users may choose the newer GPL version without the unicorn BDFL approving it. Two years ago when I got permission to add GPLv3 to the license options, I also got permission from all past contributors to approve future versions of the GPL. So now I'm approving all future versions of the GPL for use with unicorn. Reasoning below: In case the GPLv4 arrives and I am not alive to approve/review it, the lesser of evils is to give blanket approval of all future GPL versions (as published by the FSF). The worse evil is to be stuck with a license which cannot guarantee the Free-ness of this project in the future. This unfortunately means the FSF can theoretically come out with license terms I do not agree with, but the GPLv2 and GPLv3 will always be an option to all users. Note: we currently prefer GPLv3 Two improvements thanks to Ernest W. Durbin III: * USR2 redirects fixed for Ruby 1.8.6 (broken since 4.1.0) * unicorn(1) and unicorn_rails(1) enforce a valid integer for -p/--port A few more odd, minor tweaks and fixes: * attempt to rename PID file when possible (on USR2) * workaround reopen atomicity issues for stdio vs non-stdio * improve handling of client-triggerable socket errors unicorn-4.7.0/bin/0000755000004100000410000000000012236653132014002 5ustar www-datawww-dataunicorn-4.7.0/bin/unicorn0000755000004100000410000000702512236653132015411 0ustar www-datawww-data#!/this/will/be/overwritten/or/wrapped/anyways/do/not/worry/ruby # -*- encoding: binary -*- require 'unicorn/launcher' require 'optparse' ENV["RACK_ENV"] ||= "development" rackup_opts = Unicorn::Configurator::RACKUP options = rackup_opts[:options] op = OptionParser.new("", 24, ' ') do |opts| cmd = File.basename($0) opts.banner = "Usage: #{cmd} " \ "[ruby options] [#{cmd} options] [rackup config file]" opts.separator "Ruby options:" lineno = 1 opts.on("-e", "--eval LINE", "evaluate a LINE of code") do |line| eval line, TOPLEVEL_BINDING, "-e", lineno lineno
+= 1 end opts.on("-d", "--debug", "set debugging flags (set $DEBUG to true)") do $DEBUG = true end opts.on("-w", "--warn", "turn warnings on for your script") do $-w = true end opts.on("-I", "--include PATH", "specify $LOAD_PATH (may be used more than once)") do |path| $LOAD_PATH.unshift(*path.split(/:/)) end opts.on("-r", "--require LIBRARY", "require the library, before executing your script") do |library| require library end opts.separator "#{cmd} options:" # some of these switches exist for rackup command-line compatibility, opts.on("-o", "--host HOST", "listen on HOST (default: #{Unicorn::Const::DEFAULT_HOST})") do |h| rackup_opts[:host] = h rackup_opts[:set_listener] = true end opts.on("-p", "--port PORT", Integer, "use PORT (default: #{Unicorn::Const::DEFAULT_PORT})") do |port| rackup_opts[:port] = port rackup_opts[:set_listener] = true end opts.on("-E", "--env RACK_ENV", "use RACK_ENV for defaults (default: development)") do |e| ENV["RACK_ENV"] = e end opts.on("-N", "--no-default-middleware", "do not load middleware implied by RACK_ENV") do |e| rackup_opts[:no_default_middleware] = true end opts.on("-D", "--daemonize", "run daemonized in the background") do |d| rackup_opts[:daemonize] = !!d end opts.on("-P", "--pid FILE", "DEPRECATED") do |f| warn %q{Use of --pid/-P is strongly discouraged} warn %q{Use the 'pid' directive in the Unicorn config file instead} options[:pid] = f end opts.on("-s", "--server SERVER", "this flag only exists for compatibility") do |s| warn "-s/--server only exists for compatibility with rackup" end # Unicorn-specific stuff opts.on("-l", "--listen {HOST:PORT|PATH}", "listen on HOST:PORT or PATH", "this may be specified multiple times", "(default: #{Unicorn::Const::DEFAULT_LISTEN})") do |address| options[:listeners] << address end opts.on("-c", "--config-file FILE", "Unicorn-specific config file") do |f| options[:config_file] = f end # I'm avoiding Unicorn-specific config options on the command-line. 
# IMNSHO, config options on the command-line are redundant given # config files and make things unnecessarily complicated with multiple # places to look for a config option. opts.separator "Common options:" opts.on_tail("-h", "--help", "Show this message") do puts opts.to_s.gsub(/^.*DEPRECATED.*$/s, '') exit end opts.on_tail("-v", "--version", "Show version") do puts "#{cmd} v#{Unicorn::Const::UNICORN_VERSION}" exit end opts.parse! ARGV end app = Unicorn.builder(ARGV[0] || 'config.ru', op) op = nil if $DEBUG require 'pp' pp({ :unicorn_options => options, :app => app, :daemonize => rackup_opts[:daemonize], }) end Unicorn::Launcher.daemonize!(options) if rackup_opts[:daemonize] Unicorn::HttpServer.new(app, options).start.join unicorn-4.7.0/bin/unicorn_rails0000755000004100000410000001415312236653132016603 0ustar www-datawww-data#!/this/will/be/overwritten/or/wrapped/anyways/do/not/worry/ruby # -*- encoding: binary -*- require 'unicorn/launcher' require 'optparse' require 'fileutils' ENV['RAILS_ENV'] ||= "development" rackup_opts = Unicorn::Configurator::RACKUP options = rackup_opts[:options] op = OptionParser.new("", 24, ' ') do |opts| cmd = File.basename($0) opts.banner = "Usage: #{cmd} " \ "[ruby options] [#{cmd} options] [rackup config file]" opts.separator "Ruby options:" lineno = 1 opts.on("-e", "--eval LINE", "evaluate a LINE of code") do |line| eval line, TOPLEVEL_BINDING, "-e", lineno lineno += 1 end opts.on("-d", "--debug", "set debugging flags (set $DEBUG to true)") do $DEBUG = true end opts.on("-w", "--warn", "turn warnings on for your script") do $-w = true end opts.on("-I", "--include PATH", "specify $LOAD_PATH (may be used more than once)") do |path| $LOAD_PATH.unshift(*path.split(/:/)) end opts.on("-r", "--require LIBRARY", "require the library, before executing your script") do |library| require library end opts.separator "#{cmd} options:" # some of these switches exist for rackup command-line compatibility, opts.on("-o", "--host HOST", "listen on 
HOST (default: #{Unicorn::Const::DEFAULT_HOST})") do |h| rackup_opts[:host] = h rackup_opts[:set_listener] = true end opts.on("-p", "--port PORT", Integer, "use PORT (default: #{Unicorn::Const::DEFAULT_PORT})") do |port| rackup_opts[:port] = port rackup_opts[:set_listener] = true end opts.on("-E", "--env RAILS_ENV", "use RAILS_ENV for defaults (default: development)") do |e| ENV['RAILS_ENV'] = e end opts.on("-D", "--daemonize", "run daemonized in the background") do |d| rackup_opts[:daemonize] = !!d end # Unicorn-specific stuff opts.on("-l", "--listen {HOST:PORT|PATH}", "listen on HOST:PORT or PATH", "this may be specified multiple times", "(default: #{Unicorn::Const::DEFAULT_LISTEN})") do |address| options[:listeners] << address end opts.on("-c", "--config-file FILE", "Unicorn-specific config file") do |f| options[:config_file] = f end opts.on("-P PATH", "DEPRECATED") do |v| warn %q{Use of -P is ambiguous and discouraged} warn %q{Use --path or RAILS_RELATIVE_URL_ROOT instead} ENV['RAILS_RELATIVE_URL_ROOT'] = v end opts.on("--path PATH", "Runs Rails app mounted at a specific path.", "(default: /)") do |v| ENV['RAILS_RELATIVE_URL_ROOT'] = v end # I'm avoiding Unicorn-specific config options on the command-line. # IMNSHO, config options on the command-line are redundant given # config files and make things unnecessarily complicated with multiple # places to look for a config option. opts.separator "Common options:" opts.on_tail("-h", "--help", "Show this message") do puts opts.to_s.gsub(/^.*DEPRECATED.*$/s, '') exit end opts.on_tail("-v", "--version", "Show version") do puts "#{cmd} v#{Unicorn::Const::UNICORN_VERSION}" exit end opts.parse! ARGV end def rails_dispatcher if ::Rails::VERSION::MAJOR >= 3 && ::File.exist?('config/application.rb') if ::File.read('config/application.rb') =~ /^module\s+([\w:]+)\s*$/ app_module = Object.const_get($1) begin result = app_module::Application rescue NameError end end end if result.nil? 
&& defined?(ActionController::Dispatcher) result = ActionController::Dispatcher.new end result || abort("Unable to locate the application dispatcher class") end def rails_builder(ru, op, daemonize) return Unicorn.builder(ru, op) if ru # allow Configurator to parse cli switches embedded in the ru file Unicorn::Configurator::RACKUP.update(:file => :rails, :optparse => op) # this lambda won't run until after forking if preload_app is false # this runs after config file reloading lambda do || # Rails 3 includes a config.ru, use it if we find it after # working_directory is bound. ::File.exist?('config.ru') and return Unicorn.builder('config.ru', op).call # Load Rails and (possibly) the private version of Rack it bundles. begin require ::File.expand_path('config/boot') require ::File.expand_path('config/environment') rescue LoadError => err abort "#$0 must be run inside RAILS_ROOT: #{err.inspect}" end defined?(::Rails::VERSION::STRING) or abort "Rails::VERSION::STRING not defined by config/{boot,environment}" # it seems Rails >=2.2 support Rack, but only >=2.3 requires it old_rails = case ::Rails::VERSION::MAJOR when 0, 1 then true when 2 then Rails::VERSION::MINOR < 3 ? true : false else false end Rack::Builder.new do map_path = ENV['RAILS_RELATIVE_URL_ROOT'] || '/' if old_rails if map_path != '/' # patches + tests welcome, but I really cbf to deal with this # since all apps I've ever dealt with just use "/" ... 
warn "relative URL roots may not work for older Rails" end warn "LogTailer not available for Rails < 2.3" unless daemonize warn "Debugger not available" if $DEBUG require 'unicorn/app/old_rails' map(map_path) do use Unicorn::App::OldRails::Static run Unicorn::App::OldRails.new end else use Rails::Rack::LogTailer unless daemonize use Rails::Rack::Debugger if $DEBUG map(map_path) do unless defined?(ActionDispatch::Static) use Rails::Rack::Static end run rails_dispatcher end end end.to_app end end app = rails_builder(ARGV[0], op, rackup_opts[:daemonize]) op = nil if $DEBUG require 'pp' pp({ :unicorn_options => options, :app => app, :daemonize => rackup_opts[:daemonize], }) end # ensure Rails standard tmp paths exist options[:after_reload] = lambda do FileUtils.mkdir_p(%w(cache pids sessions sockets).map! { |d| "tmp/#{d}" }) end if rackup_opts[:daemonize] options[:pid] = "tmp/pids/unicorn.pid" Unicorn::Launcher.daemonize!(options) end Unicorn::HttpServer.new(app, options).start.join unicorn-4.7.0/Links0000644000004100000410000000353012236653132014236 0ustar www-datawww-data= Related Projects If you're interested in \Unicorn, you may be interested in some of the projects listed below. If you have any links to add/change/remove, please tell us at mailto:mongrel-unicorn@rubyforge.org! == Disclaimer The \Unicorn project is not responsible for the content in these links. Furthermore, the \Unicorn project has never, does not and will never endorse: * any for-profit entities or services * any non-{Free Software}[http://www.gnu.org/philosophy/free-sw.html] The existence of these links does not imply endorsement of any entities or services behind them. 
=== For use with \Unicorn * {Bluepill}[https://github.com/arya/bluepill] - a simple process monitoring tool written in Ruby * {golden_brindle}[https://github.com/simonoff/golden_brindle] - tool to manage multiple \Unicorn instances/applications on a single server * {raindrops}[http://raindrops.bogomips.org/] - real-time stats for preforking Rack servers * {UnXF}[http://bogomips.org/unxf/] Un-X-Forward* the Rack environment, useful since unicorn is designed to be deployed behind a reverse proxy. === \Unicorn is written to work with * {Rack}[http://rack.rubyforge.org/] - a minimal interface between webservers supporting Ruby and Ruby frameworks * {Ruby}[http://ruby-lang.org/] - the programming language of Rack and \Unicorn * {nginx}[http://nginx.org/] - the reverse proxy for use with \Unicorn * {kgio}[http://bogomips.org/kgio/] - the I/O library written for \Unicorn === Derivatives * {Green Unicorn}[http://gunicorn.org/] - a Python version of \Unicorn * {Rainbows!}[http://rainbows.rubyforge.org/] - \Unicorn for sleepy apps and slow clients. 
=== Prior Work * {Mongrel}[http://mongrel.rubyforge.org/] - the awesome webserver \Unicorn is based on * {david}[http://bogomips.org/david.git] - a tool to explain why you need nginx in front of \Unicorn unicorn-4.7.0/.document0000644000004100000410000000061512236653132015053 0ustar www-datawww-dataFAQ README TUNING PHILOSOPHY HACKING DESIGN CONTRIBUTORS LICENSE SIGNALS KNOWN_ISSUES TODO NEWS ChangeLog LATEST lib/unicorn.rb lib/unicorn/configurator.rb lib/unicorn/http_server.rb lib/unicorn/preread_input.rb lib/unicorn/stream_input.rb lib/unicorn/tee_input.rb lib/unicorn/util.rb lib/unicorn/oob_gc.rb lib/unicorn/worker.rb unicorn_1 unicorn_rails_1 ISSUES Sandbox Links Application_Timeouts unicorn-4.7.0/Sandbox0000644000004100000410000000735212236653132014562 0ustar www-datawww-data= Tips for using \Unicorn with Sandbox installation tools Since unicorn includes executables and is usually used to start a Ruby process, there are certain caveats to using it with tools that sandbox RubyGems installations such as {Bundler}[http://gembundler.com/] or {Isolate}[http://github.com/jbarnette/isolate]. == General deployment If you're sandboxing your unicorn installation and using Capistrano (or similar), it's required that you sandbox your RubyGems in a per-application shared directory that can be used between different revisions. unicorn will stash its original command-line at startup for the USR2 upgrades, and cleaning up old revisions will cause revision-specific installations of unicorn to go missing and upgrades to fail. If you find yourself in this situation and can't afford downtime, you can override the existing unicorn executable path in the config file like this: Unicorn::HttpServer::START_CTX[0] = "/some/path/to/bin/unicorn" Then use HUP to reload, and then continue with the USR2+QUIT upgrade sequence. Environment variable pollution when exec-ing a new process (with USR2) is the primary issue with sandboxing tools such as Bundler and Isolate. 
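The mechanism behind that pollution is simply that exec(2) preserves the environment: whatever GEM_*/RUBYOPT values the old master was started with are inherited by the re-executed binary. A tiny standalone demo of the mechanism (not unicorn-specific; the RUBYOPT_DEMO variable name is made up for illustration):

```ruby
require 'rbconfig'

# Show that an environment variable set before exec(2) survives into the
# newly exec'd process image, just like RUBYOPT/GEM_* do across a USR2.
r, w = IO.pipe
pid = fork do
  r.close
  ENV["RUBYOPT_DEMO"] = "polluted-by-old-master"
  # exec replaces this child, but its environment carries over verbatim
  exec(RbConfig.ruby, "-e", 'print ENV["RUBYOPT_DEMO"]', :out => w)
end
w.close
inherited = r.read            # => "polluted-by-old-master"
Process.wait(pid)
```

A before_exec hook, as shown in the Bundler section below, is simply the last chance to scrub or pin such variables before that exec happens.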
== Bundler === Running If you're bundling unicorn, use "bundle exec unicorn" (or "bundle exec unicorn_rails") to start unicorn with the correct environment variables ref: http://mid.gmane.org/9ECF07C4-5216-47BE-961D-AFC0F0C82060@internetfamo.us Otherwise (if you choose to not sandbox your unicorn installation), we expect the tips for Isolate (below) apply, too. === RUBYOPT pollution from SIGUSR2 upgrades This is no longer an issue as of bundler 0.9.17 ref: http://mid.gmane.org/8FC34B23-5994-41CC-B5AF-7198EF06909E@tramchase.com === BUNDLE_GEMFILE for Capistrano users You may need to set or reset the BUNDLE_GEMFILE environment variable in the before_exec hook: before_exec do |server| ENV["BUNDLE_GEMFILE"] = "/path/to/app/current/Gemfile" end === Other ENV pollution issues If you're using an older Bundler version (0.9.x), you may need to set or reset GEM_HOME, GEM_PATH and PATH environment variables in the before_exec hook as illustrated by http://gist.github.com/534668 === Ruby 2.0.0 close-on-exec and SIGUSR2 incompatibility Ruby 2.0.0 enforces FD_CLOEXEC on file descriptors by default. unicorn has been prepared for this behavior since unicorn 4.1.0, but we forgot to remind the Bundler developers. This issue is being tracked here: https://github.com/bundler/bundler/issues/2628 == Isolate === Running Installing "unicorn" as a system-wide Rubygem and using the isolate gem may cause issues if you're using any of the bundled application-level libraries in unicorn/app/* (for compatibility with CGI-based applications, Rails <= 2.2.2, or ExecCgi). For now workarounds include doing one of the following: 1. Isolating unicorn, setting GEM_HOME to your Isolate path, and running the isolated version of unicorn. You *must* set GEM_HOME before running your isolated unicorn install in this way. 2. Installing the same version of unicorn as a system-wide Rubygem *and* isolating unicorn as well. 3.
Explicitly setting RUBYLIB or $LOAD_PATH to include any gem path where the unicorn gem is installed (e.g. /usr/lib/ruby/gems/1.9.1/gems/unicorn-VERSION/lib) === RUBYOPT pollution from SIGUSR2 upgrades If you are using Isolate, using Isolate 2.x is strongly recommended as environment modifications are idempotent. If you are stuck with 1.x versions of Isolate, it is recommended that you disable it with the before_exec hook to prevent the PATH and RUBYOPT environment variable modifications from propagating between upgrades in your Unicorn config file: before_exec do |server| Isolate.disable end unicorn-4.7.0/man/0000755000004100000410000000000012236653132014005 5ustar www-datawww-dataunicorn-4.7.0/man/man1/0000755000004100000410000000000012236653132014641 5ustar www-datawww-dataunicorn-4.7.0/man/man1/unicorn.10000644000004100000410000001604012236653132016401 0ustar www-datawww-data.TH UNICORN 1 "September 15, 2009" "Unicorn User Manual" .SH NAME .PP unicorn - a rackup-like command to launch the Unicorn HTTP server .SH SYNOPSIS .PP unicorn [-c CONFIG_FILE] [-E RACK_ENV] [-D] [RACKUP_FILE] .SH DESCRIPTION .PP A rackup(1)-like command to launch Rack applications using Unicorn. It is expected to be started in your application root (APP_ROOT), but the \[lq]working_directory\[rq] directive may be used in the CONFIG_FILE. .PP While unicorn takes a myriad of command-line options for compatibility with ruby(1) and rackup(1), it is recommended to stick to the few command-line options specified in the SYNOPSIS and use the CONFIG_FILE as much as possible. .SH RACKUP FILE .PP This defaults to "config.ru" in APP_ROOT. It should be the same file used by rackup(1) and other Rack launchers, it uses the \f[I]Rack::Builder\f[] DSL. .PP Embedded command-line options are mostly parsed for compatibility with rackup(1) but strongly discouraged. .SH UNICORN OPTIONS .TP .B -c, --config-file CONFIG_FILE Path to the Unicorn-specific config file.
The config file is implemented as a Ruby DSL, so Ruby code may be executed. See the RDoc/ri for the \f[I]Unicorn::Configurator\f[] class for the full list of directives available from the DSL. Using an absolute path for CONFIG_FILE is recommended as it makes multiple instances of Unicorn easily distinguishable when viewing ps(1) output. .RS .RE .TP .B -D, --daemonize Run daemonized in the background. The process is detached from the controlling terminal and stdin is redirected to \[lq]/dev/null\[rq]. Unlike many common UNIX daemons, we do not chdir to "/" upon daemonization to allow more control over the startup/upgrade process. Unless specified in the CONFIG_FILE, stderr and stdout will also be redirected to \[lq]/dev/null\[rq]. .RS .RE .TP .B -E, --env RACK_ENV Run under the given RACK_ENV. See the RACK ENVIRONMENT section for more details. .RS .RE .TP .B -l, --listen ADDRESS Listens on a given ADDRESS. ADDRESS may be in the form of HOST:PORT or PATH, HOST:PORT is taken to mean a TCP socket and PATH is meant to be a path to a UNIX domain socket. Defaults to \[lq]0.0.0.0:8080\[rq] (all addresses on TCP port 8080). For production deployments, specifying the \[lq]listen\[rq] directive in CONFIG_FILE is recommended as it allows fine-tuning of socket options. .RS .RE .TP .B -N, --no-default-middleware Disables loading middleware implied by RACK_ENV. This bypasses the configuration documented in the RACK ENVIRONMENT section, but still allows RACK_ENV to be used for application/framework-specific purposes. .RS .RE .SH RACKUP COMPATIBILITY OPTIONS .TP .B -o, --host HOST Listen on a TCP socket belonging to HOST, default is \[lq]0.0.0.0\[rq] (all addresses). If specified multiple times on the command-line, only the last-specified value takes effect. This option only exists for compatibility with the rackup(1) command, use of \[lq]-l\[rq]/\[lq]--listen\[rq] switch is recommended instead. .RS .RE .TP .B -p, --port PORT Listen on the specified TCP PORT, default is 8080.
If specified multiple times on the command-line, only the last-specified value takes effect. This option only exists for compatibility with the rackup(1) command, use of \[lq]-l\[rq]/\[lq]--listen\[rq] switch is recommended instead. .RS .RE .TP .B -s, --server SERVER No-op, this exists only for compatibility with rackup(1). .RS .RE .SH RUBY OPTIONS .TP .B -e, --eval LINE Evaluate a LINE of Ruby code. This evaluation happens immediately as the command-line is being parsed. .RS .RE .TP .B -d, --debug Turn on debug mode, the $DEBUG variable is set to true. .RS .RE .TP .B -w, --warn Turn on verbose warnings, the $VERBOSE variable is set to true. .RS .RE .TP .B -I, --include PATH specify $LOAD_PATH. PATH will be prepended to $LOAD_PATH. The \[aq]:\[aq] character may be used to delimit multiple directories. This directive may be used more than once. Modifications to $LOAD_PATH take place immediately and in the order they were specified on the command-line. .RS .RE .TP .B -r, --require LIBRARY require a specified LIBRARY before executing the application. The "require" statement will be executed immediately and in the order they were specified on the command-line. .RS .RE .SH SIGNALS .PP The following UNIX signals may be sent to the master process: .IP \[bu] 2 HUP - reload config file, app, and gracefully restart all workers .IP \[bu] 2 INT/TERM - quick shutdown, kills all workers immediately .IP \[bu] 2 QUIT - graceful shutdown, waits for workers to finish their current request before finishing. .IP \[bu] 2 USR1 - reopen all logs owned by the master and all workers. See Unicorn::Util.reopen_logs for what is considered a log. .IP \[bu] 2 USR2 - reexecute the running binary. A separate QUIT should be sent to the original process once the child is verified to be up and running. .IP \[bu] 2 WINCH - gracefully stops workers but keeps the master running. This will only work for daemonized processes.
.IP \[bu] 2 TTIN - increment the number of worker processes by one .IP \[bu] 2 TTOU - decrement the number of worker processes by one .PP See the SIGNALS (http://unicorn.bogomips.org/SIGNALS.html) document for full description of all signals used by Unicorn. .SH RACK ENVIRONMENT .PP Accepted values of RACK_ENV and the middleware they automatically load (outside of RACKUP_FILE) are exactly as those in rackup(1): .IP \[bu] 2 development - loads Rack::CommonLogger, Rack::ShowExceptions, and Rack::Lint middleware .IP \[bu] 2 deployment - loads Rack::CommonLogger middleware .IP \[bu] 2 none - loads no middleware at all, relying entirely on RACKUP_FILE .PP All unrecognized values for RACK_ENV are assumed to be \[lq]none\[rq]. Production deployments are strongly encouraged to use \[lq]deployment\[rq] or \[lq]none\[rq] for maximum performance. .PP As of Unicorn 0.94.0, RACK_ENV is exported as a process-wide environment variable as well. While not currently a part of the Rack specification as of Rack 1.0.1, this has become a de facto standard in the Rack world. .PP Note the Rack::ContentLength and Rack::Chunked middlewares are also loaded by \[lq]deployment\[rq] and \[lq]development\[rq], but no other values of RACK_ENV. If needed, they must be individually specified in the RACKUP_FILE, some frameworks do not require them. .SH ENVIRONMENT VARIABLES .PP The RACK_ENV variable is set by the aforementioned -E switch. All application or library-specific environment variables (e.g. TMPDIR) may always be set in the Unicorn CONFIG_FILE in addition to the spawning shell. When transparently upgrading Unicorn, all environment variables set in the old master process are inherited by the new master process. Unicorn only uses (and will overwrite) the UNICORN_FD environment variable internally when doing transparent upgrades.
.SH SEE ALSO
.IP \[bu] 2
unicorn_rails(1)
.IP \[bu] 2
\f[I]Rack::Builder\f[] ri/RDoc
.IP \[bu] 2
\f[I]Unicorn::Configurator\f[] ri/RDoc
.IP \[bu] 2
Unicorn RDoc (http://unicorn.bogomips.org/)
.IP \[bu] 2
Rack RDoc (http://rack.rubyforge.org/doc/)
.IP \[bu] 2
Rackup HowTo (http://wiki.github.com/rack/rack/tutorial-rackup-howto)
.SH AUTHORS
The Unicorn Community .
unicorn-4.7.0/man/man1/unicorn_rails.1
.TH UNICORN_RAILS 1 "September 17, 2009" "Unicorn User Manual"
.SH NAME
.PP
unicorn_rails - a script/server-like command to launch the Unicorn
HTTP server
.SH SYNOPSIS
.PP
unicorn_rails [-c CONFIG_FILE] [-E RAILS_ENV] [-D] [RACKUP_FILE]
.SH DESCRIPTION
.PP
A rackup(1)-like command to launch Rails applications using Unicorn.
It is expected to be started in your Rails application root
(RAILS_ROOT), but the \[lq]working_directory\[rq] directive may be
used in the CONFIG_FILE.
.PP
It is designed to help Rails 1.x and 2.y users transition to Rack, but
it is NOT needed for Rails 3 applications.
Rails 3 users are encouraged to use unicorn(1) instead of
unicorn_rails(1).
Users of Rails 1.x/2.y may also use unicorn(1) instead of
unicorn_rails(1).
.PP
The outward interface resembles rackup(1); the internals and default
middleware loading are designed like the \f[B]script/server\f[]
command distributed with Rails.
.PP
While Unicorn takes a myriad of command-line options for compatibility
with ruby(1) and rackup(1), it is recommended to stick to the few
command-line options specified in the SYNOPSIS and use the CONFIG_FILE
as much as possible.
.SH UNICORN OPTIONS
.TP
.B -c, --config-file CONFIG_FILE
Path to the Unicorn-specific config file.
The config file is implemented as a Ruby DSL, so Ruby code may be
executed.
See the RDoc/ri for the \f[I]Unicorn::Configurator\f[] class for the
full list of directives available from the DSL.
Using an absolute path for CONFIG_FILE is recommended as it makes
multiple instances of Unicorn easily distinguishable when viewing
ps(1) output.
.RS
.RE
.TP
.B -D, --daemonize
Run daemonized in the background.
The process is detached from the controlling terminal and stdin is
redirected to \[lq]/dev/null\[rq].
Unlike many common UNIX daemons, we do not chdir to "/" upon
daemonization to allow more control over the startup/upgrade process.
Unless specified in the CONFIG_FILE, stderr and stdout will also be
redirected to \[lq]/dev/null\[rq].
Daemonization will \f[I]skip\f[] loading of the
\f[I]Rails::Rack::LogTailer\f[] middleware under Rails >= 2.3.x.
By default, unicorn_rails(1) will create a PID file in
\f[I]"RAILS_ROOT/tmp/pids/unicorn.pid"\f[].
You may override this by specifying the \[lq]pid\[rq] directive in the
Unicorn config file.
.RS
.RE
.TP
.B -E, --env RAILS_ENV
Run under the given RAILS_ENV.
This sets the RAILS_ENV environment variable.
Acceptable values are exactly those you expect in your Rails
application, typically \[lq]development\[rq] or \[lq]production\[rq].
.RS
.RE
.TP
.B -l, --listen ADDRESS
Listens on a given ADDRESS.
ADDRESS may be in the form of HOST:PORT or PATH; HOST:PORT is taken to
mean a TCP socket and PATH is meant to be a path to a UNIX domain
socket.
Defaults to \[lq]0.0.0.0:8080\[rq] (all addresses on TCP port 8080).
For production deployments, specifying the \[lq]listen\[rq] directive
in CONFIG_FILE is recommended as it allows fine-tuning of socket
options.
.RS
.RE
.SH RACKUP COMPATIBILITY OPTIONS
.TP
.B -o, --host HOST
Listen on a TCP socket belonging to HOST, default is
\[lq]0.0.0.0\[rq] (all addresses).
If specified multiple times on the command-line, only the
last-specified value takes effect.
This option only exists for compatibility with the rackup(1) command,
use of the \[lq]-l\[rq]/\[lq]--listen\[rq] switch is recommended
instead.
.RS
.RE
.TP
.B -p, --port PORT
Listen on the specified TCP PORT, default is 8080.
If specified multiple times on the command-line, only the
last-specified value takes effect.
This option only exists for compatibility with the rackup(1) command,
use of the \[lq]-l\[rq]/\[lq]--listen\[rq] switch is recommended
instead.
.RS
.RE
.TP
.B --path PATH
Mounts the Rails application at the given PATH (instead of
\[lq]/\[rq]).
This is equivalent to setting the RAILS_RELATIVE_URL_ROOT environment
variable.
This is only supported under Rails 2.3 or later at the moment.
.RS
.RE
.SH RUBY OPTIONS
.TP
.B -e, --eval LINE
Evaluate a LINE of Ruby code.
This evaluation happens immediately as the command-line is being
parsed.
.RS
.RE
.TP
.B -d, --debug
Turn on debug mode, the $DEBUG variable is set to true.
For Rails >= 2.3.x, this loads the \f[I]Rails::Rack::Debugger\f[]
middleware.
.RS
.RE
.TP
.B -w, --warn
Turn on verbose warnings, the $VERBOSE variable is set to true.
.RS
.RE
.TP
.B -I, --include PATH
specify $LOAD_PATH.
PATH will be prepended to $LOAD_PATH.
The \[aq]:\[aq] character may be used to delimit multiple directories.
This directive may be used more than once.
Modifications to $LOAD_PATH take place immediately and in the order
they were specified on the command-line.
.RS
.RE
.TP
.B -r, --require LIBRARY
require a specified LIBRARY before executing the application.
The "require" statements will be executed immediately and in the order
they were specified on the command-line.
.RS
.RE
.SH RACKUP FILE
.PP
This defaults to "config.ru" in RAILS_ROOT.
It should be the same file used by rackup(1) and other Rack launchers;
it uses the \f[I]Rack::Builder\f[] DSL.
Unlike many other Rack applications, RACKUP_FILE is completely
\f[I]optional\f[] for Rails, but may be used to disable some of the
default middleware for performance.
.PP
Embedded command-line options are mostly parsed for compatibility with
rackup(1) but strongly discouraged.
.SH ENVIRONMENT VARIABLES
.PP
The RAILS_ENV variable is set by the aforementioned -E switch.
The RAILS_RELATIVE_URL_ROOT is set by the aforementioned --path
switch.
Either of these variables may also be set in the shell or the Unicorn
CONFIG_FILE.
All application or library-specific environment variables (e.g.
TMPDIR, RAILS_ASSET_ID) may always be set in the Unicorn CONFIG_FILE
in addition to the spawning shell.
When transparently upgrading Unicorn, all environment variables set in
the old master process are inherited by the new master process.
Unicorn only uses (and will overwrite) the UNICORN_FD environment
variable internally when doing transparent upgrades.
.SH SIGNALS
.PP
The following UNIX signals may be sent to the master process:
.IP \[bu] 2
HUP - reload config file, app, and gracefully restart all workers
.IP \[bu] 2
INT/TERM - quick shutdown, kills all workers immediately
.IP \[bu] 2
QUIT - graceful shutdown, waits for workers to finish their current
request before finishing.
.IP \[bu] 2
USR1 - reopen all logs owned by the master and all workers.
See Unicorn::Util.reopen_logs for what is considered a log.
.IP \[bu] 2
USR2 - reexecute the running binary.
A separate QUIT should be sent to the original process once the child
is verified to be up and running.
.IP \[bu] 2
WINCH - gracefully stops workers but keeps the master running.
This will only work for daemonized processes.
.IP \[bu] 2
TTIN - increment the number of worker processes by one
.IP \[bu] 2
TTOU - decrement the number of worker processes by one
.PP
See the SIGNALS (http://unicorn.bogomips.org/SIGNALS.html) document for
a full description of all signals used by Unicorn.
.SH SEE ALSO
.IP \[bu] 2
unicorn(1)
.IP \[bu] 2
\f[I]Rack::Builder\f[] ri/RDoc
.IP \[bu] 2
\f[I]Unicorn::Configurator\f[] ri/RDoc
.IP \[bu] 2
Unicorn RDoc (http://unicorn.bogomips.org/)
.IP \[bu] 2
Rack RDoc (http://rack.rubyforge.org/doc/)
.IP \[bu] 2
Rackup HowTo (http://wiki.github.com/rack/rack/tutorial-rackup-howto)
.SH AUTHORS
The Unicorn Community .
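The USR2/QUIT upgrade dance described under SIGNALS can be sketched as a two-step plan. This is an illustrative assumption, not part of unicorn_rails itself; `upgrade!` is a hypothetical helper and the PID would come from a file such as the default RAILS_ROOT/tmp/pids/unicorn.pid:

```ruby
# Hypothetical helper (an assumption for illustration only): the
# two-step signal sequence for a zero-downtime binary upgrade.
UPGRADE_STEPS = [
  [:USR2, "re-execute the running binary; a new master starts"],
  [:QUIT, "gracefully stop the old master once the new one is verified"]
].freeze

def upgrade!(pid, dry_run: false)
  UPGRADE_STEPS.map do |sig, _desc|
    Process.kill(sig, pid) unless dry_run  # signal the master process
    sig
  end
end
```

In practice the QUIT should only be sent after verifying the new master is up, as the SIGNALS section above cautions.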
unicorn-4.7.0/metadata.yml0000644000004100000410000002075512236653132015546 0ustar www-datawww-data--- !ruby/object:Gem::Specification name: !binary |- dW5pY29ybg== version: !ruby/object:Gem::Version version: 4.7.0 prerelease: platform: ruby authors: - Unicorn hackers autorequire: bindir: bin cert_chain: [] date: 2013-11-04 00:00:00.000000000 Z dependencies: - !ruby/object:Gem::Dependency name: !binary |- cmFjaw== requirement: !ruby/object:Gem::Requirement none: false requirements: - - ! '>=' - !ruby/object:Gem::Version version: '0' type: :runtime prerelease: false version_requirements: !ruby/object:Gem::Requirement none: false requirements: - - ! '>=' - !ruby/object:Gem::Version version: '0' - !ruby/object:Gem::Dependency name: !binary |- a2dpbw== requirement: !ruby/object:Gem::Requirement none: false requirements: - - !binary |- fj4= - !ruby/object:Gem::Version version: !binary |- Mi42 type: :runtime prerelease: false version_requirements: !ruby/object:Gem::Requirement none: false requirements: - - !binary |- fj4= - !ruby/object:Gem::Version version: !binary |- Mi42 - !ruby/object:Gem::Dependency name: !binary |- cmFpbmRyb3Bz requirement: !ruby/object:Gem::Requirement none: false requirements: - - !binary |- fj4= - !ruby/object:Gem::Version version: !binary |- MC43 type: :runtime prerelease: false version_requirements: !ruby/object:Gem::Requirement none: false requirements: - - !binary |- fj4= - !ruby/object:Gem::Version version: !binary |- MC43 - !ruby/object:Gem::Dependency name: !binary |- aXNvbGF0ZQ== requirement: !ruby/object:Gem::Requirement none: false requirements: - - !binary |- fj4= - !ruby/object:Gem::Version version: !binary |- My4y type: :development prerelease: false version_requirements: !ruby/object:Gem::Requirement none: false requirements: - - !binary |- fj4= - !ruby/object:Gem::Version version: !binary |- My4y - !ruby/object:Gem::Dependency name: !binary |- d3Jvbmdkb2M= requirement: !ruby/object:Gem::Requirement none: false requirements: - - 
!binary |- fj4= - !ruby/object:Gem::Version version: !binary |- MS42LjE= type: :development prerelease: false version_requirements: !ruby/object:Gem::Requirement none: false requirements: - - !binary |- fj4= - !ruby/object:Gem::Version version: !binary |- MS42LjE= description: ! '\Unicorn is an HTTP server for Rack applications designed to only serve fast clients on low-latency, high-bandwidth connections and take advantage of features in Unix/Unix-like kernels. Slow clients should only be served by placing a reverse proxy capable of fully buffering both the the request and response in between \Unicorn and slow clients.' email: !binary |- bW9uZ3JlbC11bmljb3JuQHJ1Ynlmb3JnZS5vcmc= executables: - !binary |- dW5pY29ybg== - !binary |- dW5pY29ybl9yYWlscw== extensions: - !binary |- ZXh0L3VuaWNvcm5faHR0cC9leHRjb25mLnJi extra_rdoc_files: - FAQ - README - TUNING - PHILOSOPHY - HACKING - DESIGN - CONTRIBUTORS - LICENSE - SIGNALS - KNOWN_ISSUES - TODO - NEWS - ChangeLog - LATEST - lib/unicorn.rb - lib/unicorn/configurator.rb - lib/unicorn/http_server.rb - lib/unicorn/preread_input.rb - lib/unicorn/stream_input.rb - lib/unicorn/tee_input.rb - lib/unicorn/util.rb - lib/unicorn/oob_gc.rb - lib/unicorn/worker.rb - ISSUES - Sandbox - Links - Application_Timeouts files: - .CHANGELOG.old - .document - .gitignore - .mailmap - .manifest - .wrongdoc.yml - Application_Timeouts - CONTRIBUTORS - COPYING - ChangeLog - DESIGN - Documentation/.gitignore - Documentation/GNUmakefile - Documentation/unicorn.1.txt - Documentation/unicorn_rails.1.txt - FAQ - GIT-VERSION-FILE - GIT-VERSION-GEN - GNUmakefile - HACKING - ISSUES - KNOWN_ISSUES - LATEST - LICENSE - Links - NEWS - PHILOSOPHY - README - Rakefile - SIGNALS - Sandbox - TODO - TUNING - bin/unicorn - bin/unicorn_rails - examples/big_app_gc.rb - examples/echo.ru - examples/git.ru - examples/init.sh - examples/logger_mp_safe.rb - examples/logrotate.conf - examples/nginx.conf - examples/unicorn.conf.minimal.rb - examples/unicorn.conf.rb - 
ext/unicorn_http/CFLAGS - ext/unicorn_http/c_util.h - ext/unicorn_http/common_field_optimization.h - ext/unicorn_http/ext_help.h - ext/unicorn_http/extconf.rb - ext/unicorn_http/global_variables.h - ext/unicorn_http/httpdate.c - ext/unicorn_http/unicorn_http.c - ext/unicorn_http/unicorn_http.rl - ext/unicorn_http/unicorn_http_common.rl - lib/unicorn.rb - lib/unicorn/app/exec_cgi.rb - lib/unicorn/app/inetd.rb - lib/unicorn/app/old_rails.rb - lib/unicorn/app/old_rails/static.rb - lib/unicorn/cgi_wrapper.rb - lib/unicorn/configurator.rb - lib/unicorn/const.rb - lib/unicorn/http_request.rb - lib/unicorn/http_response.rb - lib/unicorn/http_server.rb - lib/unicorn/launcher.rb - lib/unicorn/oob_gc.rb - lib/unicorn/preread_input.rb - lib/unicorn/socket_helper.rb - lib/unicorn/ssl_client.rb - lib/unicorn/ssl_configurator.rb - lib/unicorn/ssl_server.rb - lib/unicorn/stream_input.rb - lib/unicorn/tee_input.rb - lib/unicorn/tmpio.rb - lib/unicorn/util.rb - lib/unicorn/version.rb - lib/unicorn/worker.rb - local.mk.sample - man/man1/unicorn.1 - man/man1/unicorn_rails.1 - script/isolate_for_tests - setup.rb - t/.gitignore - t/GNUmakefile - t/README - t/bin/content-md5-put - t/bin/sha1sum.rb - t/bin/unused_listen - t/broken-app.ru - t/detach.ru - t/env.ru - t/fails-rack-lint.ru - t/heartbeat-timeout.ru - t/hijack.ru - t/listener_names.ru - t/my-tap-lib.sh - t/oob_gc.ru - t/oob_gc_path.ru - t/pid.ru - t/preread_input.ru - t/rack-input-tests.ru - t/sslgen.sh - t/t0000-http-basic.sh - t/t0001-reload-bad-config.sh - t/t0002-config-conflict.sh - t/t0002-parser-error.sh - t/t0003-working_directory.sh - t/t0004-heartbeat-timeout.sh - t/t0004-working_directory_broken.sh - t/t0005-working_directory_app.rb.sh - t/t0006-reopen-logs.sh - t/t0006.ru - t/t0007-working_directory_no_embed_cli.sh - t/t0008-back_out_of_upgrade.sh - t/t0009-broken-app.sh - t/t0009-winch_ttin.sh - t/t0010-reap-logging.sh - t/t0011-active-unix-socket.sh - t/t0012-reload-empty-config.sh - 
t/t0013-rewindable-input-false.sh - t/t0013.ru - t/t0014-rewindable-input-true.sh - t/t0014.ru - t/t0015-configurator-internals.sh - t/t0016-trust-x-forwarded-false.sh - t/t0017-trust-x-forwarded-true.sh - t/t0018-write-on-close.sh - t/t0019-max_header_len.sh - t/t0020-at_exit-handler.sh - t/t0021-process_detach.sh - t/t0022-listener_names-preload_app.sh - t/t0100-rack-input-tests.sh - t/t0116-client_body_buffer_size.sh - t/t0116.ru - t/t0200-rack-hijack.sh - t/t0300-no-default-middleware.sh - t/t0600-https-server-basic.sh - t/t9000-preread-input.sh - t/t9001-oob_gc.sh - t/t9002-oob_gc-path.sh - t/test-lib.sh - t/write-on-close.ru - test/aggregate.rb - test/benchmark/README - test/benchmark/dd.ru - test/benchmark/stack.ru - test/exec/README - test/exec/test_exec.rb - test/test_helper.rb - test/unit/test_configurator.rb - test/unit/test_droplet.rb - test/unit/test_http_parser.rb - test/unit/test_http_parser_ng.rb - test/unit/test_http_parser_xftrust.rb - test/unit/test_request.rb - test/unit/test_response.rb - test/unit/test_server.rb - test/unit/test_signals.rb - test/unit/test_sni_hostnames.rb - test/unit/test_socket_helper.rb - test/unit/test_stream_input.rb - test/unit/test_tee_input.rb - test/unit/test_upload.rb - test/unit/test_util.rb - unicorn.gemspec homepage: http://unicorn.bogomips.org/ licenses: - !binary |- R1BMdjIr - !binary |- UnVieSAxLjg= post_install_message: rdoc_options: - -t - ! 'Unicorn: Rack HTTP server for fast clients and Unix' - -W - http://bogomips.org/unicorn.git/tree/%s require_paths: - lib required_ruby_version: !ruby/object:Gem::Requirement none: false requirements: - - ! '>=' - !ruby/object:Gem::Version version: '0' required_rubygems_version: !ruby/object:Gem::Requirement none: false requirements: - - ! 
'>=' - !ruby/object:Gem::Version version: '0' requirements: [] rubyforge_project: !binary |- bW9uZ3JlbA== rubygems_version: 1.8.23 signing_key: specification_version: 3 summary: Rack HTTP server for fast clients and Unix test_files: - test/unit/test_configurator.rb - test/unit/test_http_parser.rb - test/unit/test_http_parser_ng.rb - test/unit/test_http_parser_xftrust.rb - test/unit/test_request.rb - test/unit/test_response.rb - test/unit/test_server.rb - test/unit/test_sni_hostnames.rb - test/unit/test_util.rb unicorn-4.7.0/.manifest0000644000004100000410000000706512236653132015051 0ustar www-datawww-data.CHANGELOG.old .document .gitignore .mailmap .manifest .wrongdoc.yml Application_Timeouts CONTRIBUTORS COPYING ChangeLog DESIGN Documentation/.gitignore Documentation/GNUmakefile Documentation/unicorn.1.txt Documentation/unicorn_rails.1.txt FAQ GIT-VERSION-FILE GIT-VERSION-GEN GNUmakefile HACKING ISSUES KNOWN_ISSUES LATEST LICENSE Links NEWS PHILOSOPHY README Rakefile SIGNALS Sandbox TODO TUNING bin/unicorn bin/unicorn_rails examples/big_app_gc.rb examples/echo.ru examples/git.ru examples/init.sh examples/logger_mp_safe.rb examples/logrotate.conf examples/nginx.conf examples/unicorn.conf.minimal.rb examples/unicorn.conf.rb ext/unicorn_http/CFLAGS ext/unicorn_http/c_util.h ext/unicorn_http/common_field_optimization.h ext/unicorn_http/ext_help.h ext/unicorn_http/extconf.rb ext/unicorn_http/global_variables.h ext/unicorn_http/httpdate.c ext/unicorn_http/unicorn_http.c ext/unicorn_http/unicorn_http.rl ext/unicorn_http/unicorn_http_common.rl lib/unicorn.rb lib/unicorn/app/exec_cgi.rb lib/unicorn/app/inetd.rb lib/unicorn/app/old_rails.rb lib/unicorn/app/old_rails/static.rb lib/unicorn/cgi_wrapper.rb lib/unicorn/configurator.rb lib/unicorn/const.rb lib/unicorn/http_request.rb lib/unicorn/http_response.rb lib/unicorn/http_server.rb lib/unicorn/launcher.rb lib/unicorn/oob_gc.rb lib/unicorn/preread_input.rb lib/unicorn/socket_helper.rb lib/unicorn/ssl_client.rb 
lib/unicorn/ssl_configurator.rb lib/unicorn/ssl_server.rb lib/unicorn/stream_input.rb lib/unicorn/tee_input.rb lib/unicorn/tmpio.rb lib/unicorn/util.rb lib/unicorn/version.rb lib/unicorn/worker.rb local.mk.sample man/man1/unicorn.1 man/man1/unicorn_rails.1 script/isolate_for_tests setup.rb t/.gitignore t/GNUmakefile t/README t/bin/content-md5-put t/bin/sha1sum.rb t/bin/unused_listen t/broken-app.ru t/detach.ru t/env.ru t/fails-rack-lint.ru t/heartbeat-timeout.ru t/hijack.ru t/listener_names.ru t/my-tap-lib.sh t/oob_gc.ru t/oob_gc_path.ru t/pid.ru t/preread_input.ru t/rack-input-tests.ru t/sslgen.sh t/t0000-http-basic.sh t/t0001-reload-bad-config.sh t/t0002-config-conflict.sh t/t0002-parser-error.sh t/t0003-working_directory.sh t/t0004-heartbeat-timeout.sh t/t0004-working_directory_broken.sh t/t0005-working_directory_app.rb.sh t/t0006-reopen-logs.sh t/t0006.ru t/t0007-working_directory_no_embed_cli.sh t/t0008-back_out_of_upgrade.sh t/t0009-broken-app.sh t/t0009-winch_ttin.sh t/t0010-reap-logging.sh t/t0011-active-unix-socket.sh t/t0012-reload-empty-config.sh t/t0013-rewindable-input-false.sh t/t0013.ru t/t0014-rewindable-input-true.sh t/t0014.ru t/t0015-configurator-internals.sh t/t0016-trust-x-forwarded-false.sh t/t0017-trust-x-forwarded-true.sh t/t0018-write-on-close.sh t/t0019-max_header_len.sh t/t0020-at_exit-handler.sh t/t0021-process_detach.sh t/t0022-listener_names-preload_app.sh t/t0100-rack-input-tests.sh t/t0116-client_body_buffer_size.sh t/t0116.ru t/t0200-rack-hijack.sh t/t0300-no-default-middleware.sh t/t0600-https-server-basic.sh t/t9000-preread-input.sh t/t9001-oob_gc.sh t/t9002-oob_gc-path.sh t/test-lib.sh t/write-on-close.ru test/aggregate.rb test/benchmark/README test/benchmark/dd.ru test/benchmark/stack.ru test/exec/README test/exec/test_exec.rb test/test_helper.rb test/unit/test_configurator.rb test/unit/test_droplet.rb test/unit/test_http_parser.rb test/unit/test_http_parser_ng.rb test/unit/test_http_parser_xftrust.rb test/unit/test_request.rb 
test/unit/test_response.rb test/unit/test_server.rb test/unit/test_signals.rb test/unit/test_sni_hostnames.rb test/unit/test_socket_helper.rb test/unit/test_stream_input.rb test/unit/test_tee_input.rb test/unit/test_upload.rb test/unit/test_util.rb unicorn.gemspec unicorn-4.7.0/setup.rb0000644000004100000410000010653512236653132014731 0ustar www-datawww-data# -*- encoding: binary -*- # # setup.rb # # Copyright (c) 2000-2005 Minero Aoki # # This program is free software. # You can distribute/modify this program under the terms of # the GNU LGPL, Lesser General Public License version 2.1. # unless Enumerable.method_defined?(:map) # Ruby 1.4.6 module Enumerable alias map collect end end unless File.respond_to?(:read) # Ruby 1.6 def File.read(fname) open(fname) {|f| return f.read } end end unless Errno.const_defined?(:ENOTEMPTY) # Windows? module Errno class ENOTEMPTY # We do not raise this exception, implementation is not needed. end end end def File.binread(fname) open(fname, 'rb') {|f| return f.read } end # for corrupted Windows' stat(2) def File.dir?(path) File.directory?((path[-1,1] == '/') ? path : path + '/') end class ConfigTable include Enumerable def initialize(rbconfig) @rbconfig = rbconfig @items = [] @table = {} # options @install_prefix = nil @config_opt = nil @verbose = true @no_harm = false end attr_accessor :install_prefix attr_accessor :config_opt attr_writer :verbose def verbose? @verbose end attr_writer :no_harm def no_harm? 
@no_harm end def [](key) lookup(key).resolve(self) end def []=(key, val) lookup(key).set val end def names @items.map {|i| i.name } end def each(&block) @items.each(&block) end def key?(name) @table.key?(name) end def lookup(name) @table[name] or setup_rb_error "no such config item: #{name}" end def add(item) @items.push item @table[item.name] = item end def remove(name) item = lookup(name) @items.delete_if {|i| i.name == name } @table.delete_if {|name, i| i.name == name } item end def load_script(path, inst = nil) if File.file?(path) MetaConfigEnvironment.new(self, inst).instance_eval File.read(path), path end end def savefile '.config' end def load_savefile begin File.foreach(savefile()) do |line| k, v = *line.split(/=/, 2) self[k] = v.strip end rescue Errno::ENOENT setup_rb_error $!.message + "\n#{File.basename($0)} config first" end end def save @items.each {|i| i.value } File.open(savefile(), 'w') {|f| @items.each do |i| f.printf "%s=%s\n", i.name, i.value if i.value? and i.value end } end def load_standard_entries standard_entries(@rbconfig).each do |ent| add ent end end def standard_entries(rbconfig) c = rbconfig rubypath = File.join(c['bindir'], c['ruby_install_name'] + c['EXEEXT']) major = c['MAJOR'].to_i minor = c['MINOR'].to_i teeny = c['TEENY'].to_i version = "#{major}.#{minor}" # ruby ver. >= 1.4.4? 
newpath_p = ((major >= 2) or ((major == 1) and ((minor >= 5) or ((minor == 4) and (teeny >= 4))))) if c['rubylibdir'] # V > 1.6.3 libruby = "#{c['prefix']}/lib/ruby" librubyver = c['rubylibdir'] librubyverarch = c['archdir'] siteruby = c['sitedir'] siterubyver = c['sitelibdir'] siterubyverarch = c['sitearchdir'] elsif newpath_p # 1.4.4 <= V <= 1.6.3 libruby = "#{c['prefix']}/lib/ruby" librubyver = "#{c['prefix']}/lib/ruby/#{version}" librubyverarch = "#{c['prefix']}/lib/ruby/#{version}/#{c['arch']}" siteruby = c['sitedir'] siterubyver = "$siteruby/#{version}" siterubyverarch = "$siterubyver/#{c['arch']}" else # V < 1.4.4 libruby = "#{c['prefix']}/lib/ruby" librubyver = "#{c['prefix']}/lib/ruby/#{version}" librubyverarch = "#{c['prefix']}/lib/ruby/#{version}/#{c['arch']}" siteruby = "#{c['prefix']}/lib/ruby/#{version}/site_ruby" siterubyver = siteruby siterubyverarch = "$siterubyver/#{c['arch']}" end parameterize = lambda {|path| path.sub(/\A#{Regexp.quote(c['prefix'])}/, '$prefix') } if arg = c['configure_args'].split.detect {|arg| /--with-make-prog=/ =~ arg } makeprog = arg.sub(/'/, '').split(/=/, 2)[1] else makeprog = 'make' end [ ExecItem.new('installdirs', 'std/site/home', 'std: install under libruby; site: install under site_ruby; home: install under $HOME')\ {|val, table| case val when 'std' table['rbdir'] = '$librubyver' table['sodir'] = '$librubyverarch' when 'site' table['rbdir'] = '$siterubyver' table['sodir'] = '$siterubyverarch' when 'home' setup_rb_error '$HOME was not set' unless ENV['HOME'] table['prefix'] = ENV['HOME'] table['rbdir'] = '$libdir/ruby' table['sodir'] = '$libdir/ruby' end }, PathItem.new('prefix', 'path', c['prefix'], 'path prefix of target environment'), PathItem.new('bindir', 'path', parameterize.call(c['bindir']), 'the directory for commands'), PathItem.new('libdir', 'path', parameterize.call(c['libdir']), 'the directory for libraries'), PathItem.new('datadir', 'path', parameterize.call(c['datadir']), 'the directory for shared 
data'), PathItem.new('mandir', 'path', parameterize.call(c['mandir']), 'the directory for man pages'), PathItem.new('sysconfdir', 'path', parameterize.call(c['sysconfdir']), 'the directory for system configuration files'), PathItem.new('localstatedir', 'path', parameterize.call(c['localstatedir']), 'the directory for local state data'), PathItem.new('libruby', 'path', libruby, 'the directory for ruby libraries'), PathItem.new('librubyver', 'path', librubyver, 'the directory for standard ruby libraries'), PathItem.new('librubyverarch', 'path', librubyverarch, 'the directory for standard ruby extensions'), PathItem.new('siteruby', 'path', siteruby, 'the directory for version-independent aux ruby libraries'), PathItem.new('siterubyver', 'path', siterubyver, 'the directory for aux ruby libraries'), PathItem.new('siterubyverarch', 'path', siterubyverarch, 'the directory for aux ruby binaries'), PathItem.new('rbdir', 'path', '$siterubyver', 'the directory for ruby scripts'), PathItem.new('sodir', 'path', '$siterubyverarch', 'the directory for ruby extentions'), PathItem.new('rubypath', 'path', rubypath, 'the path to set to #! line'), ProgramItem.new('rubyprog', 'name', rubypath, 'the ruby program using for installation'), ProgramItem.new('makeprog', 'name', makeprog, 'the make program to compile ruby extentions'), SelectItem.new('shebang', 'all/ruby/never', 'ruby', 'shebang line (#!) 
editing mode'), BoolItem.new('without-ext', 'yes/no', 'no', 'does not compile/install ruby extentions') ] end private :standard_entries def load_multipackage_entries multipackage_entries().each do |ent| add ent end end def multipackage_entries [ PackageSelectionItem.new('with', 'name,name...', '', 'ALL', 'package names that you want to install'), PackageSelectionItem.new('without', 'name,name...', '', 'NONE', 'package names that you do not want to install') ] end private :multipackage_entries ALIASES = { 'std-ruby' => 'librubyver', 'stdruby' => 'librubyver', 'rubylibdir' => 'librubyver', 'archdir' => 'librubyverarch', 'site-ruby-common' => 'siteruby', # For backward compatibility 'site-ruby' => 'siterubyver', # For backward compatibility 'bin-dir' => 'bindir', 'bin-dir' => 'bindir', 'rb-dir' => 'rbdir', 'so-dir' => 'sodir', 'data-dir' => 'datadir', 'ruby-path' => 'rubypath', 'ruby-prog' => 'rubyprog', 'ruby' => 'rubyprog', 'make-prog' => 'makeprog', 'make' => 'makeprog' } def fixup ALIASES.each do |ali, name| @table[ali] = @table[name] end @items.freeze @table.freeze @options_re = /\A--(#{@table.keys.join('|')})(?:=(.*))?\z/ end def parse_opt(opt) m = @options_re.match(opt) or setup_rb_error "config: unknown option #{opt}" m.to_a[1,2] end def dllext @rbconfig['DLEXT'] end def value_config?(name) lookup(name).value? end class Item def initialize(name, template, default, desc) @name = name.freeze @template = template @value = default @default = default @description = desc end attr_reader :name attr_reader :description attr_accessor :default alias help_default default def help_opt "--#{@name}=#{@template}" end def value? 
true end def value @value end def resolve(table) @value.gsub(%r<\$([^/]+)>) { table[$1] } end def set(val) @value = check(val) end private def check(val) setup_rb_error "config: --#{name} requires argument" unless val val end end class BoolItem < Item def config_type 'bool' end def help_opt "--#{@name}" end private def check(val) return 'yes' unless val case val when /\Ay(es)?\z/i, /\At(rue)?\z/i then 'yes' when /\An(o)?\z/i, /\Af(alse)\z/i then 'no' else setup_rb_error "config: --#{@name} accepts only yes/no for argument" end end end class PathItem < Item def config_type 'path' end private def check(path) setup_rb_error "config: --#{@name} requires argument" unless path path[0,1] == '$' ? path : File.expand_path(path) end end class ProgramItem < Item def config_type 'program' end end class SelectItem < Item def initialize(name, selection, default, desc) super @ok = selection.split('/') end def config_type 'select' end private def check(val) unless @ok.include?(val.strip) setup_rb_error "config: use --#{@name}=#{@template} (#{val})" end val.strip end end class ExecItem < Item def initialize(name, selection, desc, &block) super name, selection, nil, desc @ok = selection.split('/') @action = block end def config_type 'exec' end def value? 
false end def resolve(table) setup_rb_error "$#{name()} wrongly used as option value" end undef set def evaluate(val, table) v = val.strip.downcase unless @ok.include?(v) setup_rb_error "invalid option --#{@name}=#{val} (use #{@template})" end @action.call v, table end end class PackageSelectionItem < Item def initialize(name, template, default, help_default, desc) super name, template, default, desc @help_default = help_default end attr_reader :help_default def config_type 'package' end private def check(val) unless File.dir?("packages/#{val}") setup_rb_error "config: no such package: #{val}" end val end end class MetaConfigEnvironment def initialize(config, installer) @config = config @installer = installer end def config_names @config.names end def config?(name) @config.key?(name) end def bool_config?(name) @config.lookup(name).config_type == 'bool' end def path_config?(name) @config.lookup(name).config_type == 'path' end def value_config?(name) @config.lookup(name).config_type != 'exec' end def add_config(item) @config.add item end def add_bool_config(name, default, desc) @config.add BoolItem.new(name, 'yes/no', default ? 'yes' : 'no', desc) end def add_path_config(name, default, desc) @config.add PathItem.new(name, 'path', default, desc) end def set_config_default(name, default) @config.lookup(name).default = default end def remove_config(name) @config.remove(name) end # For only multipackage def packages raise '[setup.rb fatal] multi-package metaconfig API packages() called for single-package; contact application package vendor' unless @installer @installer.packages end # For only multipackage def declare_packages(list) raise '[setup.rb fatal] multi-package metaconfig API declare_packages() called for single-package; contact application package vendor' unless @installer @installer.packages = list end end end # class ConfigTable # This module requires: #verbose?, #no_harm? 
module FileOperations def mkdir_p(dirname, prefix = nil) dirname = prefix + File.expand_path(dirname) if prefix $stderr.puts "mkdir -p #{dirname}" if verbose? return if no_harm? # Does not check '/', it's too abnormal. dirs = File.expand_path(dirname).split(%r<(?=/)>) if /\A[a-z]:\z/i =~ dirs[0] disk = dirs.shift dirs[0] = disk + dirs[0] end dirs.each_index do |idx| path = dirs[0..idx].join('') Dir.mkdir path unless File.dir?(path) end end def rm_f(path) $stderr.puts "rm -f #{path}" if verbose? return if no_harm? force_remove_file path end def rm_rf(path) $stderr.puts "rm -rf #{path}" if verbose? return if no_harm? remove_tree path end def remove_tree(path) if File.symlink?(path) remove_file path elsif File.dir?(path) remove_tree0 path else force_remove_file path end end def remove_tree0(path) Dir.foreach(path) do |ent| next if ent == '.' next if ent == '..' entpath = "#{path}/#{ent}" if File.symlink?(entpath) remove_file entpath elsif File.dir?(entpath) remove_tree0 entpath else force_remove_file entpath end end begin Dir.rmdir path rescue Errno::ENOTEMPTY # directory may not be empty end end def move_file(src, dest) force_remove_file dest begin File.rename src, dest rescue File.open(dest, 'wb') {|f| f.write File.binread(src) } File.chmod File.stat(src).mode, dest File.unlink src end end def force_remove_file(path) begin remove_file path rescue end end def remove_file(path) File.chmod 0777, path File.unlink path end def install(from, dest, mode, prefix = nil) $stderr.puts "install #{from} #{dest}" if verbose? return if no_harm? realdest = prefix ? 
prefix + File.expand_path(dest) : dest realdest = File.join(realdest, File.basename(from)) if File.dir?(realdest) str = File.binread(from) if diff?(str, realdest) verbose_off { rm_f realdest if File.exist?(realdest) } File.open(realdest, 'wb') {|f| f.write str } File.chmod mode, realdest File.open("#{objdir_root()}/InstalledFiles", 'a') {|f| if prefix f.puts realdest.sub(prefix, '') else f.puts realdest end } end end def diff?(new_content, path) return true unless File.exist?(path) new_content != File.binread(path) end def command(*args) $stderr.puts args.join(' ') if verbose? system(*args) or raise RuntimeError, "system(#{args.map{|a| a.inspect }.join(' ')}) failed" end def ruby(*args) command config('rubyprog'), *args end def make(task = nil) command(*[config('makeprog'), task].compact) end def extdir?(dir) File.exist?("#{dir}/MANIFEST") or File.exist?("#{dir}/extconf.rb") end def files_of(dir) Dir.open(dir) {|d| return d.select {|ent| File.file?("#{dir}/#{ent}") } } end DIR_REJECT = %w( . .. CVS SCCS RCS CVS.adm .svn ) def directories_of(dir) Dir.open(dir) {|d| return d.select {|ent| File.dir?("#{dir}/#{ent}") } - DIR_REJECT } end end # This module requires: #srcdir_root, #objdir_root, #relpath module HookScriptAPI def get_config(key) @config[key] end alias config get_config # obsolete: use metaconfig to change configuration def set_config(key, val) @config[key] = val end # # srcdir/objdir (works only in the package directory) # def curr_srcdir "#{srcdir_root()}/#{relpath()}" end def curr_objdir "#{objdir_root()}/#{relpath()}" end def srcfile(path) "#{curr_srcdir()}/#{path}" end def srcexist?(path) File.exist?(srcfile(path)) end def srcdirectory?(path) File.dir?(srcfile(path)) end def srcfile?(path) File.file?(srcfile(path)) end def srcentries(path = '.') Dir.open("#{curr_srcdir()}/#{path}") {|d| return d.to_a - %w(. ..) 
  }
end

def srcfiles(path = '.')
  srcentries(path).select {|fname|
    File.file?(File.join(curr_srcdir(), path, fname))
  }
end

def srcdirectories(path = '.')
  srcentries(path).select {|fname|
    File.dir?(File.join(curr_srcdir(), path, fname))
  }
end

end   # module HookScriptAPI


class ToplevelInstaller

  Version   = '3.4.1'
  Copyright = 'Copyright (c) 2000-2005 Minero Aoki'

  TASKS = [
    [ 'all',       'do config, setup, then install' ],
    [ 'config',    'saves your configurations' ],
    [ 'show',      'shows current configuration' ],
    [ 'setup',     'compiles ruby extensions and others' ],
    [ 'install',   'installs files' ],
    [ 'test',      'run all tests in test/' ],
    [ 'clean',     "does `make clean' for each extension" ],
    [ 'distclean', "does `make distclean' for each extension" ]
  ]

  def ToplevelInstaller.invoke
    config = ConfigTable.new(load_rbconfig())
    config.load_standard_entries
    config.load_multipackage_entries if multipackage?
    config.fixup
    klass = (multipackage?() ? ToplevelInstallerMulti : ToplevelInstaller)
    klass.new(File.dirname($0), config).invoke
  end

  def ToplevelInstaller.multipackage?
File.dir?(File.dirname($0) + '/packages') end def ToplevelInstaller.load_rbconfig if arg = ARGV.detect {|arg| /\A--rbconfig=/ =~ arg } ARGV.delete(arg) load File.expand_path(arg.split(/=/, 2)[1]) $".push 'rbconfig.rb' else require 'rbconfig' end ::Config::CONFIG end def initialize(ardir_root, config) @ardir = File.expand_path(ardir_root) @config = config # cache @valid_task_re = nil end def config(key) @config[key] end def inspect "#<#{self.class} #{__id__()}>" end def invoke run_metaconfigs case task = parsearg_global() when nil, 'all' parsearg_config init_installers exec_config exec_setup exec_install else case task when 'config', 'test' ; when 'clean', 'distclean' @config.load_savefile if File.exist?(@config.savefile) else @config.load_savefile end __send__ "parsearg_#{task}" init_installers __send__ "exec_#{task}" end end def run_metaconfigs @config.load_script "#{@ardir}/metaconfig" end def init_installers @installer = Installer.new(@config, @ardir, File.expand_path('.')) end # # Hook Script API bases # def srcdir_root @ardir end def objdir_root '.' end def relpath '.' end # # Option Parsing # def parsearg_global while arg = ARGV.shift case arg when /\A\w+\z/ setup_rb_error "invalid task: #{arg}" unless valid_task?(arg) return arg when '-q', '--quiet' @config.verbose = false when '--verbose' @config.verbose = true when '--help' print_usage $stdout exit 0 when '--version' puts "#{File.basename($0)} version #{Version}" exit 0 when '--copyright' puts Copyright exit 0 else setup_rb_error "unknown global option '#{arg}'" end end nil end def valid_task?(t) valid_task_re() =~ t end def valid_task_re @valid_task_re ||= /\A(?:#{TASKS.map {|task,desc| task }.join('|')})\z/ end def parsearg_no_options unless ARGV.empty? 
      task = caller(0).first.slice(%r<`parsearg_(\w+)'>, 1)
      setup_rb_error "#{task}: unknown options: #{ARGV.join(' ')}"
    end
  end

  alias parsearg_show      parsearg_no_options
  alias parsearg_setup     parsearg_no_options
  alias parsearg_test      parsearg_no_options
  alias parsearg_clean     parsearg_no_options
  alias parsearg_distclean parsearg_no_options

  def parsearg_config
    evalopt = []
    set = []
    @config.config_opt = []
    while i = ARGV.shift
      if /\A--?\z/ =~ i
        @config.config_opt = ARGV.dup
        break
      end
      name, value = *@config.parse_opt(i)
      if @config.value_config?(name)
        @config[name] = value
      else
        evalopt.push [name, value]
      end
      set.push name
    end
    evalopt.each do |name, value|
      @config.lookup(name).evaluate value, @config
    end
    # Check if configuration is valid
    set.each do |n|
      @config[n] if @config.value_config?(n)
    end
  end

  def parsearg_install
    @config.no_harm = false
    @config.install_prefix = ''
    while a = ARGV.shift
      case a
      when '--no-harm'
        @config.no_harm = true
      when /\A--prefix=/
        path = a.split(/=/, 2)[1]
        path = File.expand_path(path) unless path[0,1] == '/'
        @config.install_prefix = path
      else
        setup_rb_error "install: unknown option #{a}"
      end
    end
  end

  def print_usage(out)
    out.puts 'Typical Installation Procedure:'
    out.puts "  $ ruby #{File.basename $0} config"
    out.puts "  $ ruby #{File.basename $0} setup"
    out.puts "  # ruby #{File.basename $0} install (may require root privilege)"
    out.puts
    out.puts 'Detailed Usage:'
    out.puts "  ruby #{File.basename $0} <global option>"
    out.puts "  ruby #{File.basename $0} [<global options>] <task> [<task options>]"
    fmt = "  %-24s %s\n"
    out.puts
    out.puts 'Global options:'
    out.printf fmt, '-q,--quiet', 'suppress message outputs'
    out.printf fmt, ' --verbose', 'output messages verbosely'
    out.printf fmt, ' --help', 'print this message'
    out.printf fmt, ' --version', 'print version and quit'
    out.printf fmt, ' --copyright', 'print copyright and quit'
    out.puts
    out.puts 'Tasks:'
    TASKS.each do |name, desc|
      out.printf fmt, name, desc
    end
    fmt = "  %-24s %s [%s]\n"
    out.puts
    out.puts 'Options for CONFIG or ALL:'
    @config.each do |item|
      out.printf
fmt, item.help_opt, item.description, item.help_default end out.printf fmt, '--rbconfig=path', 'rbconfig.rb to load',"running ruby's" out.puts out.puts 'Options for INSTALL:' out.printf fmt, '--no-harm', 'only display what to do if given', 'off' out.printf fmt, '--prefix=path', 'install path prefix', '' out.puts end # # Task Handlers # def exec_config @installer.exec_config @config.save # must be final end def exec_setup @installer.exec_setup end def exec_install @installer.exec_install end def exec_test @installer.exec_test end def exec_show @config.each do |i| printf "%-20s %s\n", i.name, i.value if i.value? end end def exec_clean @installer.exec_clean end def exec_distclean @installer.exec_distclean end end # class ToplevelInstaller class ToplevelInstallerMulti < ToplevelInstaller include FileOperations def initialize(ardir_root, config) super @packages = directories_of("#{@ardir}/packages") raise 'no package exists' if @packages.empty? @root_installer = Installer.new(@config, @ardir, File.expand_path('.')) end def run_metaconfigs @config.load_script "#{@ardir}/metaconfig", self @packages.each do |name| @config.load_script "#{@ardir}/packages/#{name}/metaconfig" end end attr_reader :packages def packages=(list) raise 'package list is empty' if list.empty? list.each do |name| raise "directory packages/#{name} does not exist"\ unless File.dir?("#{@ardir}/packages/#{name}") end @packages = list end def init_installers @installers = {} @packages.each do |pack| @installers[pack] = Installer.new(@config, "#{@ardir}/packages/#{pack}", "packages/#{pack}") end with = extract_selection(config('with')) without = extract_selection(config('without')) @selected = @installers.keys.select {|name| (with.empty? 
or with.include?(name)) \
                and not without.include?(name)
    }
  end

  def extract_selection(list)
    a = list.split(/,/)
    a.each do |name|
      setup_rb_error "no such package: #{name}" unless @installers.key?(name)
    end
    a
  end

  def print_usage(f)
    super
    f.puts 'Included packages:'
    f.puts '  ' + @packages.sort.join(' ')
    f.puts
  end

  #
  # Task Handlers
  #

  def exec_config
    run_hook 'pre-config'
    each_selected_installers {|inst| inst.exec_config }
    run_hook 'post-config'
    @config.save   # must be final
  end

  def exec_setup
    run_hook 'pre-setup'
    each_selected_installers {|inst| inst.exec_setup }
    run_hook 'post-setup'
  end

  def exec_install
    run_hook 'pre-install'
    each_selected_installers {|inst| inst.exec_install }
    run_hook 'post-install'
  end

  def exec_test
    run_hook 'pre-test'
    each_selected_installers {|inst| inst.exec_test }
    run_hook 'post-test'
  end

  def exec_clean
    rm_f @config.savefile
    run_hook 'pre-clean'
    each_selected_installers {|inst| inst.exec_clean }
    run_hook 'post-clean'
  end

  def exec_distclean
    rm_f @config.savefile
    run_hook 'pre-distclean'
    each_selected_installers {|inst| inst.exec_distclean }
    run_hook 'post-distclean'
  end

  #
  # lib
  #

  def each_selected_installers
    Dir.mkdir 'packages' unless File.dir?('packages')
    @selected.each do |pack|
      $stderr.puts "Processing the package `#{pack}' ..." if verbose?
      Dir.mkdir "packages/#{pack}" unless File.dir?("packages/#{pack}")
      Dir.chdir "packages/#{pack}"
      yield @installers[pack]
      Dir.chdir '../..'
    end
  end

  def run_hook(id)
    @root_installer.run_hook id
  end

  # module FileOperations requires this
  def verbose?
    @config.verbose?
  end

  # module FileOperations requires this
  def no_harm?
    @config.no_harm?
  end

end   # class ToplevelInstallerMulti


class Installer

  FILETYPES = %w( bin lib ext data conf man )

  include FileOperations
  include HookScriptAPI

  def initialize(config, srcroot, objroot)
    @config = config
    @srcdir = File.expand_path(srcroot)
    @objdir = File.expand_path(objroot)
    @currdir = '.'
end def inspect "#<#{self.class} #{File.basename(@srcdir)}>" end def noop(rel) end # # Hook Script API base methods # def srcdir_root @srcdir end def objdir_root @objdir end def relpath @currdir end # # Config Access # # module FileOperations requires this def verbose? @config.verbose? end # module FileOperations requires this def no_harm? @config.no_harm? end def verbose_off begin save, @config.verbose = @config.verbose?, false yield ensure @config.verbose = save end end # # TASK config # def exec_config exec_task_traverse 'config' end alias config_dir_bin noop alias config_dir_lib noop def config_dir_ext(rel) extconf if extdir?(curr_srcdir()) end alias config_dir_data noop alias config_dir_conf noop alias config_dir_man noop def extconf ruby "#{curr_srcdir()}/extconf.rb", *@config.config_opt end # # TASK setup # def exec_setup exec_task_traverse 'setup' end def setup_dir_bin(rel) files_of(curr_srcdir()).each do |fname| update_shebang_line "#{curr_srcdir()}/#{fname}" end end alias setup_dir_lib noop def setup_dir_ext(rel) make if extdir?(curr_srcdir()) end alias setup_dir_data noop alias setup_dir_conf noop alias setup_dir_man noop def update_shebang_line(path) return if no_harm? return if config('shebang') == 'never' old = Shebang.load(path) if old $stderr.puts "warning: #{path}: Shebang line includes too many args. It is not portable and your program may not work." if old.args.size > 1 new = new_shebang(old) return if new.to_s == old.to_s else return unless config('shebang') == 'all' new = Shebang.new(config('rubypath')) end $stderr.puts "updating shebang: #{File.basename(path)}" if verbose? 
open_atomic_writer(path) {|output| File.open(path, 'rb') {|f| f.gets if old # discard output.puts new.to_s output.print f.read } } end def new_shebang(old) if /\Aruby/ =~ File.basename(old.cmd) Shebang.new(config('rubypath'), old.args) elsif File.basename(old.cmd) == 'env' and old.args.first == 'ruby' Shebang.new(config('rubypath'), old.args[1..-1]) else return old unless config('shebang') == 'all' Shebang.new(config('rubypath')) end end def open_atomic_writer(path, &block) tmpfile = File.basename(path) + '.tmp' begin File.open(tmpfile, 'wb', &block) File.rename tmpfile, File.basename(path) ensure File.unlink tmpfile if File.exist?(tmpfile) end end class Shebang def Shebang.load(path) line = nil File.open(path) {|f| line = f.gets } return nil unless /\A#!/ =~ line parse(line) end def Shebang.parse(line) cmd, *args = *line.strip.sub(/\A\#!/, '').split(' ') new(cmd, args) end def initialize(cmd, args = []) @cmd = cmd @args = args end attr_reader :cmd attr_reader :args def to_s "#! #{@cmd}" + (@args.empty? ? 
'' : " #{@args.join(' ')}")
    end

  end   # class Shebang

  #
  # TASK install
  #

  def exec_install
    rm_f 'InstalledFiles'
    exec_task_traverse 'install'
  end

  def install_dir_bin(rel)
    install_files targetfiles(), "#{config('bindir')}/#{rel}", 0755
  end

  def install_dir_lib(rel)
    install_files libfiles(), "#{config('rbdir')}/#{rel}", 0644
  end

  def install_dir_ext(rel)
    return unless extdir?(curr_srcdir())
    install_files rubyextensions('.'),
                  "#{config('sodir')}/#{File.dirname(rel)}",
                  0555
  end

  def install_dir_data(rel)
    install_files targetfiles(), "#{config('datadir')}/#{rel}", 0644
  end

  def install_dir_conf(rel)
    # FIXME: should not remove current config files
    # (rename previous file to .old/.org)
    install_files targetfiles(), "#{config('sysconfdir')}/#{rel}", 0644
  end

  def install_dir_man(rel)
    install_files targetfiles(), "#{config('mandir')}/#{rel}", 0644
  end

  def install_files(list, dest, mode)
    mkdir_p dest, @config.install_prefix
    list.each do |fname|
      install fname, dest, mode, @config.install_prefix
    end
  end

  def libfiles
    glob_reject(%w(*.y *.output), targetfiles())
  end

  def rubyextensions(dir)
    ents = glob_select("*.#{@config.dllext}", targetfiles())
    if ents.empty?
      setup_rb_error "no ruby extension exists: 'ruby #{$0} setup' first"
    end
    ents
  end

  def targetfiles
    mapdir(existfiles() - hookfiles())
  end

  def mapdir(ents)
    ents.map {|ent|
      if File.exist?(ent)
      then ent                         # objdir
      else "#{curr_srcdir()}/#{ent}"   # srcdir
      end
    }
  end

  # picked up many entries from cvs-1.11.1/src/ignore.c
  JUNK_FILES = %w(
    core RCSLOG tags TAGS .make.state
    .nse_depinfo #* .#* cvslog.* ,* .del-* *.olb *~
    *.old *.bak *.BAK *.orig *.rej _$* *$ *.org *.in .*
  )

  def existfiles
    glob_reject(JUNK_FILES, (files_of(curr_srcdir()) | files_of('.')))
  end

  def hookfiles
    %w( pre-%s post-%s pre-%s.rb post-%s.rb ).map {|fmt|
      %w( config setup install clean ).map {|t| sprintf(fmt, t) }
    }.flatten
  end

  def glob_select(pat, ents)
    re = globs2re([pat])
    ents.select {|ent| re =~ ent }
  end

  def glob_reject(pats, ents)
    re = globs2re(pats)
    ents.reject {|ent| re =~ ent }
  end

  GLOB2REGEX = {
    '.' => '\.',
    '$' => '\$',
    '#' => '\#',
    '*' => '.*'
  }

  def globs2re(pats)
    /\A(?:#{
      pats.map {|pat| pat.gsub(/[\.\$\#\*]/) {|ch| GLOB2REGEX[ch] } }.join('|')
    })\z/
  end

  #
  # TASK test
  #

  TESTDIR = 'test'

  def exec_test
    unless File.directory?('test')
      $stderr.puts 'no test in this package' if verbose?
      return
    end
    $stderr.puts 'Running tests...' if verbose?
    begin
      require 'test/unit'
    rescue LoadError
      setup_rb_error 'test/unit cannot be loaded.  You need Ruby 1.8 or later to invoke this task.'
end runner = Test::Unit::AutoRunner.new(true) runner.to_run << TESTDIR runner.run end # # TASK clean # def exec_clean exec_task_traverse 'clean' rm_f @config.savefile rm_f 'InstalledFiles' end alias clean_dir_bin noop alias clean_dir_lib noop alias clean_dir_data noop alias clean_dir_conf noop alias clean_dir_man noop def clean_dir_ext(rel) return unless extdir?(curr_srcdir()) make 'clean' if File.file?('Makefile') end # # TASK distclean # def exec_distclean exec_task_traverse 'distclean' rm_f @config.savefile rm_f 'InstalledFiles' end alias distclean_dir_bin noop alias distclean_dir_lib noop def distclean_dir_ext(rel) return unless extdir?(curr_srcdir()) make 'distclean' if File.file?('Makefile') end alias distclean_dir_data noop alias distclean_dir_conf noop alias distclean_dir_man noop # # Traversing # def exec_task_traverse(task) run_hook "pre-#{task}" FILETYPES.each do |type| if type == 'ext' and config('without-ext') == 'yes' $stderr.puts 'skipping ext/* by user option' if verbose? next end traverse task, type, "#{task}_dir_#{type}" end run_hook "post-#{task}" end def traverse(task, rel, mid) dive_into(rel) { run_hook "pre-#{task}" __send__ mid, rel.sub(%r[\A.*?(?:/|\z)], '') directories_of(curr_srcdir()).each do |d| traverse task, "#{rel}/#{d}", mid end run_hook "post-#{task}" } end def dive_into(rel) return unless File.dir?("#{@srcdir}/#{rel}") dir = File.basename(rel) Dir.mkdir dir unless File.dir?(dir) prevdir = Dir.pwd Dir.chdir dir $stderr.puts '---> ' + rel if verbose? @currdir = rel yield Dir.chdir prevdir $stderr.puts '<--- ' + rel if verbose? 
@currdir = File.dirname(rel) end def run_hook(id) path = [ "#{curr_srcdir()}/#{id}", "#{curr_srcdir()}/#{id}.rb" ].detect {|cand| File.file?(cand) } return unless path begin instance_eval File.read(path), path, 1 rescue raise if $DEBUG setup_rb_error "hook #{path} failed:\n" + $!.message end end end # class Installer class SetupError < StandardError; end def setup_rb_error(msg) raise SetupError, msg end if $0 == __FILE__ begin ToplevelInstaller.invoke rescue SetupError raise if $DEBUG $stderr.puts $!.message $stderr.puts "Try 'ruby #{$0} --help' for detailed usage." exit 1 end end unicorn-4.7.0/COPYING0000644000004100000410000010436712236653132014300 0ustar www-datawww-data GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. 
To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. 
States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. 
An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. 
However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. 
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. 
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. 
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. 
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. 
But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. 
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. 
Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. 
The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. 
"Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. 
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. 
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. 
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: Copyright (C) This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. 
The hypothetical commands `show w' and `show c' should show the
appropriate parts of the General Public License.  Of course, your
program's commands might be different; for a GUI interface, you would
use an "about box".

You should also get your employer (if you work as a programmer) or
school, if any, to sign a "copyright disclaimer" for the program, if
necessary.  For more information on this, and how to apply and follow
the GNU GPL, see .

The GNU General Public License does not permit incorporating your
program into proprietary programs.  If your program is a subroutine
library, you may consider it more useful to permit linking proprietary
applications with the library.  If this is what you want to do, use the
GNU Lesser General Public License instead of this License.  But first,
please read .

unicorn-4.7.0/GIT-VERSION-FILE
GIT_VERSION = 4.7.0

unicorn-4.7.0/PHILOSOPHY
= The Philosophy Behind unicorn

Being a server that only runs on Unix-like platforms, unicorn is
strongly tied to the Unix philosophy of doing one thing and (hopefully)
doing it well.  Despite using HTTP, unicorn is strictly a _backend_
application server for running Rack-based Ruby applications.

== Avoid Complexity

Instead of attempting to be efficient at serving slow clients, unicorn
relies on a buffering reverse proxy to efficiently deal with slow
clients.

unicorn uses an old-fashioned preforking worker model with blocking I/O.
Our processing model is the antithesis of more modern (and theoretically
more efficient) server processing models using threads or non-blocking
I/O with events.

=== Threads and Events Are Hard

...to many developers.  The reasons for this are beyond the scope of
this document.  unicorn avoids concurrency within each worker process
so you have fewer things to worry about when developing your
application.
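The preforking, blocking-I/O model can be sketched in a few lines of plain Ruby. This is an illustrative toy under simplifying assumptions — a single shared listener and two workers that each serve exactly one request and exit — not unicorn's actual HttpServer, which adds signal handling, worker reaping, and Rack dispatch on top of the same basic shape:

```ruby
require 'socket'

# Bind ONE listener in the parent; forked workers inherit the socket.
listener = TCPServer.new('127.0.0.1', 0)
port = listener.addr[1]

# Each worker blocks in accept(); the kernel hands every incoming
# connection to exactly one blocked worker (kernel load balancing).
workers = 2.times.map do
  fork do
    client = listener.accept        # blocking I/O, one client at a time
    client.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nhi")
    client.close
    exit!(0)
  end
end

# Simulate two fast clients hitting the shared port.
replies = 2.times.map do
  sock = TCPSocket.new('127.0.0.1', port)
  data = sock.read                  # read until the worker closes
  sock.close
  data
end

workers.each { |pid| Process.waitpid(pid) }
puts replies.all? { |r| r.end_with?("hi") }  # => true
```

Because each worker only calls accept when it is free, requests never pile up behind a busy process; they wait in the kernel's listen backlog until some worker is ready.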
Of course unicorn can use multiple worker processes to utilize multiple
CPUs or spindles.  Applications can still use threads internally,
however.

== Slow Clients Are Problematic

Most benchmarks we've seen don't tell you this, and unicorn doesn't
care about slow clients... but you should.

A "slow client" can be any client outside of your datacenter.  Network
traffic within a local network is always faster than traffic that
crosses outside of it.  The laws of physics do not allow otherwise.

Persistent connections were introduced in HTTP/1.1 to reduce latency
from connection establishment and TCP slow start.  They also waste
server resources when clients are idle.

Persistent connections mean one of the unicorn worker processes
(depending on your application, it can be very memory hungry) would
spend a significant amount of its time idle keeping the connection
alive and not doing anything else.  Being single-threaded and using
blocking I/O, a worker cannot serve other clients while keeping a
connection alive.  Thus unicorn does not implement persistent
connections.

If your application responses are larger than the socket buffer or if
you're handling large requests (uploads), worker processes will also be
bottlenecked by the speed of the *client* connection.  You should not
allow unicorn to serve clients outside of your local network.

== Application Concurrency != Network Concurrency

Performance is asymmetric across the different subsystems of the
machine and parts of the network.  CPUs and main memory can process
gigabytes of data in a second; clients on the Internet are usually only
capable of a tiny fraction of that.  unicorn deployments should avoid
dealing with slow clients directly and instead rely on a reverse proxy
to shield it from the effects of slow I/O.

== Improved Performance Through Reverse Proxying

By acting as a buffer to shield unicorn from slow I/O, a reverse proxy
will inevitably incur overhead in the form of extra data copies.
However, as I/O within a local network is fast (and faster still with
local sockets), this overhead is negligible for the vast majority of
HTTP requests and responses.

The ideal reverse proxy complements the weaknesses of unicorn.  A
reverse proxy for unicorn should meet the following requirements:

1. It should fully buffer all HTTP requests (and large responses).
   Each request should be "corked" in the reverse proxy and sent as
   fast as possible to the backend unicorn processes.  This is the most
   important feature to look for when choosing a reverse proxy for
   unicorn.

2. It should spend minimal time in userspace.  Network (and disk) I/O
   are system-level tasks and usually managed by the kernel.  This may
   change if userspace TCP stacks become more popular in the future,
   but the reverse proxy should not waste time with application-level
   logic.  These concerns should be separated.

3. It should avoid context switches and CPU scheduling overhead.  In
   many (most?) cases, network devices and their interrupts are only
   handled by one CPU at a time.  It should avoid contention within the
   system by serializing all network I/O into one (or a few) userspace
   processes.  Network I/O is not a CPU-intensive task and it is not
   helpful to use multiple CPU cores (at least not for GigE).

4. It should efficiently manage persistent connections (and pipelining)
   to slow clients.  If you care to serve slow clients outside your
   network, then these features of HTTP/1.1 will help.

5. It should (optionally) serve static files.  If you have static files
   on your site (especially large ones), they are far more efficiently
   served with as few data copies as possible (e.g. with sendfile() to
   completely avoid copying the data to userspace).

nginx is the only (Free) solution we know of that meets the above
requirements.
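Requirement 1 in practice mostly comes down to nginx's buffering behavior. A minimal sketch of such a configuration follows; the upstream name, socket path, and size values here are placeholder assumptions for illustration, not values prescribed by unicorn (see unicorn's own examples for a complete, tuned config):

```nginx
# Placeholder names/paths; adjust for your deployment.
upstream unicorn_backend {
  # a Unix domain socket avoids TCP overhead on the same host
  server unix:/path/to/unicorn.sock fail_timeout=0;
}

server {
  listen 80;

  # nginx buffers the entire request body before connecting to the
  # backend, shielding unicorn workers from slow uploads
  client_max_body_size 4g;

  location / {
    proxy_buffering on;   # buffer responses for slow readers (the default)
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://unicorn_backend;
  }
}
```

With this shape, a unicorn worker only ever talks to nginx over a fast local socket: it receives the fully-buffered request at local-network speed and can hand the response back to nginx without waiting on the slow client.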
Indeed, the folks behind unicorn have deployed nginx as a reverse proxy
not only for Ruby applications, but also for production applications
running Apache/mod_perl, Apache/mod_php and Apache Tomcat.  In every
single case, performance improved because application servers were able
to use backend resources more efficiently and spend less time waiting
on slow I/O.

== Worse Is Better

Requirements and scope for applications change frequently and
drastically.  Thus languages like Ruby and frameworks like Rails were
built to give developers fewer things to worry about in the face of
rapid change.

On the other hand, stable protocols which host your applications (HTTP
and TCP) only change rarely.  This is why we recommend you NOT tie your
rapidly-changing application logic directly into the processes that
deal with the stable outside world.  Instead, use HTTP as a common RPC
protocol to communicate between your frontend and backend.

In short: separate your concerns.

Of course a theoretical "perfect" solution would combine the pieces
and _maybe_ give you better performance at the end of the day, but
that is not the Unix way.

== Just Worse in Some Cases

unicorn is not suited for all applications.  unicorn is optimized for
applications that are CPU/memory/disk intensive and spend little time
waiting on external resources (e.g. a database server or external API).

unicorn is highly inefficient for Comet/reverse-HTTP/push applications
where the HTTP connection spends a large amount of time idle.
Nevertheless, the ease of troubleshooting, debugging, and management of
unicorn may still outweigh the drawbacks for these applications.

{Rainbows!}[http://rainbows.rubyforge.org/] aims to fill the gap for
odd corner cases where the nginx + unicorn combination is not enough.
While Rainbows! management/administration is largely identical to
unicorn, Rainbows! is far more ambitious and has seen little real-world
usage.
unicorn-4.7.0/GIT-VERSION-GEN
#!/usr/bin/env ruby
DEF_VER = "v4.7.0"
CONSTANT = "Unicorn::Const::UNICORN_VERSION"
RVF = "lib/unicorn/version.rb"
GVF = "GIT-VERSION-FILE"
vn = DEF_VER

# First see if there is a version file (included in release tarballs),
# then try git-describe, then default.
if File.exist?(".git")
  describe = `git describe --abbrev=4 HEAD 2>/dev/null`.strip
  case describe
  when /\Av[0-9]*/
    vn = describe
    system(*%w(git update-index -q --refresh))
    unless `git diff-index --name-only HEAD --`.chomp.empty?
      vn << "-dirty"
    end
    vn.tr!('-', '.')
  end
end

# strip the leading "v" (use sub, not sub!, since sub! returns nil
# when nothing matches)
vn = vn.sub(/\Av/, "")

# generate the Ruby constant
new_ruby_version = "#{CONSTANT} = '#{vn}'\n"
cur_ruby_version = File.read(RVF) rescue nil
if new_ruby_version != cur_ruby_version
  File.open(RVF, "w") { |fp| fp.write(new_ruby_version) }
end

# generate the makefile snippet
new_make_version = "GIT_VERSION = #{vn}\n"
cur_make_version = File.read(GVF) rescue nil
if new_make_version != cur_make_version
  File.open(GVF, "w") { |fp| fp.write(new_make_version) }
end

puts vn if $0 == __FILE__

unicorn-4.7.0/local.mk.sample
# this is the local.mk file used by Eric Wong on his dev boxes.
# GNUmakefile will source local.mk in the top-level source tree
# if it is present.
#
# This depends on a bunch of GNU-isms from bash, sed, touch.

DLEXT := so

# Avoid loading rubygems to speed up tests because gmake is
# fork+exec heavy with Ruby.
prefix = $(HOME)

# XXX clean this up
ifeq ($(r192),)
ifeq ($(r19),)
ifeq ($(rbx),)
ifeq ($(r186),)
  RUBY := $(prefix)/bin/ruby
else
  prefix := $(prefix)/r186-p114
  export PATH := $(prefix)/bin:$(PATH)
  RUBY := $(prefix)/bin/ruby
endif
else
  prefix := $(prefix)/rbx
  export PATH := $(prefix)/bin:$(PATH)
  RUBY := $(prefix)/bin/rbx
endif
else
  prefix := $(prefix)/ruby-1.9
  export PATH := $(prefix)/bin:$(PATH)
  RUBY := $(prefix)/bin/ruby --disable-gems
endif
else
  prefix := $(prefix)/ruby-1.9.2
  export PATH := $(prefix)/bin:$(PATH)
  RUBY := $(prefix)/bin/ruby --disable-gems
endif

# pipefail is THE reason to use bash (v3+) or newer revisions of ksh93
# SHELL := /bin/bash -e -o pipefail
SHELL := /bin/ksh93 -e -o pipefail

full-test: test-18 test-191 test-192 test-rbx test-186

# FIXME: keep eye on Rubinius activity and wait for fixes from upstream
# so we don't need RBX_SKIP anymore
test-rbx: export RBX_SKIP := 1
test-rbx: export RUBY := $(RUBY)
test-rbx:
	$(MAKE) test test-integration rbx=T 2>&1 |sed -e 's!^!rbx !'

test-186:
	$(MAKE) test-all r186=1 2>&1 |sed 's!^!1.8.6 !'

test-18:
	$(MAKE) test-all 2>&1 |sed 's!^!1.8 !'

test-191:
	$(MAKE) test-all r19=1 2>&1 |sed 's!^!1.9.1 !'

test-192:
	$(MAKE) test-all r192=1 2>&1 |sed 's!^!1.9.2 !'
unicorn-4.7.0/.gitignore
*.o
*.bundle
*.log
*.so
*.rbc
.DS_Store
/.config
/InstalledFiles
/doc
/local.mk
/test/rbx-*
/test/ruby-*
ext/unicorn_http/Makefile
ext/unicorn_http/unicorn_http.c
log/
pkg/
/vendor
/NEWS
/ChangeLog
/.manifest
/GIT-VERSION-FILE
/man
/tmp
/LATEST
/lib/unicorn/version.rb

unicorn-4.7.0/lib/unicorn.rb
# -*- encoding: binary -*-
require 'fcntl'
require 'etc'
require 'stringio'
require 'rack'
require 'kgio'

# :stopdoc:
# Unicorn module containing all of the classes (including C extensions)
# for running a Unicorn web server.  It contains a minimalist HTTP
# server with just enough functionality to service web application
# requests as fast as possible.
# :startdoc:

# \Unicorn exposes very little of a user-visible API and most of its
# internals are subject to change.  \Unicorn is designed to host Rack
# applications, so applications should be written against the Rack SPEC
# and not \Unicorn internals.
module Unicorn

  # Raised inside TeeInput when a client closes the socket inside the
  # application dispatch.  This is always raised with an empty backtrace
  # since there is nothing in the application stack that is responsible
  # for client shutdowns/disconnects.  This exception is visible to Rack
  # applications unless PrereadInput middleware is loaded.
  class ClientShutdown < EOFError
  end

  # :stopdoc:

  # This returns a lambda to pass in as the app; it does not "build" the
  # app (which we defer based on the outcome of "preload_app" in the
  # Unicorn config).  The returned lambda will be called when it is
  # time to build the app.
  def self.builder(ru, op)
    # allow Configurator to parse cli switches embedded in the ru file
    op = Unicorn::Configurator::RACKUP.merge!(:file => ru, :optparse => op)

    # Op is going to get cleared before the returned lambda is called, so
    # save this value so that it's still there when we need it:
    no_default_middleware = op[:no_default_middleware]

    # always called after config file parsing, may be called after forking
    lambda do ||
      inner_app = case ru
      when /\.ru$/
        raw = File.read(ru)
        raw.sub!(/^__END__\n.*/, '')
        eval("Rack::Builder.new {(\n#{raw}\n)}.to_app", TOPLEVEL_BINDING, ru)
      else
        require ru
        Object.const_get(File.basename(ru, '.rb').capitalize)
      end

      pp({ :inner_app => inner_app }) if $DEBUG

      return inner_app if no_default_middleware

      # return value, matches rackup defaults based on env
      # Unicorn does not support persistent connections, but Rainbows!
      # and Zbatery both do.  Users accustomed to the Rack::Server default
      # middlewares will need ContentLength/Chunked middlewares.
      case ENV["RACK_ENV"]
      when "development"
        Rack::Builder.new do
          use Rack::ContentLength
          use Rack::Chunked
          use Rack::CommonLogger, $stderr
          use Rack::ShowExceptions
          use Rack::Lint
          run inner_app
        end.to_app
      when "deployment"
        Rack::Builder.new do
          use Rack::ContentLength
          use Rack::Chunked
          use Rack::CommonLogger, $stderr
          run inner_app
        end.to_app
      else
        inner_app
      end
    end
  end

  # returns an array of strings representing TCP listen socket addresses
  # and Unix domain socket paths.
This is useful for use with # Raindrops::Middleware under Linux: http://raindrops.bogomips.org/ def self.listener_names Unicorn::HttpServer::LISTENERS.map do |io| Unicorn::SocketHelper.sock_name(io) end + Unicorn::HttpServer::NEW_LISTENERS end def self.log_error(logger, prefix, exc) message = exc.message message = message.dump if /[[:cntrl:]]/ =~ message logger.error "#{prefix}: #{message} (#{exc.class})" exc.backtrace.each { |line| logger.error(line) } end # :startdoc: end # :enddoc: require 'unicorn/const' require 'unicorn/socket_helper' require 'unicorn/stream_input' require 'unicorn/tee_input' require 'unicorn/http_request' require 'unicorn/configurator' require 'unicorn/tmpio' require 'unicorn/util' require 'unicorn/http_response' require 'unicorn/worker' require 'unicorn/http_server' unicorn-4.7.0/lib/unicorn/0000755000004100000410000000000012236653132015455 5ustar www-datawww-dataunicorn-4.7.0/lib/unicorn/cgi_wrapper.rb0000644000004100000410000001171112236653132020305 0ustar www-datawww-data# -*- encoding: binary -*- # :enddoc: # This code is based on the original CGIWrapper from Mongrel # Copyright (c) 2005 Zed A. Shaw # Copyright (c) 2009 Eric Wong # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or # the GPLv2+ (GPLv3+ preferred) # # Additional work donated by contributors. See CONTRIBUTORS for more info. require 'cgi' module Unicorn; end # The beginning of a complete wrapper around Unicorn's internal HTTP # processing system but maintaining the original Ruby CGI module. Use # this only as a crutch to get existing CGI based systems working. It # should handle everything, but please notify us if you see special # warnings. This work is still very alpha so we need testers to help # work out the various corner cases. 
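The `Unicorn.log_error` helper above dumps any message containing control characters so that a multi-line exception message cannot forge extra log lines. The same sanitization in isolation, logging to a StringIO-backed Logger:

```ruby
require 'logger'
require 'stringio'

# same sanitization as Unicorn.log_error, minus the backtrace loop
def log_error(logger, prefix, exc)
  message = exc.message
  message = message.dump if /[[:cntrl:]]/ =~ message # escape \n, \r, etc.
  logger.error "#{prefix}: #{message} (#{exc.class})"
end

out = StringIO.new
logger = Logger.new(out)
begin
  raise RuntimeError, "bad\ninput"
rescue => e
  log_error(logger, "app error", e)
end
LOGGED = out.string
```

The embedded newline comes out as the two-character escape `\n` inside a quoted string, so the entry stays on one log line.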
class Unicorn::CGIWrapper < ::CGI undef_method :env_table attr_reader :env_table attr_reader :body # these are stripped out of any keys passed to CGIWrapper.header function NPH = 'nph'.freeze # Completely ignored, Unicorn outputs the date regardless CONNECTION = 'connection'.freeze # Completely ignored. Why is CGI doing this? CHARSET = 'charset'.freeze # this gets appended to Content-Type COOKIE = 'cookie'.freeze # maps (Hash,Array,String) to "Set-Cookie" headers STATUS = 'status'.freeze # stored as @status Status = 'Status'.freeze # code + human-readable text, Rails sets this # some of these are common strings, but this is the only module # using them and the reason they're not in Unicorn::Const SET_COOKIE = 'Set-Cookie'.freeze CONTENT_TYPE = 'Content-Type'.freeze CONTENT_LENGTH = 'Content-Length'.freeze # this is NOT Const::CONTENT_LENGTH RACK_INPUT = 'rack.input'.freeze RACK_ERRORS = 'rack.errors'.freeze # this maps CGI header names to HTTP header names HEADER_MAP = { 'status' => Status, 'type' => CONTENT_TYPE, 'server' => 'Server'.freeze, 'language' => 'Content-Language'.freeze, 'expires' => 'Expires'.freeze, 'length' => CONTENT_LENGTH, } # Takes an a Rackable environment, plus any additional CGI.new # arguments These are used internally to create a wrapper around the # real CGI while maintaining Rack/Unicorn's view of the world. This # this will NOT deal well with large responses that take up a lot of # memory, but neither does the CGI nor the original CGIWrapper from # Mongrel... def initialize(rack_env, *args) @env_table = rack_env @status = nil @head = {} @headv = Hash.new { |hash,key| hash[key] = [] } @body = StringIO.new("") super(*args) end # finalizes the response in a way Rack applications would expect def rack_response # @head[CONTENT_LENGTH] ||= @body.size @headv[SET_COOKIE].concat(@output_cookies) if @output_cookies @headv.each_pair do |key,value| @head[key] ||= value.join("\n") unless value.empty? 
end # Capitalized "Status:", with human-readable status code (e.g. "200 OK") @status ||= @head.delete(Status) [ @status || 500, @head, [ @body.string ] ] end # The header is typically called to send back the header. In our case we # collect it into a hash for later usage. This can be called multiple # times to set different cookies. def header(options = "text/html") # if they pass in a string then just write the Content-Type if String === options @head[CONTENT_TYPE] ||= options else HEADER_MAP.each_pair do |from, to| from = options.delete(from) or next @head[to] = from.to_s end @head[CONTENT_TYPE] ||= "text/html" if charset = options.delete(CHARSET) @head[CONTENT_TYPE] << "; charset=#{charset}" end # lots of ways to set cookies if cookie = options.delete(COOKIE) set_cookies = @headv[SET_COOKIE] case cookie when Array cookie.each { |c| set_cookies << c.to_s } when Hash cookie.each_value { |c| set_cookies << c.to_s } else set_cookies << cookie.to_s end end @status ||= options.delete(STATUS) # all lower-case # drop the keys we don't want anymore options.delete(NPH) options.delete(CONNECTION) # finally, set the rest of the headers as-is, allowing duplicates options.each_pair { |k,v| @headv[k] << v } end # doing this fakes out the cgi library to think the headers are empty # we then do the real headers in the out function call later "" end # The dumb thing is people can call header or this or both and in # any order. So, we just reuse header and then finalize the # HttpResponse the right way. This will have no effect if called # the second time if the first "outputted" anything. def out(options = "text/html") header(options) @body.size == 0 or return @body << yield if block_given? end # Used to wrap the normal stdinput variable used inside CGI. 
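The rack_response merging above collects repeated header values in the auto-vivifying @headv hash and joins them with "\n", the pre-Rack-1.5 convention for multi-valued headers such as Set-Cookie. The merge step in isolation:

```ruby
# auto-vivify an array per header name, as @headv does in CGIWrapper
headv = Hash.new { |hash, key| hash[key] = [] }
headv["Set-Cookie"] << "session=abc123"
headv["Set-Cookie"] << "lang=en"
headv["X-Empty"] # vivified but left empty; must not appear in the result

HEAD = {}
headv.each_pair do |key, value|
  HEAD[key] ||= value.join("\n") unless value.empty?
end
```

Joining with "\n" keeps one hash slot per header name while preserving every value; the response writer later splits them back into separate lines.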
def stdinput @env_table[RACK_INPUT] end # return a pointer to the StringIO body since it's STDOUT-like def stdoutput @body end end unicorn-4.7.0/lib/unicorn/app/0000755000004100000410000000000012236653132016235 5ustar www-datawww-dataunicorn-4.7.0/lib/unicorn/app/old_rails.rb0000644000004100000410000000171112236653132020532 0ustar www-datawww-data# -*- encoding: binary -*- # :enddoc: # This code is based on the original Rails handler in Mongrel # Copyright (c) 2005 Zed A. Shaw # Copyright (c) 2009 Eric Wong # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or # the GPLv2+ (GPLv3+ preferred) # Additional work donated by contributors. See CONTRIBUTORS for more info. require 'unicorn/cgi_wrapper' require 'dispatcher' module Unicorn; module App; end; end # Implements a handler that can run Rails. class Unicorn::App::OldRails autoload :Static, "unicorn/app/old_rails/static" def call(env) cgi = Unicorn::CGIWrapper.new(env) begin Dispatcher.dispatch(cgi, ActionController::CgiRequest::DEFAULT_SESSION_OPTIONS, cgi.body) rescue => e err = env['rack.errors'] err.write("#{e} #{e.message}\n") e.backtrace.each { |line| err.write("#{line}\n") } end cgi.out # finalize the response cgi.rack_response end end unicorn-4.7.0/lib/unicorn/app/exec_cgi.rb0000644000004100000410000001025012236653132020326 0ustar www-datawww-data# -*- encoding: binary -*- # :enddoc: require 'unicorn' module Unicorn::App # This class is highly experimental (even more so than the rest of Unicorn) # and has never run anything other than cgit. 
class ExecCgi < Struct.new(:args) CHUNK_SIZE = 16384 PASS_VARS = %w( CONTENT_LENGTH CONTENT_TYPE GATEWAY_INTERFACE AUTH_TYPE PATH_INFO PATH_TRANSLATED QUERY_STRING REMOTE_ADDR REMOTE_HOST REMOTE_IDENT REMOTE_USER REQUEST_METHOD SERVER_NAME SERVER_PORT SERVER_PROTOCOL SERVER_SOFTWARE ).map { |x| x.freeze } # frozen strings are faster for Hash assignments class Body < Unicorn::TmpIO def body_offset=(n) sysseek(@body_offset = n) end def each sysseek @body_offset # don't use a preallocated buffer for sysread since we can't # guarantee an actual socket is consuming the yielded string # (or if somebody is pushing to an array for eventual concatenation) begin yield sysread(CHUNK_SIZE) rescue EOFError break end while true end end # Initializes the app, example of usage in a config.ru: # map "/cgit" do # run Unicorn::App::ExecCgi.new("/path/to/cgit.cgi") # end def initialize(*args) self.args = args first = args[0] or raise ArgumentError, "need path to executable" first[0] == ?/ or args[0] = ::File.expand_path(first) File.executable?(args[0]) or raise ArgumentError, "#{args[0]} is not executable" end # Calls the app def call(env) out, err = Body.new, Unicorn::TmpIO.new inp = force_file_input(env) pid = fork { run_child(inp, out, err, env) } inp.close pid, status = Process.waitpid2(pid) write_errors(env, err, status) if err.stat.size > 0 err.close return parse_output!(out) if status.success? out.close [ 500, { 'Content-Length' => '0', 'Content-Type' => 'text/plain' }, [] ] end private def run_child(inp, out, err, env) PASS_VARS.each do |key| val = env[key] or next ENV[key] = val end ENV['SCRIPT_NAME'] = args[0] ENV['GATEWAY_INTERFACE'] = 'CGI/1.1' env.keys.grep(/^HTTP_/) { |key| ENV[key] = env[key] } $stdin.reopen(inp) $stdout.reopen(out) $stderr.reopen(err) exec(*args) end # Extracts headers from CGI out, will change the offset of out.
# This returns a standard Rack-compatible return value: # [ 200, HeadersHash, body ] def parse_output!(out) size = out.stat.size out.sysseek(0) head = out.sysread(CHUNK_SIZE) offset = 2 head, body = head.split(/\n\n/, 2) if body.nil? head, body = head.split(/\r\n\r\n/, 2) offset = 4 end offset += head.length out.body_offset = offset size -= offset prev = nil headers = Rack::Utils::HeaderHash.new head.split(/\r?\n/).each do |line| case line when /^([A-Za-z0-9-]+):\s*(.*)$/ then headers[prev = $1] = $2 when /^[ \t]/ then headers[prev] << "\n#{line}" if prev end end status = headers.delete("Status") || 200 headers['Content-Length'] = size.to_s [ status, headers, out ] end # ensures rack.input is a file handle that we can redirect stdin to def force_file_input(env) inp = env['rack.input'] # inp could be a StringIO or StringIO-like object if inp.respond_to?(:size) && inp.size == 0 ::File.open('/dev/null', 'rb') else tmp = Unicorn::TmpIO.new buf = inp.read(CHUNK_SIZE) begin tmp.syswrite(buf) end while inp.read(CHUNK_SIZE, buf) tmp.sysseek(0) tmp end end # rack.errors this may not be an IO object, so we couldn't # just redirect the CGI executable to that earlier. 
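The header split performed by parse_output! above (try a bare LF-LF separator first, fall back to CRLF-CRLF, and track how many bytes of the output were headers) can be exercised on a plain string. `parse_cgi_head` is a simplified, illustrative extraction, not the method itself:

```ruby
# Simplified version of ExecCgi#parse_output!'s header handling: find the
# head/body boundary, parse "Name: value" lines (with folded continuations),
# and return the byte offset where the body starts.
def parse_cgi_head(data)
  offset = 2
  head, body = data.split(/\n\n/, 2)
  if body.nil?
    head, body = data.split(/\r\n\r\n/, 2)
    offset = 4
  end
  headers = {}
  prev = nil
  head.split(/\r?\n/).each do |line|
    case line
    when /^([A-Za-z0-9-]+):\s*(.*)$/ then headers[prev = $1] = $2
    when /^[ \t]/ then headers[prev] << "\n#{line}" if prev # folded header
    end
  end
  [headers, head.length + offset]
end

HEADERS, OFFSET = parse_cgi_head(
  "Status: 200 OK\r\nContent-Type: text/html\r\n\r\n<html></html>")
```

Note the LF-LF attempt cannot falsely match CRLF-CRLF output, since "\r\n\r\n" never contains two adjacent "\n" bytes; the fallback order is therefore safe for both line conventions.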
def write_errors(env, err, status) err.seek(0) dst = env['rack.errors'] pid = status.pid dst.write("#{pid}: #{args.inspect} status=#{status} stderr:\n") err.each_line { |line| dst.write("#{pid}: #{line}") } dst.flush end end end unicorn-4.7.0/lib/unicorn/app/inetd.rb0000644000004100000410000000566012236653132017674 0ustar www-datawww-data# -*- encoding: binary -*- # :enddoc: # Copyright (c) 2009 Eric Wong # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or # the GPLv2+ (GPLv3+ preferred) # this class *must* be used with Rack::Chunked module Unicorn::App class Inetd < Struct.new(:cmd) class CatBody < Struct.new(:errors, :err_rd, :out_rd, :pid_map) def initialize(env, cmd) self.errors = env['rack.errors'] in_rd, in_wr = IO.pipe self.err_rd, err_wr = IO.pipe self.out_rd, out_wr = IO.pipe cmd_pid = fork { inp, out, err = (0..2).map { |i| IO.new(i) } inp.reopen(in_rd) out.reopen(out_wr) err.reopen(err_wr) [ in_rd, in_wr, err_rd, err_wr, out_rd, out_wr ].each { |i| i.close } exec(*cmd) } [ in_rd, err_wr, out_wr ].each { |io| io.close } [ in_wr, err_rd, out_rd ].each { |io| io.binmode } in_wr.sync = true # Unfortunately, input here must be processed inside a seperate # thread/process using blocking I/O since env['rack.input'] is not # IO.select-able and attempting to make it so would trip Rack::Lint inp_pid = fork { input = env['rack.input'] [ err_rd, out_rd ].each { |io| io.close } # this is dependent on input.read having readpartial semantics: buf = input.read(16384) begin in_wr.write(buf) end while input.read(16384, buf) } in_wr.close self.pid_map = { inp_pid => 'input streamer', cmd_pid => cmd.inspect, } end def each begin rd, = IO.select([err_rd, out_rd]) rd && rd.first or next if rd.include?(err_rd) begin errors.write(err_rd.read_nonblock(16384)) rescue Errno::EINTR rescue Errno::EAGAIN break end while true end rd.include?(out_rd) or next begin yield out_rd.read_nonblock(16384) rescue Errno::EINTR rescue Errno::EAGAIN break end while true 
rescue EOFError,Errno::EPIPE,Errno::EBADF,Errno::EINVAL break end while true self end def close pid_map.each { |pid, str| begin pid, status = Process.waitpid2(pid) status.success? or errors.write("#{str}: #{status.inspect} (PID:#{pid})\n") rescue Errno::ECHILD errors.write("Failed to reap #{str} (PID:#{pid})\n") end } out_rd.close err_rd.close end end def initialize(*cmd) self.cmd = cmd end def call(env) /\A100-continue\z/i =~ env[Unicorn::Const::HTTP_EXPECT] and return [ 100, {} , [] ] [ 200, { 'Content-Type' => 'application/octet-stream' }, CatBody.new(env, cmd) ] end end end unicorn-4.7.0/lib/unicorn/app/old_rails/0000755000004100000410000000000012236653132020205 5ustar www-datawww-dataunicorn-4.7.0/lib/unicorn/app/old_rails/static.rb0000644000004100000410000000374512236653132022032 0ustar www-datawww-data# -*- encoding: binary -*- # :enddoc: # This code is based on the original Rails handler in Mongrel # Copyright (c) 2005 Zed A. Shaw # Copyright (c) 2009 Eric Wong # You can redistribute it and/or modify it under the same terms as Ruby 1.8 or # the GPLv3 # Static file handler for Rails < 2.3. This handler is only provided # as a convenience for developers. Performance-minded deployments should # use nginx (or similar) for serving static files. # # This supports page caching directly and will try to resolve a # request in the following order: # # * If the requested exact PATH_INFO exists as a file then serve it. # * If it exists at PATH_INFO+rest_operator+".html" exists # then serve that. # # This means that if you are using page caching it will actually work # with Unicorn and you should see a decent speed boost (but not as # fast as if you use a static server like nginx). 
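The resolution order described above (serve the exact PATH_INFO if it exists as a file, otherwise try the ".html" page-cache variant, otherwise fall through to the app) can be sketched without Rails. `resolve_static` and the `.html` default are illustrative; Rails takes the extension from `page_cache_extension`:

```ruby
require 'tmpdir'

# Try the path as-is first, then the page-cached "#{path}.html" variant.
# Returns the rewritten PATH_INFO on a hit, nil to fall through to the app.
def resolve_static(root, path_info, cache_ext = ".html")
  path_info = path_info.chomp("/")
  return path_info if File.file?("#{root}/#{path_info}")
  cached = path_info + cache_ext
  return cached if File.file?("#{root}/#{cached}")
  nil
end

HIT, MISS = Dir.mktmpdir do |root|
  File.write("#{root}/about.html", "<h1>cached</h1>")
  # "/about" misses as-is but hits the page cache at "/about.html"
  [resolve_static(root, "/about"), resolve_static(root, "/not-there")]
end
```

Rewriting PATH_INFO before delegating to `Rack::File` is what lets cached pages short-circuit the Rails dispatcher entirely.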
class Unicorn::App::OldRails::Static < Struct.new(:app, :root, :file_server) FILE_METHODS = { 'GET' => true, 'HEAD' => true } # avoid allocating new strings for hash lookups REQUEST_METHOD = 'REQUEST_METHOD' REQUEST_URI = 'REQUEST_URI' PATH_INFO = 'PATH_INFO' def initialize(app) self.app = app self.root = "#{::RAILS_ROOT}/public" self.file_server = ::Rack::File.new(root) end def call(env) # short circuit this ASAP if serving non-file methods FILE_METHODS.include?(env[REQUEST_METHOD]) or return app.call(env) # first try the path as-is path_info = env[PATH_INFO].chomp("/") if File.file?("#{root}/#{::Rack::Utils.unescape(path_info)}") # File exists as-is so serve it up env[PATH_INFO] = path_info return file_server.call(env) end # then try the cached version: path_info << ActionController::Base.page_cache_extension if File.file?("#{root}/#{::Rack::Utils.unescape(path_info)}") env[PATH_INFO] = path_info return file_server.call(env) end app.call(env) # call OldRails end end if defined?(Unicorn::App::OldRails) unicorn-4.7.0/lib/unicorn/launcher.rb0000644000004100000410000000372712236653132017614 0ustar www-datawww-data# -*- encoding: binary -*- # :enddoc: $stdout.sync = $stderr.sync = true $stdin.binmode $stdout.binmode $stderr.binmode require 'unicorn' module Unicorn::Launcher # We don't do a lot of standard daemonization stuff: # * umask is whatever was set by the parent process at startup # and can be set in config.ru and config_file, so making it # 0000 and potentially exposing sensitive log data can be bad # policy. # * don't bother to chdir("/") here since unicorn is designed to # run inside APP_ROOT. Unicorn will also re-chdir() to # the directory it was started in when being re-executed # to pickup code changes if the original deployment directory # is a symlink or otherwise got replaced. 
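The readiness handshake used by daemonize! below (the master writes its PID to a pipe once it is serving; an ancestor process blocks on the read and only then exits successfully) reduces to a two-process sketch. The two-process simplification and variable names are mine; the real launcher uses a grandparent/parent/master chain:

```ruby
rd, wr = IO.pipe
pid = fork
if pid
  # parent: block until the child signals readiness, as the grandparent
  # does with rd.readpartial(16) in Unicorn::Launcher.daemonize!
  wr.close
  master_pid = (rd.readpartial(16) rescue nil).to_i
  Process.waitpid(pid)
  READY = master_pid > 1
else
  # child: pretend setup finished, then report our PID over the pipe
  rd.close
  wr.syswrite($$.to_s)
  exit!(0) # exit! skips at_exit handlers, as daemonized unicorn does
end
```

Because the parent's `readpartial` blocks, a failed master start (pipe closed without a PID) is detected immediately instead of silently daemonizing a dead server.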
def self.daemonize!(options) cfg = Unicorn::Configurator $stdin.reopen("/dev/null") # We only start a new process group if we're not being reexecuted # and inheriting file descriptors from our parent unless ENV['UNICORN_FD'] # grandparent - reads pipe, exits when master is ready # \_ parent - exits immediately ASAP # \_ unicorn master - writes to pipe when ready rd, wr = IO.pipe grandparent = $$ if fork wr.close # grandparent does not write else rd.close # unicorn master does not read Process.setsid exit if fork # parent dies now end if grandparent == $$ # this will block until HttpServer#join runs (or it dies) master_pid = (rd.readpartial(16) rescue nil).to_i unless master_pid > 1 warn "master failed to start, check stderr log for details" exit!(1) end exit 0 else # unicorn master process options[:ready_pipe] = wr end end # $stderr/$stderr can/will be redirected separately in the Unicorn config cfg::DEFAULTS[:stderr_path] ||= "/dev/null" cfg::DEFAULTS[:stdout_path] ||= "/dev/null" cfg::RACKUP[:daemonized] = true end end unicorn-4.7.0/lib/unicorn/version.rb0000644000004100000410000000005212236653132017464 0ustar www-datawww-dataUnicorn::Const::UNICORN_VERSION = '4.7.0' unicorn-4.7.0/lib/unicorn/oob_gc.rb0000644000004100000410000000513312236653132017234 0ustar www-datawww-data# -*- encoding: binary -*- # Runs GC after requests, after closing the client socket and # before attempting to accept more connections. # # This shouldn't hurt overall performance as long as the server cluster # is at <50% CPU capacity, and improves the performance of most memory # intensive requests. This serves to improve _client-visible_ # performance (possibly at the cost of overall performance). # # Increasing the number of +worker_processes+ may be necessary to # improve average client response times because some of your workers # will be busy doing GC and unable to service clients. Think of # using more workers with this module as a poor man's concurrent GC. 
# # We'll call GC after each request has been written out to the socket, so # the client never sees the extra GC hit. # # This middleware is _only_ effective for applications that use a lot # of memory, and will hurt simpler apps/endpoints that can process # multiple requests before incurring GC. # # This middleware is only designed to work with unicorn, as it harms # performance with keepalive-enabled servers. # # Example (in config.ru): # # require 'unicorn/oob_gc' # # # GC every two requests that hit /expensive/foo or /more_expensive/foo # # in your app. By default, this will GC once every 5 requests # # for all endpoints in your app # use Unicorn::OobGC, 2, %r{\A/(?:expensive/foo|more_expensive/foo)} # # Feedback from users of early implementations of this module: # * http://comments.gmane.org/gmane.comp.lang.ruby.unicorn.general/486 # * http://article.gmane.org/gmane.comp.lang.ruby.unicorn.general/596 module Unicorn::OobGC # this pretends to be Rack middleware because it used to be. # But we need to hook into unicorn internals, so we need to close # the socket before clearing the request env. # # +interval+ is the number of requests matching the +path+ regular # expression before invoking GC.
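The interval/path accounting implemented in process_client below can be seen in isolation: only requests whose PATH_INFO matches the pattern count down the interval, and GC runs when the counter reaches zero (the sample request paths are made up):

```ruby
OOBGC_PATH = %r{\A/expensive/}
OOBGC_INTERVAL = 2
nr = OOBGC_INTERVAL
gc_runs = 0

%w(/expensive/foo /cheap /expensive/bar /expensive/baz).each do |path|
  # mirror process_client: only matching requests count down the interval
  next unless OOBGC_PATH =~ path
  if (nr -= 1) <= 0
    nr = OOBGC_INTERVAL
    gc_runs += 1
    GC.start # out-of-band collection, between requests
  end
end
GC_RUNS = gc_runs
```

With interval 2 and three matching requests, GC fires exactly once: the non-matching "/cheap" request never touches the counter.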
def self.new(app, interval = 5, path = %r{\A/}) @@nr = interval self.const_set :OOBGC_PATH, path self.const_set :OOBGC_INTERVAL, interval ObjectSpace.each_object(Unicorn::HttpServer) do |s| s.extend(self) self.const_set :OOBGC_ENV, s.instance_variable_get(:@request).env end app # pretend to be Rack middleware since it was in the past end #:stopdoc: PATH_INFO = "PATH_INFO" def process_client(client) super(client) # Unicorn::HttpServer#process_client if OOBGC_PATH =~ OOBGC_ENV[PATH_INFO] && ((@@nr -= 1) <= 0) @@nr = OOBGC_INTERVAL OOBGC_ENV.clear disabled = GC.enable GC.start GC.disable if disabled end end # :startdoc: end unicorn-4.7.0/lib/unicorn/ssl_configurator.rb0000644000004100000410000000476012236653132021374 0ustar www-datawww-data# -*- encoding: binary -*- # :stopdoc: # This module is included in Unicorn::Configurator # :startdoc: # module Unicorn::SSLConfigurator def ssl(&block) ssl_require! before = @set[:listeners].dup opts = @set[:ssl_opts] = {} yield (@set[:listeners] - before).each do |address| (@set[:listener_opts][address] ||= {})[:ssl_opts] = opts end ensure @set.delete(:ssl_opts) end def ssl_certificate(file) ssl_set(:ssl_certificate, file) end def ssl_certificate_key(file) ssl_set(:ssl_certificate_key, file) end def ssl_client_certificate(file) ssl_set(:ssl_client_certificate, file) end def ssl_dhparam(file) ssl_set(:ssl_dhparam, file) end def ssl_ciphers(openssl_cipherlist_spec) ssl_set(:ssl_ciphers, openssl_cipherlist_spec) end def ssl_crl(file) ssl_set(:ssl_crl, file) end def ssl_prefer_server_ciphers(bool) ssl_set(:ssl_prefer_server_ciphers, check_bool(bool)) end def ssl_protocols(list) ssl_set(:ssl_protocols, list) end def ssl_verify_client(on_off_optional) ssl_set(:ssl_verify_client, on_off_optional) end def ssl_session_timeout(seconds) ssl_set(:ssl_session_timeout, seconds) end def ssl_verify_depth(depth) ssl_set(:ssl_verify_depth, depth) end # Allows specifying an engine for OpenSSL to use. 
We have not been # able to successfully test this feature due to a lack of hardware, # Reports of success or patches to mongrel-unicorn@rubyforge.org is # greatly appreciated. def ssl_engine(engine) ssl_warn_global(:ssl_engine) ssl_require! OpenSSL::Engine.load OpenSSL::Engine.by_id(engine) @set[:ssl_engine] = engine end def ssl_compression(bool) # OpenSSL uses the SSL_OP_NO_COMPRESSION flag, Flipper follows suit # with :ssl_no_compression, but we negate it to avoid exposing double # negatives to the user. ssl_set(:ssl_no_compression, check_bool(:ssl_compression, ! bool)) end private def ssl_warn_global(func) # :nodoc: Hash === @set[:ssl_opts] or return warn("`#{func}' affects all SSL contexts in this process, " \ "not just this block") end def ssl_set(key, value) # :nodoc: cur = @set[:ssl_opts] Hash === cur or raise ArgumentError, "#{key} must be called inside an `ssl' block" cur[key] = value end def ssl_require! # :nodoc: require "flipper" require "unicorn/ssl_client" rescue LoadError warn "install 'kgio-monkey' for SSL support" raise end end unicorn-4.7.0/lib/unicorn/preread_input.rb0000644000004100000410000000124212236653132020642 0ustar www-datawww-data# -*- encoding: binary -*- module Unicorn # This middleware is used to ensure input is buffered to memory # or disk (depending on size) before the application is dispatched # by entirely consuming it (from TeeInput) beforehand. 
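The buffering step PrereadInput performs (read the input to the end into one reused buffer, then rewind so the application still sees the full body) looks like this with a StringIO standing in for env["rack.input"]:

```ruby
require 'stringio'

input = StringIO.new("x" * 50_000) # stands in for env["rack.input"]
buf = ""
# drain in 16k chunks into the same buffer; read returns nil at EOF
true while input.read(16384, buf)
input.rewind # the application reads the input from the beginning again
DRAINED = input.read.bytesize
```

Passing `buf` as the second argument to read reuses one string for every chunk instead of allocating a fresh 16k string per iteration.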
# # Usage (in config.ru): # # require 'unicorn/preread_input' # if defined?(Unicorn) # use Unicorn::PrereadInput # end # run YourApp.new class PrereadInput # :stopdoc: def initialize(app) @app = app end def call(env) buf = "" input = env["rack.input"] if input.respond_to?(:rewind) true while input.read(16384, buf) input.rewind end @app.call(env) end # :startdoc: end end unicorn-4.7.0/lib/unicorn/http_response.rb0000644000004100000410000000434712236653132020707 0ustar www-datawww-data# -*- encoding: binary -*- # :enddoc: # Writes a Rack response to your client using the HTTP/1.1 specification. # You use it by simply doing: # # status, headers, body = rack_app.call(env) # http_response_write(socket, status, headers, body) # # Most header correctness (including Content-Length and Content-Type) # is the job of Rack, with the exception of the "Date" and "Status" header. module Unicorn::HttpResponse # Every standard HTTP code mapped to the appropriate message. CODES = Rack::Utils::HTTP_STATUS_CODES.inject({}) { |hash,(code,msg)| hash[code] = "#{code} #{msg}" hash } CRLF = "\r\n" def err_response(code, response_start_sent) "#{response_start_sent ? '' : 'HTTP/1.1 '}#{CODES[code]}\r\n\r\n" end # writes the rack_response to socket as an HTTP response def http_response_write(socket, status, headers, body, response_start_sent=false) status = CODES[status.to_i] || status hijack = nil http_response_start = response_start_sent ? '' : 'HTTP/1.1 ' if headers buf = "#{http_response_start}#{status}\r\n" \ "Date: #{httpdate}\r\n" \ "Status: #{status}\r\n" \ "Connection: close\r\n" headers.each do |key, value| case key when %r{\A(?:Date\z|Connection\z)}i next when "rack.hijack" # this was an illegal key in Rack < 1.5, so it should be # OK to silently discard it for those older versions hijack = hijack_prepare(value) else if value =~ /\n/ # avoiding blank, key-only cookies with /\n+/ buf << value.split(/\n+/).map! 
{ |v| "#{key}: #{v}\r\n" }.join else buf << "#{key}: #{value}\r\n" end end end socket.write(buf << CRLF) end if hijack body = nil # ensure we do not close body hijack.call(socket) else body.each { |chunk| socket.write(chunk) } end ensure body.respond_to?(:close) and body.close end # Rack 1.5.0 (protocol version 1.2) adds response hijacking support if ((Rack::VERSION[0] << 8) | Rack::VERSION[1]) >= 0x0102 def hijack_prepare(value) value end else def hijack_prepare(_) end end end unicorn-4.7.0/lib/unicorn/const.rb0000644000004100000410000000311512236653132017130 0ustar www-datawww-data# -*- encoding: binary -*- # :enddoc: # Frequently used constants when constructing requests or responses. # Many times the constant just refers to a string with the same # contents. Using these constants gave about a 3% to 10% performance # improvement over using the strings directly. Symbols did not really # improve things much compared to constants. module Unicorn::Const # default TCP listen host address (0.0.0.0, all interfaces) DEFAULT_HOST = "0.0.0.0" # default TCP listen port (8080) DEFAULT_PORT = 8080 # default TCP listen address and port (0.0.0.0:8080) DEFAULT_LISTEN = "#{DEFAULT_HOST}:#{DEFAULT_PORT}" # The basic request body size we'll try to read at once (16 kilobytes). CHUNK_SIZE = 16 * 1024 # Maximum request body size before it is moved out of memory and into a # temporary file for reading (112 kilobytes). This is the default # value of client_body_buffer_size. MAX_BODY = 1024 * 112 # :stopdoc: # common errors we'll send back # (N.B. these are not used by unicorn, but we won't drop them until # unicorn 5.x to avoid breaking Rainbows!). 
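The `value.split(/\n+/)` in http_response_write above turns a "\n"-joined multi-value header back into one header line per value, and the `+` quantifier is what skips blank values so no key-only cookie line is emitted (the sample cookie strings are made up):

```ruby
key = "Set-Cookie"
value = "session=abc\nlang=en\n\ntheme=dark" # "\n\n" would yield a blank value
LINES = value.split(/\n+/).map { |v| "#{key}: #{v}\r\n" }.join
```

Splitting on /\n/ instead would produce an empty element for the doubled newline and thus a bare "Set-Cookie: \r\n" line in the response.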
ERROR_400_RESPONSE = "HTTP/1.1 400 Bad Request\r\n\r\n" ERROR_414_RESPONSE = "HTTP/1.1 414 Request-URI Too Long\r\n\r\n" ERROR_413_RESPONSE = "HTTP/1.1 413 Request Entity Too Large\r\n\r\n" ERROR_500_RESPONSE = "HTTP/1.1 500 Internal Server Error\r\n\r\n" EXPECT_100_RESPONSE = "HTTP/1.1 100 Continue\r\n\r\n" EXPECT_100_RESPONSE_SUFFIXED = "100 Continue\r\n\r\nHTTP/1.1 " HTTP_RESPONSE_START = ['HTTP', '/1.1 '] HTTP_EXPECT = "HTTP_EXPECT" # :startdoc: end require 'unicorn/version' unicorn-4.7.0/lib/unicorn/configurator.rb0000644000004100000410000006205512236653132020514 0ustar www-datawww-data# -*- encoding: binary -*- require 'logger' require 'unicorn/ssl_configurator' # Implements a simple DSL for configuring a \Unicorn server. # # See http://unicorn.bogomips.org/examples/unicorn.conf.rb and # http://unicorn.bogomips.org/examples/unicorn.conf.minimal.rb # example configuration files. An example config file for use with # nginx is also available at # http://unicorn.bogomips.org/examples/nginx.conf # # See the link:/TUNING.html document for more information on tuning unicorn. class Unicorn::Configurator include Unicorn include Unicorn::SSLConfigurator # :stopdoc: attr_accessor :set, :config_file, :after_reload # used to stash stuff for deferred processing of cli options in # config.ru after "working_directory" is bound. Do not rely on # this being around later on... 
RACKUP = { :daemonize => false, :host => Unicorn::Const::DEFAULT_HOST, :port => Unicorn::Const::DEFAULT_PORT, :set_listener => false, :options => { :listeners => [] } } # Default settings for Unicorn DEFAULTS = { :timeout => 60, :logger => Logger.new($stderr), :worker_processes => 1, :after_fork => lambda { |server, worker| server.logger.info("worker=#{worker.nr} spawned pid=#{$$}") }, :before_fork => lambda { |server, worker| server.logger.info("worker=#{worker.nr} spawning...") }, :before_exec => lambda { |server| server.logger.info("forked child re-executing...") }, :pid => nil, :preload_app => false, :check_client_connection => false, :rewindable_input => true, # for Rack 2.x: (Rack::VERSION[0] <= 1), :client_body_buffer_size => Unicorn::Const::MAX_BODY, :trust_x_forwarded => true, } #:startdoc: def initialize(defaults = {}) #:nodoc: self.set = Hash.new(:unset) @use_defaults = defaults.delete(:use_defaults) self.config_file = defaults.delete(:config_file) # after_reload is only used by unicorn_rails, unsupported otherwise self.after_reload = defaults.delete(:after_reload) set.merge!(DEFAULTS) if @use_defaults defaults.each { |key, value| self.__send__(key, value) } Hash === set[:listener_opts] or set[:listener_opts] = Hash.new { |hash,key| hash[key] = {} } Array === set[:listeners] or set[:listeners] = [] reload(false) end def reload(merge_defaults = true) #:nodoc: if merge_defaults && @use_defaults set.merge!(DEFAULTS) if @use_defaults end instance_eval(File.read(config_file), config_file) if config_file parse_rackup_file RACKUP[:set_listener] and set[:listeners] << "#{RACKUP[:host]}:#{RACKUP[:port]}" # unicorn_rails creates dirs here after working_directory is bound after_reload.call if after_reload # working_directory binds immediately (easier error checking that way), # now ensure any paths we changed are correctly set. 
[ :pid, :stderr_path, :stdout_path ].each do |var| String === (path = set[var]) or next path = File.expand_path(path) File.writable?(path) || File.writable?(File.dirname(path)) or \ raise ArgumentError, "directory for #{var}=#{path} not writable" end end def commit!(server, options = {}) #:nodoc: skip = options[:skip] || [] if ready_pipe = RACKUP.delete(:ready_pipe) server.ready_pipe = ready_pipe end if set[:check_client_connection] set[:listeners].each do |address| if set[:listener_opts][address][:tcp_nopush] == true raise ArgumentError, "check_client_connection is incompatible with tcp_nopush:true" end end end set.each do |key, value| value == :unset and next skip.include?(key) and next server.__send__("#{key}=", value) end end def [](key) # :nodoc: set[key] end # sets object to the +obj+ Logger-like object. The new Logger-like # object must respond to the following methods: # * debug # * info # * warn # * error # * fatal # The default Logger will log its output to the path specified # by +stderr_path+. If you're running Unicorn daemonized, then # you must specify a path to prevent error messages from going # to /dev/null. def logger(obj) %w(debug info warn error fatal).each do |m| obj.respond_to?(m) and next raise ArgumentError, "logger=#{obj} does not respond to method=#{m}" end set[:logger] = obj end # sets after_fork hook to a given block. This block will be called by # the worker after forking. The following is an example hook which adds # a per-process listener to every worker: # # after_fork do |server,worker| # # per-process listener ports for debugging/admin: # addr = "127.0.0.1:#{9293 + worker.nr}" # # # the negative :tries parameter indicates we will retry forever # # waiting on the existing process to exit with a 5 second :delay # # Existing options for Unicorn::Configurator#listen such as # # :backlog, :rcvbuf, :sndbuf are available here as well. 
# server.listen(addr, :tries => -1, :delay => 5, :backlog => 128) # end def after_fork(*args, &block) set_hook(:after_fork, block_given? ? block : args[0]) end # sets before_fork got be a given Proc object. This Proc # object will be called by the master process before forking # each worker. def before_fork(*args, &block) set_hook(:before_fork, block_given? ? block : args[0]) end # sets the before_exec hook to a given Proc object. This # Proc object will be called by the master process right # before exec()-ing the new unicorn binary. This is useful # for freeing certain OS resources that you do NOT wish to # share with the reexeced child process. # There is no corresponding after_exec hook (for obvious reasons). def before_exec(*args, &block) set_hook(:before_exec, block_given? ? block : args[0], 1) end # sets the timeout of worker processes to +seconds+. Workers # handling the request/app.call/response cycle taking longer than # this time period will be forcibly killed (via SIGKILL). This # timeout is enforced by the master process itself and not subject # to the scheduling limitations by the worker process. Due the # low-complexity, low-overhead implementation, timeouts of less # than 3.0 seconds can be considered inaccurate and unsafe. # # For running Unicorn behind nginx, it is recommended to set # "fail_timeout=0" for in your nginx configuration like this # to have nginx always retry backends that may have had workers # SIGKILL-ed due to timeouts. 
# # # See http://wiki.nginx.org/NginxHttpUpstreamModule for more details # # on nginx upstream configuration: # upstream unicorn_backend { # # for UNIX domain socket setups: # server unix:/path/to/.unicorn.sock fail_timeout=0; # # # for TCP setups # server 192.168.0.7:8080 fail_timeout=0; # server 192.168.0.8:8080 fail_timeout=0; # server 192.168.0.9:8080 fail_timeout=0; # } def timeout(seconds) set_int(:timeout, seconds, 3) # POSIX says 31 days is the smallest allowed maximum timeout for select() max = 30 * 60 * 60 * 24 set[:timeout] = seconds > max ? max : seconds end # sets the current number of worker_processes to +nr+. Each worker # process will serve exactly one client at a time. You can # increment or decrement this value at runtime by sending SIGTTIN # or SIGTTOU respectively to the master process without reloading # the rest of your Unicorn configuration. See the SIGNALS document # for more information. def worker_processes(nr) set_int(:worker_processes, nr, 1) end # sets listeners to the given +addresses+, replacing or augmenting the # current set. This is for the global listener pool shared by all # worker processes. For per-worker listeners, see the after_fork example # This is for internal API use only, do not use it in your Unicorn # config file. Use listen instead. def listeners(addresses) # :nodoc: Array === addresses or addresses = Array(addresses) addresses.map! { |addr| expand_addr(addr) } set[:listeners] = addresses end # Adds an +address+ to the existing listener set. May be specified more # than once. +address+ may be an Integer port number for a TCP port, an # "IP_ADDRESS:PORT" for TCP listeners or a pathname for UNIX domain sockets. 
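The clamping performed by +timeout+ above can be sketched as standalone Ruby; +clamp_timeout+ and +MAX_TIMEOUT+ are illustrative names for this sketch only, not part of the \Unicorn API:

```ruby
# Sketch of the validation and clamping done by Unicorn::Configurator#timeout.
# 30 days keeps us safely under the 31-day floor POSIX allows for select().
MAX_TIMEOUT = 30 * 60 * 60 * 24 # 2_592_000 seconds

def clamp_timeout(seconds)
  Integer === seconds or
    raise ArgumentError, "not an integer: timeout=#{seconds.inspect}"
  seconds >= 3 or
    raise ArgumentError, "too low (< 3): timeout=#{seconds.inspect}"
  seconds > MAX_TIMEOUT ? MAX_TIMEOUT : seconds
end

clamp_timeout(60)              # => 60
clamp_timeout(MAX_TIMEOUT + 1) # => 2592000 (clamped to 30 days)
```

Values below the 3-second floor are rejected rather than clamped, mirroring the +set_int+ minimum used by the real method.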
# # listen 3000 # listen to port 3000 on all TCP interfaces # listen "127.0.0.1:3000" # listen to port 3000 on the loopback interface # listen "/path/to/.unicorn.sock" # listen on the given Unix domain socket # listen "[::1]:3000" # listen to port 3000 on the IPv6 loopback interface # # When using Unix domain sockets, be sure: # 1) the path matches the one used by nginx # 2) it uses the same filesystem namespace as the nginx process # For systemd users using PrivateTmp=true (for either nginx or unicorn), # this means Unix domain sockets must not be placed in /tmp # # The following options may be specified (but are generally not needed): # # [:backlog => number of clients] # # This is the backlog of the listen() syscall. # # Some operating systems allow negative values here to specify the # maximum allowable value. In most cases, this number is only a # recommendation and there are other OS-specific tunables and # variables that can affect this number. See the listen(2) # syscall documentation of your OS for the exact semantics of # this. # # If you are running unicorn on multiple machines, lowering this number # can help your load balancer detect when a machine is overloaded # and give requests to a different machine. # # Default: 1024 # # [:rcvbuf => bytes, :sndbuf => bytes] # # Maximum receive and send buffer sizes (in bytes) of sockets. # # These correspond to the SO_RCVBUF and SO_SNDBUF settings which # can be set via the setsockopt(2) syscall. Some kernels # (e.g. Linux 2.4+) have intelligent auto-tuning mechanisms and # there is no need (and it is sometimes detrimental) to specify them. # # See the socket API documentation of your operating system # to determine the exact semantics of these settings and # other operating system-specific knobs where they can be # specified. # # Defaults: operating system defaults # # [:tcp_nodelay => true or false] # # Disables Nagle's algorithm on TCP sockets if +true+.
# # Setting this to +true+ can make streaming responses in Rails 3.1 # appear more quickly at the cost of slightly higher bandwidth usage. # The effect of this option is most visible if nginx is not used, # but nginx remains highly recommended with \Unicorn. # # This has no effect on UNIX sockets. # # Default: +true+ (Nagle's algorithm disabled) in \Unicorn, # +true+ in Rainbows! This defaulted to +false+ in \Unicorn # 3.x # # [:tcp_nopush => true or false] # # Enables/disables TCP_CORK in Linux or TCP_NOPUSH in FreeBSD # # This prevents partial TCP frames from being sent out and reduces # wakeups in nginx if it is on a different machine. Since \Unicorn # is only designed for applications that send the response body # quickly without keepalive, sockets will always be flushed on close # to prevent delays. # # This has no effect on UNIX sockets. # # Default: +false+ # This defaulted to +true+ in \Unicorn 3.4 - 3.7 # # [:ipv6only => true or false] # # This option makes IPv6-capable TCP listeners IPv6-only and unable # to receive IPv4 queries on dual-stack systems. A separate IPv4-only # listener is required if this is true. # # This option is only available for Ruby 1.9.2 and later. # # Enabling this option for the IPv6-only listener and having a # separate IPv4 listener is recommended if you wish to support IPv6 # on the same TCP port. Otherwise, the value of \env[\"REMOTE_ADDR\"] # will appear as an ugly IPv4-mapped-IPv6 address for IPv4 clients # (e.g. "::ffff:10.0.0.1" instead of just "10.0.0.1"). # # Default: Operating-system dependent # # [:reuseport => true or false] # # This enables multiple, independently-started unicorn instances to # bind to the same port (as long as all the processes enable this). # # This option must be used when unicorn first binds the listen socket. # It cannot be enabled when a socket is inherited via SIGUSR2 # (but it will remain on if inherited), and it cannot be enabled # directly via SIGHUP.
# # Note: there is a chance of connections being dropped if # one of the unicorn instances is stopped while using this. # # This is supported on *BSD systems and Linux 3.9 or later. # # ref: https://lwn.net/Articles/542629/ # # Default: false (unset) # # [:tries => Integer] # # Times to retry binding a socket if it is already in use # # A negative number indicates we will retry indefinitely, this is # useful for migrations and upgrades when individual workers # are binding to different ports. # # Default: 5 # # [:delay => seconds] # # Seconds to wait between successive +tries+ # # Default: 0.5 seconds # # [:umask => mode] # # Sets the file mode creation mask for UNIX sockets. If specified, # this is usually in octal notation. # # Typically UNIX domain sockets are created with more liberal # file permissions than the rest of the application. By default, # we create UNIX domain sockets to be readable and writable by # all local users to give them the same accessibility as # locally-bound TCP listeners. # # This has no effect on TCP listeners. # # Default: 0000 (world-read/writable) # # [:tcp_defer_accept => Integer] # # Defer accept() until data is ready (Linux-only) # # For Linux 2.6.32 and later, this is the number of retransmits to # defer an accept() for if no data arrives, but the client will # eventually be accepted after the specified number of retransmits # regardless of whether data is ready. # # For Linux before 2.6.32, this is a boolean option, and # accepts are _always_ deferred indefinitely if no data arrives. # This is similar to :accept_filter => "dataready" # under FreeBSD. # # Specifying +true+ is synonymous for the default value(s) below, # and +false+ or +nil+ is synonymous for a value of zero. # # A value of +1+ is a good optimization for local networks # and trusted clients. For Rainbows! and Zbatery users, a higher # value (e.g. +60+) provides more protection against some # denial-of-service attacks. 
There is no good reason to ever # disable this with a +zero+ value when serving HTTP. # # Default: 1 retransmit for \Unicorn, 60 for Rainbows! 0.95.0\+ # # [:accept_filter => String] # # defer accept() until data is ready (FreeBSD-only) # # This enables either the "dataready" or (default) "httpready" # accept() filter under FreeBSD. This is intended as an # optimization to reduce context switches with common GET/HEAD # requests. For Rainbows! and Zbatery users, this provides # some protection against certain denial-of-service attacks, too. # # There is no good reason to change from the default. # # Default: "httpready" def listen(address, options = {}) address = expand_addr(address) if String === address [ :umask, :backlog, :sndbuf, :rcvbuf, :tries ].each do |key| value = options[key] or next Integer === value or raise ArgumentError, "not an integer: #{key}=#{value.inspect}" end [ :tcp_nodelay, :tcp_nopush, :ipv6only, :reuseport ].each do |key| (value = options[key]).nil? and next TrueClass === value || FalseClass === value or raise ArgumentError, "not boolean: #{key}=#{value.inspect}" end unless (value = options[:delay]).nil? Numeric === value or raise ArgumentError, "not numeric: delay=#{value.inspect}" end set[:listener_opts][address].merge!(options) end set[:listeners] << address end # sets the +path+ for the PID file of the unicorn master process def pid(path); set_path(:pid, path); end # Enabling this preloads an application before forking worker # processes. This allows memory savings when using a # copy-on-write-friendly GC but can cause bad things to happen when # resources like sockets are opened at load time by the master # process and shared by multiple children. People enabling this are # highly encouraged to look at the before_fork/after_fork hooks to # properly close/reopen sockets. Files opened for logging do not # have to be reopened as (unbuffered-in-userspace) files opened with # the File::APPEND flag are written to atomically on UNIX. 
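The O_APPEND guarantee relied on above (appends from uncoordinated writers never clobber each other) can be demonstrated with plain Ruby; this is a throwaway demo, not \Unicorn code:

```ruby
require "tmpdir"

# Two independent File handles appending to the same path. With File::APPEND,
# the kernel positions every write at the current end of file, so the second
# handle cannot overwrite what the first one wrote, even with no coordination.
path = File.join(Dir.tmpdir, "unicorn-append-demo-#{Process.pid}.log")
a = File.open(path, File::WRONLY | File::APPEND | File::CREAT)
b = File.open(path, File::WRONLY | File::APPEND | File::CREAT)
a.sync = true # no userspace buffering, like Unicorn's default logger
b.sync = true
a.write("from handle a\n")
b.write("from handle b\n") # without O_APPEND, b (still at offset 0) would clobber line 1
a.write("a again\n")
a.close
b.close
lines = File.readlines(path)
File.unlink(path)
puts lines.size # prints 3
```

This is why preload_app=true does not require reopening log files in after_fork: every forked writer appends safely to the shared file.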
# # In addition to reloading the unicorn-specific config settings, # SIGHUP will reload application code in the working # directory/symlink when workers are gracefully restarted when # preload_app=false (the default). As reloading the application # sometimes requires RubyGems updates, +Gem.refresh+ is always # called before the application is loaded (for RubyGems users). # # During deployments, care should _always_ be taken to ensure your # applications are properly deployed and running. Using # preload_app=false (the default) means you _must_ check if # your application is responding properly after a deployment. # Improperly deployed applications can go into a spawn loop # if the application fails to load. While your children are # in a spawn loop, it is possible to fix an application # by properly deploying all required code and dependencies. # Using preload_app=true means any application load error will # cause the master process to exit with an error. def preload_app(bool) set_bool(:preload_app, bool) end # Toggles making \env[\"rack.input\"] rewindable. # Disabling rewindability can improve performance by lowering # I/O and memory usage for applications that accept uploads. # Keep in mind that the Rack 1.x spec requires # \env[\"rack.input\"] to be rewindable, so this allows # intentionally violating the current Rack 1.x spec. # # +rewindable_input+ defaults to +true+ when used with Rack 1.x for # Rack conformance. When Rack 2.x is finalized, this will most # likely default to +false+ while still conforming to the newer # (less demanding) spec. def rewindable_input(bool) set_bool(:rewindable_input, bool) end # The maximum size (in +bytes+) to buffer in memory before # resorting to a temporary file. Default is 112 kilobytes. # This option has no effect if "rewindable_input" is set to # +false+.
def client_body_buffer_size(bytes) set_int(:client_body_buffer_size, bytes, 0) end # When enabled, unicorn will check the client connection by writing # the beginning of the HTTP headers before calling the application. # # This will prevent calling the application for clients who have # disconnected while their connection was queued. # # This only affects clients connecting over Unix domain sockets # and TCP via loopback (127.*.*.*). It is unlikely to detect # disconnects if the client is on a remote host (even on a fast LAN). # # This option cannot be used in conjunction with :tcp_nopush. def check_client_connection(bool) set_bool(:check_client_connection, bool) end # Allow redirecting $stderr to a given path. Unlike doing this from # the shell, this allows the unicorn process to know the path it's # writing to and rotate the file if it is used for logging. The # file will be opened with the File::APPEND flag and writes # synchronized to the kernel (but not necessarily to _disk_) so # multiple processes can safely append to it. # # If you are daemonizing and using the default +logger+, it is important # to specify this as errors will otherwise be lost to /dev/null. # Some applications/libraries may also trigger warnings that go to # stderr, and they will end up here. def stderr_path(path) set_path(:stderr_path, path) end # Same as stderr_path, except for $stdout. Not many Rack applications # write to $stdout, but any that do will have their output written here. # It is safe to point this to the same location as stderr_path. # Like stderr_path, this defaults to /dev/null when daemonized. def stdout_path(path) set_path(:stdout_path, path) end # sets the working directory for Unicorn. This ensures SIGUSR2 will # start a new instance of Unicorn in this directory. This may be # a symlink, a common scenario for Capistrano users.
Unlike # all other Unicorn configuration directives, this binds immediately # for error checking and cannot be undone by unsetting it in the # configuration file and reloading. def working_directory(path) # just let chdir raise errors path = File.expand_path(path) if config_file && config_file[0] != ?/ && ! File.readable?("#{path}/#{config_file}") raise ArgumentError, "config_file=#{config_file} would not be accessible in" \ " working_directory=#{path}" end Dir.chdir(path) Unicorn::HttpServer::START_CTX[:cwd] = ENV["PWD"] = path end # Runs worker processes as the specified +user+ and +group+. # The master process always stays running as the user who started it. # This switch will occur after calling the after_fork hook, and only # if the Worker#user method is not called in the after_fork hook. # +group+ is optional and will not change if unspecified. def user(user, group = nil) # raises ArgumentError on invalid user/group Etc.getpwnam(user) Etc.getgrnam(group) if group set[:user] = [ user, group ] end # Sets whether or not the parser will trust X-Forwarded-Proto and # X-Forwarded-SSL headers and set "rack.url_scheme" to "https" accordingly. # Rainbows!/Zbatery installations facing untrusted clients directly # should set this to +false+. This is +true+ by default as Unicorn # is designed to only sit behind trusted nginx proxies. # # This has never been publicly documented and is subject to removal # in future releases.
def trust_x_forwarded(bool) # :nodoc: set_bool(:trust_x_forwarded, bool) end # expands "unix:path/to/foo" to a socket relative to the current path # expands pathnames of sockets if relative to "~" or "~username" # expands "*:port" and ":port" to "0.0.0.0:port" def expand_addr(address) #:nodoc: return "0.0.0.0:#{address}" if Integer === address return address unless String === address case address when %r{\Aunix:(.*)\z} File.expand_path($1) when %r{\A~} File.expand_path(address) when %r{\A(?:\*:)?(\d+)\z} "0.0.0.0:#$1" when %r{\A\[([a-fA-F0-9:]+)\]\z}, %r/\A((?:\d+\.){3}\d+)\z/ canonicalize_tcp($1, 80) when %r{\A\[([a-fA-F0-9:]+)\]:(\d+)\z}, %r{\A(.*):(\d+)\z} canonicalize_tcp($1, $2.to_i) else address end end private def set_int(var, n, min) #:nodoc: Integer === n or raise ArgumentError, "not an integer: #{var}=#{n.inspect}" n >= min or raise ArgumentError, "too low (< #{min}): #{var}=#{n.inspect}" set[var] = n end def canonicalize_tcp(addr, port) packed = Socket.pack_sockaddr_in(port, addr) port, addr = Socket.unpack_sockaddr_in(packed) /:/ =~ addr ? "[#{addr}]:#{port}" : "#{addr}:#{port}" end def set_path(var, path) #:nodoc: case path when NilClass, String set[var] = path else raise ArgumentError end end def check_bool(var, bool) # :nodoc: case bool when true, false return bool end raise ArgumentError, "#{var}=#{bool.inspect} not a boolean" end def set_bool(var, bool) #:nodoc: set[var] = check_bool(var, bool) end def set_hook(var, my_proc, req_arity = 2) #:nodoc: case my_proc when Proc arity = my_proc.arity (arity == req_arity) or \ raise ArgumentError, "#{var}=#{my_proc.inspect} has invalid arity: " \ "#{arity} (need #{req_arity})" when NilClass my_proc = DEFAULTS[var] else raise ArgumentError, "invalid type: #{var}=#{my_proc.inspect}" end set[var] = my_proc end # this is called _after_ working_directory is bound.
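The expansion rules implemented by expand_addr above can be exercised with a trimmed-down standalone copy (simplified: the IP:port canonicalization branches are omitted, and expand_addr_sketch is a hypothetical name, not the \Unicorn API):

```ruby
# Simplified re-implementation of the expand_addr rules, for illustration;
# the real method additionally canonicalizes bare IPv4/IPv6 addresses via
# Socket.pack_sockaddr_in/unpack_sockaddr_in.
def expand_addr_sketch(address)
  return "0.0.0.0:#{address}" if Integer === address
  case address
  when %r{\Aunix:(.*)\z}     then File.expand_path($1) # "unix:" prefix stripped
  when %r{\A~}               then File.expand_path(address) # "~user/sock" paths
  when %r{\A(?:\*:)?(\d+)\z} then "0.0.0.0:#$1" # bare port or "*:port"
  else address
  end
end

expand_addr_sketch(8080)          # => "0.0.0.0:8080"
expand_addr_sketch("*:3000")      # => "0.0.0.0:3000"
expand_addr_sketch("unix:tmp/s")  # => absolute path ending in "tmp/s"
```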
This only # parses the embedded switches in .ru files # (for "rackup" compatibility) def parse_rackup_file # :nodoc: ru = RACKUP[:file] or return # we only return here in unit tests # :rails means use (old) Rails autodetect if ru == :rails File.readable?('config.ru') or return ru = 'config.ru' end File.readable?(ru) or raise ArgumentError, "rackup file (#{ru}) not readable" # it could be a .rb file, too, we don't parse those manually ru =~ /\.ru\z/ or return /^#\\(.*)/ =~ File.read(ru) or return RACKUP[:optparse].parse!($1.split(/\s+/)) if RACKUP[:daemonize] # unicorn_rails wants a default pid path, (not plain 'unicorn') if after_reload spid = set[:pid] pid('tmp/pids/unicorn.pid') if spid.nil? || spid == :unset end unless RACKUP[:daemonized] Unicorn::Launcher.daemonize!(RACKUP[:options]) RACKUP[:ready_pipe] = RACKUP[:options].delete(:ready_pipe) end end end end unicorn-4.7.0/lib/unicorn/tmpio.rb0000644000004100000410000000134612236653132017136 0ustar www-datawww-data# -*- encoding: binary -*- # :stopdoc: require 'tmpdir' # some versions of Ruby had a broken Tempfile which didn't work # well with unlinked files. This one is much shorter, easier # to understand, and slightly faster. class Unicorn::TmpIO < File # creates and returns a new File object. The File is unlinked # immediately, switched to binary mode, and userspace output # buffering is disabled def self.new fp = begin super("#{Dir::tmpdir}/#{rand}", RDWR|CREAT|EXCL, 0600) rescue Errno::EEXIST retry end unlink(fp.path) fp.binmode fp.sync = true fp end # for easier env["rack.input"] compatibility with Rack <= 1.1 def size stat.size end unless File.method_defined?(:size) end unicorn-4.7.0/lib/unicorn/util.rb0000644000004100000410000000543312236653132016764 0ustar www-datawww-data# -*- encoding: binary -*- module Unicorn::Util # :stopdoc: def self.is_log?(fp) append_flags = File::WRONLY | File::APPEND ! fp.closed? && fp.stat.file? 
&& fp.sync && (fp.fcntl(Fcntl::F_GETFL) & append_flags) == append_flags rescue IOError, Errno::EBADF false end def self.chown_logs(uid, gid) ObjectSpace.each_object(File) do |fp| fp.chown(uid, gid) if is_log?(fp) end end # :startdoc: # This reopens ALL logfiles in the process that have been rotated # using logrotate(8) (without copytruncate) or similar tools. # A +File+ object is considered for reopening if it is: # 1) opened with the O_APPEND and O_WRONLY flags # 2) the current open file handle does not match its original open path # 3) unbuffered (as far as userspace buffering goes, not O_SYNC) # Returns the number of files reopened # # In Unicorn 3.5.x and earlier, files must be opened with an absolute # path to be considered a log file. def self.reopen_logs to_reopen = [] nr = 0 ObjectSpace.each_object(File) { |fp| is_log?(fp) and to_reopen << fp } to_reopen.each do |fp| orig_st = begin fp.stat rescue IOError, Errno::EBADF # race next end begin b = File.stat(fp.path) next if orig_st.ino == b.ino && orig_st.dev == b.dev rescue Errno::ENOENT end begin # stdin, stdout, stderr are special. The following dance should # guarantee there is no window where `fp' is unwritable in MRI # (or any correct Ruby implementation). # # Fwiw, GVL has zero bearing here. 
This is tricky because of # the unavoidable existence of stdio FILE * pointers for # std{in,out,err} in all programs which may use the standard C library if fp.fileno <= 2 # We do not want to hit fclose(3)->dup(2) window for std{in,out,err} # MRI will use freopen(3) here internally on std{in,out,err} fp.reopen(fp.path, "a") else # We should not need this workaround, Ruby can be fixed: # http://bugs.ruby-lang.org/issues/9036 # MRI will not call fclose(3) or freopen(3) here # since there's no associated std{in,out,err} FILE * pointer # This should atomically use dup3(2) (or dup2(2)) syscall File.open(fp.path, "a") { |tmpfp| fp.reopen(tmpfp) } end fp.sync = true fp.flush # IO#sync=true may not implicitly flush new_st = fp.stat # this should only happen in the master: if orig_st.uid != new_st.uid || orig_st.gid != new_st.gid fp.chown(orig_st.uid, orig_st.gid) end nr += 1 rescue IOError, Errno::EBADF # not much we can do... end end nr end end unicorn-4.7.0/lib/unicorn/worker.rb0000644000004100000410000000527312236653132017322 0ustar www-datawww-data# -*- encoding: binary -*- require "raindrops" # This class and its members can be considered a stable interface # and will not change in a backwards-incompatible fashion between # releases of \Unicorn. Knowledge of this class is generally not # needed for most users of \Unicorn. # # Some users may want to access it in the before_fork/after_fork hooks. # See the Unicorn::Configurator RDoc for examples.
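The Worker class below spreads per-worker tick counters across shared Raindrops pages; the slot arithmetic reduces to integer division and modulus. A sketch with a made-up PER_DROP of 128 (the real value is Raindrops::PAGE_SIZE / Raindrops::SIZE and depends on the platform):

```ruby
PER_DROP = 128 # hypothetical; really Raindrops::PAGE_SIZE / Raindrops::SIZE

# Maps a worker number to [which shared page to use, slot within that page],
# mirroring the drop_index/@offset computation in Worker#initialize.
def worker_slot(nr)
  [nr / PER_DROP, nr % PER_DROP]
end

worker_slot(0)   # => [0, 0]   first worker, first page
worker_slot(127) # => [0, 127] last slot of the first page
worker_slot(130) # => [1, 2]   spills onto a second page
```

Lazily allocating a new page only when a worker number spills past PER_DROP keeps the shared-memory footprint proportional to the worker count.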
class Unicorn::Worker # :stopdoc: attr_accessor :nr, :switched attr_writer :tmp PER_DROP = Raindrops::PAGE_SIZE / Raindrops::SIZE DROPS = [] def initialize(nr) drop_index = nr / PER_DROP @raindrop = DROPS[drop_index] ||= Raindrops.new(PER_DROP) @offset = nr % PER_DROP @raindrop[@offset] = 0 @nr = nr @tmp = @switched = false end # worker objects may be compared to just plain Integers def ==(other_nr) # :nodoc: @nr == other_nr end # called in the worker process def tick=(value) # :nodoc: @raindrop[@offset] = value end # called in the master process def tick # :nodoc: @raindrop[@offset] end # only exists for compatibility def tmp # :nodoc: @tmp ||= begin tmp = Unicorn::TmpIO.new tmp.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC) tmp end end def close # :nodoc: @tmp.close if @tmp end # :startdoc: # In most cases, you should be using the Unicorn::Configurator#user # directive instead. This method should only be used if you need # fine-grained control of exactly when you want to change permissions # in your after_fork hooks. # # Changes the worker process to the specified +user+ and +group+ # This is only intended to be called from within the worker # process from the +after_fork+ hook. This should be called in # the +after_fork+ hook after any privileged functions need to be # run (e.g. to set per-worker CPU affinity, niceness, etc) # # Any and all errors raised within this method will be propagated # directly back to the caller (usually the +after_fork+ hook). # These errors commonly include ArgumentError for specifying an # invalid user/group and Errno::EPERM for insufficient privileges def user(user, group = nil) # we do not protect the caller, checking Process.euid == 0 is # insufficient because modern systems have fine-grained # capabilities. Let the caller handle any and all errors.
uid = Etc.getpwnam(user).uid gid = Etc.getgrnam(group).gid if group Unicorn::Util.chown_logs(uid, gid) @tmp.chown(uid, gid) if @tmp if gid && Process.egid != gid Process.initgroups(user, gid) Process::GID.change_privilege(gid) end Process.euid != uid and Process::UID.change_privilege(uid) @switched = true end end unicorn-4.7.0/lib/unicorn/socket_helper.rb0000644000004100000410000001725112236653132020637 0ustar www-datawww-data# -*- encoding: binary -*- # :enddoc: require 'socket' module Unicorn module SocketHelper # :stopdoc: include Socket::Constants # prevents IO objects in here from being GC-ed # kill this when we drop 1.8 support IO_PURGATORY = [] # internal interface, only used by Rainbows!/Zbatery DEFAULTS = { # The semantics for TCP_DEFER_ACCEPT changed in Linux 2.6.32+ # with commit d1b99ba41d6c5aa1ed2fc634323449dd656899e9 # This change shouldn't affect Unicorn users behind nginx (a # value of 1 remains an optimization), but Rainbows! users may # want to use a higher value on Linux 2.6.32+ to protect against # denial-of-service attacks :tcp_defer_accept => 1, # FreeBSD, we need to override this to 'dataready' if we # eventually get HTTPS support :accept_filter => 'httpready', # same default value as Mongrel :backlog => 1024, # favor latency over bandwidth savings :tcp_nopush => nil, :tcp_nodelay => true, } #:startdoc: # configure platform-specific options (only tested on Linux 2.6 so far) case RUBY_PLATFORM when /linux/ # from /usr/include/linux/tcp.h TCP_DEFER_ACCEPT = 9 unless defined?(TCP_DEFER_ACCEPT) # do not send out partial frames (Linux) TCP_CORK = 3 unless defined?(TCP_CORK) # Linux got SO_REUSEPORT in 3.9, BSDs have had it for ages unless defined?(SO_REUSEPORT) if RUBY_PLATFORM =~ /(?:alpha|mips|parisc|sparc)/ SO_REUSEPORT = 0x0200 # untested else SO_REUSEPORT = 15 # only tested on x86_64 and i686 end end when /freebsd/ # do not send out partial frames (FreeBSD) TCP_NOPUSH = 4 unless defined?(TCP_NOPUSH) def accf_arg(af_name) [ af_name, nil 
].pack('a16a240') end if defined?(SO_ACCEPTFILTER) end def prevent_autoclose(io) if io.respond_to?(:autoclose=) io.autoclose = false else IO_PURGATORY << io end end def set_tcp_sockopt(sock, opt) # just in case, even LANs can break sometimes. Linux sysadmins # can lower net.ipv4.tcp_keepalive_* sysctl knobs to very low values. sock.setsockopt(SOL_SOCKET, SO_KEEPALIVE, 1) if defined?(SO_KEEPALIVE) if defined?(TCP_NODELAY) val = opt[:tcp_nodelay] val = DEFAULTS[:tcp_nodelay] if nil == val sock.setsockopt(IPPROTO_TCP, TCP_NODELAY, val ? 1 : 0) end val = opt[:tcp_nopush] unless val.nil? if defined?(TCP_CORK) # Linux sock.setsockopt(IPPROTO_TCP, TCP_CORK, val) elsif defined?(TCP_NOPUSH) # TCP_NOPUSH is lightly tested (FreeBSD) sock.setsockopt(IPPROTO_TCP, TCP_NOPUSH, val) end end # No good reason to ever have deferred accepts off # (except maybe benchmarking) if defined?(TCP_DEFER_ACCEPT) # this differs from nginx, since nginx doesn't allow us to # configure the timeout... seconds = opt[:tcp_defer_accept] seconds = DEFAULTS[:tcp_defer_accept] if [true,nil].include?(seconds) seconds = 0 unless seconds # nil/false means disable this sock.setsockopt(SOL_TCP, TCP_DEFER_ACCEPT, seconds) elsif respond_to?(:accf_arg) name = opt[:accept_filter] name = DEFAULTS[:accept_filter] if nil == name begin sock.setsockopt(SOL_SOCKET, SO_ACCEPTFILTER, accf_arg(name)) rescue => e logger.error("#{sock_name(sock)} " \ "failed to set accept_filter=#{name} (#{e.inspect})") end end end def set_server_sockopt(sock, opt) opt = DEFAULTS.merge(opt || {}) TCPSocket === sock and set_tcp_sockopt(sock, opt) if opt[:rcvbuf] || opt[:sndbuf] log_buffer_sizes(sock, "before: ") sock.setsockopt(SOL_SOCKET, SO_RCVBUF, opt[:rcvbuf]) if opt[:rcvbuf] sock.setsockopt(SOL_SOCKET, SO_SNDBUF, opt[:sndbuf]) if opt[:sndbuf] log_buffer_sizes(sock, " after: ") end sock.listen(opt[:backlog]) rescue => e Unicorn.log_error(logger, "#{sock_name(sock)} #{opt.inspect}", e) end def log_buffer_sizes(sock, pfx = '') rcvbuf =
sock.getsockopt(SOL_SOCKET, SO_RCVBUF).unpack('i') sndbuf = sock.getsockopt(SOL_SOCKET, SO_SNDBUF).unpack('i') logger.info "#{pfx}#{sock_name(sock)} rcvbuf=#{rcvbuf} sndbuf=#{sndbuf}" end # creates a new server socket. address may be a HOST:PORT or # an absolute path to a UNIX socket. address can even be a Socket # object in which case it is immediately returned def bind_listen(address = '0.0.0.0:8080', opt = {}) return address unless String === address sock = if address[0] == ?/ if File.exist?(address) if File.socket?(address) begin UNIXSocket.new(address).close # fall through, try to bind(2) and fail with EADDRINUSE # (or succeed from a small race condition we can't sanely avoid). rescue Errno::ECONNREFUSED logger.info "unlinking existing socket=#{address}" File.unlink(address) end else raise ArgumentError, "socket=#{address} specified but it is not a socket!" end end old_umask = File.umask(opt[:umask] || 0) begin Kgio::UNIXServer.new(address) ensure File.umask(old_umask) end elsif /\A\[([a-fA-F0-9:]+)\]:(\d+)\z/ =~ address new_tcp_server($1, $2.to_i, opt.merge(:ipv6=>true)) elsif /\A(\d+\.\d+\.\d+\.\d+):(\d+)\z/ =~ address new_tcp_server($1, $2.to_i, opt) else raise ArgumentError, "Don't know how to bind: #{address}" end set_server_sockopt(sock, opt) sock end def new_tcp_server(addr, port, opt) # n.b. we set FD_CLOEXEC in the workers sock = Socket.new(opt[:ipv6] ? AF_INET6 : AF_INET, SOCK_STREAM, 0) if opt.key?(:ipv6only) defined?(IPV6_V6ONLY) or abort "Socket::IPV6_V6ONLY not defined, upgrade Ruby and/or your OS" sock.setsockopt(IPPROTO_IPV6, IPV6_V6ONLY, opt[:ipv6only] ? 1 : 0) end sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1) if defined?(SO_REUSEPORT) && opt[:reuseport] sock.setsockopt(SOL_SOCKET, SO_REUSEPORT, 1) end sock.bind(Socket.pack_sockaddr_in(port, addr)) prevent_autoclose(sock) Kgio::TCPServer.for_fd(sock.fileno) end # returns rfc2732-style (e.g.
"[::1]:666") addresses for IPv6 def tcp_name(sock) port, addr = Socket.unpack_sockaddr_in(sock.getsockname) /:/ =~ addr ? "[#{addr}]:#{port}" : "#{addr}:#{port}" end module_function :tcp_name # Returns the configuration name of a socket as a string. sock may # be a string value, in which case it is returned as-is # Warning: TCP sockets may not always return the name given to it. def sock_name(sock) case sock when String then sock when UNIXServer Socket.unpack_sockaddr_un(sock.getsockname) when TCPServer tcp_name(sock) when Socket begin tcp_name(sock) rescue ArgumentError Socket.unpack_sockaddr_un(sock.getsockname) end else raise ArgumentError, "Unhandled class #{sock.class}: #{sock.inspect}" end end module_function :sock_name # casts a given Socket to be a TCPServer or UNIXServer def server_cast(sock) begin Socket.unpack_sockaddr_in(sock.getsockname) Kgio::TCPServer.for_fd(sock.fileno) rescue ArgumentError Kgio::UNIXServer.for_fd(sock.fileno) end end end # module SocketHelper end # module Unicorn unicorn-4.7.0/lib/unicorn/ssl_client.rb0000644000004100000410000000040112236653132020134 0ustar www-datawww-data# -*- encoding: binary -*- # :stopdoc: class Unicorn::SSLClient < Kgio::SSL alias write kgio_write alias close kgio_close # this is no-op for now, to be fixed in kgio-monkey if people care # about SSL support... def shutdown(how = nil) end end unicorn-4.7.0/lib/unicorn/ssl_server.rb0000644000004100000410000000255712236653132020202 0ustar www-datawww-data# -*- encoding: binary -*- # :stopdoc: # this module is meant to be included in Unicorn::HttpServer # It is an implementation detail and NOT meant for users. module Unicorn::SSLServer attr_accessor :ssl_engine def ssl_enable! 
sni_hostnames = rack_sni_hostnames(@app) seen = {} # we map a single SSLContext to multiple listeners listener_ctx = {} @listener_opts.each do |address, address_opts| ssl_opts = address_opts[:ssl_opts] or next listener_ctx[address] = seen[ssl_opts.object_id] ||= begin unless sni_hostnames.empty? ssl_opts = ssl_opts.dup ssl_opts[:sni_hostnames] = sni_hostnames end ctx = Flipper.ssl_context(ssl_opts) # FIXME: make configurable ctx.session_cache_mode = OpenSSL::SSL::SSLContext::SESSION_CACHE_OFF ctx end end Unicorn::HttpServer::LISTENERS.each do |listener| ctx = listener_ctx[sock_name(listener)] or next listener.extend(Kgio::SSLServer) listener.ssl_ctx = ctx listener.kgio_ssl_class = Unicorn::SSLClient end end # ugh, this depends on Rack internals... def rack_sni_hostnames(rack_app) # :nodoc: hostnames = {} if Rack::URLMap === rack_app mapping = rack_app.instance_variable_get(:@mapping) mapping.each { |hostname,_,_,_| hostnames[hostname] = true } end hostnames.keys end end unicorn-4.7.0/lib/unicorn/tee_input.rb0000644000004100000410000001075712236653132020004 0ustar www-datawww-data# -*- encoding: binary -*- # acts like tee(1) on an input stream to provide an input-like stream # while providing rewindable semantics through a File/StringIO backing # store. On the first pass, the input is only read on demand so your # Rack application can use input notification (upload progress and # the like). This should fully conform to the Rack::Lint::InputWrapper # specification on the public API. This class is intended to be a # strict interpretation of Rack::Lint::InputWrapper functionality and # will not support any deviations from it. # # When processing uploads, Unicorn exposes a TeeInput object under # "rack.input" of the Rack environment. class Unicorn::TeeInput < Unicorn::StreamInput # The maximum size (in +bytes+) to buffer in memory before # resorting to a temporary file. Default is 112 kilobytes.
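The buffering policy documented above can be sketched without \Unicorn: bodies with a known Content-Length at or under the limit stay in a StringIO, everything else spills to an unlinked temporary file (choose_buffer is a hypothetical stand-in, and Tempfile substitutes for Unicorn::TmpIO):

```ruby
require "stringio"
require "tempfile"

LIMIT = 112 * 1024 # the documented default client_body_buffer_size

# Returns an in-memory buffer for small bodies with a known length,
# otherwise an unlinked temporary file (as Unicorn::TmpIO does, the
# name is removed from the filesystem immediately after creation).
def choose_buffer(content_length)
  if content_length && content_length <= LIMIT
    StringIO.new("")
  else
    tmp = Tempfile.new("body")
    tmp.unlink # keep no name on the filesystem
    tmp.binmode
    tmp
  end
end

choose_buffer(4096) # small POST body: stays in memory
choose_buffer(nil)  # chunked request, length unknown: goes to disk
```

A nil length (Transfer-Encoding: chunked) always goes to disk because the final size cannot be known up front.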
  @@client_body_buffer_size = Unicorn::Const::MAX_BODY

  # sets the maximum size of request bodies to buffer in memory,
  # amounts larger than this are buffered to the filesystem
  def self.client_body_buffer_size=(bytes)
    @@client_body_buffer_size = bytes
  end

  # returns the maximum size of request bodies to buffer in memory,
  # amounts larger than this are buffered to the filesystem
  def self.client_body_buffer_size
    @@client_body_buffer_size
  end

  # Initializes a new TeeInput object.  You normally do not have to call
  # this unless you are writing an HTTP server.
  def initialize(socket, request)
    @len = request.content_length
    super
    @tmp = @len && @len <= @@client_body_buffer_size ?
           StringIO.new("") : Unicorn::TmpIO.new
  end

  # :call-seq:
  #   ios.size  => Integer
  #
  # Returns the size of the input.  For requests with a Content-Length
  # header value, this will not read data off the socket and just return
  # the value of the Content-Length header as an Integer.
  #
  # For Transfer-Encoding:chunked requests, this requires consuming
  # all of the input stream before returning since there's no other
  # way to determine the size of the request body beforehand.
  #
  # This method is no longer part of the Rack specification as of
  # Rack 1.2, so its use is not recommended.  This method only exists
  # for compatibility with Rack applications designed for Rack 1.1 and
  # earlier.  Most applications should only need to call +read+ with a
  # specified +length+ in a loop until it returns +nil+.
  def size
    @len and return @len
    pos = @tmp.pos
    consume!
    @tmp.pos = pos
    @len = @tmp.size
  end

  # :call-seq:
  #   ios.read([length [, buffer ]]) => string, buffer, or nil
  #
  # Reads at most length bytes from the I/O stream, or to the end of
  # file if length is omitted or is nil. length must be a non-negative
  # integer or nil. If the optional buffer argument is present, it
  # must reference a String, which will receive the data.
  #
  # At end of file, it returns nil or "" depending on length.
  # ios.read() and ios.read(nil) returns "".
  # ios.read(length [, buffer]) returns nil.
  #
  # If the Content-Length of the HTTP request is known (as is the common
  # case for POST requests), then ios.read(length [, buffer]) will block
  # until the specified length is read (or it is the last chunk).
  # Otherwise, for uncommon "Transfer-Encoding: chunked" requests,
  # ios.read(length [, buffer]) will return immediately if there is
  # any data and only block when nothing is available (providing
  # IO#readpartial semantics).
  def read(*args)
    @socket ? tee(super) : @tmp.read(*args)
  end

  # :call-seq:
  #   ios.gets   => string or nil
  #
  # Reads the next ``line'' from the I/O stream; lines are separated
  # by the global record separator ($/, typically "\n").  A global
  # record separator of nil reads the entire unread contents of ios.
  # Returns nil if called at the end of file.
  # This takes zero arguments for strict Rack::Lint compatibility,
  # unlike IO#gets.
  def gets
    @socket ? tee(super) : @tmp.gets
  end

  # :call-seq:
  #   ios.rewind    => 0
  #
  # Positions the *ios* pointer to the beginning of input, returns
  # the offset (zero) of the +ios+ pointer.  Subsequent reads will
  # start from the beginning of the previously-buffered input.
  def rewind
    return 0 if 0 == @tmp.size
    consume! if @socket
    @tmp.rewind # Rack does not specify what the return value is here
  end

private

  # consumes the stream of the socket
  def consume!
junk = "" nil while read(@@io_chunk_size, junk) end def tee(buffer) if buffer && buffer.size > 0 @tmp.write(buffer) end buffer end end unicorn-4.7.0/lib/unicorn/http_request.rb0000644000004100000410000000676012236653132020542 0ustar www-datawww-data# -*- encoding: binary -*- # :enddoc: # no stable API here require 'unicorn_http' # TODO: remove redundant names Unicorn.const_set(:HttpRequest, Unicorn::HttpParser) class Unicorn::HttpParser # default parameters we merge into the request env for Rack handlers DEFAULTS = { "rack.errors" => $stderr, "rack.multiprocess" => true, "rack.multithread" => false, "rack.run_once" => false, "rack.version" => [1, 1], "SCRIPT_NAME" => "", # this is not in the Rack spec, but some apps may rely on it "SERVER_SOFTWARE" => "Unicorn #{Unicorn::Const::UNICORN_VERSION}" } NULL_IO = StringIO.new("") attr_accessor :response_start_sent # :stopdoc: # A frozen format for this is about 15% faster REMOTE_ADDR = 'REMOTE_ADDR'.freeze RACK_INPUT = 'rack.input'.freeze @@input_class = Unicorn::TeeInput @@check_client_connection = false def self.input_class @@input_class end def self.input_class=(klass) @@input_class = klass end def self.check_client_connection @@check_client_connection end def self.check_client_connection=(bool) @@check_client_connection = bool end # :startdoc: # Does the majority of the IO processing. It has been written in # Ruby using about 8 different IO processing strategies. # # It is currently carefully constructed to make sure that it gets # the best possible performance for the common case: GET requests # that are fully complete after a single read(2) # # Anyone who thinks they can make it faster is more than welcome to # take a crack at it. # # returns an environment hash suitable for Rack if successful # This does minimal exception trapping and it is up to the caller # to handle any socket errors (e.g. user aborted upload). 
  def read(socket)
    clear
    e = env

    # From http://www.ietf.org/rfc/rfc3875:
    # "Script authors should be aware that the REMOTE_ADDR and
    #  REMOTE_HOST meta-variables (see sections 4.1.8 and 4.1.9)
    #  may not identify the ultimate source of the request.  They
    #  identify the client for the immediate request to the server;
    #  that client may be a proxy, gateway, or other intermediary
    #  acting on behalf of the actual source client."
    e[REMOTE_ADDR] = socket.kgio_addr

    # short circuit the common case with small GET requests first
    socket.kgio_read!(16384, buf)
    if parse.nil?
      # Parser is not done, queue up more data to read and continue parsing
      # an Exception thrown from the parser will throw us out of the loop
      false until add_parse(socket.kgio_read!(16384))
    end

    # detect if the socket is valid by writing a partial response:
    if @@check_client_connection && headers?
      @response_start_sent = true
      Unicorn::Const::HTTP_RESPONSE_START.each { |c| socket.write(c) }
    end

    e[RACK_INPUT] = 0 == content_length ?
                    NULL_IO : @@input_class.new(socket, self)
    hijack_setup(e, socket)
    e.merge!(DEFAULTS)
  end

  # Rack 1.5.0 (protocol version 1.2) adds hijack request support
  if ((Rack::VERSION[0] << 8) | Rack::VERSION[1]) >= 0x0102
    DEFAULTS["rack.hijack?"] = true
    DEFAULTS["rack.version"] = [1, 2]

    RACK_HIJACK = "rack.hijack".freeze
    RACK_HIJACK_IO = "rack.hijack_io".freeze

    def hijacked?
      env.include?(RACK_HIJACK_IO)
    end

    def hijack_setup(e, socket)
      e[RACK_HIJACK] = proc { e[RACK_HIJACK_IO] = socket }
    end
  else
    # old Rack, do nothing.
    def hijack_setup(e, _)
    end

    def hijacked?
      false
    end
  end
end

unicorn-4.7.0/lib/unicorn/stream_input.rb

# -*- encoding: binary -*-

# When processing uploads, Unicorn may expose a StreamInput object under
# "rack.input" of the (future) Rack (2.x) environment.
class Unicorn::StreamInput
  # The I/O chunk size (in +bytes+) for I/O operations where
  # the size cannot be user-specified when a method is called.
  # The default is 16 kilobytes.
  @@io_chunk_size = Unicorn::Const::CHUNK_SIZE

  # Initializes a new StreamInput object.  You normally do not have to call
  # this unless you are writing an HTTP server.
  def initialize(socket, request)
    @chunked = request.content_length.nil?
    @socket = socket
    @parser = request
    @buf = request.buf
    @rbuf = ''
    @bytes_read = 0
    filter_body(@rbuf, @buf) unless @buf.empty?
  end

  # :call-seq:
  #   ios.read([length [, buffer ]]) => string, buffer, or nil
  #
  # Reads at most length bytes from the I/O stream, or to the end of
  # file if length is omitted or is nil. length must be a non-negative
  # integer or nil. If the optional buffer argument is present, it
  # must reference a String, which will receive the data.
  #
  # At end of file, it returns nil or '' depending on length.
  # ios.read() and ios.read(nil) returns ''.
  # ios.read(length [, buffer]) returns nil.
  #
  # If the Content-Length of the HTTP request is known (as is the common
  # case for POST requests), then ios.read(length [, buffer]) will block
  # until the specified length is read (or it is the last chunk).
  # Otherwise, for uncommon "Transfer-Encoding: chunked" requests,
  # ios.read(length [, buffer]) will return immediately if there is
  # any data and only block when nothing is available (providing
  # IO#readpartial semantics).
  def read(length = nil, rv = '')
    if length
      if length <= @rbuf.size
        length < 0 and raise ArgumentError, "negative length #{length} given"
        rv.replace(@rbuf.slice!(0, length))
      else
        to_read = length - @rbuf.size
        rv.replace(@rbuf.slice!(0, @rbuf.size))
        until to_read == 0 || eof? || (rv.size > 0 && @chunked)
          @socket.kgio_read(to_read, @buf) or eof!
          filter_body(@rbuf, @buf)
          rv << @rbuf
          to_read -= @rbuf.size
        end
        @rbuf.replace('')
      end
      rv = nil if rv.empty? && length != 0
    else
      read_all(rv)
    end
    rv
  end

  # :call-seq:
  #   ios.gets   => string or nil
  #
  # Reads the next ``line'' from the I/O stream; lines are separated
  # by the global record separator ($/, typically "\n").
  # A global
  # record separator of nil reads the entire unread contents of ios.
  # Returns nil if called at the end of file.
  # This takes zero arguments for strict Rack::Lint compatibility,
  # unlike IO#gets.
  def gets
    sep = $/
    if sep.nil?
      read_all(rv = '')
      return rv.empty? ? nil : rv
    end
    re = /\A(.*?#{Regexp.escape(sep)})/

    begin
      @rbuf.sub!(re, '') and return $1
      return @rbuf.empty? ? nil : @rbuf.slice!(0, @rbuf.size) if eof?
      @socket.kgio_read(@@io_chunk_size, @buf) or eof!
      filter_body(once = '', @buf)
      @rbuf << once
    end while true
  end

  # :call-seq:
  #   ios.each { |line| block }  => ios
  #
  # Executes the block for every ``line'' in *ios*, where lines are
  # separated by the global record separator ($/, typically "\n").
  def each
    while line = gets
      yield line
    end

    self # Rack does not specify what the return value is here
  end

private

  def eof?
    if @parser.body_eof?
      while @chunked && ! @parser.parse
        once = @socket.kgio_read(@@io_chunk_size) or eof!
        @buf << once
      end
      @socket = nil
      true
    else
      false
    end
  end

  def filter_body(dst, src)
    rv = @parser.filter_body(dst, src)
    @bytes_read += dst.size
    rv
  end

  def read_all(dst)
    dst.replace(@rbuf)
    @socket or return
    until eof?
      @socket.kgio_read(@@io_chunk_size, @buf) or eof!
      filter_body(@rbuf, @buf)
      dst << @rbuf
    end
  ensure
    @rbuf.replace('')
  end

  def eof!
    # in case client only did a premature shutdown(SHUT_WR)
    # we do support clients that shutdown(SHUT_WR) after the
    # _entire_ request has been sent, and those will not have
    # raised EOFError on us.
    if @socket
      @socket.shutdown
      @socket.close
    end
  ensure
    raise Unicorn::ClientShutdown, "bytes_read=#{@bytes_read}", []
  end
end

unicorn-4.7.0/lib/unicorn/http_server.rb

# -*- encoding: binary -*-
require "unicorn/ssl_server"

# This is the process manager of Unicorn. This manages worker
# processes which in turn handle the I/O and application process.
# Listener sockets are started in the master process and shared with
# forked worker children.
#
# Users do not need to know the internals of this class, but reading the
# {source}[http://bogomips.org/unicorn.git/tree/lib/unicorn/http_server.rb]
# is educational for programmers wishing to learn how \Unicorn works.
# See Unicorn::Configurator for information on how to configure \Unicorn.
class Unicorn::HttpServer
  # :stopdoc:
  attr_accessor :app, :request, :timeout, :worker_processes,
                :before_fork, :after_fork, :before_exec,
                :listener_opts, :preload_app,
                :reexec_pid, :orig_app, :init_listeners,
                :master_pid, :config, :ready_pipe, :user
  attr_reader :pid, :logger
  include Unicorn::SocketHelper
  include Unicorn::HttpResponse
  include Unicorn::SSLServer

  # backwards compatibility with 1.x
  Worker = Unicorn::Worker

  # all bound listener sockets
  LISTENERS = []

  # listeners we have yet to bind
  NEW_LISTENERS = []

  # This hash maps PIDs to Workers
  WORKERS = {}

  # We use SELF_PIPE differently in the master and worker processes:
  #
  # * The master process never closes or reinitializes this once
  #   initialized.  Signal handlers in the master process will write to
  #   it to wake up the master from IO.select in exactly the same manner
  #   djb describes in http://cr.yp.to/docs/selfpipe.html
  #
  # * The workers immediately close the pipe they inherit from the
  #   master and replace it with a new pipe after forking.  This new
  #   pipe is also used to wakeup from IO.select from inside (worker)
  #   signal handlers.  However, workers *close* the pipe descriptors in
  #   the signal handlers to raise EBADF in IO.select instead of writing
  #   like we do in the master.  We cannot easily use the reader set for
  #   IO.select because LISTENERS is already that set, and it's extra
  #   work (and cycles) to distinguish the pipe FD from the reader set
  #   once IO.select returns.  So we're lazy and just close the pipe when
  #   a (rare) signal arrives in the worker and reinitialize the pipe later.
  SELF_PIPE = []

  # signal queue used for self-piping
  SIG_QUEUE = []

  # list of signals we care about and trap in master.
  QUEUE_SIGS = [ :WINCH, :QUIT, :INT, :TERM, :USR1, :USR2, :HUP, :TTIN, :TTOU ]
  # :startdoc:

  # We populate this at startup so we can figure out how to reexecute
  # and upgrade the currently running instance of Unicorn
  # This Hash is considered a stable interface and changing its contents
  # will allow you to switch between different installations of Unicorn
  # or even different installations of the same applications without
  # downtime.  Keys of this constant Hash are described as follows:
  #
  # * 0 - the path to the unicorn/unicorn_rails executable
  # * :argv - a deep copy of the ARGV array the executable originally saw
  # * :cwd - the working directory of the application, this is where
  #   you originally started Unicorn.
  #
  # To change your unicorn executable to a different path without downtime,
  # you can set the following in your Unicorn config file, HUP and then
  # continue with the traditional USR2 + QUIT upgrade steps:
  #
  #   Unicorn::HttpServer::START_CTX[0] = "/home/bofh/1.9.2/bin/unicorn"
  START_CTX = {
    :argv => ARGV.map { |arg| arg.dup },
    0 => $0.dup,
  }
  # We favor ENV['PWD'] since it is (usually) symlink aware for Capistrano
  # and like systems
  START_CTX[:cwd] = begin
    a = File.stat(pwd = ENV['PWD'])
    b = File.stat(Dir.pwd)
    a.ino == b.ino && a.dev == b.dev ? pwd : Dir.pwd
  rescue
    Dir.pwd
  end
  # :stopdoc:

  # Creates a working server on host:port (strange things happen if
  # port isn't a Number).  Use HttpServer::run to start the server and
  # HttpServer.run.join to join the thread that's processing
  # incoming requests on the socket.
  def initialize(app, options = {})
    @app = app
    @request = Unicorn::HttpRequest.new
    self.reexec_pid = 0
    options = options.dup
    @ready_pipe = options.delete(:ready_pipe)
    @init_listeners = options[:listeners] ? options[:listeners].dup : []
    options[:use_defaults] = true
    self.config = Unicorn::Configurator.new(options)
    self.listener_opts = {}

    # we try inheriting listeners first, so we bind them later.
    # we don't write the pid file until we've bound listeners in case
    # unicorn was started twice by mistake.  Even though our #pid= method
    # checks for stale/existing pid files, race conditions are still
    # possible (and difficult/non-portable to avoid) and can be likely
    # to clobber the pid if the second start was in quick succession
    # after the first, so we rely on the listener binding to fail in
    # that case.  Some tests (in and outside of this source tree) and
    # monitoring tools may also rely on pid files existing before we
    # attempt to connect to the listener(s)
    config.commit!(self, :skip => [:listeners, :pid])
    self.orig_app = app
  end

  # Runs the thing.  Returns self so you can run join on it
  def start
    inherit_listeners!
    # this pipe is used to wake us up from select(2) in #join when signals
    # are trapped.  See trap_deferred.
    init_self_pipe!

    # setup signal handlers before writing pid file in case people get
    # trigger happy and send signals as soon as the pid file exists.
    # Note that signals don't actually get handled until the #join method
    QUEUE_SIGS.each { |sig| trap(sig) { SIG_QUEUE << sig; awaken_master } }
    trap(:CHLD) { awaken_master }

    # write pid early for Mongrel compatibility if we're not inheriting sockets
    # This was needed for compatibility with some health checker a long time
    # ago.  This unfortunately has the side effect of clobbering valid PID
    # files.
    self.pid = config[:pid] unless ENV["UNICORN_FD"]

    self.master_pid = $$
    build_app! if preload_app
    bind_new_listeners!

    # Assuming preload_app==false, we drop the pid file after the app is ready
    # to process requests.  If binding or build_app! fails with
    # preload_app==true, we'll never get here and the parent will recover
    self.pid = config[:pid] if ENV["UNICORN_FD"]

    spawn_missing_workers
    self
  end

  # replaces current listener set with +listeners+.
  # This will
  # close the socket if it will not exist in the new listener set
  def listeners=(listeners)
    cur_names, dead_names = [], []
    listener_names.each do |name|
      if ?/ == name[0]
        # mark unlinked sockets as dead so we can rebind them
        (File.socket?(name) ? cur_names : dead_names) << name
      else
        cur_names << name
      end
    end
    set_names = listener_names(listeners)
    dead_names.concat(cur_names - set_names).uniq!

    LISTENERS.delete_if do |io|
      if dead_names.include?(sock_name(io))
        IO_PURGATORY.delete_if do |pio|
          pio.fileno == io.fileno && (pio.close rescue nil).nil? # true
        end
        (io.close rescue nil).nil? # true
      else
        set_server_sockopt(io, listener_opts[sock_name(io)])
        false
      end
    end

    (set_names - cur_names).each { |addr| listen(addr) }
  end

  def stdout_path=(path); redirect_io($stdout, path); end
  def stderr_path=(path); redirect_io($stderr, path); end

  def logger=(obj)
    Unicorn::HttpRequest::DEFAULTS["rack.logger"] = @logger = obj
  end

  def clobber_pid(path)
    unlink_pid_safe(@pid) if @pid
    if path
      fp = begin
        tmp = "#{File.dirname(path)}/#{rand}.#$$"
        File.open(tmp, File::RDWR|File::CREAT|File::EXCL, 0644)
      rescue Errno::EEXIST
        retry
      end
      fp.syswrite("#$$\n")
      File.rename(fp.path, path)
      fp.close
    end
  end

  # sets the path for the PID file of the master process
  def pid=(path)
    if path
      if x = valid_pid?(path)
        return path if pid && path == pid && x == $$
        if x == reexec_pid && pid =~ /\.oldbin\z/
          logger.warn("will not set pid=#{path} while reexec-ed "\
                      "child is running PID:#{x}")
          return
        end
        raise ArgumentError, "Already running on PID:#{x} " \
                             "(or pid=#{path} is stale)"
      end
    end

    # rename the old pid if possible
    if @pid && path
      begin
        File.rename(@pid, path)
      rescue Errno::ENOENT, Errno::EXDEV
        # a user may have accidentally removed the original,
        # obviously cross-FS renames don't work, either.
        clobber_pid(path)
      end
    else
      clobber_pid(path)
    end
    @pid = path
  end

  # add a given address to the +listeners+ set, idempotently
  # Allows workers to add a private, per-process listener via the
  # after_fork hook.
  # Very useful for debugging and testing.
  # +:tries+ may be specified as an option for the number of times
  # to retry, and +:delay+ may be specified as the time in seconds
  # to delay between retries.
  # A negative value for +:tries+ indicates the listen will be
  # retried indefinitely, this is useful when workers belonging to
  # different masters are spawned during a transparent upgrade.
  def listen(address, opt = {}.merge(listener_opts[address] || {}))
    address = config.expand_addr(address)
    return if String === address && listener_names.include?(address)

    delay = opt[:delay] || 0.5
    tries = opt[:tries] || 5
    begin
      io = bind_listen(address, opt)
      unless Kgio::TCPServer === io || Kgio::UNIXServer === io
        prevent_autoclose(io)
        io = server_cast(io)
      end
      logger.info "listening on addr=#{sock_name(io)} fd=#{io.fileno}"
      LISTENERS << io
      io
    rescue Errno::EADDRINUSE => err
      logger.error "adding listener failed addr=#{address} (in use)"
      raise err if tries == 0
      tries -= 1
      logger.error "retrying in #{delay} seconds " \
                   "(#{tries < 0 ? 'infinite' : tries} tries left)"
      sleep(delay)
      retry
    rescue => err
      logger.fatal "error adding listener addr=#{address}"
      raise err
    end
  end

  # monitors children and receives signals forever
  # (or until a termination signal is sent).  This handles signals
  # one-at-a-time and we'll happily drop signals in case somebody
  # is signalling us too often.
  def join
    respawn = true
    last_check = Time.now
    proc_name 'master'
    logger.info "master process ready" # test_exec.rb relies on this message
    if @ready_pipe
      @ready_pipe.syswrite($$.to_s)
      @ready_pipe = @ready_pipe.close rescue nil
    end
    begin
      reap_all_workers
      case SIG_QUEUE.shift
      when nil
        # avoid murdering workers after our master process (or the
        # machine) comes out of suspend/hibernation
        if (last_check + @timeout) >= (last_check = Time.now)
          sleep_time = murder_lazy_workers
        else
          sleep_time = @timeout/2.0 + 1
          @logger.debug("waiting #{sleep_time}s after suspend/hibernation")
        end
        maintain_worker_count if respawn
        master_sleep(sleep_time)
      when :QUIT # graceful shutdown
        break
      when :TERM, :INT # immediate shutdown
        stop(false)
        break
      when :USR1 # rotate logs
        logger.info "master reopening logs..."
        Unicorn::Util.reopen_logs
        logger.info "master done reopening logs"
        kill_each_worker(:USR1)
      when :USR2 # exec binary, stay alive in case something went wrong
        reexec
      when :WINCH
        if Unicorn::Configurator::RACKUP[:daemonized]
          respawn = false
          logger.info "gracefully stopping all workers"
          kill_each_worker(:QUIT)
          self.worker_processes = 0
        else
          logger.info "SIGWINCH ignored because we're not daemonized"
        end
      when :TTIN
        respawn = true
        self.worker_processes += 1
      when :TTOU
        self.worker_processes -= 1 if self.worker_processes > 0
      when :HUP
        respawn = true
        if config.config_file
          load_config!
        else # exec binary and exit if there's no config file
          logger.info "config_file not present, reexecuting binary"
          reexec
        end
      end
    rescue => e
      Unicorn.log_error(@logger, "master loop error", e)
    end while true
    stop # gracefully shutdown all workers on our way out
    logger.info "master complete"
    unlink_pid_safe(pid) if pid
  end

  # Terminates all workers, but does not exit master process
  def stop(graceful = true)
    self.listeners = []
    limit = Time.now + timeout
    until WORKERS.empty? || Time.now > limit
      kill_each_worker(graceful ?
                      :QUIT : :TERM)
      sleep(0.1)
      reap_all_workers
    end
    kill_each_worker(:KILL)
  end

  def rewindable_input
    Unicorn::HttpRequest.input_class.method_defined?(:rewind)
  end

  def rewindable_input=(bool)
    Unicorn::HttpRequest.input_class = bool ?
                                       Unicorn::TeeInput : Unicorn::StreamInput
  end

  def client_body_buffer_size
    Unicorn::TeeInput.client_body_buffer_size
  end

  def client_body_buffer_size=(bytes)
    Unicorn::TeeInput.client_body_buffer_size = bytes
  end

  def trust_x_forwarded
    Unicorn::HttpParser.trust_x_forwarded?
  end

  def trust_x_forwarded=(bool)
    Unicorn::HttpParser.trust_x_forwarded = bool
  end

  def check_client_connection
    Unicorn::HttpRequest.check_client_connection
  end

  def check_client_connection=(bool)
    Unicorn::HttpRequest.check_client_connection = bool
  end

  private

  # wait for a signal handler to wake us up and then consume the pipe
  def master_sleep(sec)
    IO.select([ SELF_PIPE[0] ], nil, nil, sec) or return
    SELF_PIPE[0].kgio_tryread(11)
  end

  def awaken_master
    SELF_PIPE[1].kgio_trywrite('.') # wakeup master process from select
  end

  # reaps all unreaped workers
  def reap_all_workers
    begin
      wpid, status = Process.waitpid2(-1, Process::WNOHANG)
      wpid or return
      if reexec_pid == wpid
        logger.error "reaped #{status.inspect} exec()-ed"
        self.reexec_pid = 0
        self.pid = pid.chomp('.oldbin') if pid
        proc_name 'master'
      else
        worker = WORKERS.delete(wpid) and worker.close rescue nil
        m = "reaped #{status.inspect} worker=#{worker.nr rescue 'unknown'}"
        status.success? ?
                          logger.info(m) : logger.error(m)
      end
    rescue Errno::ECHILD
      break
    end while true
  end

  # reexecutes the START_CTX with a new binary
  def reexec
    if reexec_pid > 0
      begin
        Process.kill(0, reexec_pid)
        logger.error "reexec-ed child already running PID:#{reexec_pid}"
        return
      rescue Errno::ESRCH
        self.reexec_pid = 0
      end
    end

    if pid
      old_pid = "#{pid}.oldbin"
      begin
        self.pid = old_pid # clear the path for a new pid file
      rescue ArgumentError
        logger.error "old PID:#{valid_pid?(old_pid)} running with " \
                     "existing pid=#{old_pid}, refusing reexec"
        return
      rescue => e
        logger.error "error writing pid=#{old_pid} #{e.class} #{e.message}"
        return
      end
    end

    self.reexec_pid = fork do
      listener_fds = {}
      LISTENERS.each do |sock|
        # IO#close_on_exec= will be available on any future version of
        # Ruby that sets FD_CLOEXEC by default on new file descriptors
        # ref: http://redmine.ruby-lang.org/issues/5041
        sock.close_on_exec = false if sock.respond_to?(:close_on_exec=)
        listener_fds[sock.fileno] = sock
      end
      ENV['UNICORN_FD'] = listener_fds.keys.join(',')
      Dir.chdir(START_CTX[:cwd])
      cmd = [ START_CTX[0] ].concat(START_CTX[:argv])

      # avoid leaking FDs we don't know about, but let before_exec
      # unset FD_CLOEXEC, if anything else in the app eventually
      # relies on FD inheritance.
      (3..1024).each do |io|
        next if listener_fds.include?(io)
        io = IO.for_fd(io) rescue next
        prevent_autoclose(io)
        io.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC)
      end

      # exec(command, hash) works in at least 1.9.1+, but will only be
      # required in 1.9.4/2.0.0 at earliest.
      cmd << listener_fds if RUBY_VERSION >= "1.9.1"
      logger.info "executing #{cmd.inspect} (in #{Dir.pwd})"
      before_exec.call(self)
      exec(*cmd)
    end
    proc_name 'master (old)'
  end

  # forcibly terminate all workers that haven't checked in in timeout seconds.
  # The timeout is implemented using an unlinked File
  def murder_lazy_workers
    next_sleep = @timeout - 1
    now = Time.now.to_i
    WORKERS.dup.each_pair do |wpid, worker|
      tick = worker.tick
      0 == tick and next # skip workers that haven't processed any clients
      diff = now - tick
      tmp = @timeout - diff
      if tmp >= 0
        next_sleep > tmp and next_sleep = tmp
        next
      end
      next_sleep = 0
      logger.error "worker=#{worker.nr} PID:#{wpid} timeout " \
                   "(#{diff}s > #{@timeout}s), killing"
      kill_worker(:KILL, wpid) # take no prisoners for timeout violations
    end
    next_sleep <= 0 ? 1 : next_sleep
  end

  def after_fork_internal
    @ready_pipe.close if @ready_pipe
    Unicorn::Configurator::RACKUP.clear
    @ready_pipe = @init_listeners = @before_exec = @before_fork = nil

    srand # http://redmine.ruby-lang.org/issues/4338

    # The OpenSSL PRNG is seeded with only the pid, and apps with frequently
    # dying workers can recycle pids
    OpenSSL::Random.seed(rand.to_s) if defined?(OpenSSL::Random)
  end

  def spawn_missing_workers
    worker_nr = -1
    until (worker_nr += 1) == @worker_processes
      WORKERS.value?(worker_nr) and next
      worker = Worker.new(worker_nr)
      before_fork.call(self, worker)
      if pid = fork
        WORKERS[pid] = worker
      else
        after_fork_internal
        worker_loop(worker)
        exit
      end
    end
  rescue => e
    @logger.error(e) rescue nil
    exit!
  end

  def maintain_worker_count
    (off = WORKERS.size - worker_processes) == 0 and return
    off < 0 and return spawn_missing_workers
    WORKERS.dup.each_pair { |wpid,w|
      w.nr >= worker_processes and kill_worker(:QUIT, wpid) rescue nil
    }
  end

  # if we get any error, try to write something back to the client
  # assuming we haven't closed the socket, but don't get hung up
  # if the socket is already closed or broken.
  # We'll always ensure
  # the socket is closed at the end of this function
  def handle_error(client, e)
    code = case e
    when EOFError,Errno::ECONNRESET,Errno::EPIPE,Errno::ENOTCONN
      # client disconnected on us and there's nothing we can do
    when Unicorn::RequestURITooLongError
      414
    when Unicorn::RequestEntityTooLargeError
      413
    when Unicorn::HttpParserError # try to tell the client they're bad
      400
    else
      Unicorn.log_error(@logger, "app error", e)
      500
    end
    if code
      client.kgio_trywrite(err_response(code, @request.response_start_sent))
    end
    client.close
  rescue
  end

  def expect_100_response
    if @request.response_start_sent
      Unicorn::Const::EXPECT_100_RESPONSE_SUFFIXED
    else
      Unicorn::Const::EXPECT_100_RESPONSE
    end
  end

  # once a client is accepted, it is processed in its entirety here
  # in 3 easy steps: read request, call app, write app response
  def process_client(client)
    status, headers, body = @app.call(env = @request.read(client))
    return if @request.hijacked?

    if 100 == status.to_i
      client.write(expect_100_response)
      env.delete(Unicorn::Const::HTTP_EXPECT)
      status, headers, body = @app.call(env)
      return if @request.hijacked?
    end
    @request.headers? or headers = nil
    http_response_write(client, status, headers, body,
                        @request.response_start_sent)
    unless client.closed? # rack.hijack may've closed this for us
      client.shutdown # in case of fork() in Rack app
      client.close # flush and uncork socket immediately, no keepalive
    end
  rescue => e
    handle_error(client, e)
  end

  EXIT_SIGS = [ :QUIT, :TERM, :INT ]
  WORKER_QUEUE_SIGS = QUEUE_SIGS - EXIT_SIGS

  # gets rid of stuff the worker has no business keeping track of
  # to free some resources and drops all sig handlers.
  # traps for USR1, USR2, and HUP may be set in the after_fork Proc
  # by the user.
  def init_worker_process(worker)
    # we'll re-trap :QUIT later for graceful shutdown iff we accept clients
    EXIT_SIGS.each { |sig| trap(sig) { exit!(0) } }
    exit!(0) if (SIG_QUEUE & EXIT_SIGS)[0]
    WORKER_QUEUE_SIGS.each { |sig| trap(sig, nil) }
    trap(:CHLD, 'DEFAULT')
    SIG_QUEUE.clear
    proc_name "worker[#{worker.nr}]"
    START_CTX.clear
    init_self_pipe!
    WORKERS.clear
    LISTENERS.each { |sock| sock.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC) }
    after_fork.call(self, worker) # can drop perms
    worker.user(*user) if user.kind_of?(Array) && ! worker.switched
    self.timeout /= 2.0 # halve it for select()
    @config = nil
    build_app! unless preload_app
    ssl_enable!
    @after_fork = @listener_opts = @orig_app = nil
  end

  def reopen_worker_logs(worker_nr)
    logger.info "worker=#{worker_nr} reopening logs..."
    Unicorn::Util.reopen_logs
    logger.info "worker=#{worker_nr} done reopening logs"
    init_self_pipe!
  rescue => e
    logger.error(e) rescue nil
    exit!(77) # EX_NOPERM in sysexits.h
  end

  # runs inside each forked worker, this sits around and waits
  # for connections and doesn't die until the parent dies (or is
  # given an INT, QUIT, or TERM signal)
  def worker_loop(worker)
    ppid = master_pid
    init_worker_process(worker)
    nr = 0 # this becomes negative if we need to reopen logs
    l = LISTENERS.dup
    ready = l.dup

    # closing anything we IO.select on will raise EBADF
    trap(:USR1) { nr = -65536; SELF_PIPE[0].close rescue nil }
    trap(:QUIT) { worker = nil; LISTENERS.each { |s| s.close rescue nil }.clear }
    logger.info "worker=#{worker.nr} ready"

    begin
      nr < 0 and reopen_worker_logs(worker.nr)
      nr = 0

      worker.tick = Time.now.to_i
      while sock = ready.shift
        if client = sock.kgio_tryaccept
          process_client(client)
          nr += 1
          worker.tick = Time.now.to_i
        end
        break if nr < 0
      end

      # make the following bet: if we accepted clients this round,
      # we're probably reasonably busy, so avoid calling select()
      # and do a speculative non-blocking accept() on ready listeners
      # before we sleep again in select().
      unless nr == 0 # (nr < 0) => reopen logs (unlikely)
        ready = l.dup
        redo
      end

      ppid == Process.ppid or return

      # timeout used so we can detect parent death:
      worker.tick = Time.now.to_i
      ret = IO.select(l, nil, SELF_PIPE, @timeout) and ready = ret[0]
    rescue => e
      redo if nr < 0 && (Errno::EBADF === e || IOError === e) # reopen logs
      Unicorn.log_error(@logger, "listen loop error", e) if worker
    end while worker
  end

  # delivers a signal to a worker and fails gracefully if the worker
  # is no longer running.
  def kill_worker(signal, wpid)
    Process.kill(signal, wpid)
  rescue Errno::ESRCH
    worker = WORKERS.delete(wpid) and worker.close rescue nil
  end

  # delivers a signal to each worker
  def kill_each_worker(signal)
    WORKERS.keys.each { |wpid| kill_worker(signal, wpid) }
  end

  # unlinks a PID file at given +path+ if it contains the current PID
  # still potentially racy without locking the directory (which is
  # non-portable and may interact badly with other programs), but the
  # window for hitting the race condition is small
  def unlink_pid_safe(path)
    (File.read(path).to_i == $$ and File.unlink(path)) rescue nil
  end

  # returns a PID if a given path contains a non-stale PID file,
  # nil otherwise.
  def valid_pid?(path)
    wpid = File.read(path).to_i
    wpid <= 0 and return
    Process.kill(0, wpid)
    wpid
  rescue Errno::EPERM
    logger.info "pid=#{path} possibly stale, got EPERM signalling PID:#{wpid}"
    nil
  rescue Errno::ESRCH, Errno::ENOENT
    # don't unlink stale pid files, racy without non-portable locking...
  end

  def load_config!
    loaded_app = app
    logger.info "reloading config_file=#{config.config_file}"
    config[:listeners].replace(@init_listeners)
    config.reload
    config.commit!(self)
    kill_each_worker(:QUIT)
    Unicorn::Util.reopen_logs
    self.app = orig_app
    build_app! if preload_app
    logger.info "done reloading config_file=#{config.config_file}"
  rescue StandardError, LoadError, SyntaxError => e
    Unicorn.log_error(@logger,
        "error reloading config_file=#{config.config_file}", e)
    self.app = loaded_app
  end

  # returns an array of string names for the given listener array
  def listener_names(listeners = LISTENERS)
    listeners.map { |io| sock_name(io) }
  end

  def build_app!
    if app.respond_to?(:arity) && app.arity == 0
      if defined?(Gem) && Gem.respond_to?(:refresh)
        logger.info "Refreshing Gem list"
        Gem.refresh
      end
      self.app = app.call
    end
  end

  def proc_name(tag)
    $0 = ([ File.basename(START_CTX[0]), tag
          ]).concat(START_CTX[:argv]).join(' ')
  end

  def redirect_io(io, path)
    File.open(path, 'ab') { |fp| io.reopen(fp) } if path
    io.sync = true
  end

  def init_self_pipe!
    SELF_PIPE.each { |io| io.close rescue nil }
    SELF_PIPE.replace(Kgio::Pipe.new)
    SELF_PIPE.each { |io| io.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC) }
  end

  def inherit_listeners!
    # inherit sockets from parents, they need to be plain Socket objects
    # before they become Kgio::UNIXServer or Kgio::TCPServer
    inherited = ENV['UNICORN_FD'].to_s.split(/,/).map do |fd|
      io = Socket.for_fd(fd.to_i)
      set_server_sockopt(io, listener_opts[sock_name(io)])
      prevent_autoclose(io)
      logger.info "inherited addr=#{sock_name(io)} fd=#{fd}"
      server_cast(io)
    end

    config_listeners = config[:listeners].dup
    LISTENERS.replace(inherited)

    # we start out with generic Socket objects that get cast to either
    # Kgio::TCPServer or Kgio::UNIXServer objects; but since the Socket
    # objects share the same OS-level file descriptor as the higher-level
    # *Server objects; we need to prevent Socket objects from being
    # garbage-collected
    config_listeners -= listener_names
    if config_listeners.empty? && LISTENERS.empty?
config_listeners << Unicorn::Const::DEFAULT_LISTEN @init_listeners << Unicorn::Const::DEFAULT_LISTEN START_CTX[:argv] << "-l#{Unicorn::Const::DEFAULT_LISTEN}" end NEW_LISTENERS.replace(config_listeners) end # call only after calling inherit_listeners! # This binds any listeners we did NOT inherit from the parent def bind_new_listeners! NEW_LISTENERS.each { |addr| listen(addr) } raise ArgumentError, "no listeners" if LISTENERS.empty? NEW_LISTENERS.clear end end unicorn-4.7.0/NEWS0000600000004100000410000024340312236653132013727 0ustar www-datawww-data=== unicorn 4.7.0 - minor updates, license tweak / 2013-11-04 06:59 UTC * support SO_REUSEPORT on new listeners (:reuseport) This allows users to start an independent instance of unicorn on a the same port as a running unicorn (as long as both instances use :reuseport). ref: https://lwn.net/Articles/542629/ * unicorn is now GPLv2-or-later and Ruby 1.8-licensed (instead of GPLv2-only, GPLv3-only, and Ruby 1.8-licensed) This changes nothing at the moment. Once the FSF publishes the next version of the GPL, users may choose the newer GPL version without the unicorn BDFL approving it. Two years ago when I got permission to add GPLv3 to the license options, I also got permission from all past contributors to approve future versions of the GPL. So now I'm approving all future versions of the GPL for use with unicorn. Reasoning below: In case the GPLv4 arrives and I am not alive to approve/review it, the lesser of evils is have give blanket approval of all future GPL versions (as published by the FSF). The worse evil is to be stuck with a license which cannot guarantee the Free-ness of this project in the future. This unfortunately means the FSF can theoretically come out with license terms I do not agree with, but the GPLv2 and GPLv3 will always be an option to all users. Note: we currently prefer GPLv3 Two improvements thanks to Ernest W. 
Durbin III:

* USR2 redirects fixed for Ruby 1.8.6 (broken since 4.1.0)
* unicorn(1) and unicorn_rails(1) enforce a valid integer for -p/--port

A few more odd, minor tweaks and fixes:

* attempt to rename PID file when possible (on USR2)
* work around reopen atomicity issues for stdio vs non-stdio
* improve handling of client-triggerable socket errors

=== unicorn 4.6.3 - fix --no-default-middleware option / 2013-06-21 08:01 UTC

Thanks to Micah Chalmer for this fix.  There are also minor
documentation updates and internal cleanups.

=== unicorn 4.6.2 - HTTP parser fix for Rainbows! / 2013-02-26 02:59 UTC

This release fixes a bug in Unicorn::HttpParser#filter_body which
affected some configurations of Rainbows!  There is also a minor size
reduction in the DSO.

=== unicorn 4.6.1 - minor cleanups / 2013-02-21 08:38 UTC

Unicorn::Const::UNICORN_VERSION is now auto-generated from
GIT-VERSION-GEN and always correct.  Minor cleanups for hijacking.

=== unicorn 4.6.0 - hijacking support / 2013-02-06 11:23 UTC

This release adds hijacking support for Rack 1.5 users.  See Rack
documentation for more information about hijacking.

There is also a new --no-default-middleware/-N option for the
`unicorn' command to ignore RACK_ENV within unicorn, thanks to
Lin Jen-Shin.

There are only documentation and test-portability updates since
4.6.0pre1, no code changes.

=== unicorn 4.6.0pre1 - hijacking support / 2013-01-29 21:05 UTC

This pre-release adds hijacking support for Rack 1.5 users.  See Rack
documentation for more information about hijacking.

There is also a new --no-default-middleware/-N option for the
`unicorn' command to ignore RACK_ENV within unicorn.

=== unicorn 4.5.0 - check_client_connection option / 2012-12-07 22:59 UTC

The new check_client_connection option allows unicorn to detect most
disconnected local clients before potentially expensive application
processing begins.
This feature is useful for applications experiencing spikes of traffic
leading to undesirable queue times, as clients will disconnect (and
perhaps even retry, compounding the problem) before unicorn can even
start processing the request.

To enable this feature, add the following line to a unicorn config file:

  check_client_connection true

This feature only works when nginx (or any other HTTP/1.0+ client) is
on the same machine as unicorn.

A huge thanks to Tom Burns for implementing and testing this change in
production with real traffic (including mitigating an unexpected DoS
attack).

ref: http://mid.gmane.org/CAK4qKG3rkfVYLyeqEqQyuNEh_nZ8yw0X_cwTxJfJ+TOU+y8F+w@mail.gmail.com

This release fixes broken Rainbows! compatibility in 4.5.0pre1.

=== unicorn 4.5.0pre1 - check_client_connection option / 2012-11-29 23:48 UTC

The new check_client_connection option allows unicorn to detect most
disconnected clients before potentially expensive application
processing begins.

This feature is useful for applications experiencing spikes of traffic
leading to undesirable queue times, as clients will disconnect (and
perhaps even retry, compounding the problem) before unicorn can even
start processing the request.

To enable this feature, add the following line to a unicorn config file:

  check_client_connection true

A huge thanks to Tom Burns for implementing and testing this change in
production with real traffic (including mitigating an unexpected DoS
attack).

=== unicorn 4.4.0 - minor updates / 2012-10-11 09:11 UTC

Non-regular files are no longer reopened on SIGUSR1.  This allows
users to specify FIFOs as log destinations.

TCP_NOPUSH/TCP_CORK is no longer set/unset by default.  Use
:tcp_nopush explicitly with the "listen" directive if you wish to
enable TCP_NOPUSH/TCP_CORK.

Listen sockets are now bound _after_ loading the application for
preload_app(true) users.  This prevents load balancers from sending
traffic to an application server while the application is still
loading.
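The listen-after-app-load ordering above only matters for preload_app(true) users; a minimal illustrative unicorn config file sketching that mode (worker count and hook body are examples, not recommendations):

```ruby
# illustrative unicorn config: with preload_app true, the app is
# loaded once in the master (and, as of 4.4.0, listen sockets are
# bound only after the app has loaded successfully)
preload_app true
worker_processes 4 # example value

after_fork do |server, worker|
  # per-worker setup (e.g. reconnecting to a database) goes here
end
```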
There are also minor test suite cleanups.

=== unicorn 4.3.1 - shutdown() fixes / 2012-04-29 07:04 UTC

* Call shutdown(2) if a client EOFs on us during upload.  We can avoid
  holding a socket open if the Rack app forked a process during
  uploads.

* ignore potential Errno::ENOTCONN errors (from shutdown(2)).  Even on
  LANs, connections can occasionally be accept()-ed but be unusable
  afterwards.

Thanks to Joel Nimety, Matt Smith and George on the
mongrel-unicorn@rubyforge.org mailing list for their feedback and
testing for this release.

=== unicorn 4.3.0 - minor fixes and updates / 2012-04-17 21:51 UTC

* PATH_INFO (aka REQUEST_PATH) increased to 4096 (from 1024).  This
  allows requests with longer path components and matches the system
  PATH_MAX value common to GNU/Linux systems for serving filesystem
  components with long names.

* Apps that fork() (but do not exec()) internally for background tasks
  now indicate the end-of-request immediately after writing the Rack
  response.

Thanks to Hongli Lai, Lawrence Pit, Patrick Wenger and Nuo Yan for
their valuable feedback for this release.

=== unicorn 4.2.1 - minor fix and doc updates / 2012-03-26 21:39 UTC

* Stale pid files are detected if a pid is recycled by processes
  belonging to another user, thanks to Graham Bleach.

* nginx example config updates thanks to Eike Herzbach.

* KNOWN_ISSUES now documents issues with apps/libs that install
  conflicting signal handlers.

=== unicorn 4.2.0 / 2012-01-28 09:18 UTC

The GPLv3 is now an option to the Unicorn license.  The existing GPLv2
and Ruby-only terms will always remain options, but the GPLv3 is
preferred.

Daemonization is correctly detected on all terminals for development
use (Brian P O'Rourke).

Unicorn::OobGC respects applications that disable GC entirely during
application dispatch (Yuichi Tateno).

Many test fixes for OpenBSD, which may help other *BSDs, too
(Jeremy Evans).

There is now _optional_ SSL support (via the "kgio-monkey" RubyGem).
On fast, secure LANs, SSL is only intended for detecting data
corruption that weak TCP checksums cannot detect.  Our SSL support
remains unaudited by security experts.

There are also some minor bugfixes and documentation improvements.

Ruby 2.0.0dev also has a copy-on-write friendly GC which can save
memory when combined with "preload_app true", so if you're in the
mood, start testing Unicorn with the latest Ruby!

=== unicorn 4.1.1 - fix last-resort timeout accuracy / 2011-08-25 21:30 UTC

The last-resort timeout mechanism was inaccurate and often delayed in
activation since the 2.0.0 release.  It is now fixed and remains
power-efficient in idle situations, especially with the wakeup
reduction in MRI 1.9.3+.

There is also a new document on application timeouts intended to
discourage the reliance on this last-resort mechanism.  It is visible
on the web at:

  http://unicorn.bogomips.org/Application_Timeouts.html

=== unicorn 4.1.0 - small updates and fixes / 2011-08-20 00:33 UTC

* Rack::Chunked and Rack::ContentLength middlewares are loaded by
  default for RACK_ENV=(development|deployment) users to match
  Rack::Server behavior.  As before, use RACK_ENV=none if you want
  fine-grained control of your middleware.  This should also help
  users of Rainbows! and Zbatery.

* CTL characters are now rejected from HTTP header values

* Exception messages are now filtered for [:cntrl:] characters since
  application/middleware authors may forget to do so

* Workers will now terminate properly if a SIGQUIT/SIGTERM/SIGINT is
  received during worker process initialization.

* close-on-exec is explicitly disabled to future-proof against
  Ruby 2.0 changes [ruby-core:38140]

=== unicorn 4.0.1 - regression bugfixes / 2011-06-29 18:59 UTC

This release fixes things for users of per-worker "listen" directives
in the after_fork hook.  Thanks to ghazel@gmail.com for reporting the
bug.

The "timeout" configurator directive is now truncated to 0x7ffffffe
seconds to prevent overflow when calling IO.select.
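The overflow guard described in the 4.0.1 note above can be sketched in a few lines of plain Ruby; MAX_SELECT_TIMEOUT and clamp_timeout are hypothetical names for illustration, not unicorn's internals:

```ruby
# Sketch: clamp a user-supplied timeout so absurdly large values cannot
# overflow the argument eventually handed to IO.select.  0x7ffffffe is
# the cap mentioned in the 4.0.1 notes; the names here are made up.
MAX_SELECT_TIMEOUT = 0x7ffffffe # seconds

def clamp_timeout(seconds)
  seconds > MAX_SELECT_TIMEOUT ? MAX_SELECT_TIMEOUT : seconds
end

clamp_timeout(30)       # => 30
clamp_timeout(10 ** 12) # => 2147483646 (0x7ffffffe)
```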
=== unicorn 4.0.0 - for mythical hardware! / 2011-06-27 09:05 UTC

A single Unicorn instance may manage more than 1024 workers without
needing privileges to modify resource limits.  As a result of this,
the "raindrops"[1] gem/library is now a required dependency.

TCP socket defaults now favor low latency to mimic UNIX domain socket
behavior (tcp_nodelay: true, tcp_nopush: false).  This hurts
throughput; users who want to favor throughput should specify
"tcp_nodelay: false, tcp_nopush: true" in the listen directive.

Error logging is more consistent and all lines should be formatted
correctly in backtraces.  This may break the behavior of some log
parsers.

The call stack is smaller, making backtraces easier to examine when
debugging Rack applications.

There are some internal API changes and cleanups, but none that affect
applications designed for Rack.  See "git log v3.7.0.." for details.

For users who cannot install kgio[2] or raindrops, Unicorn 1.1.x
remains supported indefinitely.  Unicorn 3.x will remain supported if
there is demand.  We expect raindrops to introduce fewer portability
problems than kgio did, however.

[1] http://raindrops.bogomips.org/
[2] http://bogomips.org/kgio/

=== unicorn 3.7.0 - minor feature update / 2011-06-09 20:51 UTC

* miscellaneous documentation improvements
* return 414 (instead of 400) for Request-URI Too Long
* strip leading and trailing linear whitespace in header values

User-visible improvements meant for Rainbows! users:

* add :ipv6only "listen" option (same as nginx)

=== unicorn 3.6.2 - fix Unicorn::OobGC module / 2011-04-30 06:40 UTC

The optional Unicorn::OobGC module is reimplemented to fix breakage
that appeared in v3.3.1.  There are also minor documentation updates,
but no code changes as of 3.6.1 for non-OobGC users.

There is also a v1.1.7 release to fix the same OobGC breakage that
appeared for 1.1.x users in the v1.1.6 release.
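The out-of-band GC idea behind Unicorn::OobGC (run GC between requests rather than during them) can be illustrated as a toy Rack middleware; this is a simplified sketch, not the reimplemented Unicorn::OobGC:

```ruby
# Toy illustration of out-of-band GC as Rack middleware: respond
# first, then run GC every +interval+ requests, so collection cost is
# paid between requests instead of in the middle of one.  Simplified;
# the real Unicorn::OobGC hooks in at a lower level.
class ToyOobGC
  def initialize(app, interval = 5)
    @app = app
    @interval = interval
    @nr = 0 # requests seen since the last GC
  end

  def call(env)
    status, headers, body = @app.call(env)
    if (@nr += 1) >= @interval
      @nr = 0
      GC.start
    end
    [status, headers, body]
  end
end
```

Usage would be the usual Rack `use ToyOobGC, 5` in a config.ru.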
=== unicorn 1.1.7 - major fixes to minor components / 2011-04-30 06:33 UTC

No changes to the core code, so this release only affects users of the
Unicorn::OobGC and Unicorn::ExecCGI modules.

Unicorn::OobGC was totally broken by the fix in the v1.1.6 release and
is now reimplemented.

Unicorn::ExecCGI (which hardly anybody uses) now returns proper HTTP
status codes.

=== unicorn 3.6.1 - fix OpenSSL PRNG workaround / 2011-04-26 23:06 UTC

Our attempt in 3.6.0 to work around a problem with the OpenSSL PRNG
actually made the problem worse.  This release corrects the workaround
to properly reseed the OpenSSL PRNG after forking.

=== unicorn 3.6.0 - small fixes, PRNG workarounds / 2011-04-21 06:46 UTC

Mainly small fixes, improvements, and workarounds for fork() issues
with pseudo-random number generators shipped with Ruby (Kernel#rand,
OpenSSL::Random (used by SecureRandom and also by Rails)).

The PRNG issues are documented in depth here (and links to Ruby
Redmine):

  http://bogomips.org/unicorn.git/commit?id=1107ede7
  http://bogomips.org/unicorn.git/commit?id=b3241621

If you're too lazy to upgrade, you can just do this in your after_fork
hooks:

  after_fork do |server,worker|
    tmp = srand
    OpenSSL::Random.seed(tmp.to_s) if defined?(OpenSSL::Random)
  end

There are also small log reopening (SIGUSR1) improvements:

* relative paths may also be reopened; there's a small chance this
  will break with a handful of setups, but unlikely.  This should make
  configuration easier especially since the "working_directory"
  configurator directive exists.  Brought up by Matthew Kocher:
  http://thread.gmane.org/gmane.comp.lang.ruby.unicorn.general/900

* workers will just die (and restart) if log reopening fails for any
  reason (including user error).
  This is to work around the issue reported by Emmanuel Gomez:
  http://thread.gmane.org/gmane.comp.lang.ruby.unicorn.general/906

=== unicorn 3.5.0 - very minor improvements / 2011-03-15 12:27 UTC

A small set of small changes, but it's been more than a month since
our last release.  There are minor memory usage and efficiency
improvements (for graceful shutdowns).  MRI 1.8.7 users on *BSD should
be sure they're using the latest patchlevel (or upgrade to 1.9.x)
because we no longer work around their broken stdio (that's MRI's
job :)

=== unicorn 3.4.0 - for people with very big LANs / 2011-02-04 21:23 UTC

* IPv6 support in the HTTP hostname parser and configuration language.
  Configurator syntax for "listen" addresses should be the same as
  nginx.  Even though we support IPv6, we will never support
  non-LAN/localhost clients connecting to Unicorn.
=== unicorn 3.3.0 - minor optimizations / 2011-01-05 23:43 UTC Certain applications that already serve hundreds/thousands of requests a second should experience performance improvements due to Time.now.httpdate usage being removed and reimplemented in C. There are also minor internal changes and cleanups for Rainbows! === unicorn 3.2.1 - parser improvements for Rainbows! / 2010-12-26 08:04 UTC There are numerous improvements in the HTTP parser for Rainbows!, none of which affect Unicorn-only users. The kgio dependency is incremented to 2.1: this should avoid ENOSYS errors for folks building binaries on newer Linux kernels and then deploying to older ones. There are also minor documentation improvements, the website is now JavaScript-free! (Ignore the 3.2.0 release, I fat-fingered some packaging things) === unicorn 3.2.0 - parser improvements for Rainbows! / 2010-12-26 07:50 UTC There are numerous improvements in the HTTP parser for Rainbows!, none of which affect Unicorn-only users. The kgio dependency is incremented to 2.1: this should avoid ENOSYS errors for folks building binaries on newer Linux kernels and then deploying to older ones. There are also minor documentation improvements, the website is now JavaScript-free! === unicorn 3.1.0 - client_buffer_body_size tuning / 2010-12-09 22:28 UTC This release enables tuning the client_buffer_body_size to raise or lower the threshold for buffering request bodies to disk. This only applies to users who have not disabled rewindable input. There is also a TeeInput bugfix for uncommon usage patterns and Configurator examples in the FAQ should be fixed === unicorn 3.0.1 - one bugfix for Rainbows! / 2010-12-03 00:34 UTC ...and only Rainbows! This release fixes HTTP pipelining for requests with bodies for users of synchronous Rainbows! concurrency models. Since Unicorn itself does not support keepalive nor pipelining, Unicorn-only users need not upgrade. === unicorn 3.0.0 - disable rewindable input! 
/ 2010-11-20 02:41 UTC Rewindable "rack.input" may be disabled via the "rewindable_input false" directive in the configuration file. This will violate Rack::Lint for Rack 1.x applications, but can reduce I/O for applications that do not need a rewindable input. This release updates us to the Kgio 2.x series which should play more nicely with other libraries and applications. There are also internal cleanups and improvements for future versions of Rainbows! The Unicorn 3.x series supercedes the 2.x series while the 1.x series will remain supported indefinitely. === unicorn 3.0.0pre2 - less bad than 2.x or 3.0.0pre1! / 2010-11-19 00:07 UTC This release updates us to the Kgio 2.x series which should play more nicely with other applications. There are also bugfixes from the 2.0.1 release and a small bugfix to the new StreamInput class. The Unicorn 3.x series will supercede the 2.x series while the 1.x series will remain supported indefinitely. === unicorn 2.0.1 - fix errors in error handling / 2010-11-17 23:48 UTC This release fixes errors in our own error handling, causing certain errors to not be logged nor responded to correctly. Eric Wong (3): t0012: fix race condition in reload http_server: fix HttpParserError constant resolution tests: add parser error test from Rainbows! === unicorn 3.0.0pre1 / 2010-11-17 00:04 UTC Rewindable "rack.input" may be disabled via the "rewindable_input false" directive in the configuration file. This will violate Rack::Lint for Rack 1.x applications, but can reduce I/O for applications that do not need it. There are also internal cleanups and enhancements for future versions of Rainbows! Eric Wong (11): t0012: fix race condition in reload enable HTTP keepalive support for all methods http_parser: add HttpParser#next? 
method tee_input: switch to simpler API for parsing trailers switch versions to 3.0.0pre add stream_input class and build tee_input on it configurator: enable "rewindable_input" directive http_parser: ensure keepalive is disabled when reset *_input: make life easier for subclasses/modules tee_input: restore read position after #size preread_input: no-op for non-rewindable "rack.input" === unicorn 2.0.0 - mostly internal cleanups / 2010-10-27 23:44 UTC Despite the version number, this release mostly features internal cleanups for future versions of Rainbows!. User visible changes include reductions in CPU wakeups on idle sites using high timeouts. Barring possible portability issues due to the introduction of the kgio library, this release should be ready for all to use. However, 1.1.x (and possibly 1.0.x) will continue to be maintained. Unicorn 1.1.5 and 1.0.2 have also been released with bugfixes found during development of 2.0.0. === unicorn 1.1.5 / 2010-10-27 23:30 UTC This maintenance release fixes several long-standing but recently-noticed bugs. SIGHUP reloading now correctly restores default values if they're erased or commented-out in the Unicorn configuration file. Delays/slowdowns in signal handling since 0.990 are fixed, too. === unicorn 1.0.2 / 2010-10-27 23:12 UTC This is the latest maintenance release of the 1.0.x series. All users are encouraged to upgrade to 1.1.x stable series and report bugs there. 
Shortlog of changes since 1.0.1: Eric Wong (8): SIGTTIN works after SIGWINCH fix delays in signal handling Rakefile: don't post freshmeat on empty changelogs Rakefile: capture prerelease tags configurator: use "__send__" instead of "send" configurator: reloading with unset values restores default gemspec: depend on Isolate 3.0.0 for dev doc: stop using deprecated rdoc CLI options === unicorn 2.0.0pre3 - more small fixes / 2010-10-09 00:06 UTC There is a new Unicorn::PrereadInput middleware to which allows input bodies to be drained off the socket and buffered to disk (or memory) before dispatching the application. HTTP Pipelining behavior is fixed for Rainbows! There are some small Kgio fixes and updates for Rainbows! users as well. === unicorn 2.0.0pre2 - releases are cheap / 2010-10-07 07:23 UTC Internal changes/cleanups for Rainbows! === unicorn 2.0.0pre1 - a boring "major" release / 2010-10-06 01:17 UTC Mostly internal cleanups for future versions of Rainbows! and people trying out Rubinius. There are tiny performance improvements for Ruby 1.9.2 users which may only be noticeable with Rainbows!. There is a new dependency on the "kgio" library for kinder, gentler I/O :) Please report any bugs and portability issues with kgio to the Unicorn mailing list[1]. Unicorn 1.1.x users are NOT required nor even encouraged to upgrade yet. Unicorn 1.1.x will be maintained for the forseeable future. [1] - mongrel-unicorn@rubyforge.org === unicorn 1.1.4 - small bug fix and doc updates / 2010-10-04 20:32 UTC We no longer unlinking actively listening sockets upon startup (but continue to unlink dead ones). This bug could trigger downtime and nginx failures if a user makes an error and attempts to start Unicorn while it is already running. Thanks to Jordan Ritter for the detailed bug report leading to this fix. ref: http://mid.gmane.org/8D95A44B-A098-43BE-B532-7D74BD957F31@darkridge.com There are also minor documentation and test updates pulled in from master. 
This is hopefully the last bugfix release of the 1.1.x series. === unicorn 1.1.3 - small bug fixes / 2010-08-28 19:27 UTC This release fixes race conditions during SIGUSR1 log cycling. This bug mainly affects Rainbows! users serving static files, but some Rack apps use threads internally even under Unicorn. Other small fixes: * SIGTTIN works as documented after SIGWINCH * --help output from `unicorn` and `unicorn_rails` is more consistent === unicorn 1.1.2 - fixing upgrade rollbacks / 2010-07-13 20:04 UTC This release is fixes a long-standing bug where the original PID file is not restored when rolling back from a USR2 upgrade. Presumably most upgrades aren't rolled back, so it took over a year to notice this issue. Thanks to Lawrence Pit for discovering and reporting this issue. === unicorn 1.0.1 - bugfixes only / 2010-07-13 20:01 UTC The first maintenance release of 1.0.x, this release is primarily to fix a long-standing bug where the original PID file is not restored when rolling back from a USR2 upgrade. Presumably most upgrades aren't rolled back, so it took over a year to notice this issue. Thanks to Lawrence Pit for discovering and reporting this issue. There is also a pedantic TeeInput bugfix which shouldn't affect real apps from the 1.1.x series and a test case fix for OSX, too. === unicorn 1.1.1 - fixing cleanups gone bad :x / 2010-07-11 02:13 UTC Unicorn::TeeInput constant resolution for Unicorn::ClientError got broken simplifying code for RDoc. This affects users of Rainbows! and Zbatery. === unicorn 1.1.0 - small changes and cleanups / 2010-07-08 07:57 UTC This is a small, incremental feature release with some internal changes to better support upcoming versions of the Rainbows! and Zbatery web servers. There is no need to upgrade if you're happy with 1.0.0, but also little danger in upgrading. There is one pedantic bugfix which shouldn't affect anyone and small documentation updates as well. 
=== unicorn 1.0.0 - yes, this is a real project / 2010-06-17 09:18 UTC There are only minor changes since 0.991.0. For users clinging onto the past, MRI 1.8.6 support has been restored. Users are strongly encouraged to upgrade to the latest 1.8.7, REE or 1.9.1. For users looking towards the future, the core test suite and the Rails 3 (beta) integration tests pass entirely under 1.9.2 preview3. As of the latest rubinius.git[1], Rubinius support is nearly complete as well. Under Rubinius, signals may corrupt responses as they're being written to the socket, but that should be fixable transparently to us[4]. Support for the hardly used, hardly documented[2] embedded command-line switches in rackup config (.ru) files is is also broken under Rubinius. The recently-released Rack 1.2.1 introduced no compatiblity issues[3] in core Unicorn. We remain compatible with all Rack releases starting with 0.9.1 (and possibly before). [1] tested with Rubinius upstream commit cf4a5a759234faa3f7d8a92d68fa89d8c5048f72 [2] lets avoid the Dueling Banjos effect here :x [3] actually, Rack 1.2.1 is broken under 1.8.6. [4] http://github.com/evanphx/rubinius/issues/373 === unicorn 0.991.0 - startup improvements / 2010-06-11 02:18 UTC The "working_directory" configuration parameter is now handled before config.ru. That means "unicorn" and "unicorn_rails" no longer barfs when initially started outside of the configured "working_directory" where a config.ru is required. A huge thanks to Pierre Baillet for catching this ugly UI inconsistency before the big 1.0 release Thanks to Hongli Lai, out-of-the-box Rails 3 (beta) support should be improved for deployments lacking a config.ru There are more new integration tests, cleanups and some documentation improvements. 
=== unicorn 0.990.0 - inching towards 1.0 / 2010-06-08 09:41 UTC Thanks to Augusto Becciu for finding a bug in the HTTP parser that caused a TypeError (and 500) when a rare client set the "Version:" header which conflicts with the HTTP_VERSION header we parse in the first line of the request[1]. Horizontal tabs are now allowed as leading whitespace in header values as according to RFC 2616 as pointed out by IƱaki Baz Castillo[2]. Taking a hint from Rack 1.1, the "logger" configuration parameter no longer requires a "close" method. This means some more Logger replacements may be used. There's a new, optional, Unicorn (and maybe Passenger)-only middleware, Unicorn::OobGC[2] that runs GC outside of the normal request/response cycle to help out memory-hungry applications. Thanks to Luke Melia for being brave enough to test and report back on my big_app_gc.rb monkey patch[3] which lead up to this. Rails 3 (beta) support: Using "unicorn" is still recommended as Rails 3 comes with a config.ru, but "unicorn_rails" is cleaned up a bit and *should* work as well as "unicorn" out-of-the-box. Feedback is much appreciated. Rubinius updates: USR2 binary upgrades are broken due to {TCPServer,UNIXServer}.for_fd[5][6] being broken (differently). Repeatedly hitting the server with signals in a tight loop is unusual and not recommended[7]. There are some workarounds and general code cleanups for other issues[8], as well but things should generally work unless you need USR2 upgrades. Feedback and reports would be greatly appreciated as usual. MRI support: All tests (except old Rails) run and pass under 1.9.2-preview3. 1.8.7 and 1.9.1 work well as usual and will continue to be supported indefinitely. Lets hope this is the last release before 1.0. Please report any issues on the mailing list[9] or email us privately[a]. Don't send HTML mail. 
[1] - http://mid.gmane.org/AANLkTimuGgcwNAMcVZdViFWdF-UcW_RGyZAue7phUXps@mail.gmail.com [2] - http://mid.gmane.org/i2xcc1f582e1005070651u294bd83oc73d1e0adf72373a@mail.gmail.com [3] - http://unicorn.bogomips.org/Unicorn/OobGC.html [4] - http://unicorn.bogomips.org/examples/big_app_gc.rb [5] - http://github.com/evanphx/rubinius/issues/354 [6] - http://github.com/evanphx/rubinius/issues/355 [7] - http://github.com/evanphx/rubinius/issues/356 [8] - http://github.com/evanphx/rubinius/issues/347 [9] - mailto:mongrel-unicorn@rubyforge.org [a] - mailto:unicorn@bogomips.org === unicorn 0.99.0 - simplicity wins / 2010-05-06 19:32 UTC Starting with this release, we'll always load Rack up front at startup. Previously we had complicated ways to avoid loading Rack until after the application was loaded to allow the application to load an alternate version of Rack. However this has proven too error-prone to be worth supporting even though Unicorn does not have strict requirements on currently released Rack versions. If an app requires a different version of Rack than what Unicorn would load by default, it is recommended they only install that version of Rack (and no others) since Unicorn does not have any strict requirements on currently released Rack versions. Rails 2.3.x users should be aware of this as those versions are not compatible with Rack 1.1.0. If it is not possible to only have one Rack version installed "globally", then they should either use Isolate or Bundler and install a private version of Unicorn along with their preferred version of Rack. Users who install in this way are recommended to execute the isolated/bundled version of Unicorn, instead of what would normally be in $PATH. Feedback/tips to mailto:mongrel-unicorn@rubyforge.org from Isolate and Bundler users would be greatly appreciated. === unicorn 0.98.0 / 2010-05-05 00:53 UTC Deployments that suspend or hibernate servers should no longer have workers killed off (and restarted) upon resuming. 
For Linux users of {raindrops}[http://raindrops.bogomips.org/] (v0.2.0+) configuration is easier as raindrops can now automatically detect the active listeners on the server via the new Unicorn.listener_names singleton method. For the pedantic, chunked request bodies without trailers are no longer allowed to omit the final CRLF. This shouldn't affect any real and RFC-compliant clients out there. Chunked requests with trailers have always worked and continue to work the same way. The rest are mostly small internal cleanups and documentation fixes. See the commit logs for full details. === unicorn 0.97.1 - fix HTTP parser for Rainbows!/Zbatery / 2010-04-19 21:00 UTC This release fixes a denial-of-service vector for derived servers exposed directly to untrusted clients. This bug does not affect most Unicorn deployments as Unicorn is only supported with trusted clients (such as nginx) on a LAN. nginx is known to reject clients that send invalid Content-Length headers, so any deployments on a trusted LAN and/or behind nginx are safe. Servers affected by this bug include (but are not limited to) Rainbows! and Zbatery. This bug does not affect Thin nor Mongrel, as neither got the request body filtering treatment that the Unicorn HTTP parser got in August 2009. The bug fixed in this release could result in a denial-of-service as it would trigger a process-wide assertion instead of raising an exception. For servers such as Rainbows!/Zbatery that serve multiple clients per worker process, this could abort all clients connected to the particular worker process that hit the assertion. === unicorn 0.97.0 - polishing and cleaning up / 2010-03-01 18:26 UTC A bunch of small fixes related to startup/configuration and hot reload issues with HUP: * Variables in the user-generated config.ru files no longer risk clobbering variables used in laucher scripts. 
* Signal handlers are initialized before the pid file is dropped, so
  over-eager firing of init scripts won't mysteriously nuke a
  process.

* SIGHUP will return the app to its original state if an updated
  config.ru fails to load due to a {Syntax,Load}Error.

* unicorn_rails should be Rails 3 compatible out-of-the-box
  ('unicorn' works as always, and is recommended for Rails 3).

* unicorn_rails is finally "working_directory"-aware when generating
  default temporary paths and the pid file.

* config.ru encoding is the application's default in 1.9, not forced
  to binary like many parts of Unicorn.

* The configurator learned to handle the "user" directive outside of
  the after_fork hook (which will always remain supported).

There are also various internal cleanups and possible speedups.

=== unicorn 0.96.1 - fix leak in Rainbows!/Zbatery / 2010-02-13 08:35 UTC

This maintenance release is intended for users of the Rainbows! and
Zbatery servers (and anybody else using Unicorn::HttpParser).  This
memory leak DID NOT affect Unicorn itself: Unicorn always allocates
the HttpParser once and reuses it for every sequential request.

This leak affects applications that repeatedly allocate a new HTTP
parser.  Thus this bug affects _all_ deployments of Rainbows! and
Zbatery.  These servers allocate a new parser for every client
connection in order to serve clients concurrently.

I misread the Data_Make_Struct()/Data_Wrap_Struct() documentation
and ended up passing NULL as the "free" argument instead of -1,
causing the memory to never be freed.

From README.EXT in the MRI source, which I misread:

> The free argument is the function to free the pointer
> allocation. If this is -1, the pointer will be just freed.
> The functions mark and free will be called from garbage
> collector.

=== unicorn 0.96.0 - Rack 1.1 bump / 2010-01-08 05:18 UTC

This release includes small changes for things allowed by Rack 1.1.
It is also now easier to detect if a daemonized process fails to
start.  Manpages received some minor updates as well.
Rack 1.1 allowed us to make the following environment changes:

* "rack.logger" is now set to the "logger" specified in the Unicorn
  config file.  This defaults to a Logger instance pointing to
  $stderr.

* "rack.version" is now [1,1].  Unicorn remains compatible with
  previous Rack versions if your app depends on it.

While only specified since Rack 1.1, Unicorn has always exposed
"rack.input" in binary mode (and has ridiculous integration tests
that go outside of Ruby to prove it!).

=== unicorn 0.95.3 / 2009-12-21 21:51 UTC

The HTTP parser now allows (but does not parse) the userinfo
component in the very rare requests that send absoluteURIs.  Thanks
to Scott Chacon for reporting and submitting a test case for this
fix.

There are also minor documentation updates and tiny cleanups.

=== unicorn 0.95.2 / 2009-12-07 09:52 UTC

Small fixes to our HTTP parser to allow semicolons in PATH_INFO as
permitted by RFC 2396, section 3.3.  This is low impact for
existing apps, as semicolons are rarely seen in URIs.

Our HTTP parser runs properly under Rubinius 0.13.0 and 1.0.0-rc1
again (though not yet the rest of the server, since we rely heavily
on signals).

Another round of small documentation tweaks and minor cleanups.

=== unicorn 0.95.1 / 2009-11-21 21:13 UTC

Configuration file paths given on the command-line are no longer
expanded.  This should make configuration reloads possible when a
non-absolute path is specified for --config-file and Unicorn was
deployed to a symlinked directory (as with Capistrano).  Since
deployments have always been strongly encouraged to use absolute
paths in the config file, this change does not affect them.

This is our first gem release using gemcutter.

Eric Wong (3):
  SIGNALS: HUP + preload_app cannot reload app code
  Do not expand paths given on the shell
  GNUmakefile: prep release process for gemcutter

=== unicorn 0.95.0 / 2009-11-15 22:21 UTC

Mostly internal cleanups and documentation updates.
Irrelevant stacktraces from client disconnects/errors while reading
"rack.input" are now cleared to avoid unnecessary noise.

If user switching in workers is used, ownership of logs is now
preserved when reopening worker logs (send USR1 only to the master
in this case).

The timeout config no longer affects long after_fork hooks or
application startups.

New features include the addition of the :umask option for the
"listen" config directive and error reporting for non-portable
socket options.

No ponies have ever been harmed in our development.

Eric Wong (28):
  unicorn.1: document RACK_ENV changes in 0.94.0
  HACKING: update with "gmake" in examples
  don't nuke children for long after_fork and app loads
  local.mk.sample: steal some updates from Rainbows!
  Load Unicorn constants when building app
  tee_input: fix RDoc argument definition for tee
  Add FAQ
  FAQ: fix links to Configurator docs
  tee_input: better premature disconnect handling
  tee_input: don't shadow struct members
  raise Unicorn::ClientShutdown if client aborts in TeeInput
  tee_input: fix comment from an intermediate commit
  FAQ: additional notes on getting HTTPS redirects right
  configurator: update RDoc and comments in examples
  bump version to 0.95.0pre
  configurator: listen :umask parameter for UNIX sockets
  preserve user/group ownership when reopening logs
  old_rails/static: avoid freezing strings
  old_rails: autoload Static
  const: no need to freeze HTTP_EXPECT
  test_server: ensure stderr is written to before reading
  tee_input: expand client error handling
  replace "rescue => e" with "rescue Object => e"
  socket_helper: do not hide errors when setting socket options
  socket_helper: RDoc for constants
  ClientShutdown: RDoc
  Rakefile: add raa_update task
  tee_input: client_error always raises

=== unicorn 0.94.0 / 2009-11-05 09:52 UTC

The HTTP parser is fixed for oddly-aligned reads of trailers (this
technically affects headers, too, but is highly unlikely due to our
non-support of slow clients).
This allows our HTTP parser to better support very slow clients
when used by other servers (like Rainbows!).  Fortunately, this bug
does not appear to lead to any invalid memory accesses (and
potential arbitrary code execution).

FreeBSD (and possibly other *BSDs) support is improved and all the
test cases pass under FreeBSD 7.2.  Various flavors of GNU/Linux
remain our primary platform for development and production.

New features added include the "working_directory" directive in the
configurator.  Even without specifying a "working_directory",
symlink-aware detection of the current path no longer depends on
/bin/sh, so it should work out-of-the-box on FreeBSD and Solaris,
and not just systems where /bin/sh is dash, ksh93 or bash.

User switching is finally supported, but it is only intended for
use in the after_fork hook of worker processes.  Putting it in the
after_fork hook allows users to set things like CPU affinity[1] on
a per-worker basis before dropping privileges.  The master process
retains all privileges it started with.

The ENV["RACK_ENV"] (process-wide) environment variable is now both
read and set for `unicorn' in the same way RAILS_ENV is used by
`unicorn_rails'.  This allows the Merb launcher to read
ENV["RACK_ENV"] in config.ru.  Other web servers already set this,
and there may be applications or libraries that already rely on
this de facto standard.
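A minimal config sketch combining these 0.94.0 features follows;
the paths, user and group names below are hypothetical examples,
not anything Unicorn ships with:

```ruby
# config/unicorn.rb -- hypothetical example; adjust paths/users to taste
working_directory "/srv/app/current" # symlink-aware, no /bin/sh needed

worker_processes 4

after_fork do |server, worker|
  # privileged per-worker setup (e.g. CPU affinity) would go here,
  # before dropping privileges; the master keeps its privileges
  worker.user("www-data", "www-data") # user switching, workers only
end
```

The file is evaluated by the Unicorn configurator DSL, so it is not
standalone Ruby; pass it via `unicorn -c config/unicorn.rb`.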
Eric Wong (26):
  cleanup: avoid redundant error checks for fstat
  test_helper: connect(2) may fail with EINVAL
  GNUmakefile: fix non-portable tar(1) usage
  tests: provide a pure Ruby setsid(8) equivalent
  more portable symlink awareness for START_CTX[:cwd]
  test_signals: avoid portability issues with fchmod(2)
  cleanup error handling and make it less noisy
  Do not override Dir.chdir in config files
  configurator: add "working_directory" directive
  configurator: working_directory is expanded
  configurator: set ENV["PWD"] with working_directory, too
  configurator: working_directory affects pid, std{err,out}_paths
  configurator: update documentation for working_directory
  TODO: remove working_directory bit, done
  Util.reopen_logs: remove needless Range
  worker: user/group switching for after_fork hooks
  Fix autoload of Etc in Worker for Ruby 1.9
  bin/unicorn: allow RACK_ENV to be passed from parent
  tests for RACK_ENV preservation
  http: allow headers/trailers to be written byte-wise
  http: extra test for bytewise chunked bodies
  tee_input: do not clobber trailer buffer on partial uploads
  test_exec: ensure master is killed after test
  Util::tmpio returns a TmpIO that responds to #size
  TODO: remove user-switching bit, done
  unicorn 0.94.0

Wayne Larsen (1):
  bin/unicorn: set ENV["RACK_ENV"] on startup

[1] - Unicorn does not support CPU affinity directly, but it is
      possible to load code that allows it inside after_fork hooks,
      or even just call schedtool(8).

=== unicorn 0.93.5 / 2009-10-29 21:41 UTC

This release fixes a regression introduced in 0.93.3 where
timed-out worker processes run a chance of not being killed off at
all if they're hung.
While it's not ever advisable to have requests take a long time, we
realize it's not always easy to fix everything :)

Eric Wong (3):
  TODO: remove --kill
  fix reliability of timeout kills
  TODO: update for next version (possibly 1.0-pre)

=== unicorn 0.93.4 / 2009-10-27 07:57 UTC

This release mainly works around BSD stdio compatibility issues
that affect at least FreeBSD and OS X.  While this issue was
documented and fixed in [ruby-core:26300][1], no production release
of MRI 1.8 has it, and users typically upgrade MRI more slowly than
gems.  This issue does NOT affect 1.9 users.  Thanks to Vadim
Spivak for reporting and testing this issue and Andrey Stikheev for
the fix.

Additionally there are small documentation bits, one error handling
improvement, and one minor change that should improve reliability
of signal delivery.

Andrey Stikheev (1):
  workaround FreeBSD/OSX IO bug for large uploads

Eric Wong (7):
  DESIGN: address concerns about on-demand and thundering herd
  README: alter reply conventions for the mailing list
  configurator: stop testing for non-portable listens
  KNOWN_ISSUES: document Rack gem issue w/Rails 2.3.2
  stop continually resends signals during shutdowns
  add news bodies to site NEWS.atom.xml
  configurator: fix broken example in RDoc

Suraj N. Kurapati (1):
  show configuration file path in errors instead of '(eval)'

[1] http://redmine.ruby-lang.org/issues/show/2267

=== unicorn 0.93.3 / 2009-10-09 22:50 UTC

This release fixes compatibility with OpenBSD (and possibly other
Unices with stricter fchmod(2) implementations) thanks to Jeremy
Evans.  Additionally there are small documentation changes all
around.

Eric Wong (12):
  doc: expand on the SELF_PIPE description
  fchmod heartbeat flips between 0/1 for compatibility
  examples/init.sh: remove "set -u"
  configurator: update with nginx fail_timeout=0 example
  PHILOSOPHY: clarify experience other deployments
  PHILOSOPHY: plug the Rainbows! spin-off project
  README: remove unnecessary and extraneous dash
  DESIGN: clarification and possibly improve HTML validity
  README: remove the "non-existent" part
  README: emphasize the "fast clients"-only part
  drop the whitespace cleaner for Ragel->C
  unicorn 0.93.3

=== unicorn 0.93.2 / 2009-10-07 08:45 UTC

Avoid truncated POST bodies from URL-encoded forms in Rails by
switching TeeInput to use read-in-full semantics (only) when a
Content-Length: header exists.  Chunked request bodies continue to
exhibit readpartial semantics to support simultaneous bidirectional
chunking.

The lack of return value checking in Rails to protect against a
short ios.read(length) is entirely reasonable even if not
pedantically correct.  Most ios.read(length) implementations return
the full amount requested except right before EOF.

Also there are some minor documentation improvements.

Eric Wong (8):
  Fix NEWS generation on single-paragraph tag messages
  Include GPLv2 in docs
  doc: make it clear contributors retain copyrights
  TODO: removed Rainbows! (see rainbows.rubyforge.org)
  Document the START_CTX hash contents
  more-compatible TeeInput#read for POSTs with Content-Length
  tests for read-in-full vs readpartial semantics
  unicorn 0.93.2

=== unicorn 0.93.1 / 2009-10-03 01:17 UTC

Fix permissions for release tarballs/gems, no other changes.
Thanks to Jay Reitz for reporting this.

=== unicorn 0.93.0 / 2009-10-02 21:04 UTC

The one minor bugfix is only for Rails 2.3.x+ users who set the
RAILS_RELATIVE_URL_ROOT environment variable in a config file.
Users of the "--path" switch or those who set the environment
variable in the shell were unaffected by this bug.  Note that we
still don't have relative URL root support for Rails < 2.3, and are
unlikely to bother with it unless there is visible demand for it.

New features include support for :tries and :delay when specifying
a "listen" in an after_fork hook.
This was inspired by Chris Wanstrath's example of binding
per-worker listen sockets in a loop while migrating (or upgrading)
Unicorn.  Setting a negative value for :tries means we'll retry the
listen indefinitely until the socket becomes available.  So you can
do something like this in an after_fork hook:

  after_fork do |server, worker|
    addr = "127.0.0.1:#{9293 + worker.nr}"
    server.listen(addr, :tries => -1, :delay => 5)
  end

There's also the usual round of added documentation, packaging
fixes, code cleanups, small fixes and minor performance
improvements that are viewable in the "git log" output.

Eric Wong (55):
  build: hardcode the canonical git URL
  build: manifest dropped manpages
  build: smaller ChangeLog
  doc/LATEST: remove trailing newline
  http: don't force -fPIC if it can't be used
  .gitignore on *.rbc files Rubinius generates
  README/gemspec: a better description, hopefully
  GNUmakefile: add missing .manifest dep on test installs
  Add HACKING document
  configurator: fix user switch example in RDoc
  local.mk.sample: time and perms enforcement
  unicorn_rails: show "RAILS_ENV" in help message
  gemspec: compatibility with older Rubygems
  Split out KNOWN_ISSUES document
  KNOWN_ISSUES: add notes about the "isolate" gem
  gemspec: fix test_files regexp match
  gemspec: remove tests that fork from test_files
  test_signals: ensure we can parse pids in response
  GNUmakefile: cleanup test/manifest generation
  util: remove APPEND_FLAGS constant
  http_request: simplify and remove handle_body method
  http_response: simplify and remove const dependencies
  local.mk.sample: fix .js times
  TUNING: notes about benchmarking a high :backlog
  HttpServer#listen accepts :tries and :delay parameters
  "make install" avoids installing multiple .so objects
  Use Configurator#expand_addr in HttpServer#listen
  configurator: move initialization stuff to #initialize
  Remove "Z" constant for binary strings
  cgi_wrapper: don't warn about stdoutput usage
  cgi_wrapper: simplify status handling in response
  cgi_wrapper: use Array#concat instead of +=
  server: correctly unset reexec_pid on child death
  configurator: update and modernize examples
  configurator: add colons in front of listen() options
  configurator: remove DEFAULT_LOGGER constant
  gemspec: clarify commented-out licenses section
  Add makefile targets for non-release installs
  cleanup: use question mark op for 1-byte comparisons
  RDoc for Unicorn::HttpServer::Worker
  small cleanup to pid file handling + documentation
  rails: RAILS_RELATIVE_URL_ROOT may be set in Unicorn config
  unicorn_rails: undeprecate --path switch
  manpages: document environment variables
  README: remove reference to different versions
  Avoid a small window when a pid file can be empty
  configurator: update some migration examples
  configurator: listen :delay must be Numeric
  test: don't rely on .manifest for test install
  SIGNALS: state that we stole semantics from nginx
  const: DEFAULT_PORT as a string doesn't make sense
  test_helper: unused_port rejects 8080 unconditionally
  GNUmakefile: SINCE variable may be unset
  tests: GIT-VERSION-GEN is a test install dependency
  unicorn 0.93.0

=== unicorn 0.92.0 / 2009-09-18 21:40 UTC

Small fixes and documentation are the focus of this release.

James Golick reported and helped me track down a bug that caused
SIGHUP to drop the default listener (0.0.0.0:8080) if and only if
listeners were completely unspecified in both the command-line and
the Unicorn config file.  The Unicorn config file remains the
recommended option for specifying listeners, as it allows
fine-tuning of the :backlog, :rcvbuf, :sndbuf, :tcp_nopush, and
:tcp_nodelay options.

There are some documentation (and resulting website) improvements.
setup.rb users will notice the new section 1 manpages for `unicorn`
and `unicorn_rails`; Rubygems users will have to install manpages
manually or use the website.

The HTTP parser got a 3rd-party code review which resulted in some
cleanups and one insignificant bugfix.
Additionally, the HTTP parser compiles, runs and passes unit tests
under Rubinius.  The pure-Ruby parts still do not work yet, and we
currently lack the resources/interest to pursue this further, but
help will be gladly accepted.

The website now has an Atom feed for new release announcements.
Those unfamiliar with Atom or HTTP may finger unicorn@bogomips.org
for the latest announcements.

Eric Wong (53):
  README: update with current version
  http: cleanup and avoid potential signedness warning
  http: clarify the setting of the actual header in the hash
  http: switch to macros for bitflag handling
  http: refactor keepalive tracking to functions
  http: use explicit elses for readability
  http: remove needless goto
  http: extra assertion when advancing p manually
  http: verbose assertions
  http: NIL_P(var) instead of var == Qnil
  http: rb_gc_mark already ignores immediates
  http: ignore Host: continuation lines with absolute URIs
  doc/SIGNALS: fix the no-longer-true bit about socket options
  "encoding: binary" comments for all sources (1.9)
  http_response: don't "rescue nil" for body.close
  CONTRIBUTORS: fix capitalization for why
  http: support Rubies without the OBJ_FROZEN macro
  http: define OFFT2NUM macro on Rubies without it
  http: no-op rb_str_modify() for Rubies without it
  http: compile with -fPIC
  http: use rb_str_{update,flush} if available
  http: create a new string buffer on empty values
  Update documentation for Rubinius support status
  http: cleanup assertion for memoized header strings
  http: add #endif comment labels where appropriate
  Add .mailmap file for "git shortlog" and other tools
  Update Manifest with mailmap
  Fix comment about speculative accept()
  SIGNALS: use "Unicorn" when referring to the web server
  Add new Documentation section for manpages
  test_exec: add extra tests for HUP and preload_app
  socket_helper: (FreeBSD) don't freeze the accept filter constant
  Avoid freezing objects that don't benefit from it
  SIGHUP no longer drops lone, default listener
  doc: generate ChangeLog and NEWS file for RDoc
  Remove Echoe and roll our own packaging/release...
  unicorn_rails: close parentheses in help message
  launchers: deprecate ambiguous -P/--p* switches
  man1/unicorn: avoid unnecessary emphasis
  Add unicorn_rails(1) manpage
  Documentation: don't force --rsyncable flag with gzip(1)
  Simplify and standardize manpages build/install
  GNUmakefile: package .tgz includes all generated files
  doc: begin integration of HTML manpages into RDoc
  Update TODO
  html: add Atom feeds
  doc: latest news is available through finger
  NEWS.atom: file timestamp matches latest entry
  pandoc needs the standalone switch for manpages
  man1/unicorn: split out RACK ENVIRONMENT section
  man1/unicorn_rails: fix unescaped underscore
  NEWS.atom.xml only lists the first 10 entries
  unicorn 0.92.0

=== unicorn 0.10.3r / 2009-09-09 00:09 UTC

Removes the Rev monkey patch; rev 0.3.0 is out now, so we can just
depend on that instead of monkey patching it.  Experimental HTTP
keepalive/pipelining support has arrived as well.

Three features from mainline Unicorn are now working again with
this branch:

* Deadlocked workers can be detected by the master and nuked

* multiple (TCP) listeners per process

* graceful shutdown

This (pre-)release does NOT feature the HTTP/0.9 support that
Unicorn 0.91.0 had; expect that when this branch is ready for
merging with mainline.

=== unicorn 0.91.0 / 2009-09-04 19:04 UTC

HTTP/0.9 support, multiline header support, small fixes

18 years too late, Unicorn finally gets HTTP/0.9 support, as HTTP
was first implemented in 1991.
Eric Wong (16):
  Documentation updates
  examples/echo: "Expect:" value is case-insensitive
  http: make strings independent before modification
  http: support for multi-line HTTP headers
  tee_input: fix rdoc
  unicorn_http: "fix" const warning
  http: extension-methods allow any tokens
  http: support for simple HTTP/0.9 GET requests
  test_http_parser_ng: fix failing HTTP/0.9 test case
  launcher: defer daemonized redirects until config is read
  test to ensure stderr goes *somewhere* when daemonized
  http: SERVER_PROTOCOL matches HTTP_VERSION
  http: add HttpParser#headers? method
  Support HTTP/0.9 entity-body-only responses
  Redirect files in binary mode
  unicorn 0.91.0

=== unicorn v0.10.2r --rainbows / 2009-08-18 22:28 UTC

Two botched releases in one day, hopefully this is the last...

Eric Wong (3):
  rainbows: monkey-patch Rev::TCPListener for now
  rainbows: make the embedded SHA1 app Rack::Lint-safe
  unicorn 0.10.2r

=== unicorn 0.10.1r --rainbows / 2009-08-18 22:01 UTC

Ruby 1.9 only, again

Eric Wong (2):
  Actually hook up Rainbows to the rest of the beast
  unicorn 0.10.1r

=== unicorn 0.10.0r -- rainbows! / 2009-08-18 21:41 UTC

This "release" is for Ruby 1.9 only

=== unicorn 0.90.0 / 2009-08-17 00:24 UTC

switch chunking+trailer handling to Ragel, v0.8.4 fixes

Moved chunked decoding and trailer parsing over to C/Ragel.  Minor
bug fixes, internal code cleanups, and API changes.
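As a rough pure-Ruby illustration of what the Ragel/C parser does
with a chunked request body (a simplified sketch only, not
Unicorn's actual implementation; it ignores trailers and chunk
extensions):

```ruby
require 'stringio'

# Minimal "Transfer-Encoding: chunked" decoder: each chunk is a
# hex size line, the payload, and a trailing CRLF; a zero-size
# chunk terminates the body.
def decode_chunked(io)
  body = ''.dup
  loop do
    size = io.readline.split(';').first.to_i(16) # e.g. "5\r\n" => 5
    break if size.zero?                          # "0\r\n" ends the body
    body << io.read(size)
    io.read(2)                                   # consume trailing CRLF
  end
  body
end

chunked = StringIO.new("5\r\nhello\r\n6\r\n world\r\n0\r\n\r\n")
decode_chunked(chunked) # => "hello world"
```

The real parser additionally enforces length limits and parses
trailers, which is why it lives in C/Ragel rather than Ruby.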
Eric Wong (55):
  README: update version numbers for website
  Update Rails tests to run on Rails 2.3.3.1
  README: latest stable version is 0.8.4
  unicorn_http: small cleanups and size reduction
  Remove Ragel-generated file from version control
  unicorn_http: remove typedef from http_parser
  unicorn_http: update copyright
  unicorn_http: change "global_" prefix to "g_"
  unicorn_http: add helpful macros
  extconf: SIZEOF_OFF_T should be a ruby.h macro
  Refactoring unicorn_http C/Ragel code
  http: find_common_field_value => find_common_field
  http: split uncommon_field into a separate function
  http: remove some redundant functions
  http: "hp" denotes http_parser structs for consistency
  http: small cleanup in "https" detection
  http: minor cleanup of http_field handling
  http: split out server params handling
  http: move global initialization code
  http: cleanup setting for common values => globals
  http: remove noise functions
  http: move non-Ruby-specific macros c_util.h
  http: prepare http_parser struct for body processing
  http: generic C string vs VALUEs comparison function
  http: process Content-Length and Transfer-Encoding
  http: preliminary chunk decoding
  test_upload: extra CRLF is needed
  Switch to Ragel/C-based chunk/trailer parser
  http: unit tests for overflow and bad lengths
  http: add test for invalid trailer
  http: join repeated headers with a comma
  test_util: explicitly close tempfiles for GC-safety
  test_exec: wait for worker readiness
  Documentation updates
  test_signals: unlink log files of KILL-ed process
  http: rename read_body to filter_body
  http: add CONST_MEM_EQ macro
  http: add "HttpParser#keepalive?" method
  http: freeze fields when creating them, always
  README: everybody loves Ruby DSLs
  http_request: reinstate empty StringIO optimization
  tee_input: make interface more usable outside of Unicorn
  Drop the micro benchmarks
  http: fix warning when sizeof(off_t) == sizeof(long long)
  GNUmakefile: Fix "install" target
  Fix documentation for Util.reopen_logs
  http_response: pass through unknown status codes
  const: remove unused constants
  update TODO
  http: support for "Connection: keep-alive"
  TODO: remove keep-alive/pipelining
  Make launchers __END__-aware
  Remove explicit requires for Rack things
  app/inetd: explicitly close pipe descriptors on CatBody#close
  unicorn 0.90.0

=== unicorn 0.8.4 / 2009-08-06 22:48 UTC

pass through unknown HTTP status codes

This release allows graceful degradation in case a user is using a
status code not defined by Rack::Utils::HTTP_STATUS_CODES.  A patch
has been submitted[1] upstream to Rack, but this issue may still
affect users of yet-to-be-standardized status codes.

Eric Wong (2):
  http_response: pass through unknown status codes
  unicorn 0.8.4

[1] - http://rack.lighthouseapp.com/projects/22435-rack/tickets/70

=== unicorn 0.9.2 / 2009-07-20 01:29 UTC

Ruby 1.9.2 preview1 compatibility

This release mainly fixes compatibility issues with the Ruby 1.9.2
preview1 release (and one existing 1.9.x issue).  Note that Rails
2.3.2.1 does NOT appear to work with Ruby 1.9.2 preview1, but that
is outside the scope of this project.

The 0.9.x series (including this release) is only recommended for
development/experimental use.  This series is NOT recommended for
production use; use 0.8.x instead.
Eric Wong (10):
  README: add Gmane newsgroup info
  README: update about development/stable versions
  Rename unicorn/http11 => unicorn_http
  move all #gets logic to tee_input out of chunked_reader
  http_request: don't support apps that close env["rack.input"]
  HttpRequest: no need for a temporary variable
  Remove core Tempfile dependency (1.9.2-preview1 compat)
  fix tests to run correctly under 1.9.2preview1
  app/exec_cgi: fix 1.9 compatibility
  unicorn 0.9.2

=== unicorn 0.8.3 / 2009-07-20 01:26 UTC

Ruby 1.9.2 preview1 compatibility

This release fixes compatibility issues with the Ruby 1.9.2
preview1 release (and one existing 1.9.x issue).  Note that Rails
2.3.2.1 does NOT appear to work with Ruby 1.9.2 preview1, but that
is outside the scope of this project.

Eric Wong (4):
  Remove core Tempfile dependency (1.9.2-preview1 compat)
  fix tests to run correctly under 1.9.2preview1
  app/exec_cgi: fix 1.9 compatibility
  unicorn 0.8.3

=== unicorn 0.8.2 / 2009-07-09 08:59 UTC

socket handling bugfixes and usability tweaks

Socket handling bugfixes and socket-related usability and
performance tweaks.  We no longer trust FD_CLOEXEC to be inherited
across accept(); thanks to Paul Sponagl for diagnosing this issue
on OSX.  There are also minor tweaks backported from 0.9.0 to make
non-graceful restarts/upgrades go more smoothly.

Eric Wong (6):
  Unbind listeners as before stopping workers
  Retry listen() on EADDRINUSE 5 times every 500ms
  Re-add support for non-portable socket options
  Minor cleanups to core
  always set FD_CLOEXEC on sockets post-accept()
  unicorn 0.8.2

=== unicorn 0.9.1 / 2009-07-09 08:49 UTC

FD_CLOEXEC portability fix (v0.8.2 port)

Minor cleanups; set FD_CLOEXEC on accepted listen sockets instead
of relying on the flag to be inherited across accept().  The 0.9.x
series (including this release) is NOT recommended for production
use; try 0.8.x instead.
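Setting the flag explicitly after accept() amounts to the following
sketch of the technique (a socketpair stands in for an accepted
client socket; the helper name is ours, not Unicorn's):

```ruby
require 'socket'
require 'fcntl'

# Explicitly mark a freshly-accepted socket close-on-exec instead
# of trusting FD_CLOEXEC to be inherited across accept(2).
def set_cloexec(io)
  io.fcntl(Fcntl::F_SETFD, io.fcntl(Fcntl::F_GETFD) | Fcntl::FD_CLOEXEC)
  io
end

a, b = Socket.pair(:UNIX, :STREAM)
set_cloexec(a) # "a" will now be closed in any exec'd child process
```

Without the flag, descriptors leak into processes spawned during
binary upgrades, which is exactly the bug worked around here.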
Eric Wong (10):
  Avoid temporary array creation
  Favor Struct members to instance variables
  Minor cleanups to core
  Unbind listeners as before stopping workers
  Retry listen() on EADDRINUSE 5 times ever 500ms
  Re-add support for non-portable socket options
  Minor cleanups to core
    (cherry picked from commit ec70433f84664af0dff1336845ddd51f50a714a3)
  always set FD_CLOEXEC on sockets post-accept()
  unicorn 0.8.2
  unicorn 0.9.1 (merge 0.8.2)

=== unicorn 0.9.0 / 2009-07-01 22:24 UTC

bodies: "Transfer-Encoding: chunked", rewindable streaming

We now have support for "Transfer-Encoding: chunked" bodies in
requests.  Not only that, Rack applications reading input bodies
get that data streamed off to the client socket on an as-needed
basis.  This allows the application to do things like upload
progress notification and even tunneling of arbitrary stream-based
protocols over HTTP.  See Unicorn::App::Inetd and examples/git.ru
(including the comments) for an example of tunneling the git://
protocol over HTTP.

This release also gives applications the ability to respond
positively to "Expect: 100-continue" headers before being rerun
without closing the socket connection.  See Unicorn::App::Inetd for
an example of how this is used.

This release is NOT recommended for production use.
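Streaming input means an application can observe upload progress
while the body is still arriving; a minimal sketch (StringIO stands
in for the real env["rack.input"] object, and the progress callback
is hypothetical):

```ruby
require 'stringio'

# Read a Rack input body piecewise, as a streaming-aware app might,
# invoking an optional progress callback after each chunk.
def consume_input(input, chunk_size = 16 * 1024)
  total = 0
  while buf = input.read(chunk_size)  # nil at EOF
    total += buf.bytesize
    yield total if block_given?       # e.g. report upload progress
  end
  total
end

input = StringIO.new("x" * 40_000)    # stand-in for "rack.input"
consume_input(input) { |n| }          # callback sees bytes read so far
```

With rewindable streaming, the same data is also teed to a
temporary file so a later input.rewind still works.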
Eric Wong (43):
  http_request: no need to reset the request
  http_request: StringIO is binary for empty bodies (1.9)
  http_request: fix typo for 1.9
  Transfer-Encoding: chunked streaming input support
  Unicorn::App::Inetd: reinventing Unix, poorly :)
  README: update with mailing list info
  local.mk.sample: publish_doc gzips all html, js, css
  Put copyright text in new files, include GPL2 text
  examples/cat-chunk-proxy: link to proposed curl(1) patch
  Update TODO
  Avoid duplicating the "Z" constant
  Optimize body-less GET/HEAD requests (again)
  tee_input: Don't expose the @rd object as a return value
  exec_cgi: small cleanups
  README: another note about older Sinatra
  tee_input: avoid defining a @rd.size method
  Make TeeInput easier to use
  test_upload: add tests for chunked encoding
  GNUmakefile: more stringent error checking in tests
  test_upload: fix ECONNRESET with 1.9
  GNUmakefile: allow TRACER= to be specified for tests
  test_rails: workaround long-standing 1.9 bug
  tee_input: avoid rereading fresh data
  "Fix" tests that break with stream_input=false
  inetd: fix broken constant references
  configurator: provide stream_input (true|false) option
  chunked_reader: simpler interface
  http_request: force BUFFER to be Encoding::BINARY
  ACK clients on "Expect: 100-continue" header
  Only send "100 Continue" when no body has been sent
  http_request: tighter Transfer-Encoding: "chunked" check
  Add trailer_parser for parsing trailers
  chunked_reader: Add test for chunk parse failure
  TeeInput: use only one IO for tempfile
  trailer_parser: set keys with "HTTP_" prefix
  TrailerParser integration into ChunkedReader
  Unbind listeners as before stopping workers
  Retry listen() on EADDRINUSE 5 times ever 500ms
  Re-add support for non-portable socket options
  Move "Expect: 100-continue" handling to the app
  tee_input: avoid ignoring initial body blob
  Force streaming input onto apps by default
  unicorn 0.9.0

=== unicorn 0.8.1 / 2009-05-28 21:45 UTC

safer timeout handling, more consistent reload behavior

This release features safer, more descriptive timeout handling,
more consistent reload behavior, and is a minuscule amount faster
on "Hello World" benchmarks.

Eric Wong (7):
  doc: cleanup summary/description
  Fix potential race condition in timeout handling
  SIGHUP reloads app even if preload_app is true
  Make our HttpRequest object a global constant
  Avoid instance variables lookups in a critical path
  Consistent logger assignment for multiple objects
  unicorn 0.8.1

=== unicorn 0.8.0 / 2009-05-26 22:59 UTC

enforce Rack dependency, minor performance improvements and fixes

The RubyGem now has a hard dependency on Rack.  Minor performance
improvements and code cleanups.  If RubyGems are in use, the Gem
index is refreshed when SIGHUP is issued.

Eric Wong (66):
  test_request: enable with Ruby 1.9 now Rack 1.0.0 is out
  Small cleanup
  test_upload: still uncomfortable with 1.9 IO encoding...
  Add example init script
  app/exec_cgi: GC prevention
  Add TUNING document
  Make speculative accept() faster for the common case
  app/old_rails: correctly log errors in output
  http_request: avoid StringIO.new for GET/HEAD requests
  http_response: luserspace buffering is barely faster
  benchmark/*: updates for newer versions of Unicorn
  http_request: switch to readpartial over sysread
  No point in unsetting the O_NONBLOCK flag
  Merge commit 'origin/benchmark'
  Safer timeout handling and test case
  Ignore unhandled master signals in the workers
  TUNING: add a note about somaxconn with UNIX sockets
  Remove redundant socket closing/checking
  Instant shutdown signals really mean instant shutdown
  test_signals: ready workers before connecting
  Speed up the worker accept loop
  Fix a warning about @pid being uninitialized
  Inline and remove the HttpRequest#reset method
  Preserve 1.9 IO encodings in reopen_logs
  configurator: fix rdoc formatting
  http_request: use Rack::InputWrapper-compatible methods
  app/exec_cgi: use explicit buffers for read/sysread
  Enforce minimum timeout at 3 seconds
  Avoid killing sleeping workers
  Remove trickletest
  HttpRequest::DEF_PARAMS => HttpRequest::DEFAULTS
  exec_cgi: don't assume the body#each consumer is a socket
  Reopen master logs on SIGHUP, too
  Require Rack for HTTP Status codes
  http_response: allow string status codes
  test_response: correct OFS test
  privatize constants only used by old_rails/static
  Disable formatting for command-line switches
  GNUmakefile: glob all files in bin/*
  test_request: enable with Ruby 1.9 now Rack 1.0.0 is out
  test_upload: still uncomfortable with 1.9 IO encoding...
  Add example init script
  app/exec_cgi: GC prevention
  Add TUNING document
  app/old_rails: correctly log errors in output
  Safer timeout handling and test case
  Ignore unhandled master signals in the workers
  TUNING: add a note about somaxconn with UNIX sockets
  Fix a warning about @pid being uninitialized
  Preserve 1.9 IO encodings in reopen_logs
  configurator: fix rdoc formatting
  Enforce minimum timeout at 3 seconds
  http_response: allow string status codes
  test_response: correct OFS test
  Disable formatting for command-line switches
  GNUmakefile: glob all files in bin/*
  Merge branch '0.7.x-stable'
  Define HttpRequest#reset if missing
  Merge branch 'benchmark'
  unicorn 0.7.1
  Merge commit 'v0.7.1'
  Refresh Gem list when building the app
  Only refresh the gem list when building the app
  Switch to autoload to defer requires
  remove trickletest from Manifest
  unicorn 0.8.0

=== unicorn 0.7.1 / 2009-05-22 09:06 UTC

minor fixes, cleanups and documentation improvements

Eric Wong (18):
  test_request: enable with Ruby 1.9 now Rack 1.0.0 is out
  test_upload: still uncomfortable with 1.9 IO encoding...
      Add example init script
      app/exec_cgi: GC prevention
      Add TUNING document
      app/old_rails: correctly log errors in output
      Safer timeout handling and test case
      Ignore unhandled master signals in the workers
      TUNING: add a note about somaxconn with UNIX sockets
      Fix a warning about @pid being uninitialized
      Preserve 1.9 IO encodings in reopen_logs
      configurator: fix rdoc formatting
      Enforce minimum timeout at 3 seconds
      http_response: allow string status codes
      test_response: correct OFS test
      Disable formatting for command-line switches
      GNUmakefile: glob all files in bin/*
      unicorn 0.7.1

=== unicorn 0.7.0 / 2009-04-25 18:59 UTC

  rack.version is 1.0

  Rack 1.0.0 compatibility, applications are now passed
  env["rack.version"] == [1, 0]

  Eric Wong (5):
      doc: formatting changes for SIGNALS doc
      configurator: "listen" directive more nginx-like
      Fix log rotation being delayed in workers when idle
      Rack 1.0.0 compatibility
      unicorn 0.7.0

=== unicorn 0.6.0 / 2009-04-24 21:47 UTC

  cleanups + optimizations, signals to {in,de}crement processes

  * Mostly OCD-induced yak-shaving changes

  * SIGTTIN and SIGTTOU are now used to control incrementing and
    decrementing of worker processes without needing to change the
    config file and SIGHUP.
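  (Editor's illustration, not part of the original release notes:
  adjusting the worker count this way is just a matter of signaling
  the unicorn master process.  The sketch below is hedged: the
  pid-file path in the comment is hypothetical, and the current
  process stands in for the master so the signal delivery can be
  observed without running unicorn itself.)

```ruby
# Sketch of the SIGTTIN/SIGTTOU mechanism: TTIN asks the master for
# one more worker, TTOU for one fewer.  In a real deployment the
# master PID would come from a pid file, e.g. (path illustrative):
#   master_pid = File.read("/var/run/unicorn.pid").to_i
master_pid = Process.pid  # stand-in so this sketch is self-contained

received = []
trap("TTIN") { received << :ttin }  # a real master spawns a worker here
trap("TTOU") { received << :ttou }  # a real master reaps a worker here

Process.kill("TTIN", master_pid)
Process.kill("TTOU", master_pid)
sleep 0.1  # give the signal handlers a chance to run
```

  (The same effect is achieved from a shell with kill -TTIN / kill -TTOU
  against the master PID.)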
  Eric Wong (46):
      test_upload: ensure StringIO objects are binary
      http11: cleanup #includes and whitespace
      GNUmakefile: Fix ragel dependencies
      GNUmakefile: kill trailing whitespace after ragel
      Move absolute URI parsing into HTTP parser
      http11: remove unused variables/elements
      http_request: freeze modifiable elements
      HttpParser: set QUERY_STRING for Rack-compliance
      GNUmakefile: mark test_signals as a slow test
      const: remove unused QUERY_STRING constant
      http11: formatting cleanups
      http11: remove callbacks from structure
      replace DATA_GET macro with a function
      http11: minor cleanups in return types
      http11: make parser obey HTTP_HOST with empty port
      http11: cleanup some CPP macros
      http11: rfc2616 handling of absolute URIs
      http_response: small speedup by eliminating loop
      Stop extending core classes
      rename socket.rb => socket_helper.rb
      Remove @start_ctx instance variable
      http11: support underscores in URI hostnames
      test: empty port test for absolute URIs
      Cleanup some unnecessary requires
      Cleanup GNUmakefile and fix dependencies
      Fix data corruption with small uploads via browsers
      Get rid of UNICORN_TMP_BASE constant
      GNUmakefile: mark test_upload as a slow test
      unicorn_rails: avoid nesting lambdas
      test_exec: cleanup stale socket on exit
      Merge commit 'v0.5.4'
      http_request: micro optimizations
      IO_PURGATORY should be a global constant
      Make LISTENERS and WORKERS global constants, too
      test_socket_helper: disable GC for this test
      http_response: just barely faster
      http_response: minor performance gains
      make SELF_PIPE is a global constant
      Describe the global constants we use.
      Fixup reference to a dead variable
      Avoid getppid() if serving heavy traffic
      minor cleanups and save a few variables
      Allow std{err,out}_path to be changed via HUP
      SIGTT{IN,OU} {in,de}crements worker_processes
      cleanup: avoid duped self-pipe init/replacement logic
      unicorn 0.6.0

=== unicorn 0.5.4 / 2009-04-24 01:41 UTC

  fix data corruption with some small uploads (not curl)

  Eric Wong (2):
      Fix data corruption with small uploads via browsers
      unicorn 0.5.4

=== unicorn 0.5.3 / 2009-04-17 05:32 UTC

  fix 100% CPU usage when idle, small cleanups

  fix 100% CPU usage when idle

  Eric Wong (7):
      update Manifest (add test_signals.rb)
      Fix forgotten Rails tests
      Fix my local.mk file to always run Rails tests
      fix 100% CPU usage when idle
      remove DATE constant
      Small garbage reduction in HttpResponse
      unicorn 0.5.3

=== unicorn 0.5.2 / 2009-04-16 23:32 UTC

  force Status: header for compat, small cleanups

  * Ensure responses always have the "Status:" header.  This is
    needed for compatibility with some broken clients.

  * Other small and minor cleanups

  Eric Wong (10):
      Explicitly trap SIGINT/SIGTERM again
      s/rotating/reopening/g in log messages
      before_commit and before_exec can never be nil/false
      worker_loop cleanups, var golf, and yak-shaving
      http11: default server port is 443 for https
      ensure responses always have the "Status:" header
      test: fix dependency issue with "make test-unit"
      GNUmakefile: small dependency cleanups
      unicorn/const: kill trailing whitespace
      unicorn 0.5.2

=== unicorn 0.5.1 / 2009-04-13 21:24 UTC

  exit correctly on INT/TERM, QUIT is still recommended, however

  We now exit correctly on INT/TERM signals, QUIT is still
  recommended as it does graceful shutdowns.

  Eric Wong (2):
      Fix SIGINT/SIGTERM handling (broken in 0.5.0)
      unicorn 0.5.1

=== unicorn 0.5.0 / 2009-04-13 19:08 UTC

  {after,before}_fork API change, small tweaks/fixes

  * There is an API change in the {after,before}_fork hooks so now
    the entire Worker struct is exposed to the user.
    This allows Unicorn to unofficially support user/group
    privilege changing.

  * The "X-Forwarded-Proto:" header can be set by proxies to
    ensure rack.url_scheme is "https" for SSL-enabled sites.

  * Small cleanups and tweaks throughout, see shortlog (below) or
    changelog for details.

  Eric Wong (32):
      test_helper: redirect_io uses append and sync
      configurator: allow hooks to be passed callable objects
      Add a test for signal recovery
      Documentation updates
      Enforce umask 0000 with UNIX domain sockets
      local.mk: touch files after set-file-times
      Add test for :preload_app config option
      GNUmakefile: remove unnecessary asterisks in output
      GNUmakefile: allow "make V=1 ..." for verbosity
      test_configurator: rename test name that never ran
      cleanup some log messages
      test_request: tests esoteric/rare REQUEST_URIs
      http11: Remove qsort/bsearch code paths
      http11: handle "X-Forwarded-Proto: https"
      close listeners when removing them from our array
      config: handle listener unbind/replace in config file
      README: doc updates
      Restore unlinked UNIX sockets on SIGHUP
      listen backlog, sndbuf, rcvbuf are always changeable
      Remove _all_ non-POSIX socket options
      http11: cleanup+safer rack.url_scheme handling
      test_exec: fix potential races in fd leak test
      test_http_parser: fix broken URL in comment
      Save one fcntl() syscall on every request
      Remove unnecessary sync assignment
      Don't bother restoring ENV or umask across reexec
      old_rails: try harder to ensure valid responses
      small cleanups in signal handling and worker init
      Remove unnecessary local variables in process_client
      Expose worker to {before,after}_fork hooks
      Configurator: add example for user/group switching
      unicorn 0.5.0

=== unicorn 0.4.2 / 2009-04-02 19:14 UTC

  fix Rails ARStore, FD leak prevention, descriptive proctitles

  Eric Wong (16):
      Manifest: updates
      Merge unicorn
      test_exec: add test case for per-worker listeners
      Remove set_cloexec wrapper and require FD_CLOEXEC
      All IOs created in workers have FD_CLOEXEC set
      FD_CLOEXEC all non-listen descriptors before exec
      Close std{err,out} redirection targets
      test_upload: fix a race condition in unlink test
      More descriptive process titles
      unicorn_rails: cleanup redundant bits
      test/rails: v2.1.2 + ActiveRecordStore all around
      Use File.basename instead of a regexp
      Add log directories to tests
      unicorn: remove unnecessary lambda generation
      GNUmakefile: "install" preserves unicorn_rails
      unicorn 0.4.2

=== unicorn v0.4.1 / 2009-04-01 10:52 UTC

  Rails support, per-listener backlog and {snd,rcv}buf

  Eric Wong (50):
      All new benchmarks, old ones removed
      benchmark: header values must be strings
      Merge commit 'origin/benchmark' into release
      HttpResponse: speed up non-multivalue headers
      Streamline rack environment generation
      Don't bother unlinking UNIX sockets
      unicorn_rails: support non-Rack versions of Rails
      HttpRequest: small improvement for GET requests
      simplify the HttpParser interface
      Socket: add {snd,rcv}buf opts to bind_listen
      Merge commit 'v0.2.3'
      Don't allow failed log rotation to to break app
      Deferred log rotation in workers
      style: symbols instead of strings for signal names
      No need to disable luserspace buffering on client socket
      test_server: quieter tests
      Remove needless line break
      Always try to send a valid HTTP response back
      test_response: ensure closed socket after write
      test_response: ensure response body is closed
      TODO: update roadmap to 1.0.0
      configurator: per-listener backlog, {rcv,snd}buf config
      configurator: favor "listen" directive over "listeners"
      http11: use :http_body instead of "HTTP_BODY"
      Avoid having two pid files pointing to the same pid
      test_exec: fix race conditions
      test_exec: fix response bodies
      Fix default listener setup
      test_exec: fix another race condition
      bin/*: parse CLI switches in config.ru sooner
      app/old_rails/static: define missing constant
      unicorn_rails: give more info when aborting
      GNUmakefile: add test-exec and test-unit targets
      cgi_wrapper: ensure "Status:" header is not set
      Better canonicalization of listener paths + tests
      configurator: remove unnecessary SocketHelper include
      unicorn_rails: minor cleanup for dead variable
      Use {read,write}_nonblock on the pipe
      unicorn_rails: cleanup path mapping usage
      Rails stack tests for unicorn_rails
      test: factor out exec helpers into common code for Rails tests
      cgi_wrapper: fix cookies and other headers
      GNUmakefile: prefix errors with $(extra) variable
      cgi_wrapper: HTTP status code cleanups
      Add more tests for Rails
      test_rails: 4x speedup
      Manifest update
      Documentation updates, prep for 0.4.1 release
      Add local.mk.sample file that I use
      unicorn 0.4.1

=== unicorn v0.2.3 / 2009-03-25 23:31 UTC

  Unlink Tempfiles after use (they were closed, just not unlinked)

  Eric Wong (3):
      Don't bother unlinking UNIX sockets
      Ensure Tempfiles are unlinked after every request
      unicorn 0.2.3

=== unicorn v0.2.2 / 2009-03-22 23:45 UTC

  small bug fixes, fix Rack multi-value headers (Set-Cookie:)

  Eric Wong (19):
      Fix link to Rubyforge releases page
      start libifying common launcher code
      unicorn_rails: fix standard pid path setup
      Move listen path and address expansion to Configurator
      Trap WINCH to QUIT children without respawning
      Remove Mongrel stuff from CHANGELOG
      HttpResponse: close body if it can close
      Add Unicorn::App::ExecCgi
      Process management cleanups
      documentation/disclaimer updates
      unicorn_rails: remove unnecessary Rack-loading logic
      unicorn/http11: remove GATEWAY_INTERFACE
      http11: don't set headers Rack doesn't like
      HttpRequest test so our requests pass Rack::Lint
      HttpRequest: correctly reference logger
      Rotate master logs before workers.
      Simplify code for sleeping/waking up the master
      Handle Rack multivalue headers correctly
      unicorn 0.2.2

=== unicorn v0.2.1 / 2009-03-19 03:20 UTC

  Fix broken Manifest that caused unicorn_rails to not be bundled

  Eric Wong (1):
      unicorn v0.2.1, fix the Manifest

=== unicorn v0.2.0 / 2009-03-19 03:16 UTC

  unicorn_rails launcher script.
  Eric Wong (8):
      Start _Known Issues_ section in README
      Allow binding to UNIX sockets relative to "~"
      tests: do not trust (our correct use of) 1.9 encodings
      gracefully die if working dir is invalid at fork
      Add signal queueing for test reliability
      Add unicorn_rails script for Rails 2.3.2
      Documentation updates, prepare for 0.2.0
      unicorn 0.2.0

=== unicorn v0.1.0 / 2009-03-11 01:50 UTC

  Unicorn - UNIX-only fork of Mongrel free of threading