==== cubemap-1.0.4/.gitignore ====

cubemap
!munin/cubemap
*.o
*.d
*.pb.cc
*.pb.h

==== cubemap-1.0.4/COPYING ====

		    GNU GENERAL PUBLIC LICENSE
		       Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

			    Preamble

The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.

To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. 
(Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. 
c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. 
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. 
If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. 
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

		     END OF TERMS AND CONDITIONS

	    How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year> <name of author>

    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation; either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License along
    with this program; if not, write to the Free Software Foundation, Inc.,
    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

Also add information on how to contact you by electronic and paper mail.

If the program is interactive, make it output a short notice like this when it starts in an interactive mode:

    Gnomovision version 69, Copyright (C) year name of author
    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:

    Yoyodyne, Inc., hereby disclaims all copyright interest in the program
    `Gnomovision' (which makes passes at compilers) written by James Hacker.

    <signature of Ty Coon>, 1 April 1989
    Ty Coon, President of Vice

This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License.

==== cubemap-1.0.4/Makefile ====

CC=gcc
CXX=g++
INSTALL=install
PROTOC=protoc
CXXFLAGS=-Wall -O2 -g -pthread $(shell getconf LFS_CFLAGS)
LDLIBS=-lprotobuf -pthread -lrt
OBJS=main.o client.o server.o stream.o udpstream.o serverpool.o mutexlock.o input.o input_stats.o httpinput.o udpinput.o parse.o config.o markpool.o acceptor.o stats.o accesslog.o thread.o util.o log.o metacube2.o sa_compare.o state.pb.o

all: cubemap

%.pb.cc %.pb.h : %.proto
	$(PROTOC) --cpp_out=. $<

%.o: %.cpp state.pb.h
	$(CXX) -MMD -MP $(CPPFLAGS) $(CXXFLAGS) -o $@ -c $<

%.pb.o: %.pb.cc
	$(CXX) -MMD -MP $(CPPFLAGS) $(CXXFLAGS) -o $@ -c $<

cubemap: $(OBJS)
	$(CXX) -o cubemap $(OBJS) $(LDLIBS) $(LDFLAGS)

DEPS=$(OBJS:.o=.d)
-include $(DEPS)

clean:
	$(RM) cubemap $(OBJS) $(DEPS) state.pb.h state.pb.cc

PREFIX=/usr/local
SYSCONFDIR=/etc
LOCALSTATEDIR=/var

install:
	$(INSTALL) -m 755 -o root -g root -d \
		$(DESTDIR)$(PREFIX)/bin \
		$(DESTDIR)$(PREFIX)/share/man/man1 \
		$(DESTDIR)$(SYSCONFDIR) \
		$(DESTDIR)$(LOCALSTATEDIR)/lib/cubemap \
		$(DESTDIR)$(LOCALSTATEDIR)/log/cubemap \
		$(DESTDIR)$(PREFIX)/share/munin/plugins \
		$(DESTDIR)$(PREFIX)/lib/systemd/system
	$(INSTALL) -m 755 -o root -g root cubemap $(DESTDIR)$(PREFIX)/bin/cubemap
	$(INSTALL) -m 755 -o root -g root munin/cubemap munin/cubemap_input $(DESTDIR)$(PREFIX)/share/munin/plugins/
	sed \
		-e "s,cubemap\.stats,$(LOCALSTATEDIR)/lib/cubemap/\0,g" \
		-e "s,cubemap-input\.stats,$(LOCALSTATEDIR)/lib/cubemap/\0,g" \
		-e "s,access\.log,$(LOCALSTATEDIR)/log/cubemap/\0,g" \
		-e "s,cubemap\.log,$(LOCALSTATEDIR)/log/cubemap/\0,g" \
		-e 's,^stream,#\0,g' \
		-e 's,^udpstream,#\0,g' \
		cubemap.config.sample > $(DESTDIR)$(SYSCONFDIR)/cubemap.config
	gzip -c cubemap.1 > $(DESTDIR)$(PREFIX)/share/man/man1/cubemap.1.gz
	sed \
		-e "s,@prefix@,$(PREFIX),g" \
		-e "s,@sysconfdir@,$(SYSCONFDIR),g" \
		cubemap.service.in > $(DESTDIR)$(PREFIX)/lib/systemd/system/cubemap.service

.PHONY: clean install
.SUFFIXES:

==== cubemap-1.0.4/NEWS ====

Cubemap 1.0.4, 2014-03-23

 * Fix a segfault on reload that was introduced in 1.0.2.
 * Remove the Metacube VLC patch, as it is now upstream.
 * Always compile with large file support, which works around
   a blocking issue with 32-bit x86.

Cubemap 1.0.3, 2014-02-06

 * Fix a compilation error with newer glibc.

Cubemap 1.0.2, 2014-02-04

 * Support SO_MAX_PACING_RATE (Linux 3.13 and above).
 * Add a listen statement to listen only on specific IP addresses,
   in addition to the port statement.
 * Update the VLC Metacube patch to apply to current VLC git.
 * Fix a crash bug on reload.
 * Be more consistent about handling streams that have no data yet.
   In particular, this could show itself as erratic behavior when
   sending Metacube streams on to other Cubemap instances.

Cubemap 1.0.1, 2013-09-19

 * Added NEWS file.
 * Fix an issue where Cubemap could be slow when /tmp was slow
   (i.e., not on SSD and not on tmpfs), due to high mutex contention.
 * Fix compilation on 32-bit systems.
 * Various packaging fixes and a systemd service unit,
   contributed by Philipp Kern.
 * Use the new deleted-by-default temporary files if available
   (Linux 3.11 and above).

Cubemap 1.0.0, 2013-08-24

 * Initial release.

==== cubemap-1.0.4/README ====

Cubemap is a high-performance, high-availability video reflector, specifically made for use with VLC. A short list of features:

 - High performance, through a design with multiple worker threads, epoll and sendfile (yes, sendfile); a 2 GHz quad-core can saturate 10-gigabit Ethernet, given a modern kernel, a modern NIC and the right kernel tuning.
 - High availability. You can change any part of the configuration (and even upgrade to a newer version of Cubemap) by changing cubemap.config and sending a SIGHUP; all clients will continue as if nothing had happened (unless you delete the stream they are watching, of course). Cubemap also survives the encoder dying and reconnecting.
 - Per-stream fwmark support, for TCP pacing through tc (separate config needed).
 - Support for setting max pacing rate through the fq packet scheduler (obsoletes the previous point, but depends on Linux 3.13 or newer).
 - Reflects anything VLC can reflect over HTTP, even the muxes VLC has problems reflecting itself (in particular, FLV).
 - IPv4 support. Yes, Cubemap even supports (some) legacy protocols.
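The fq-based pacing in the feature list above comes down to a single socket option. The following is a minimal illustrative sketch (the helper name is made up; this is not Cubemap's actual code, though it uses the same fallback constant as client.cpp):

```cpp
// Sketch: cap a TCP socket's send rate via the fq packet scheduler's
// SO_MAX_PACING_RATE option (Linux 3.13 and above).
#include <stdint.h>
#include <sys/socket.h>

#ifndef SO_MAX_PACING_RATE
#define SO_MAX_PACING_RATE 47  // Same fallback Cubemap's client.cpp uses.
#endif

// Returns true if the cap was applied. On kernels before 3.13, setsockopt()
// simply fails and the socket runs unpaced, which is safe to ignore.
bool set_max_pacing_rate(int sock, uint32_t bytes_per_sec)
{
	return setsockopt(sock, SOL_SOCKET, SO_MAX_PACING_RATE,
	                  &bytes_per_sec, sizeof(bytes_per_sec)) == 0;
}
```

Since the option caps bytes per second on the socket itself, no separate tc configuration is needed, which is why it obsoletes the fwmark/tc approach.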
HOWTO:

  sudo aptitude install libprotobuf-dev protobuf-compiler
  make -j4

If you want to use HTTP input (you probably want to), you want VLC 2.2.0 or newer. Then start the VLC encoder with the “metacube” flag to the http access mux, like this:

  cvlc [...] --sout '#std{access=http{metacube,mime=video/x-flv},mux=flv,dst=:4013/test.flv}'

Then look through cubemap.config.sample, copy it to cubemap.config, and start cubemap.

To upgrade cubemap (after you've compiled a new binary), or to pick up new config:

  killall -HUP cubemap

Cubemap will serialize itself to disk, check that the new binary and config are OK, and then exec() the new version, which deserializes everything and keeps going.

Munin plugins: To activate these, symlink them into /etc/munin/plugins. If you don't put the files in the expected default locations (as done by 'make install'), you probably want some configuration in /etc/munin/plugin-conf.d/cubemap or similar, like this:

  [cubemap*]
  user
  env.cubemap_config /etc/cubemap/cubemap.config
  env.cubemap_stats /var/lib/cubemap/cubemap.stats
  env.cubemap_input_stats /var/lib/cubemap/cubemap-input.stats

Legalese: Copyright 2013 Steinar H. Gunderson. Licensed under the GNU GPL, version 2. See the included COPYING file.
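The serialize-then-exec() handover described above can be sketched as the following pattern (heavily simplified and hypothetical; the flag, file path, and function names are illustrative, not Cubemap's actual ones). The key idea is that open file descriptors survive exec(), so clients never see a disconnect:

```cpp
// Sketch of a SIGHUP-triggered binary/config handover, in the style the
// README describes. Not Cubemap's actual implementation.
#include <signal.h>
#include <unistd.h>

#include <atomic>

static std::atomic<bool> hupped(false);

static void hup(int) { hupped = true; }

int main_loop_sketch(char *argv0)
{
	(void)argv0;  // Used by the real execlp() call sketched below.
	signal(SIGHUP, hup);
	while (!hupped) {
		// ... serve clients ...
		break;  // (for illustration only)
	}
	// 1. Serialize all state; client/listen fds stay open across exec().
	// save_state("/var/lib/cubemap/state");  // hypothetical helper
	// 2. Re-exec the (possibly upgraded) binary, which deserializes
	//    everything and keeps going.
	// execlp(argv0, argv0, "--state", "/var/lib/cubemap/state", (char *)NULL);
	return 0;
}
```

If the new binary or config fails validation, the old process can simply carry on, which is what makes the reload safe to attempt at any time.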
==== cubemap-1.0.4/acceptor.cpp ====

#include <assert.h>
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <poll.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

#include "acceptor.h"
#include "log.h"
#include "serverpool.h"
#include "state.pb.h"
#include "util.h"

using namespace std;

extern ServerPool *servers;

int create_server_socket(const sockaddr_in6 &addr, SocketType socket_type)
{
	int server_sock;
	if (socket_type == TCP_SOCKET) {
		server_sock = socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP);
	} else {
		assert(socket_type == UDP_SOCKET);
		server_sock = socket(PF_INET6, SOCK_DGRAM, IPPROTO_UDP);
	}
	if (server_sock == -1) {
		log_perror("socket");
		exit(1);
	}

	int one = 1;
	if (setsockopt(server_sock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one)) == -1) {
		log_perror("setsockopt(SO_REUSEADDR)");
		exit(1);
	}

	// We want dual-stack sockets. (Sorry, OpenBSD and Windows XP...)
	int zero = 0;
	if (setsockopt(server_sock, IPPROTO_IPV6, IPV6_V6ONLY, &zero, sizeof(zero)) == -1) {
		log_perror("setsockopt(IPV6_V6ONLY)");
		exit(1);
	}

	// Set as non-blocking, so the acceptor thread can notice that we want
	// to shut it down.
	if (ioctl(server_sock, FIONBIO, &one) == -1) {
		log_perror("ioctl(FIONBIO)");
		exit(1);
	}

	if (bind(server_sock, reinterpret_cast<const sockaddr *>(&addr), sizeof(addr)) == -1) {
		log_perror("bind");
		exit(1);
	}

	if (socket_type == TCP_SOCKET) {
		if (listen(server_sock, 128) == -1) {
			log_perror("listen");
			exit(1);
		}
	}

	return server_sock;
}

sockaddr_in6 CreateAnyAddress(int port)
{
	sockaddr_in6 sin6;
	memset(&sin6, 0, sizeof(sin6));
	sin6.sin6_family = AF_INET6;
	sin6.sin6_port = htons(port);
	return sin6;
}

sockaddr_in6 ExtractAddressFromAcceptorProto(const AcceptorProto &proto)
{
	sockaddr_in6 sin6;
	memset(&sin6, 0, sizeof(sin6));
	sin6.sin6_family = AF_INET6;
	if (!proto.addr().empty()) {
		int ret = inet_pton(AF_INET6, proto.addr().c_str(), &sin6.sin6_addr);
		assert(ret == 1);
	}
	sin6.sin6_port = htons(proto.port());
	return sin6;
}

Acceptor::Acceptor(int server_sock, const sockaddr_in6 &addr)
	: server_sock(server_sock),
	  addr(addr)
{
}

Acceptor::Acceptor(const AcceptorProto &serialized)
	: server_sock(serialized.server_sock()),
	  addr(ExtractAddressFromAcceptorProto(serialized))
{
}

AcceptorProto Acceptor::serialize() const
{
	char buf[INET6_ADDRSTRLEN];
	inet_ntop(addr.sin6_family, &addr.sin6_addr, buf, sizeof(buf));

	AcceptorProto serialized;
	serialized.set_server_sock(server_sock);
	serialized.set_addr(buf);
	serialized.set_port(ntohs(addr.sin6_port));
	return serialized;
}

void Acceptor::close_socket()
{
	safe_close(server_sock);
}

void Acceptor::do_work()
{
	while (!should_stop()) {
		if (!wait_for_activity(server_sock, POLLIN, NULL)) {
			continue;
		}

		sockaddr_in6 addr;
		socklen_t addrlen = sizeof(addr);

		// Get a new socket.
		int sock = accept(server_sock, reinterpret_cast<sockaddr *>(&addr), &addrlen);
		if (sock == -1 && errno == EINTR) {
			continue;
		}
		if (sock == -1) {
			log_perror("accept");
			usleep(100000);
			continue;
		}

		// Set the socket as nonblocking.
		int one = 1;
		if (ioctl(sock, FIONBIO, &one) == -1) {
			log_perror("ioctl(FIONBIO)");
			exit(1);
		}

		// Enable TCP_CORK for maximum throughput. In the rare case that the
		// stream stops entirely, this will cause a small delay (~200 ms)
		// before the last part is sent out, but that should be fine.
		if (setsockopt(sock, SOL_TCP, TCP_CORK, &one, sizeof(one)) == -1) {
			log_perror("setsockopt(TCP_CORK)");
			// Can still continue.
		}

		// Pick a server, round-robin, and hand over the socket to it.
		servers->add_client(sock);
	}
}

==== cubemap-1.0.4/acceptor.h ====

#ifndef _ACCEPTOR_H
#define _ACCEPTOR_H

#include <netinet/in.h>

#include "thread.h"

enum SocketType {
	TCP_SOCKET,
	UDP_SOCKET,
};

int create_server_socket(const sockaddr_in6 &addr, SocketType socket_type);

class AcceptorProto;

sockaddr_in6 CreateAnyAddress(int port);
sockaddr_in6 ExtractAddressFromAcceptorProto(const AcceptorProto &proto);

// A thread that accepts new connections on a given socket,
// and hands them off to the server pool.
class Acceptor : public Thread {
public:
	Acceptor(int server_sock, const sockaddr_in6 &addr);

	// Serialization/deserialization.
	Acceptor(const AcceptorProto &serialized);
	AcceptorProto serialize() const;

	void close_socket();

private:
	virtual void do_work();

	int server_sock;
	sockaddr_in6 addr;
};

#endif  // !defined(_ACCEPTOR_H)

==== cubemap-1.0.4/accesslog.cpp ====

#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <vector>

#include "accesslog.h"
#include "client.h"
#include "log.h"
#include "mutexlock.h"

using namespace std;

AccessLogThread::AccessLogThread()
{
	pthread_mutex_init(&mutex, NULL);
}

AccessLogThread::AccessLogThread(const string &filename)
	: filename(filename)
{
	pthread_mutex_init(&mutex, NULL);
}

void AccessLogThread::write(const ClientStats& client)
{
	{
		MutexLock lock(&mutex);
		pending_writes.push_back(client);
	}
	wakeup();
}

void AccessLogThread::do_work()
{
	// Open the file.
	if (filename.empty()) {
		logfp = NULL;
	} else {
		logfp = fopen(filename.c_str(), "a+");
		if (logfp == NULL) {
			log_perror(filename.c_str());
			// Continue as before.
		}
	}

	while (!should_stop()) {
		// Empty the queue.
		vector<ClientStats> writes;
		{
			MutexLock lock(&mutex);
			swap(pending_writes, writes);
		}

		if (logfp != NULL) {
			// Do the actual writes.
			time_t now = time(NULL);
			for (size_t i = 0; i < writes.size(); ++i) {
				fprintf(logfp, "%llu %s %s %d %llu %llu %llu\n",
					(long long unsigned)(writes[i].connect_time),
					writes[i].remote_addr.c_str(),
					writes[i].url.c_str(),
					int(now - writes[i].connect_time),
					(long long unsigned)(writes[i].bytes_sent),
					(long long unsigned)(writes[i].bytes_lost),
					(long long unsigned)(writes[i].num_loss_events));
			}
			fflush(logfp);
		}

		// Wait until we are being woken up, either to quit or because
		// there is material in pending_writes.
		wait_for_wakeup(NULL);
	}

	if (logfp != NULL) {
		if (fclose(logfp) == EOF) {
			log_perror("fclose");
		}
	}
	logfp = NULL;
}

==== cubemap-1.0.4/accesslog.h ====

#ifndef _ACCESSLOG_H
#define _ACCESSLOG_H

// A class to log clients that just disconnected. Since this is shared by all
// Server instances, we try not to let write() block too much, and rather do
// all the I/O in a separate I/O thread.

#include <pthread.h>
#include <stdio.h>
#include <string>
#include <vector>

#include "client.h"
#include "thread.h"

class AccessLogThread : public Thread {
public:
	// Used if we do not have a file to log to. The thread will still exist,
	// but won't actually write anywhere.
	AccessLogThread();

	// Log to a given file. If the file can't be opened, log an error
	// to the error log, and work as if we didn't have a log file.
	AccessLogThread(const std::string &filename);

	// Add a log entry. Entries are written out at least once every second.
	void write(const ClientStats& client);

private:
	virtual void do_work();

	// The file we are logging to. If NULL, do not log.
	FILE *logfp;

	std::string filename;

	pthread_mutex_t mutex;
	std::vector<ClientStats> pending_writes;
};

#endif  // _ACCESSLOG_H

==== cubemap-1.0.4/client.cpp ====

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <time.h>

#include "client.h"
#include "log.h"
#include "markpool.h"
#include "state.pb.h"
#include "stream.h"

#ifndef SO_MAX_PACING_RATE
#define SO_MAX_PACING_RATE 47
#endif

using namespace std;

Client::Client(int sock)
	: sock(sock),
	  fwmark(0),
	  connect_time(time(NULL)),
	  state(Client::READING_REQUEST),
	  stream(NULL),
	  header_or_error_bytes_sent(0),
	  stream_pos(0),
	  bytes_sent(0),
	  bytes_lost(0),
	  num_loss_events(0)
{
	request.reserve(1024);

	// Find the remote address, and convert it to ASCII.
	sockaddr_in6 addr;
	socklen_t addr_len = sizeof(addr);

	if (getpeername(sock, reinterpret_cast<sockaddr *>(&addr), &addr_len) == -1) {
		log_perror("getpeername");
		remote_addr = "";
		return;
	}

	char buf[INET6_ADDRSTRLEN];
	if (IN6_IS_ADDR_V4MAPPED(&addr.sin6_addr)) {
		// IPv4 address, really.
		if (inet_ntop(AF_INET, &addr.sin6_addr.s6_addr32[3], buf, sizeof(buf)) == NULL) {
			log_perror("inet_ntop");
			remote_addr = "";
		} else {
			remote_addr = buf;
		}
	} else {
		if (inet_ntop(addr.sin6_family, &addr.sin6_addr, buf, sizeof(buf)) == NULL) {
			log_perror("inet_ntop");
			remote_addr = "";
		} else {
			remote_addr = buf;
		}
	}
}

Client::Client(const ClientProto &serialized, Stream *stream)
	: sock(serialized.sock()),
	  remote_addr(serialized.remote_addr()),
	  connect_time(serialized.connect_time()),
	  state(State(serialized.state())),
	  request(serialized.request()),
	  url(serialized.url()),
	  stream(stream),
	  header_or_error(serialized.header_or_error()),
	  header_or_error_bytes_sent(serialized.header_or_error_bytes_sent()),
	  stream_pos(serialized.stream_pos()),
	  bytes_sent(serialized.bytes_sent()),
	  bytes_lost(serialized.bytes_lost()),
	  num_loss_events(serialized.num_loss_events())
{
	if (stream != NULL && stream->mark_pool != NULL) {
		fwmark = stream->mark_pool->get_mark();
	} else {
		fwmark = 0;  // No mark.
	}
	if (setsockopt(sock, SOL_SOCKET, SO_MARK, &fwmark, sizeof(fwmark)) == -1) {
		if (fwmark != 0) {
			log_perror("setsockopt(SO_MARK)");
		}
		fwmark = 0;
	}
	if (stream != NULL) {
		if (setsockopt(sock, SOL_SOCKET, SO_MAX_PACING_RATE, &stream->pacing_rate, sizeof(stream->pacing_rate)) == -1) {
			if (stream->pacing_rate != ~0U) {
				log_perror("setsockopt(SO_MAX_PACING_RATE)");
			}
		}
	}
}

ClientProto Client::serialize() const
{
	ClientProto serialized;
	serialized.set_sock(sock);
	serialized.set_remote_addr(remote_addr);
	serialized.set_connect_time(connect_time);
	serialized.set_state(state);
	serialized.set_request(request);
	serialized.set_url(url);
	serialized.set_header_or_error(header_or_error);
	serialized.set_header_or_error_bytes_sent(header_or_error_bytes_sent);
	serialized.set_stream_pos(stream_pos);
	serialized.set_bytes_sent(bytes_sent);
	serialized.set_bytes_lost(bytes_lost);
	serialized.set_num_loss_events(num_loss_events);
	return serialized;
}

ClientStats Client::get_stats() const
{
	ClientStats stats;
	if (url.empty()) {
		stats.url = "-";
	} else {
		stats.url = url;
	}
	stats.sock = sock;
	stats.fwmark = fwmark;
	stats.remote_addr = remote_addr;
	stats.connect_time = connect_time;
	stats.bytes_sent = bytes_sent;
	stats.bytes_lost = bytes_lost;
	stats.num_loss_events = num_loss_events;
	return stats;
}

==== cubemap-1.0.4/client.h ====

#ifndef _CLIENT_H
#define _CLIENT_H 1

// A Client represents a single connection from a client (watching a single stream).

#include <stddef.h>
#include <time.h>
#include <string>

class ClientProto;
struct Stream;

// Digested statistics for writing to logs etc.
struct ClientStats {
	std::string url;
	int sock;
	int fwmark;
	std::string remote_addr;
	time_t connect_time;
	size_t bytes_sent;
	size_t bytes_lost;
	size_t num_loss_events;
};

struct Client {
	Client(int sock);

	// Serialization/deserialization.
	Client(const ClientProto &serialized, Stream *stream);
	ClientProto serialize() const;

	ClientStats get_stats() const;

	// The file descriptor associated with this socket.
	int sock;

	// The fwmark associated with this socket (or 0).
	int fwmark;

	// Some information only used for logging.
	std::string remote_addr;
	time_t connect_time;

	enum State { READING_REQUEST, SENDING_HEADER, SENDING_DATA, SENDING_ERROR, WAITING_FOR_KEYFRAME };
	State state;

	// The HTTP request, as sent by the client. If we are in READING_REQUEST,
	// this might not be finished.
	std::string request;

	// What stream we're connecting to; parsed from <request>.
	// Not relevant for READING_REQUEST.
	std::string url;
	Stream *stream;

	// The header we want to send. This is nominally a copy of Stream::header,
	// but since that might change on reconnects etc., we keep a local copy here.
	// Only relevant for SENDING_HEADER or SENDING_ERROR; blank otherwise.
	std::string header_or_error;

	// Number of bytes we've sent of the header. Only relevant for SENDING_HEADER
	// or SENDING_ERROR.
	size_t header_or_error_bytes_sent;

	// Number of bytes we are into the stream (ie., the end of last send).
	// -1 means we want to send from the end of the backlog (the normal case),
	// although only at a keyframe.
	// -2 means we want to send from the _beginning_ of the backlog.
	// Once we go into WAITING_FOR_KEYFRAME or SENDING_DATA, these negative
	// values will be translated to real numbers.
	size_t stream_pos;

	// Number of bytes we've sent of data. Only relevant for SENDING_DATA.
	size_t bytes_sent;

	// Number of times we've skipped forward due to the backlog being too big,
	// and how many bytes we've skipped over in all. Only relevant for SENDING_DATA.
size_t bytes_lost, num_loss_events; }; #endif // !defined(_CLIENT_H) cubemap-1.0.4/config.cpp000066400000000000000000000330661231360650400151110ustar00rootroot00000000000000#include #include #include #include #include #include #include #include #include #include #include #include #include "acceptor.h" #include "config.h" #include "log.h" #include "parse.h" using namespace std; #define DEFAULT_BACKLOG_SIZE 1048576 struct ConfigLine { string keyword; vector arguments; map parameters; }; namespace { bool parse_hostport(const string &hostport, sockaddr_in6 *addr) { memset(addr, 0, sizeof(*addr)); addr->sin6_family = AF_INET6; string port_string; // See if the argument if on the type [ipv6addr]:port. if (!hostport.empty() && hostport[0] == '[') { size_t split = hostport.find("]:"); if (split == string::npos) { log(ERROR, "address '%s' is malformed; must be either [ipv6addr]:port or ipv4addr:port"); return false; } string host(hostport.begin() + 1, hostport.begin() + split); port_string = hostport.substr(split + 2); if (inet_pton(AF_INET6, host.c_str(), &addr->sin6_addr) != 1) { log(ERROR, "'%s' is not a valid IPv6 address"); return false; } } else { // OK, then it must be ipv4addr:port. size_t split = hostport.find(":"); if (split == string::npos) { log(ERROR, "address '%s' is malformed; must be either [ipv6addr]:port or ipv4addr:port"); return false; } string host(hostport.begin(), hostport.begin() + split); port_string = hostport.substr(split + 1); // Parse to an IPv4 address, then construct a mapped-v4 address from that. 
in_addr addr4; if (inet_pton(AF_INET, host.c_str(), &addr4) != 1) { log(ERROR, "'%s' is not a valid IPv4 address"); return false; } addr->sin6_addr.s6_addr32[2] = htonl(0xffff); addr->sin6_addr.s6_addr32[3] = addr4.s_addr; } int port = atoi(port_string.c_str()); if (port < 1 || port >= 65536) { log(ERROR, "port %d is out of range (must be [1,65536>).", port); return false; } addr->sin6_port = ntohs(port); return true; } bool read_config(const string &filename, vector *lines) { FILE *fp = fopen(filename.c_str(), "r"); if (fp == NULL) { log_perror(filename.c_str()); return false; } char buf[4096]; while (!feof(fp)) { if (fgets(buf, sizeof(buf), fp) == NULL) { break; } // Chop off the string at the first #, \r or \n. buf[strcspn(buf, "#\r\n")] = 0; // Remove all whitespace from the end of the string. size_t len = strlen(buf); while (len > 0 && isspace(buf[len - 1])) { buf[--len] = 0; } // If the line is now all blank, ignore it. if (len == 0) { continue; } vector tokens = split_tokens(buf); assert(!tokens.empty()); ConfigLine line; line.keyword = tokens[0]; for (size_t i = 1; i < tokens.size(); ++i) { // foo=bar is a parameter; anything else is an argument. 
size_t equals_pos = tokens[i].find_first_of('='); if (equals_pos == string::npos) { line.arguments.push_back(tokens[i]); } else { string key = tokens[i].substr(0, equals_pos); string value = tokens[i].substr(equals_pos + 1, string::npos); line.parameters.insert(make_pair(key, value)); } } lines->push_back(line); } fclose(fp); return true; } bool fetch_config_string(const vector &config, const string &keyword, string *value) { for (unsigned i = 0; i < config.size(); ++i) { if (config[i].keyword != keyword) { continue; } if (config[i].parameters.size() > 0 || config[i].arguments.size() != 1) { log(ERROR, "'%s' takes one argument and no parameters", keyword.c_str()); return false; } *value = config[i].arguments[0]; return true; } return false; } bool fetch_config_int(const vector &config, const string &keyword, int *value) { for (unsigned i = 0; i < config.size(); ++i) { if (config[i].keyword != keyword) { continue; } if (config[i].parameters.size() > 0 || config[i].arguments.size() != 1) { log(ERROR, "'%s' takes one argument and no parameters", keyword.c_str()); return false; } *value = atoi(config[i].arguments[0].c_str()); // TODO: verify int validity. 
return true; } return false; } bool parse_port(const ConfigLine &line, Config *config) { if (line.arguments.size() != 1) { log(ERROR, "'port' takes exactly one argument"); return false; } int port = atoi(line.arguments[0].c_str()); if (port < 1 || port >= 65536) { log(ERROR, "port %d is out of range (must be [1,65536>).", port); return false; } AcceptorConfig acceptor; acceptor.addr = CreateAnyAddress(port); config->acceptors.push_back(acceptor); return true; } bool parse_listen(const ConfigLine &line, Config *config) { if (line.arguments.size() != 1) { log(ERROR, "'listen' takes exactly one argument"); return false; } AcceptorConfig acceptor; if (!parse_hostport(line.arguments[0], &acceptor.addr)) { return false; } config->acceptors.push_back(acceptor); return true; } int allocate_mark_pool(int from, int to, Config *config) { int pool_index = -1; // Reuse mark pools if an identical one exists. // Otherwise, check if we're overlapping some other mark pool. for (size_t i = 0; i < config->mark_pools.size(); ++i) { const MarkPoolConfig &pool = config->mark_pools[i]; if (from == pool.from && to == pool.to) { pool_index = i; } else if ((from >= pool.from && from < pool.to) || (to >= pool.from && to < pool.to)) { log(WARNING, "Mark pool %d-%d partially overlaps with %d-%d, you may get duplicate marks." "Mark pools must either be completely disjunct, or completely overlapping.", from, to, pool.from, pool.to); } } if (pool_index != -1) { return pool_index; } // No match to existing pools. 
MarkPoolConfig pool; pool.from = from; pool.to = to; config->mark_pools.push_back(pool); return config->mark_pools.size() - 1; } bool parse_mark_pool(const string &mark_str, int *from, int *to) { size_t split = mark_str.find_first_of('-'); if (split == string::npos) { log(ERROR, "Invalid mark specification '%s' (expected 'X-Y').", mark_str.c_str()); return false; } string from_str(mark_str.begin(), mark_str.begin() + split); string to_str(mark_str.begin() + split + 1, mark_str.end()); *from = atoi(from_str.c_str()); *to = atoi(to_str.c_str()); if (*from <= 0 || *from >= 65536 || *to <= 0 || *to >= 65536) { log(ERROR, "Mark pool range %d-%d is outside legal range [1,65536>.", *from, *to); return false; } return true; } bool parse_stream(const ConfigLine &line, Config *config) { if (line.arguments.size() != 1) { log(ERROR, "'stream' takes exactly one argument"); return false; } StreamConfig stream; stream.url = line.arguments[0]; map::const_iterator src_it = line.parameters.find("src"); if (src_it == line.parameters.end()) { log(WARNING, "stream '%s' has no src= attribute, clients will not get any data.", stream.url.c_str()); } else { stream.src = src_it->second; // TODO: Verify that the URL is parseable? } map::const_iterator backlog_it = line.parameters.find("backlog_size"); if (backlog_it == line.parameters.end()) { stream.backlog_size = DEFAULT_BACKLOG_SIZE; } else { stream.backlog_size = atoi(backlog_it->second.c_str()); } // Parse encoding. map::const_iterator encoding_parm_it = line.parameters.find("encoding"); if (encoding_parm_it == line.parameters.end() || encoding_parm_it->second == "raw") { stream.encoding = StreamConfig::STREAM_ENCODING_RAW; } else if (encoding_parm_it->second == "metacube") { stream.encoding = StreamConfig::STREAM_ENCODING_METACUBE; } else { log(ERROR, "Parameter 'encoding' must be either 'raw' (default) or 'metacube'"); return false; } // Parse marks, if so desired. 
map::const_iterator mark_parm_it = line.parameters.find("mark"); if (mark_parm_it == line.parameters.end()) { stream.mark_pool = -1; } else { int from, to; if (!parse_mark_pool(mark_parm_it->second, &from, &to)) { return false; } stream.mark_pool = allocate_mark_pool(from, to, config); } // Parse the pacing rate, converting from kilobits to bytes as needed. map::const_iterator pacing_rate_it = line.parameters.find("pacing_rate_kbit"); if (pacing_rate_it == line.parameters.end()) { stream.pacing_rate = ~0U; } else { stream.pacing_rate = atoi(pacing_rate_it->second.c_str()) * 1024 / 8; } config->streams.push_back(stream); return true; } bool parse_udpstream(const ConfigLine &line, Config *config) { if (line.arguments.size() != 1) { log(ERROR, "'udpstream' takes exactly one argument"); return false; } UDPStreamConfig udpstream; string hostport = line.arguments[0]; if (!parse_hostport(hostport, &udpstream.dst)) { return false; } map::const_iterator src_it = line.parameters.find("src"); if (src_it == line.parameters.end()) { // This is pretty meaningless, but OK, consistency is good. log(WARNING, "udpstream to %s has no src= attribute, clients will not get any data.", hostport.c_str()); } else { udpstream.src = src_it->second; // TODO: Verify that the URL is parseable? } // Parse marks, if so desired. map::const_iterator mark_parm_it = line.parameters.find("mark"); if (mark_parm_it == line.parameters.end()) { udpstream.mark_pool = -1; } else { int from, to; if (!parse_mark_pool(mark_parm_it->second, &from, &to)) { return false; } udpstream.mark_pool = allocate_mark_pool(from, to, config); } // Parse the pacing rate, converting from kilobits to bytes as needed. 
map::const_iterator pacing_rate_it = line.parameters.find("pacing_rate_kbit"); if (pacing_rate_it == line.parameters.end()) { udpstream.pacing_rate = ~0U; } else { udpstream.pacing_rate = atoi(pacing_rate_it->second.c_str()) * 1024 / 8; } config->udpstreams.push_back(udpstream); return true; } bool parse_error_log(const ConfigLine &line, Config *config) { if (line.arguments.size() != 0) { log(ERROR, "'error_log' takes no arguments (only parameters type= and filename=)"); return false; } LogConfig log_config; map::const_iterator type_it = line.parameters.find("type"); if (type_it == line.parameters.end()) { log(ERROR, "'error_log' has no type= parameter"); return false; } string type = type_it->second; if (type == "file") { log_config.type = LogConfig::LOG_TYPE_FILE; } else if (type == "syslog") { log_config.type = LogConfig::LOG_TYPE_SYSLOG; } else if (type == "console") { log_config.type = LogConfig::LOG_TYPE_CONSOLE; } else { log(ERROR, "Unknown log type '%s'", type.c_str()); return false; } if (log_config.type == LogConfig::LOG_TYPE_FILE) { map::const_iterator filename_it = line.parameters.find("filename"); if (filename_it == line.parameters.end()) { log(ERROR, "error_log type 'file' with no filename= parameter"); return false; } log_config.filename = filename_it->second; } config->log_destinations.push_back(log_config); return true; } } // namespace bool parse_config(const string &filename, Config *config) { vector lines; if (!read_config(filename, &lines)) { return false; } config->daemonize = false; if (!fetch_config_int(lines, "num_servers", &config->num_servers)) { log(ERROR, "Missing 'num_servers' statement in config file."); return false; } if (config->num_servers < 1 || config->num_servers >= 20000) { // Insanely high max limit. log(ERROR, "'num_servers' is %d, needs to be in [1, 20000>.", config->num_servers); return false; } // See if the user wants stats. 
config->stats_interval = 60; bool has_stats_file = fetch_config_string(lines, "stats_file", &config->stats_file); bool has_stats_interval = fetch_config_int(lines, "stats_interval", &config->stats_interval); if (has_stats_interval && !has_stats_file) { log(WARNING, "'stats_interval' given, but no 'stats_file'. No client statistics will be written."); } config->input_stats_interval = 60; bool has_input_stats_file = fetch_config_string(lines, "input_stats_file", &config->input_stats_file); bool has_input_stats_interval = fetch_config_int(lines, "input_stats_interval", &config->input_stats_interval); if (has_input_stats_interval && !has_input_stats_file) { log(WARNING, "'input_stats_interval' given, but no 'input_stats_file'. No input statistics will be written."); } fetch_config_string(lines, "access_log", &config->access_log_file); for (size_t i = 0; i < lines.size(); ++i) { const ConfigLine &line = lines[i]; if (line.keyword == "num_servers" || line.keyword == "stats_file" || line.keyword == "stats_interval" || line.keyword == "input_stats_file" || line.keyword == "input_stats_interval" || line.keyword == "access_log") { // Already taken care of, above. 
} else if (line.keyword == "port") { if (!parse_port(line, config)) { return false; } } else if (line.keyword == "listen") { if (!parse_listen(line, config)) { return false; } } else if (line.keyword == "stream") { if (!parse_stream(line, config)) { return false; } } else if (line.keyword == "udpstream") { if (!parse_udpstream(line, config)) { return false; } } else if (line.keyword == "error_log") { if (!parse_error_log(line, config)) { return false; } } else if (line.keyword == "daemonize") { config->daemonize = true; } else { log(ERROR, "Unknown configuration keyword '%s'.", line.keyword.c_str()); return false; } } return true; } cubemap-1.0.4/config.h000066400000000000000000000031271231360650400145510ustar00rootroot00000000000000#ifndef _CONFIG_H #define _CONFIG_H // Various routines that deal with parsing the configuration file. #include #include #include #include #include struct MarkPoolConfig { int from, to; }; struct StreamConfig { std::string url; // As seen by the client. std::string src; // Can be empty. size_t backlog_size; int mark_pool; // -1 for none. uint32_t pacing_rate; // In bytes per second. Default is ~0U (no limit). enum { STREAM_ENCODING_RAW = 0, STREAM_ENCODING_METACUBE } encoding; }; struct UDPStreamConfig { sockaddr_in6 dst; std::string src; // Can be empty. int mark_pool; // -1 for none. uint32_t pacing_rate; // In bytes per second. Default is ~0U (no limit). }; struct AcceptorConfig { sockaddr_in6 addr; }; struct LogConfig { enum { LOG_TYPE_FILE, LOG_TYPE_CONSOLE, LOG_TYPE_SYSLOG } type; std::string filename; }; struct Config { bool daemonize; int num_servers; std::vector mark_pools; std::vector streams; std::vector udpstreams; std::vector acceptors; std::vector log_destinations; std::string stats_file; // Empty means no stats file. int stats_interval; std::string input_stats_file; // Empty means no input stats file. int input_stats_interval; std::string access_log_file; // Empty means no accses_log file. 
}; // Parse and validate configuration. Returns false on error. // is taken to be empty (uninitialized) on entry. bool parse_config(const std::string &filename, Config *config); #endif // !defined(_CONFIG_H) cubemap-1.0.4/cubemap.1000066400000000000000000000035371231360650400146360ustar00rootroot00000000000000.\" Hey, EMACS: -*- nroff -*- .\" (C) Copyright 2013 Philipp Kern , .\" licensed under the GPL-2 or any later version. .\" .TH CUBEMAP 1 "August 17, 2013" .\" Please adjust this date whenever revising the manpage. .SH NAME cubemap \- scalable video reflector, designed to be used with VLC .SH SYNOPSIS .B cubemap .RI [ options ] .RI [ FILE ] .SH DESCRIPTION .B cubemap is a high-performance, high-availability video reflector, specifically made for use with VLC. .PP .IP \[bu] 2 High-performance, through a design with multiple worker threads, epoll and sendfile (yes, sendfile); a 2GHz quadcore can saturate 10 gigabit Ethernet, given a modern kernel, a modern NIC and the right kernel tuning. .IP \[bu] High-availability. You can change any part of the configuration (and even upgrade to a newer version of Cubemap) by changing cubemap.config and sending a SIGHUP; all clients will continue as if nothing had happened (unless you delete the stream they are watching, of course). Cubemap also survives the encoder dying and reconnecting. .IP \[bu] Per-stream fwmark support, for TCP pacing through tc (separate config needed). .IP \[bu] Support for setting max pacing rate through the fq packet scheduler (obsoletes the previous point, but depends on Linux 3.13 or newer). .IP \[bu] Reflects anything VLC can reflect over HTTP, even the muxes VLC has problems reflecting itself (in particular, FLV). .IP \[bu] IPv4 support. Yes, Cubemap even supports (some) legacy protocols. 
.SH OPTIONS
.TP
\fB\-\-test\-config\fR, \fB\-t\fR
tests the config and exits
.TP
\fBFILE\fR
configuration file (defaults to cubemap.config in the current directory)
.SH AUTHOR
.B cubemap
was written by Steinar H. Gunderson.
.SH LICENSE
cubemap is licensed under the GNU General Public License, version 2.
.SH SEE ALSO
.BR vlc (1)

cubemap-1.0.4/cubemap.config.sample

# Uncomment to run in the background. Note that in daemonized mode, all filenames
# are relative to an undefined directory, so you should use absolute paths for
# error_log, stats_file, etc.
#daemonize

# For low-traffic servers (less than a gigabit or two), num_servers 1 is fine.
# For best performance in high-traffic situations, you want one for each CPU.
num_servers 1

#
# All input ports are treated exactly the same, but you may use multiple ones nevertheless.
#
port 9094
# listen 127.0.0.1:9095
# listen [::1]:9095

stats_file cubemap.stats
stats_interval 60

input_stats_file cubemap-input.stats
input_stats_interval 60

# Logging of clients as they disconnect (and as such no longer visible in the stats file).
# You can only have zero or one of these.
access_log access.log

# Logging of various informational and error messages. You can have as many of these as you want.
error_log type=file filename=cubemap.log
error_log type=syslog
error_log type=console

#
# now the streams!
# stream /test.flv src=http://gruessi.zrh.sesse.net:4013/test.flv mark=1000-5000 stream /test.flv.metacube src=http://gruessi.zrh.sesse.net:4013/test.flv encoding=metacube stream /udp.ts src=udp://@:1234 backlog_size=1048576 pacing_rate_kbit=2000 udpstream [2001:67c:29f4::50]:5000 src=http://pannekake.samfundet.no:9094/frikanalen.ts.metacube udpstream 193.35.52.50:5001 src=http://pannekake.samfundet.no:9094/frikanalen.ts.metacube cubemap-1.0.4/cubemap.service.in000066400000000000000000000003201231360650400165260ustar00rootroot00000000000000[Unit] Description=Cubemap stream relay [Service] Type=simple ExecStart=@prefix@/bin/cubemap @sysconfdir@/cubemap.config User=cubemap ExecReload=/bin/kill -HUP $MAINPID [Install] WantedBy=multi-user.target cubemap-1.0.4/httpinput.cpp000066400000000000000000000410271231360650400156770ustar00rootroot00000000000000#include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "httpinput.h" #include "log.h" #include "metacube2.h" #include "mutexlock.h" #include "parse.h" #include "serverpool.h" #include "state.pb.h" #include "stream.h" #include "util.h" #include "version.h" using namespace std; extern ServerPool *servers; namespace { // Compute b-a. 
timespec clock_diff(const timespec &a, const timespec &b) { timespec ret; ret.tv_sec = b.tv_sec - a.tv_sec; ret.tv_nsec = b.tv_nsec - a.tv_nsec; if (ret.tv_nsec < 0) { ret.tv_sec--; ret.tv_nsec += 1000000000; } assert(ret.tv_nsec >= 0); return ret; } } // namespace HTTPInput::HTTPInput(const string &url) : state(NOT_CONNECTED), url(url), has_metacube_header(false), sock(-1) { pthread_mutex_init(&stats_mutex, NULL); stats.url = url; stats.bytes_received = 0; stats.data_bytes_received = 0; stats.connect_time = -1; } HTTPInput::HTTPInput(const InputProto &serialized) : state(State(serialized.state())), url(serialized.url()), request(serialized.request()), request_bytes_sent(serialized.request_bytes_sent()), response(serialized.response()), http_header(serialized.http_header()), stream_header(serialized.stream_header()), has_metacube_header(serialized.has_metacube_header()), sock(serialized.sock()) { pending_data.resize(serialized.pending_data().size()); memcpy(&pending_data[0], serialized.pending_data().data(), serialized.pending_data().size()); string protocol; parse_url(url, &protocol, &host, &port, &path); // Don't care if it fails. 
pthread_mutex_init(&stats_mutex, NULL); stats.url = url; stats.bytes_received = serialized.bytes_received(); stats.data_bytes_received = serialized.data_bytes_received(); if (serialized.has_connect_time()) { stats.connect_time = serialized.connect_time(); } else { stats.connect_time = time(NULL); } } void HTTPInput::close_socket() { if (sock != -1) { safe_close(sock); } MutexLock lock(&stats_mutex); stats.connect_time = -1; } InputProto HTTPInput::serialize() const { InputProto serialized; serialized.set_state(state); serialized.set_url(url); serialized.set_request(request); serialized.set_request_bytes_sent(request_bytes_sent); serialized.set_response(response); serialized.set_http_header(http_header); serialized.set_stream_header(stream_header); serialized.set_pending_data(string(pending_data.begin(), pending_data.end())); serialized.set_has_metacube_header(has_metacube_header); serialized.set_sock(sock); serialized.set_bytes_received(stats.bytes_received); serialized.set_data_bytes_received(stats.data_bytes_received); serialized.set_connect_time(stats.connect_time); return serialized; } int HTTPInput::lookup_and_connect(const string &host, const string &port) { addrinfo *ai; int err = getaddrinfo(host.c_str(), port.c_str(), NULL, &ai); if (err != 0) { log(WARNING, "[%s] Lookup of '%s' failed (%s).", url.c_str(), host.c_str(), gai_strerror(err)); return -1; } addrinfo *base_ai = ai; // Connect to everything in turn until we have a socket. for ( ; ai && !should_stop(); ai = ai->ai_next) { int sock = socket(ai->ai_family, SOCK_STREAM, IPPROTO_TCP); if (sock == -1) { // Could be e.g. EPROTONOSUPPORT. The show must go on. continue; } // Now do a non-blocking connect. This is important because we want to be able to be // woken up, even though it's rather cumbersome. // Set the socket as nonblocking. int one = 1; if (ioctl(sock, FIONBIO, &one) == -1) { log_perror("ioctl(FIONBIO)"); safe_close(sock); return -1; } // Do a non-blocking connect. 
do { err = connect(sock, ai->ai_addr, ai->ai_addrlen); } while (err == -1 && errno == EINTR); if (err == -1 && errno != EINPROGRESS) { log_perror("connect"); safe_close(sock); continue; } // Wait for the connect to complete, or an error to happen. for ( ;; ) { bool complete = wait_for_activity(sock, POLLIN | POLLOUT, NULL); if (should_stop()) { safe_close(sock); return -1; } if (complete) { break; } } // Check whether it ended in an error or not. socklen_t err_size = sizeof(err); if (getsockopt(sock, SOL_SOCKET, SO_ERROR, &err, &err_size) == -1) { log_perror("getsockopt"); safe_close(sock); continue; } errno = err; if (err == 0) { // Successful connect. freeaddrinfo(base_ai); return sock; } safe_close(sock); } // Give the last one as error. log(WARNING, "[%s] Connect to '%s' failed (%s)", url.c_str(), host.c_str(), strerror(errno)); freeaddrinfo(base_ai); return -1; } bool HTTPInput::parse_response(const std::string &request) { vector lines = split_lines(response); if (lines.empty()) { log(WARNING, "[%s] Empty HTTP response from input.", url.c_str()); return false; } vector first_line_tokens = split_tokens(lines[0]); if (first_line_tokens.size() < 2) { log(WARNING, "[%s] Malformed response line '%s' from input.", url.c_str(), lines[0].c_str()); return false; } int response = atoi(first_line_tokens[1].c_str()); if (response != 200) { log(WARNING, "[%s] Non-200 response '%s' from input.", url.c_str(), lines[0].c_str()); return false; } multimap parameters; for (size_t i = 1; i < lines.size(); ++i) { size_t split = lines[i].find(":"); if (split == string::npos) { log(WARNING, "[%s] Ignoring malformed HTTP response line '%s'", url.c_str(), lines[i].c_str()); continue; } string key(lines[i].begin(), lines[i].begin() + split); // Skip any spaces after the colon. do { ++split; } while (split < lines[i].size() && lines[i][split] == ' '); string value(lines[i].begin() + split, lines[i].end()); // Remove “Content-encoding: metacube”. // TODO: Make case-insensitive. 
if (key == "Content-encoding" && value == "metacube") { continue; } parameters.insert(make_pair(key, value)); } // Change “Server: foo” to “Server: metacube/0.1 (reflecting: foo)” // TODO: Make case-insensitive. // XXX: Use a Via: instead? if (parameters.count("Server") == 0) { parameters.insert(make_pair("Server", SERVER_IDENTIFICATION)); } else { for (multimap::iterator it = parameters.begin(); it != parameters.end(); ++it) { if (it->first != "Server") { continue; } it->second = SERVER_IDENTIFICATION " (reflecting: " + it->second + ")"; } } // Set “Connection: close”. // TODO: Make case-insensitive. parameters.erase("Connection"); parameters.insert(make_pair("Connection", "close")); // Construct the new HTTP header. http_header = "HTTP/1.0 200 OK\r\n"; for (multimap::iterator it = parameters.begin(); it != parameters.end(); ++it) { http_header.append(it->first + ": " + it->second + "\r\n"); } for (size_t i = 0; i < stream_indices.size(); ++i) { servers->set_header(stream_indices[i], http_header, stream_header); } return true; } void HTTPInput::do_work() { timespec last_activity; // TODO: Make the timeout persist across restarts. if (state == SENDING_REQUEST || state == RECEIVING_HEADER || state == RECEIVING_DATA) { int err = clock_gettime(CLOCK_MONOTONIC, &last_activity); assert(err != -1); } while (!should_stop()) { if (state == SENDING_REQUEST || state == RECEIVING_HEADER || state == RECEIVING_DATA) { // Give the socket 30 seconds since last activity before we time out. static const int timeout_secs = 30; timespec now; int err = clock_gettime(CLOCK_MONOTONIC, &now); assert(err != -1); timespec elapsed = clock_diff(last_activity, now); if (elapsed.tv_sec >= timeout_secs) { // Timeout! log(ERROR, "[%s] Timeout after %d seconds, closing.", url.c_str(), elapsed.tv_sec); state = CLOSING_SOCKET; continue; } // Basically calculate (30 - (now - last_activity)) = (30 + (last_activity - now)). // Add a second of slack to account for differences between clocks. 
timespec timeout = clock_diff(now, last_activity); timeout.tv_sec += timeout_secs + 1; assert(timeout.tv_sec > 0 || (timeout.tv_sec >= 0 && timeout.tv_nsec > 0)); bool activity = wait_for_activity(sock, (state == SENDING_REQUEST) ? POLLOUT : POLLIN, &timeout); if (activity) { err = clock_gettime(CLOCK_MONOTONIC, &last_activity); assert(err != -1); } else { // OK. Most likely, should_stop was set, or we have timed out. continue; } } switch (state) { case NOT_CONNECTED: request.clear(); request_bytes_sent = 0; response.clear(); pending_data.clear(); has_metacube_header = false; for (size_t i = 0; i < stream_indices.size(); ++i) { servers->set_header(stream_indices[i], "", ""); } { string protocol; // Thrown away. if (!parse_url(url, &protocol, &host, &port, &path)) { log(WARNING, "[%s] Failed to parse URL '%s'", url.c_str(), url.c_str()); break; } } sock = lookup_and_connect(host, port); if (sock != -1) { // Yay, successful connect. Try to set it as nonblocking. int one = 1; if (ioctl(sock, FIONBIO, &one) == -1) { log_perror("ioctl(FIONBIO)"); state = CLOSING_SOCKET; } else { state = SENDING_REQUEST; request = "GET " + path + " HTTP/1.0\r\nUser-Agent: cubemap\r\n\r\n"; request_bytes_sent = 0; } MutexLock lock(&stats_mutex); stats.connect_time = time(NULL); clock_gettime(CLOCK_MONOTONIC, &last_activity); } break; case SENDING_REQUEST: { size_t to_send = request.size() - request_bytes_sent; int ret; do { ret = write(sock, request.data() + request_bytes_sent, to_send); } while (ret == -1 && errno == EINTR); if (ret == -1) { log_perror("write"); state = CLOSING_SOCKET; continue; } assert(ret >= 0); request_bytes_sent += ret; if (request_bytes_sent == request.size()) { state = RECEIVING_HEADER; } break; } case RECEIVING_HEADER: { char buf[4096]; int ret; do { ret = read(sock, buf, sizeof(buf)); } while (ret == -1 && errno == EINTR); if (ret == -1) { log_perror("read"); state = CLOSING_SOCKET; continue; } if (ret == 0) { // This really shouldn't happen... 
log(ERROR, "[%s] Socket unexpectedly closed while reading header",
    url.c_str());
state = CLOSING_SOCKET;
continue;
}

RequestParseStatus status = wait_for_double_newline(&response, buf, ret);

if (status == RP_OUT_OF_SPACE) {
	log(WARNING, "[%s] Server sent overlong HTTP response!", url.c_str());
	state = CLOSING_SOCKET;
	continue;
} else if (status == RP_NOT_FINISHED_YET) {
	continue;
}

// OK, so we're fine, but there might be some of the actual data after the response.
// We'll need to deal with that separately.
string extra_data;
if (status == RP_EXTRA_DATA) {
	char *ptr = static_cast<char *>(
		memmem(response.data(), response.size(), "\r\n\r\n", 4));
	assert(ptr != NULL);
	extra_data = string(ptr + 4, &response[0] + response.size());
	response.resize(ptr - response.data());
}

if (!parse_response(response)) {
	state = CLOSING_SOCKET;
	continue;
}
if (!extra_data.empty()) {
	process_data(&extra_data[0], extra_data.size());
}

log(INFO, "[%s] Connected to '%s', receiving data.",
    url.c_str(), url.c_str());
state = RECEIVING_DATA;
break;
}
case RECEIVING_DATA: {
	char buf[4096];

	int ret;
	do {
		ret = read(sock, buf, sizeof(buf));
	} while (ret == -1 && errno == EINTR);

	if (ret == -1) {
		log_perror("read");
		state = CLOSING_SOCKET;
		continue;
	}
	if (ret == 0) {
		// This really shouldn't happen...
		log(ERROR, "[%s] Socket unexpectedly closed while reading data",
		    url.c_str());
		state = CLOSING_SOCKET;
		continue;
	}

	process_data(buf, ret);
	break;
}
case CLOSING_SOCKET: {
	close_socket();
	state = NOT_CONNECTED;
	break;
}
default:
	assert(false);
}

// If we are still in NOT_CONNECTED, either something went wrong,
// or the connection just got closed.
// The earlier steps have already given the error message, if any.
if (state == NOT_CONNECTED && !should_stop()) { log(INFO, "[%s] Waiting 0.2 seconds and restarting...", url.c_str()); timespec timeout_ts; timeout_ts.tv_sec = 0; timeout_ts.tv_nsec = 200000000; wait_for_wakeup(&timeout_ts); } } } void HTTPInput::process_data(char *ptr, size_t bytes) { pending_data.insert(pending_data.end(), ptr, ptr + bytes); { MutexLock mutex(&stats_mutex); stats.bytes_received += bytes; } for ( ;; ) { // If we don't have enough data (yet) for even the Metacube header, just return. if (pending_data.size() < sizeof(metacube2_block_header)) { return; } // Make sure we have the Metacube sync header at the start. // We may need to skip over junk data (it _should_ not happen, though). if (!has_metacube_header) { char *ptr = static_cast<char *>( memmem(pending_data.data(), pending_data.size(), METACUBE2_SYNC, strlen(METACUBE2_SYNC))); if (ptr == NULL) { // OK, so we didn't find the sync marker. We know then that // we do not have the _full_ marker in the buffer, but we // could have N-1 bytes. Drop everything before that, // and then give up. drop_pending_data(pending_data.size() - (strlen(METACUBE2_SYNC) - 1)); return; } else { // Yay, we found the header. Drop everything (if anything) before it. drop_pending_data(ptr - pending_data.data()); has_metacube_header = true; // Re-check that we have the entire header; we could have dropped data. if (pending_data.size() < sizeof(metacube2_block_header)) { return; } } } // Now it's safe to read the header.
metacube2_block_header hdr; memcpy(&hdr, pending_data.data(), sizeof(hdr)); assert(memcmp(hdr.sync, METACUBE2_SYNC, sizeof(hdr.sync)) == 0); uint32_t size = ntohl(hdr.size); uint16_t flags = ntohs(hdr.flags); uint16_t expected_csum = metacube2_compute_crc(&hdr); if (expected_csum != ntohs(hdr.csum)) { log(WARNING, "[%s] Metacube checksum failed (expected 0x%x, got 0x%x), " "not reading block claiming to be %d bytes (flags=%x).", url.c_str(), expected_csum, ntohs(hdr.csum), size, flags); // Drop only the first byte, and let the rest of the code handle resync. pending_data.erase(pending_data.begin(), pending_data.begin() + 1); has_metacube_header = false; continue; } if (size > 262144) { log(WARNING, "[%s] Metacube block of %d bytes (flags=%x); corrupted header?", url.c_str(), size, flags); } // See if we have the entire block. If not, wait for more data. if (pending_data.size() < sizeof(metacube2_block_header) + size) { return; } // Send this block on to the servers. { MutexLock lock(&stats_mutex); stats.data_bytes_received += size; } char *inner_data = pending_data.data() + sizeof(metacube2_block_header); if (flags & METACUBE_FLAGS_HEADER) { stream_header = string(inner_data, inner_data + size); for (size_t i = 0; i < stream_indices.size(); ++i) { servers->set_header(stream_indices[i], http_header, stream_header); } } else { StreamStartSuitability suitable_for_stream_start; if (flags & METACUBE_FLAGS_NOT_SUITABLE_FOR_STREAM_START) { suitable_for_stream_start = NOT_SUITABLE_FOR_STREAM_START; } else { suitable_for_stream_start = SUITABLE_FOR_STREAM_START; } for (size_t i = 0; i < stream_indices.size(); ++i) { servers->add_data(stream_indices[i], inner_data, size, suitable_for_stream_start); } } // Consume the block. This isn't the most efficient way of dealing with things // should we have many blocks, but these routines don't need to be too efficient // anyway. 
pending_data.erase(pending_data.begin(), pending_data.begin() + sizeof(metacube2_block_header) + size); has_metacube_header = false; } } void HTTPInput::drop_pending_data(size_t num_bytes) { if (num_bytes == 0) { return; } log(WARNING, "[%s] Dropping %lld junk bytes from stream, maybe it is not a Metacube2 stream?", url.c_str(), (long long)num_bytes); assert(pending_data.size() >= num_bytes); pending_data.erase(pending_data.begin(), pending_data.begin() + num_bytes); } void HTTPInput::add_destination(int stream_index) { stream_indices.push_back(stream_index); servers->set_header(stream_index, http_header, stream_header); } InputStats HTTPInput::get_stats() const { MutexLock lock(&stats_mutex); return stats; } cubemap-1.0.4/httpinput.h000066400000000000000000000045441231360650400153470ustar00rootroot00000000000000#ifndef _HTTPINPUT_H #define _HTTPINPUT_H 1 #include <stddef.h> #include <pthread.h> #include <string> #include <vector> #include "input.h" class InputProto; class HTTPInput : public Input { public: HTTPInput(const std::string &url); // Serialization/deserialization. HTTPInput(const InputProto &serialized); virtual InputProto serialize() const; virtual void close_socket(); virtual std::string get_url() const { return url; } virtual void add_destination(int stream_index); virtual InputStats get_stats() const; private: // Actually does the download. virtual void do_work(); // Open a socket that connects to the given host and port. Does DNS resolving. int lookup_and_connect(const std::string &host, const std::string &port); // Parses a HTTP response. Returns false if it is not a 200. bool parse_response(const std::string &response); // Stores the given data, looks for Metacube blocks (skipping data if needed), // and calls process_block() for each one. void process_data(char *ptr, size_t bytes); // Drops <num_bytes> bytes from the head of <pending_data>, // and outputs a warning.
void drop_pending_data(size_t num_bytes); enum State { NOT_CONNECTED, SENDING_REQUEST, RECEIVING_HEADER, RECEIVING_DATA, CLOSING_SOCKET, // Due to error. }; State state; std::vector<int> stream_indices; // The URL and its parsed components. std::string url; std::string host, port, path; // The HTTP request, with headers and all. // Only relevant for SENDING_REQUEST. std::string request; // How many bytes we've sent of the request so far. // Only relevant for SENDING_REQUEST. size_t request_bytes_sent; // The HTTP response we've received so far. Only relevant for RECEIVING_HEADER. std::string response; // The HTTP response headers we want to give clients for this input. std::string http_header; // The stream header we want to give clients for this input. std::string stream_header; // Data we have received but not fully processed yet. std::vector<char> pending_data; // If <pending_data> starts with a Metacube header, // this is true. bool has_metacube_header; // The socket we are downloading on (or -1). int sock; // Mutex protecting <stats>. mutable pthread_mutex_t stats_mutex; // The current statistics for this connection. Protected by <stats_mutex>. InputStats stats; }; #endif // !defined(_HTTPINPUT_H) cubemap-1.0.4/input.cpp000066400000000000000000000034461231360650400150020ustar00rootroot00000000000000#include <stddef.h> #include <string> #include "httpinput.h" #include "input.h" #include "state.pb.h" #include "udpinput.h" using namespace std; // Extremely rudimentary URL parsing. bool parse_url(const string &url, string *protocol, string *host, string *port, string *path) { size_t split = url.find("://"); if (split == string::npos) { return false; } *protocol = string(url.begin(), url.begin() + split); string rest = string(url.begin() + split + 3, url.end()); split = rest.find_first_of(":/"); if (split == string::npos) { // http://foo *host = rest; *port = *protocol; *path = "/"; return true; } *host = string(rest.begin(), rest.begin() + split); char ch = rest[split]; // Colon or slash.
rest = string(rest.begin() + split + 1, rest.end()); if (ch == ':') { // Parse the port. split = rest.find_first_of('/'); if (split == string::npos) { // http://foo:1234 *port = rest; *path = "/"; return true; } else { // http://foo:1234/bar *port = string(rest.begin(), rest.begin() + split); *path = string(rest.begin() + split, rest.end()); return true; } } // http://foo/bar *port = *protocol; *path = rest; return true; } Input *create_input(const std::string &url) { string protocol, host, port, path; if (!parse_url(url, &protocol, &host, &port, &path)) { return NULL; } if (protocol == "http") { return new HTTPInput(url); } if (protocol == "udp") { return new UDPInput(url); } return NULL; } Input *create_input(const InputProto &serialized) { string protocol, host, port, path; if (!parse_url(serialized.url(), &protocol, &host, &port, &path)) { return NULL; } if (protocol == "http") { return new HTTPInput(serialized); } if (protocol == "udp") { return new UDPInput(serialized); } return NULL; } Input::~Input() {} cubemap-1.0.4/input.h000066400000000000000000000031511231360650400144400ustar00rootroot00000000000000#ifndef _INPUT_H #define _INPUT_H 1 #include #include #include #include "thread.h" class Input; class InputProto; // Extremely rudimentary URL parsing. bool parse_url(const std::string &url, std::string *protocol, std::string *host, std::string *port, std::string *path); // Figure out the right type of input based on the URL, and create a new Input of the right type. // Will return NULL if unknown. Input *create_input(const std::string &url); Input *create_input(const InputProto &serialized); // Digested statistics for writing to logs etc. struct InputStats { std::string url; // The number of bytes we have received so far, including any Metacube headers. // // Not reset across connections. size_t bytes_received; // The number of data bytes we have received so far (or more precisely, // number of data bytes we have sent on to the stream). 
This excludes Metacube // headers and corrupted data we've skipped. // // Not reset across connections. size_t data_bytes_received; // When the current connection was initiated. -1 if we are not currently connected. time_t connect_time; // TODO: Number of loss events might both be useful, // similar to for clients. Also, per-connection byte counters. }; class Input : public Thread { public: virtual ~Input(); virtual InputProto serialize() const = 0; virtual std::string get_url() const = 0; virtual void close_socket() = 0; virtual void add_destination(int stream_index) = 0; // Note: May be called from a different thread, so must be thread-safe. virtual InputStats get_stats() const = 0; }; #endif // !defined(_INPUT_H) cubemap-1.0.4/input_stats.cpp000066400000000000000000000040631231360650400162140ustar00rootroot00000000000000#include #include #include #include #include #include #include #include #include "input.h" #include "input_stats.h" #include "log.h" #include "util.h" using namespace std; InputStatsThread::InputStatsThread(const string &stats_file, int stats_interval, const vector &inputs) : stats_file(stats_file), stats_interval(stats_interval), inputs(inputs) { } void InputStatsThread::do_work() { while (!should_stop()) { int fd; FILE *fp; time_t now; // Open a new, temporary file. 
char *filename = strdup((stats_file + ".new.XXXXXX").c_str()); fd = mkostemp(filename, O_WRONLY); if (fd == -1) { log_perror(filename); free(filename); goto sleep; } fp = fdopen(fd, "w"); if (fp == NULL) { log_perror("fdopen"); safe_close(fd); if (unlink(filename) == -1) { log_perror(filename); } free(filename); goto sleep; } now = time(NULL); for (size_t i = 0; i < inputs.size(); ++i) { InputStats stats = inputs[i]->get_stats(); if (stats.connect_time == -1) { fprintf(fp, "%s %llu %llu -\n", stats.url.c_str(), (long long unsigned)(stats.bytes_received), (long long unsigned)(stats.data_bytes_received)); } else { fprintf(fp, "%s %llu %llu %d\n", stats.url.c_str(), (long long unsigned)(stats.bytes_received), (long long unsigned)(stats.data_bytes_received), int(now - stats.connect_time)); } } if (fclose(fp) == EOF) { log_perror("fclose"); if (unlink(filename) == -1) { log_perror(filename); } free(filename); goto sleep; } if (rename(filename, stats_file.c_str()) == -1) { log_perror("rename"); if (unlink(filename) == -1) { log_perror(filename); } } free(filename); sleep: // Wait until we are asked to quit, stats_interval timeout, // or a spurious signal. (The latter will cause us to write stats // too often, but that's okay.) timespec timeout_ts; timeout_ts.tv_sec = stats_interval; timeout_ts.tv_nsec = 0; wait_for_wakeup(&timeout_ts); } } cubemap-1.0.4/input_stats.h000066400000000000000000000012201231360650400156510ustar00rootroot00000000000000#ifndef _INPUT_STATS_H #define _INPUT_STATS_H 1 #include #include #include "thread.h" class Input; // A thread that regularly writes out input statistics, ie. a list of all inputs // with some information about each. Very similar to StatsThread, but for inputs instead // of clients. class InputStatsThread : public Thread { public: // Does not take ownership of the inputs. 
InputStatsThread(const std::string &stats_file, int stats_interval, const std::vector &inputs); private: virtual void do_work(); std::string stats_file; int stats_interval; std::vector inputs; }; #endif // !defined(_INPUT_STATS_H) cubemap-1.0.4/log.cpp000066400000000000000000000052631231360650400144230ustar00rootroot00000000000000#include #include #include #include #include #include #include #include #include #include #include "log.h" using namespace std; // Yes, it's a bit ugly. #define SYSLOG_FAKE_FILE (static_cast(NULL)) bool logging_started = false; std::vector log_destinations; void add_log_destination_file(const std::string &filename) { FILE *fp = fopen(filename.c_str(), "a"); if (fp == NULL) { perror(filename.c_str()); return; } log_destinations.push_back(fp); } void add_log_destination_console() { log_destinations.push_back(stderr); } void add_log_destination_syslog() { openlog("cubemap", LOG_PID, LOG_DAEMON); log_destinations.push_back(SYSLOG_FAKE_FILE); } void start_logging() { logging_started = true; } void shut_down_logging() { for (size_t i = 0; i < log_destinations.size(); ++i) { if (log_destinations[i] == SYSLOG_FAKE_FILE) { closelog(); } else if (log_destinations[i] != stderr) { if (fclose(log_destinations[i]) != 0) { perror("fclose"); } } } log_destinations.clear(); logging_started = false; } void log(LogLevel log_level, const char *fmt, ...) 
{ char formatted_msg[4096]; va_list ap; va_start(ap, fmt); vsnprintf(formatted_msg, sizeof(formatted_msg), fmt, ap); va_end(ap); time_t now = time(NULL); struct tm lt; struct tm *ltime = localtime_r(&now, <); char timestamp[1024]; if (ltime == NULL) { strcpy(timestamp, "???"); } else { strftime(timestamp, sizeof(timestamp), "%a, %d %b %Y %T %z", ltime); } const char *log_level_str; int syslog_level; switch (log_level) { case INFO: log_level_str = "INFO: "; syslog_level = LOG_INFO; break; case WARNING: log_level_str = "WARNING: "; syslog_level = LOG_WARNING; break; case ERROR: log_level_str = "ERROR: "; syslog_level = LOG_ERR; break; default: assert(false); } // Log to stderr if logging hasn't been set up yet. Note that this means // that such messages will come even if there are no “error_log” lines. if (!logging_started) { fprintf(stderr, "[%s] %s%s\n", timestamp, log_level_str, formatted_msg); return; } for (size_t i = 0; i < log_destinations.size(); ++i) { if (log_destinations[i] == SYSLOG_FAKE_FILE) { syslog(syslog_level, "%s", formatted_msg); } else { int err = fprintf(log_destinations[i], "[%s] %s%s\n", timestamp, log_level_str, formatted_msg); if (err < 0) { perror("fprintf"); } if (log_destinations[i] != stderr) { fflush(log_destinations[i]); } } } } void log_perror(const char *msg) { char errbuf[4096]; log(ERROR, "%s: %s", msg, strerror_r(errno, errbuf, sizeof(errbuf))); } cubemap-1.0.4/log.h000066400000000000000000000006761231360650400140730ustar00rootroot00000000000000#ifndef _LOG_H #define _LOG_H 1 // Functions for common logging to file and syslog. 
#include <string> enum LogLevel { INFO, WARNING, ERROR, }; void add_log_destination_file(const std::string &filename); void add_log_destination_console(); void add_log_destination_syslog(); void start_logging(); void shut_down_logging(); void log(LogLevel log_level, const char *fmt, ...); void log_perror(const char *msg); #endif // !defined(_LOG_H) cubemap-1.0.4/main.cpp000066400000000000000000000414621231360650400145670ustar00rootroot00000000000000#include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "acceptor.h" #include "accesslog.h" #include "config.h" #include "input.h" #include "input_stats.h" #include "log.h" #include "markpool.h" #include "sa_compare.h" #include "serverpool.h" #include "state.pb.h" #include "stats.h" #include "stream.h" #include "util.h" #include "version.h" using namespace std; AccessLogThread *access_log = NULL; ServerPool *servers = NULL; vector<MarkPool *> mark_pools; volatile bool hupped = false; volatile bool stopped = false; struct InputWithRefcount { Input *input; int refcount; }; void hup(int signum) { hupped = true; if (signum == SIGINT) { stopped = true; } } void do_nothing(int signum) { } CubemapStateProto collect_state(const timeval &serialize_start, const vector<Acceptor *> acceptors, const multimap<string, InputWithRefcount> inputs, ServerPool *servers) { CubemapStateProto state = servers->serialize(); // Fills streams() and clients(). state.set_serialize_start_sec(serialize_start.tv_sec); state.set_serialize_start_usec(serialize_start.tv_usec); for (size_t i = 0; i < acceptors.size(); ++i) { state.add_acceptors()->MergeFrom(acceptors[i]->serialize()); } for (multimap<string, InputWithRefcount>::const_iterator input_it = inputs.begin(); input_it != inputs.end(); ++input_it) { state.add_inputs()->MergeFrom(input_it->second.input->serialize()); } return state; } // Find all port statements in the configuration file, and create acceptors for them.
vector create_acceptors( const Config &config, map *deserialized_acceptors) { vector acceptors; for (unsigned i = 0; i < config.acceptors.size(); ++i) { const AcceptorConfig &acceptor_config = config.acceptors[i]; Acceptor *acceptor = NULL; map::iterator deserialized_acceptor_it = deserialized_acceptors->find(acceptor_config.addr); if (deserialized_acceptor_it != deserialized_acceptors->end()) { acceptor = deserialized_acceptor_it->second; deserialized_acceptors->erase(deserialized_acceptor_it); } else { int server_sock = create_server_socket(acceptor_config.addr, TCP_SOCKET); acceptor = new Acceptor(server_sock, acceptor_config.addr); } acceptor->run(); acceptors.push_back(acceptor); } // Close all acceptors that are no longer in the configuration file. for (map::iterator acceptor_it = deserialized_acceptors->begin(); acceptor_it != deserialized_acceptors->end(); ++acceptor_it) { acceptor_it->second->close_socket(); delete acceptor_it->second; } return acceptors; } void create_config_input(const string &src, multimap *inputs) { if (src.empty()) { return; } if (inputs->count(src) != 0) { return; } InputWithRefcount iwr; iwr.input = create_input(src); if (iwr.input == NULL) { log(ERROR, "did not understand URL '%s', clients will not get any data.", src.c_str()); return; } iwr.refcount = 0; inputs->insert(make_pair(src, iwr)); } // Find all streams in the configuration file, and create inputs for them. 
void create_config_inputs(const Config &config, multimap<string, InputWithRefcount> *inputs) { for (unsigned i = 0; i < config.streams.size(); ++i) { const StreamConfig &stream_config = config.streams[i]; if (stream_config.src != "delete") { create_config_input(stream_config.src, inputs); } } for (unsigned i = 0; i < config.udpstreams.size(); ++i) { const UDPStreamConfig &udpstream_config = config.udpstreams[i]; create_config_input(udpstream_config.src, inputs); } } void create_streams(const Config &config, const set<string> &deserialized_urls, multimap<string, InputWithRefcount> *inputs) { for (unsigned i = 0; i < config.mark_pools.size(); ++i) { const MarkPoolConfig &mp_config = config.mark_pools[i]; mark_pools.push_back(new MarkPool(mp_config.from, mp_config.to)); } // HTTP streams. set<string> expecting_urls = deserialized_urls; for (unsigned i = 0; i < config.streams.size(); ++i) { const StreamConfig &stream_config = config.streams[i]; int stream_index; expecting_urls.erase(stream_config.url); // Special-case deleted streams; they were never deserialized in the first place, // so just ignore them. if (stream_config.src == "delete") { continue; } if (deserialized_urls.count(stream_config.url) == 0) { stream_index = servers->add_stream(stream_config.url, stream_config.backlog_size, Stream::Encoding(stream_config.encoding)); } else { stream_index = servers->lookup_stream_by_url(stream_config.url); assert(stream_index != -1); servers->set_backlog_size(stream_index, stream_config.backlog_size); servers->set_encoding(stream_index, Stream::Encoding(stream_config.encoding)); } if (stream_config.mark_pool != -1) { servers->set_mark_pool(stream_index, mark_pools[stream_config.mark_pool]); } servers->set_pacing_rate(stream_index, stream_config.pacing_rate); string src = stream_config.src; if (!src.empty()) { multimap<string, InputWithRefcount>::iterator input_it = inputs->find(src); if (input_it != inputs->end()) { input_it->second.input->add_destination(stream_index); ++input_it->second.refcount; } } } // Warn about any streams we've lost.
for (set::const_iterator stream_it = expecting_urls.begin(); stream_it != expecting_urls.end(); ++stream_it) { string url = *stream_it; log(WARNING, "stream '%s' disappeared from the configuration file. " "It will not be deleted, but clients will not get any new inputs. " "If you really meant to delete it, set src=delete and reload.", url.c_str()); } // UDP streams. for (unsigned i = 0; i < config.udpstreams.size(); ++i) { const UDPStreamConfig &udpstream_config = config.udpstreams[i]; MarkPool *mark_pool = NULL; if (udpstream_config.mark_pool != -1) { mark_pool = mark_pools[udpstream_config.mark_pool]; } int stream_index = servers->add_udpstream(udpstream_config.dst, mark_pool, udpstream_config.pacing_rate); string src = udpstream_config.src; if (!src.empty()) { multimap::iterator input_it = inputs->find(src); assert(input_it != inputs->end()); input_it->second.input->add_destination(stream_index); ++input_it->second.refcount; } } } void open_logs(const vector &log_destinations) { for (size_t i = 0; i < log_destinations.size(); ++i) { if (log_destinations[i].type == LogConfig::LOG_TYPE_FILE) { add_log_destination_file(log_destinations[i].filename); } else if (log_destinations[i].type == LogConfig::LOG_TYPE_CONSOLE) { add_log_destination_console(); } else if (log_destinations[i].type == LogConfig::LOG_TYPE_SYSLOG) { add_log_destination_syslog(); } else { assert(false); } } start_logging(); } bool dry_run_config(const std::string &argv0, const std::string &config_filename) { char *argv0_copy = strdup(argv0.c_str()); char *config_filename_copy = strdup(config_filename.c_str()); pid_t pid = fork(); switch (pid) { case -1: log_perror("fork()"); free(argv0_copy); free(config_filename_copy); return false; case 0: // Child. execlp(argv0_copy, argv0_copy, "--test-config", config_filename_copy, NULL); log_perror(argv0_copy); _exit(1); default: // Parent. 
break; } free(argv0_copy); free(config_filename_copy); int status; pid_t err; do { err = waitpid(pid, &status, 0); } while (err == -1 && errno == EINTR); if (err == -1) { log_perror("waitpid()"); return false; } return (WIFEXITED(status) && WEXITSTATUS(status) == 0); } void find_deleted_streams(const Config &config, set *deleted_urls) { for (unsigned i = 0; i < config.streams.size(); ++i) { const StreamConfig &stream_config = config.streams[i]; if (stream_config.src == "delete") { log(INFO, "Deleting stream '%s'.", stream_config.url.c_str()); deleted_urls->insert(stream_config.url); } } } int main(int argc, char **argv) { signal(SIGHUP, hup); signal(SIGINT, hup); signal(SIGUSR1, do_nothing); // Used in internal signalling. signal(SIGPIPE, SIG_IGN); // Parse options. int state_fd = -1; bool test_config = false; for ( ;; ) { static const option long_options[] = { { "state", required_argument, 0, 's' }, { "test-config", no_argument, 0, 't' }, { 0, 0, 0, 0 } }; int option_index = 0; int c = getopt_long(argc, argv, "s:t", long_options, &option_index); if (c == -1) { break; } switch (c) { case 's': state_fd = atoi(optarg); break; case 't': test_config = true; break; default: fprintf(stderr, "Unknown option '%s'\n", argv[option_index]); exit(1); } } string config_filename = "cubemap.config"; if (optind < argc) { config_filename = argv[optind++]; } // Canonicalize argv[0] and config_filename. char argv0_canon[PATH_MAX]; char config_filename_canon[PATH_MAX]; if (realpath(argv[0], argv0_canon) == NULL) { log_perror(argv[0]); exit(1); } if (realpath(config_filename.c_str(), config_filename_canon) == NULL) { log_perror(config_filename.c_str()); exit(1); } // Now parse the configuration file. Config config; if (!parse_config(config_filename_canon, &config)) { exit(1); } if (test_config) { exit(0); } // Ideally we'd like to daemonize only when we've started up all threads etc., // but daemon() forks, which is not good in multithreaded software, so we'll // have to do it here. 
if (config.daemonize) { if (daemon(0, 0) == -1) { log_perror("daemon"); exit(1); } } start: // Open logs as soon as possible. open_logs(config.log_destinations); log(INFO, "Cubemap " SERVER_VERSION " starting."); if (config.access_log_file.empty()) { // Create a dummy logger. access_log = new AccessLogThread(); } else { access_log = new AccessLogThread(config.access_log_file); } access_log->run(); servers = new ServerPool(config.num_servers); // Find all the streams that are to be deleted. set deleted_urls; find_deleted_streams(config, &deleted_urls); CubemapStateProto loaded_state; struct timeval serialize_start; set deserialized_urls; map deserialized_acceptors; multimap inputs; // multimap due to older versions without deduplication. if (state_fd != -1) { log(INFO, "Deserializing state from previous process..."); string serialized; if (!read_tempfile(state_fd, &serialized)) { exit(1); } if (!loaded_state.ParseFromString(serialized)) { log(ERROR, "Failed deserialization of state."); exit(1); } serialize_start.tv_sec = loaded_state.serialize_start_sec(); serialize_start.tv_usec = loaded_state.serialize_start_usec(); // Deserialize the streams. map stream_headers_for_url; // See below. for (int i = 0; i < loaded_state.streams_size(); ++i) { const StreamProto &stream = loaded_state.streams(i); if (deleted_urls.count(stream.url()) != 0) { // Delete the stream backlogs. for (int j = 0; j < stream.data_fds_size(); ++j) { safe_close(stream.data_fds(j)); } } else { vector data_fds; for (int j = 0; j < stream.data_fds_size(); ++j) { data_fds.push_back(stream.data_fds(j)); } servers->add_stream_from_serialized(stream, data_fds); deserialized_urls.insert(stream.url()); stream_headers_for_url.insert(make_pair(stream.url(), stream.stream_header())); } } // Deserialize the inputs. Note that we don't actually add them to any stream yet. 
for (int i = 0; i < loaded_state.inputs_size(); ++i) { InputProto serialized_input = loaded_state.inputs(i); InputWithRefcount iwr; iwr.input = create_input(serialized_input); iwr.refcount = 0; inputs.insert(make_pair(serialized_input.url(), iwr)); } // Deserialize the acceptors. for (int i = 0; i < loaded_state.acceptors_size(); ++i) { sockaddr_in6 sin6 = ExtractAddressFromAcceptorProto(loaded_state.acceptors(i)); deserialized_acceptors.insert(make_pair( sin6, new Acceptor(loaded_state.acceptors(i)))); } log(INFO, "Deserialization done."); } // Add any new inputs coming from the config. create_config_inputs(config, &inputs); // Find all streams in the configuration file, create them, and connect to the inputs. create_streams(config, deserialized_urls, &inputs); vector<Acceptor *> acceptors = create_acceptors(config, &deserialized_acceptors); // Put back the existing clients. It doesn't matter which server we // allocate them to, so just do round-robin. However, we need to add // them after the mark pools have been set up. for (int i = 0; i < loaded_state.clients_size(); ++i) { if (deleted_urls.count(loaded_state.clients(i).url()) != 0) { safe_close(loaded_state.clients(i).sock()); } else { servers->add_client_from_serialized(loaded_state.clients(i)); } } servers->run(); // Now delete all inputs that are no longer in use, and start the others. for (multimap<string, InputWithRefcount>::iterator input_it = inputs.begin(); input_it != inputs.end(); ) { if (input_it->second.refcount == 0) { log(WARNING, "Input '%s' no longer in use, closing.", input_it->first.c_str()); input_it->second.input->close_socket(); delete input_it->second.input; inputs.erase(input_it++); } else { input_it->second.input->run(); ++input_it; } } // Start writing statistics.
StatsThread *stats_thread = NULL; if (!config.stats_file.empty()) { stats_thread = new StatsThread(config.stats_file, config.stats_interval); stats_thread->run(); } InputStatsThread *input_stats_thread = NULL; if (!config.input_stats_file.empty()) { vector inputs_no_refcount; for (multimap::iterator input_it = inputs.begin(); input_it != inputs.end(); ++input_it) { inputs_no_refcount.push_back(input_it->second.input); } input_stats_thread = new InputStatsThread(config.input_stats_file, config.input_stats_interval, inputs_no_refcount); input_stats_thread->run(); } struct timeval server_start; gettimeofday(&server_start, NULL); if (state_fd != -1) { // Measure time from we started deserializing (below) to now, when basically everything // is up and running. This is, in other words, a conservative estimate of how long our // “glitch” period was, not counting of course reconnects if the configuration changed. double glitch_time = server_start.tv_sec - serialize_start.tv_sec + 1e-6 * (server_start.tv_usec - serialize_start.tv_usec); log(INFO, "Re-exec happened in approx. %.0f ms.", glitch_time * 1000.0); } while (!hupped) { usleep(100000); } // OK, we've been HUPed. Time to shut down everything, serialize, and re-exec. 
gettimeofday(&serialize_start, NULL); if (input_stats_thread != NULL) { input_stats_thread->stop(); delete input_stats_thread; } if (stats_thread != NULL) { stats_thread->stop(); delete stats_thread; } for (size_t i = 0; i < acceptors.size(); ++i) { acceptors[i]->stop(); } for (multimap::iterator input_it = inputs.begin(); input_it != inputs.end(); ++input_it) { input_it->second.input->stop(); } servers->stop(); CubemapStateProto state; if (stopped) { log(INFO, "Shutting down."); } else { log(INFO, "Serializing state and re-execing..."); state = collect_state( serialize_start, acceptors, inputs, servers); string serialized; state.SerializeToString(&serialized); state_fd = make_tempfile(serialized); if (state_fd == -1) { exit(1); } } delete servers; for (unsigned i = 0; i < mark_pools.size(); ++i) { delete mark_pools[i]; } mark_pools.clear(); access_log->stop(); delete access_log; shut_down_logging(); if (stopped) { exit(0); } // OK, so the signal was SIGHUP. Check that the new config is okay, then exec the new binary. if (!dry_run_config(argv0_canon, config_filename_canon)) { open_logs(config.log_destinations); log(ERROR, "%s --test-config failed. Restarting old version instead of new.", argv[0]); hupped = false; shut_down_logging(); goto start; } char buf[16]; sprintf(buf, "%d", state_fd); for ( ;; ) { execlp(argv0_canon, argv0_canon, config_filename_canon, "--state", buf, NULL); open_logs(config.log_destinations); log_perror("execlp"); log(ERROR, "re-exec of %s failed. 
Waiting 0.2 seconds and trying again...", argv0_canon); shut_down_logging(); usleep(200000); } } cubemap-1.0.4/markpool.cpp000066400000000000000000000015061231360650400154620ustar00rootroot00000000000000#include "log.h" #include "markpool.h" #include "mutexlock.h" #include #include #include #include MarkPool::MarkPool(int start, int end) : start(start), end(end) { assert(start > 0 && start < 65536); assert(end > 0 && end < 65536); for (int i = start; i < end; ++i) { free_marks.push(i); } pthread_mutex_init(&mutex, NULL); } int MarkPool::get_mark() { MutexLock lock(&mutex); if (free_marks.empty()) { log(WARNING, "Out of free marks in mark pool %d-%d, session will not be marked. " "To fix, increase the pool size and HUP the server.", start, end); return 0; } int mark = free_marks.front(); free_marks.pop(); return mark; } void MarkPool::release_mark(int mark) { if (mark == 0) { return; } MutexLock lock(&mutex); free_marks.push(mark); } cubemap-1.0.4/markpool.h000066400000000000000000000007721231360650400151330ustar00rootroot00000000000000#ifndef _MARKPOOL_H #define _MARKPOOL_H // A class that hands out fwmarks from a given range in a thread-safe fashion. // If the range is empty, it returns 0. #include #include class MarkPool { public: // Limits are [start, end>. Numbers are 16-bit, so above 65535 do not make sense. MarkPool(int start, int end); int get_mark(); void release_mark(int mark); private: int start, end; pthread_mutex_t mutex; std::queue free_marks; }; #endif // !defined(_MARKPOOL_H) cubemap-1.0.4/metacube2.cpp000066400000000000000000000022061231360650400155030ustar00rootroot00000000000000/* * Implementation of Metacube2 utility functions. * * Note: This file is meant to compile as both C and C++, for easier inclusion * in other projects. */ #include "metacube2.h" /* * https://www.ece.cmu.edu/~koopman/pubs/KoopmanCRCWebinar9May2012.pdf * recommends this for messages as short as ours (see table at page 34). 
*/ #define METACUBE2_CRC_POLYNOMIAL 0x8FDB /* Semi-random starting value to make sure all-zero won't pass. */ #define METACUBE2_CRC_START 0x1234 /* This code is based on code generated by pycrc. */ uint16_t metacube2_compute_crc(const struct metacube2_block_header *hdr) { static const int data_len = sizeof(hdr->size) + sizeof(hdr->flags); const uint8_t *data = (uint8_t *)&hdr->size; uint16_t crc = METACUBE2_CRC_START; int i, j; for (i = 0; i < data_len; ++i) { uint8_t c = data[i]; for (j = 0; j < 8; j++) { int bit = crc & 0x8000; crc = (crc << 1) | ((c >> (7 - j)) & 0x01); if (bit) { crc ^= METACUBE2_CRC_POLYNOMIAL; } } } /* Finalize. */ for (i = 0; i < 16; i++) { int bit = crc & 0x8000; crc = crc << 1; if (bit) { crc ^= METACUBE2_CRC_POLYNOMIAL; } } return crc; } cubemap-1.0.4/metacube2.h000066400000000000000000000014221231360650400151470ustar00rootroot00000000000000#ifndef _METACUBE2_H #define _METACUBE2_H /* * Definitions for the Metacube2 protocol, used to communicate with Cubemap. * * Note: This file is meant to compile as both C and C++, for easier inclusion * in other projects. */ #include <stdint.h> #define METACUBE2_SYNC "cube!map" /* 8 bytes long. */ #define METACUBE_FLAGS_HEADER 0x1 #define METACUBE_FLAGS_NOT_SUITABLE_FOR_STREAM_START 0x2 struct metacube2_block_header { char sync[8]; /* METACUBE2_SYNC */ uint32_t size; /* Network byte order. Does not include header. */ uint16_t flags; /* Network byte order. METACUBE_FLAGS_*. */ uint16_t csum; /* Network byte order. CRC16 of size and flags. */ }; uint16_t metacube2_compute_crc(const struct metacube2_block_header *hdr); #endif /* !defined(_METACUBE2_H) */ cubemap-1.0.4/munin/000077500000000000000000000000001231360650400142565ustar00rootroot00000000000000cubemap-1.0.4/munin/cubemap000077500000000000000000000030211231360650400156140ustar00rootroot00000000000000#!
/usr/bin/perl use strict; use warnings; use Munin::Plugin; my $config_filename = $ENV{"cubemap_config"} // "/etc/cubemap.config"; my $stats_filename = $ENV{"cubemap_stats"} // "/var/lib/cubemap/cubemap.stats"; my $mode = $ARGV[0] // "print"; if ($mode eq 'config') { print "graph_title Cubemap viewers\n"; print "graph_category network\n"; print "graph_vlabel viewers\n"; } my %streams = (); open my $config, "<", $config_filename or die "$config_filename: $!"; while (<$config>) { chomp; /^stream (\S+) / or next; my $stream = $1; $streams{$stream} = 0; my $stream_name = stream_name($stream); if ($mode eq 'config') { print "${stream_name}.label Number of viewers of $stream\n"; print "${stream_name}.type GAUGE\n"; print "${stream_name}.min 0\n"; } } close $config; my $total = 0; if ($mode eq 'config') { print "total.label Total number of viewers\n"; print "total.type GAUGE\n"; print "total.min 0\n"; } open my $stats, "<", $stats_filename or die "$stats_filename: $!"; while (<$stats>) { chomp; my ($ip, $fd, $mark, $stream, $connected_time, $bytes_sent, $loss_bytes, $loss_events) = /^(\S+) (\d+) (\d+) (\S+) (\d+) (\d+) (\d+) (\d+)/ or die "Invalid stats format"; ++$streams{$stream}; ++$total; } close $stats; if ($mode ne 'config') { for my $stream (sort keys %streams) { my $stream_name = stream_name($stream); printf "${stream_name}.value %d\n", $streams{$stream}; } printf "total.value %d\n", $total; } sub stream_name { my $stream = shift; $stream =~ y/a-z0-9/_/c; return $stream; } cubemap-1.0.4/munin/cubemap_input000077500000000000000000000016641231360650400170460ustar00rootroot00000000000000#! 
/usr/bin/perl use strict; use warnings; use Munin::Plugin; my $input_stats_filename = $ENV{"cubemap_input_stats"} // "/var/lib/cubemap/cubemap-input.stats"; my $mode = $ARGV[0] // "print"; if ($mode eq 'config') { print "graph_title Cubemap inputs\n"; print "graph_category network\n"; print "graph_vlabel bits/sec\n"; } open my $stats, "<", $input_stats_filename or die "$input_stats_filename: $!"; while (<$stats>) { chomp; my ($url, $bytes_received, $data_bytes_received, $connection_time) = /^(\S+) (\d+) (\d+) (-|\d+)/ or die "Invalid stats format"; my $stream_name = stream_name($url); if ($mode eq 'config') { print "${stream_name}.label Data input bitrate of $url\n"; print "${stream_name}.type DERIVE\n"; print "${stream_name}.min 0\n"; } else { printf "${stream_name}.value %d\n", $data_bytes_received * 8; } } close $stats; sub stream_name { my $stream = shift; $stream =~ y/a-z0-9/_/c; return $stream; } cubemap-1.0.4/mutexlock.cpp000066400000000000000000000002751231360650400156530ustar00rootroot00000000000000#include "mutexlock.h" MutexLock::MutexLock(pthread_mutex_t *mutex) : mutex(mutex) { pthread_mutex_lock(this->mutex); } MutexLock::~MutexLock() { pthread_mutex_unlock(this->mutex); } cubemap-1.0.4/mutexlock.h000066400000000000000000000003771231360650400153220ustar00rootroot00000000000000#ifndef _MUTEXLOCK_H #define _MUTEXLOCK_H 1 #include <pthread.h> // Locks a pthread mutex, RAII-style.
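mutexlock.cpp above implements the scoped-lock pattern that mutexlock.h declares: the mutex is taken in the constructor and released in the destructor, so every return path unlocks. The same RAII idea in a standalone sketch (ScopedLock, counter_mutex, and increment_counter are illustrative names, not part of Cubemap):

```cpp
#include <assert.h>
#include <pthread.h>

// RAII wrapper: the mutex is held exactly as long as the object is in scope.
class ScopedLock {
public:
	explicit ScopedLock(pthread_mutex_t *mutex) : mutex(mutex) { pthread_mutex_lock(this->mutex); }
	~ScopedLock() { pthread_mutex_unlock(this->mutex); }
private:
	pthread_mutex_t *mutex;
};

static pthread_mutex_t counter_mutex = PTHREAD_MUTEX_INITIALIZER;
static int counter = 0;

// Any early return (or exception) unlocks automatically via ~ScopedLock().
int increment_counter()
{
	ScopedLock lock(&counter_mutex);
	return ++counter;
}
```

Because the unlock lives in the destructor, functions like MarkPool::get_mark() above can return from the middle of the critical section without leaking the lock.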
class MutexLock { public: MutexLock(pthread_mutex_t *mutex); ~MutexLock(); private: pthread_mutex_t *mutex; }; #endif // !defined(_MUTEXLOCK_H) cubemap-1.0.4/parse.cpp000066400000000000000000000037771231360650400147600ustar00rootroot00000000000000#include <ctype.h> #include <string.h> #include <string> #include <vector> #include "parse.h" using namespace std; vector<string> split_tokens(const string &line) { vector<string> ret; string current_token; for (size_t i = 0; i < line.size(); ++i) { if (isspace(line[i])) { if (!current_token.empty()) { ret.push_back(current_token); } current_token.clear(); } else { current_token.push_back(line[i]); } } if (!current_token.empty()) { ret.push_back(current_token); } return ret; } vector<string> split_lines(const string &str) { vector<string> ret; string current_line; for (size_t i = 0; i < str.size(); ++i) { // Skip \r if followed by an \n. if (str[i] == '\r' && i < str.size() - 1 && str[i + 1] == '\n') { continue; } // End of the current line? if (str[i] == '\n') { if (!current_line.empty()) { ret.push_back(current_line); } current_line.clear(); } else { current_line.push_back(str[i]); } } if (!current_line.empty()) { ret.push_back(current_line); } return ret; } #define MAX_REQUEST_SIZE 16384 /* 16 kB. */ RequestParseStatus wait_for_double_newline(string *existing_data, const char *new_data, size_t new_data_size) { // Guard against overlong requests gobbling up all of our space. if (existing_data->size() + new_data_size > MAX_REQUEST_SIZE) { return RP_OUT_OF_SPACE; } // See if we have \r\n\r\n anywhere in the request. We start three bytes // before what we just appended, in case we just got the final character. size_t existing_data_bytes = existing_data->size(); existing_data->append(string(new_data, new_data + new_data_size)); const size_t start_at = (existing_data_bytes >= 3 ?
existing_data_bytes - 3 : 0); const char *ptr = reinterpret_cast<const char *>( memmem(existing_data->data() + start_at, existing_data->size() - start_at, "\r\n\r\n", 4)); if (ptr == NULL) { return RP_NOT_FINISHED_YET; } if (ptr != existing_data->data() + existing_data->size() - 4) { return RP_EXTRA_DATA; } return RP_FINISHED; } cubemap-1.0.4/parse.h000066400000000000000000000023051231360650400144130ustar00rootroot00000000000000#ifndef _PARSE_H #define _PARSE_H // Various routines that deal with parsing; both HTTP requests and more generic text. #include <stddef.h> #include <string> #include <vector> // Split a line on whitespace, e.g. "foo bar baz" -> {"foo", "bar", "baz"}. std::vector<std::string> split_tokens(const std::string &line); // Split a string on \n or \r\n, e.g. "foo\nbar\r\n\nbaz\r\n\r\n" -> {"foo", "bar", "baz"}. std::vector<std::string> split_lines(const std::string &str); // Add the new data to an existing string, looking for \r\n\r\n // (typical of HTTP requests and/or responses). Will return one // of the given statuses. // // Note that if you give too much data in new_data_size, you could // get an RP_OUT_OF_SPACE even if you expected RP_EXTRA_DATA. // Be careful about how large reads you give in. enum RequestParseStatus { RP_OUT_OF_SPACE, // If larger than 16 kB. RP_NOT_FINISHED_YET, // Did not get \r\n\r\n yet. RP_EXTRA_DATA, // Got \r\n\r\n, but there was extra data behind it. RP_FINISHED, // Ended exactly in \r\n\r\n.
}; RequestParseStatus wait_for_double_newline(std::string *existing_data, const char *new_data, size_t new_data_size); #endif // !defined(_PARSE_H) cubemap-1.0.4/sa_compare.cpp000066400000000000000000000006651231360650400157540ustar00rootroot00000000000000#include "sa_compare.h" #include <arpa/inet.h> #include <assert.h> #include <string.h> bool Sockaddr6Compare::operator() (const sockaddr_in6 &a, const sockaddr_in6 &b) const { assert(a.sin6_family == AF_INET6); assert(b.sin6_family == AF_INET6); int addr_cmp = memcmp(&a.sin6_addr, &b.sin6_addr, sizeof(a.sin6_addr)); if (addr_cmp == 0) { return (ntohs(a.sin6_port) < ntohs(b.sin6_port)); } else { return (addr_cmp < 0); } } cubemap-1.0.4/sa_compare.h000066400000000000000000000004161231360650400154130ustar00rootroot00000000000000#ifndef _SA_COMPARE_H #define _SA_COMPARE_H #include <netinet/in.h> // A utility functor to help use sockaddr_in6 as keys in a map. struct Sockaddr6Compare { bool operator() (const sockaddr_in6 &a, const sockaddr_in6 &b) const; }; #endif // !defined(_SA_COMPARE_H) cubemap-1.0.4/server.cpp000066400000000000000000000442161231360650400151510ustar00rootroot00000000000000#include <assert.h> #include <errno.h> #include <netinet/in.h> #include <pthread.h> #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/epoll.h> #include <sys/sendfile.h> #include <sys/socket.h> #include <sys/types.h> #include <unistd.h> #include <algorithm> #include <map> #include <string> #include <utility> #include <vector> #include "accesslog.h" #include "log.h" #include "markpool.h" #include "metacube2.h" #include "mutexlock.h" #include "parse.h" #include "server.h" #include "state.pb.h" #include "stream.h" #include "util.h" #ifndef SO_MAX_PACING_RATE #define SO_MAX_PACING_RATE 47 #endif using namespace std; extern AccessLogThread *access_log; Server::Server() { pthread_mutex_init(&mutex, NULL); pthread_mutex_init(&queued_clients_mutex, NULL); epoll_fd = epoll_create(1024); // Size argument is ignored.
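The wait_for_double_newline() routine from parse.cpp above accumulates request bytes and classifies the result. It can be exercised in isolation; this sketch mirrors its logic but uses std::string::find instead of the GNU memmem() call in the real code, and feed() is an illustrative name:

```cpp
#include <assert.h>
#include <string>

enum RequestParseStatus { RP_OUT_OF_SPACE, RP_NOT_FINISHED_YET, RP_EXTRA_DATA, RP_FINISHED };

static const size_t MAX_REQUEST_SIZE = 16384;  /* 16 kB, as in parse.cpp. */

// Append new data, then look for \r\n\r\n starting three bytes before the old
// end, in case the terminator straddles the two buffers.
RequestParseStatus feed(std::string *existing_data, const std::string &new_data)
{
	if (existing_data->size() + new_data.size() > MAX_REQUEST_SIZE) {
		return RP_OUT_OF_SPACE;
	}
	size_t old_size = existing_data->size();
	existing_data->append(new_data);
	size_t start_at = (old_size >= 3 ? old_size - 3 : 0);
	size_t pos = existing_data->find("\r\n\r\n", start_at);
	if (pos == std::string::npos) {
		return RP_NOT_FINISHED_YET;
	}
	if (pos + 4 != existing_data->size()) {
		return RP_EXTRA_DATA;
	}
	return RP_FINISHED;
}
```

The three-bytes-back trick is the interesting part: a terminator split as "…\r\n" + "\r\n" across two reads is still found without rescanning the whole buffer on every call.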
if (epoll_fd == -1) { log_perror("epoll_fd"); exit(1); } } Server::~Server() { for (size_t i = 0; i < streams.size(); ++i) { delete streams[i]; } safe_close(epoll_fd); } vector<ClientStats> Server::get_client_stats() const { vector<ClientStats> ret; MutexLock lock(&mutex); for (map<int, Client>::const_iterator client_it = clients.begin(); client_it != clients.end(); ++client_it) { ret.push_back(client_it->second.get_stats()); } return ret; } void Server::do_work() { while (!should_stop()) { // Wait until there's activity on at least one of the fds, // or 20 ms (about one frame at 50 fps) has elapsed. // // We could in theory wait forever and rely on wakeup() // from add_client_deferred() and add_data_deferred(), // but wakeup is a pretty expensive operation, and the // two threads might end up fighting over a lock, so it's // seemingly (much) more efficient to just have a timeout here. int nfds = epoll_pwait(epoll_fd, events, EPOLL_MAX_EVENTS, EPOLL_TIMEOUT_MS, &sigset_without_usr1_block); if (nfds == -1 && errno != EINTR) { log_perror("epoll_wait"); exit(1); } MutexLock lock(&mutex); // We release the mutex between iterations. process_queued_data(); for (int i = 0; i < nfds; ++i) { Client *client = reinterpret_cast<Client *>(events[i].data.u64); if (events[i].events & (EPOLLERR | EPOLLRDHUP | EPOLLHUP)) { close_client(client); continue; } process_client(client); } for (size_t i = 0; i < streams.size(); ++i) { vector<Client *> to_process; swap(streams[i]->to_process, to_process); for (size_t i = 0; i < to_process.size(); ++i) { process_client(to_process[i]); } } } } CubemapStateProto Server::serialize() { // We don't serialize anything queued, so empty the queues. process_queued_data(); // Set all clients in a consistent state before serializing // (ie., they have no remaining lost data). Otherwise, increasing // the backlog could take clients into a newly valid area of the backlog, // sending a stream of zeros instead of skipping the data as it should.
// // TODO: Do this when clients are added back from serialized state instead; // it would probably be less wasteful. for (map<int, Client>::iterator client_it = clients.begin(); client_it != clients.end(); ++client_it) { skip_lost_data(&client_it->second); } CubemapStateProto serialized; for (map<int, Client>::const_iterator client_it = clients.begin(); client_it != clients.end(); ++client_it) { serialized.add_clients()->MergeFrom(client_it->second.serialize()); } for (size_t i = 0; i < streams.size(); ++i) { serialized.add_streams()->MergeFrom(streams[i]->serialize()); } return serialized; } void Server::add_client_deferred(int sock) { MutexLock lock(&queued_clients_mutex); queued_add_clients.push_back(sock); } void Server::add_client(int sock) { pair<map<int, Client>::iterator, bool> ret = clients.insert(make_pair(sock, Client(sock))); assert(ret.second == true); // Should not already exist. Client *client_ptr = &ret.first->second; // Start listening on data from this socket. epoll_event ev; ev.events = EPOLLIN | EPOLLET | EPOLLRDHUP; ev.data.u64 = reinterpret_cast<uint64_t>(client_ptr); if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, sock, &ev) == -1) { log_perror("epoll_ctl(EPOLL_CTL_ADD)"); exit(1); } process_client(client_ptr); } void Server::add_client_from_serialized(const ClientProto &client) { MutexLock lock(&mutex); Stream *stream; int stream_index = lookup_stream_by_url(client.url()); if (stream_index == -1) { assert(client.state() != Client::SENDING_DATA); stream = NULL; } else { stream = streams[stream_index]; } pair<map<int, Client>::iterator, bool> ret = clients.insert(make_pair(client.sock(), Client(client, stream))); assert(ret.second == true); // Should not already exist. Client *client_ptr = &ret.first->second; // Start listening on data from this socket. epoll_event ev; if (client.state() == Client::READING_REQUEST) { ev.events = EPOLLIN | EPOLLET | EPOLLRDHUP; } else { // If we don't have more data for this client, we'll be putting it into // the sleeping array again soon.
ev.events = EPOLLOUT | EPOLLET | EPOLLRDHUP; } ev.data.u64 = reinterpret_cast<uint64_t>(client_ptr); if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, client.sock(), &ev) == -1) { log_perror("epoll_ctl(EPOLL_CTL_ADD)"); exit(1); } if (client_ptr->state == Client::WAITING_FOR_KEYFRAME || (client_ptr->state == Client::SENDING_DATA && client_ptr->stream_pos == client_ptr->stream->bytes_received)) { client_ptr->stream->put_client_to_sleep(client_ptr); } else { process_client(client_ptr); } } int Server::lookup_stream_by_url(const std::string &url) const { map<string, int>::const_iterator url_it = url_map.find(url); if (url_it == url_map.end()) { return -1; } return url_it->second; } int Server::add_stream(const string &url, size_t backlog_size, Stream::Encoding encoding) { MutexLock lock(&mutex); url_map.insert(make_pair(url, streams.size())); streams.push_back(new Stream(url, backlog_size, encoding)); return streams.size() - 1; } int Server::add_stream_from_serialized(const StreamProto &stream, int data_fd) { MutexLock lock(&mutex); url_map.insert(make_pair(stream.url(), streams.size())); streams.push_back(new Stream(stream, data_fd)); return streams.size() - 1; } void Server::set_backlog_size(int stream_index, size_t new_size) { MutexLock lock(&mutex); assert(stream_index >= 0 && stream_index < ssize_t(streams.size())); streams[stream_index]->set_backlog_size(new_size); } void Server::set_encoding(int stream_index, Stream::Encoding encoding) { MutexLock lock(&mutex); assert(stream_index >= 0 && stream_index < ssize_t(streams.size())); streams[stream_index]->encoding = encoding; } void Server::set_header(int stream_index, const string &http_header, const string &stream_header) { MutexLock lock(&mutex); assert(stream_index >= 0 && stream_index < ssize_t(streams.size())); streams[stream_index]->http_header = http_header; streams[stream_index]->stream_header = stream_header; } void Server::set_mark_pool(int stream_index, MarkPool *mark_pool) { MutexLock lock(&mutex); assert(clients.empty());
assert(stream_index >= 0 && stream_index < ssize_t(streams.size())); streams[stream_index]->mark_pool = mark_pool; } void Server::set_pacing_rate(int stream_index, uint32_t pacing_rate) { MutexLock lock(&mutex); assert(clients.empty()); assert(stream_index >= 0 && stream_index < ssize_t(streams.size())); streams[stream_index]->pacing_rate = pacing_rate; } void Server::add_data_deferred(int stream_index, const char *data, size_t bytes, StreamStartSuitability suitable_for_stream_start) { assert(stream_index >= 0 && stream_index < ssize_t(streams.size())); streams[stream_index]->add_data_deferred(data, bytes, suitable_for_stream_start); } // See the .h file for postconditions after this function. void Server::process_client(Client *client) { switch (client->state) { case Client::READING_REQUEST: { read_request_again: // Try to read more of the request. char buf[1024]; int ret; do { ret = read(client->sock, buf, sizeof(buf)); } while (ret == -1 && errno == EINTR); if (ret == -1 && errno == EAGAIN) { // No more data right now. Nothing to do. // This is postcondition #2. return; } if (ret == -1) { log_perror("read"); close_client(client); return; } if (ret == 0) { // OK, the socket is closed. close_client(client); return; } RequestParseStatus status = wait_for_double_newline(&client->request, buf, ret); switch (status) { case RP_OUT_OF_SPACE: log(WARNING, "[%s] Client sent overlong request!", client->remote_addr.c_str()); close_client(client); return; case RP_NOT_FINISHED_YET: // OK, we don't have the entire header yet. Fine; we'll get it later. // See if there's more data for us. goto read_request_again; case RP_EXTRA_DATA: log(WARNING, "[%s] Junk data after request!", client->remote_addr.c_str()); close_client(client); return; case RP_FINISHED: break; } assert(status == RP_FINISHED); int error_code = parse_request(client); if (error_code == 200) { construct_header(client); } else { construct_error(client, error_code); } // We've changed states, so fall through. 
assert(client->state == Client::SENDING_ERROR || client->state == Client::SENDING_HEADER); } case Client::SENDING_ERROR: case Client::SENDING_HEADER: { sending_header_or_error_again: int ret; do { ret = write(client->sock, client->header_or_error.data() + client->header_or_error_bytes_sent, client->header_or_error.size() - client->header_or_error_bytes_sent); } while (ret == -1 && errno == EINTR); if (ret == -1 && errno == EAGAIN) { // We're out of socket space, so now we're at the “low edge” of epoll's // edge triggering. epoll will tell us when there is more room, so for now, // just return. // This is postcondition #4. return; } if (ret == -1) { // Error! Postcondition #1. log_perror("write"); close_client(client); return; } client->header_or_error_bytes_sent += ret; assert(client->header_or_error_bytes_sent <= client->header_or_error.size()); if (client->header_or_error_bytes_sent < client->header_or_error.size()) { // We haven't sent all yet. Fine; go another round. goto sending_header_or_error_again; } // We're done sending the header or error! Clear it to release some memory. client->header_or_error.clear(); if (client->state == Client::SENDING_ERROR) { // We're done sending the error, so now close. // This is postcondition #1. close_client(client); return; } // Start sending from the first keyframe we get. In other // words, we won't send any of the backlog, but we'll start // sending immediately as we get the next keyframe block. // This is postcondition #3. if (client->stream_pos == size_t(-2)) { client->stream_pos = std::max<ssize_t>( client->stream->bytes_received - client->stream->backlog_size, 0); client->state = Client::SENDING_DATA; } else { // client->stream_pos should be -1, but it might not be, // if we have clients from an older version.
client->stream_pos = client->stream->bytes_received; client->state = Client::WAITING_FOR_KEYFRAME; } client->stream->put_client_to_sleep(client); return; } case Client::WAITING_FOR_KEYFRAME: { Stream *stream = client->stream; if (ssize_t(client->stream_pos) > stream->last_suitable_starting_point) { // We haven't received a keyframe since this stream started waiting, // so keep on waiting for one. // This is postcondition #3. stream->put_client_to_sleep(client); return; } client->stream_pos = stream->last_suitable_starting_point; client->state = Client::SENDING_DATA; // Fall through. } case Client::SENDING_DATA: { skip_lost_data(client); Stream *stream = client->stream; sending_data_again: size_t bytes_to_send = stream->bytes_received - client->stream_pos; assert(bytes_to_send <= stream->backlog_size); if (bytes_to_send == 0) { return; } // See if we need to split across the circular buffer. bool more_data = false; if ((client->stream_pos % stream->backlog_size) + bytes_to_send > stream->backlog_size) { bytes_to_send = stream->backlog_size - (client->stream_pos % stream->backlog_size); more_data = true; } ssize_t ret; do { off_t offset = client->stream_pos % stream->backlog_size; ret = sendfile(client->sock, stream->data_fd, &offset, bytes_to_send); } while (ret == -1 && errno == EINTR); if (ret == -1 && errno == EAGAIN) { // We're out of socket space, so return; epoll will wake us up // when there is more room. // This is postcondition #4. return; } if (ret == -1) { // Error, close; postcondition #1. log_perror("sendfile"); close_client(client); return; } client->stream_pos += ret; client->bytes_sent += ret; if (client->stream_pos == stream->bytes_received) { // We don't have any more data for this client, so put it to sleep. // This is postcondition #3. stream->put_client_to_sleep(client); } else if (more_data && size_t(ret) == bytes_to_send) { goto sending_data_again; } break; } default: assert(false); } } // See if there's some data we've lost. 
Ideally, we should drop to a block boundary, // but resync will be the mux's problem. void Server::skip_lost_data(Client *client) { Stream *stream = client->stream; if (stream == NULL) { return; } size_t bytes_to_send = stream->bytes_received - client->stream_pos; if (bytes_to_send > stream->backlog_size) { size_t bytes_lost = bytes_to_send - stream->backlog_size; client->stream_pos = stream->bytes_received - stream->backlog_size; client->bytes_lost += bytes_lost; ++client->num_loss_events; double loss_fraction = double(client->bytes_lost) / double(client->bytes_lost + client->bytes_sent); log(WARNING, "[%s] Client lost %lld bytes (total loss: %.2f%%), maybe too slow connection", client->remote_addr.c_str(), (long long int)(bytes_lost), 100.0 * loss_fraction); } } int Server::parse_request(Client *client) { vector<string> lines = split_lines(client->request); if (lines.empty()) { return 400; // Bad request (empty). } vector<string> request_tokens = split_tokens(lines[0]); if (request_tokens.size() < 2) { return 400; // Bad request (empty). } if (request_tokens[0] != "GET") { return 400; // Should maybe be 405 instead? } string url = request_tokens[1]; if (url.find("?backlog") == url.size() - 8) { client->stream_pos = -2; url = url.substr(0, url.size() - 8); } else { client->stream_pos = -1; } map<string, int>::const_iterator url_map_it = url_map.find(url); if (url_map_it == url_map.end()) { return 404; // Not found. } Stream *stream = streams[url_map_it->second]; if (stream->http_header.empty()) { return 503; // Service unavailable. } client->url = request_tokens[1]; client->stream = stream; if (client->stream->mark_pool != NULL) { client->fwmark = client->stream->mark_pool->get_mark(); } else { client->fwmark = 0; // No mark.
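The backlog bookkeeping used by skip_lost_data() above and by the SENDING_DATA case in process_client() boils down to two pieces of arithmetic: clamping a client that has fallen more than one backlog behind, and splitting a send across the wrap point of the circular buffer. A standalone sketch (skip_lost, SendChunk, and next_chunk are illustrative names, not Cubemap's):

```cpp
#include <assert.h>
#include <stddef.h>

// If the client has fallen more than one backlog behind, the oldest bytes are
// already overwritten; advance the client to the oldest valid byte and return
// how many bytes were lost (0 if it is still inside the backlog).
size_t skip_lost(size_t *stream_pos, size_t bytes_received, size_t backlog_size)
{
	size_t bytes_behind = bytes_received - *stream_pos;
	if (bytes_behind <= backlog_size) {
		return 0;
	}
	size_t bytes_lost = bytes_behind - backlog_size;
	*stream_pos = bytes_received - backlog_size;
	return bytes_lost;
}

// Map an absolute stream position into the circular backlog, and compute how
// much can be sent in one go before the buffer wraps around.
struct SendChunk {
	size_t offset;  // Offset into the backlog file.
	size_t bytes;   // Bytes sendable in one call.
	bool wraps;     // True if more data remains past the wrap point.
};

SendChunk next_chunk(size_t stream_pos, size_t bytes_received, size_t backlog_size)
{
	SendChunk c;
	size_t bytes_to_send = bytes_received - stream_pos;
	c.offset = stream_pos % backlog_size;
	c.wraps = (c.offset + bytes_to_send > backlog_size);
	c.bytes = c.wraps ? backlog_size - c.offset : bytes_to_send;
	return c;
}
```

In the real server, next_chunk's result corresponds to the offset/bytes_to_send pair handed to sendfile(), and the wrap case is why process_client() loops via sending_data_again.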
} if (setsockopt(client->sock, SOL_SOCKET, SO_MARK, &client->fwmark, sizeof(client->fwmark)) == -1) { if (client->fwmark != 0) { log_perror("setsockopt(SO_MARK)"); } } if (setsockopt(client->sock, SOL_SOCKET, SO_MAX_PACING_RATE, &client->stream->pacing_rate, sizeof(client->stream->pacing_rate)) == -1) { if (client->stream->pacing_rate != ~0U) { log_perror("setsockopt(SO_MAX_PACING_RATE)"); } } client->request.clear(); return 200; // OK! } void Server::construct_header(Client *client) { Stream *stream = client->stream; if (stream->encoding == Stream::STREAM_ENCODING_RAW) { client->header_or_error = stream->http_header + "\r\n" + stream->stream_header; } else if (stream->encoding == Stream::STREAM_ENCODING_METACUBE) { client->header_or_error = stream->http_header + "Content-encoding: metacube\r\n" + "\r\n"; if (!stream->stream_header.empty()) { metacube2_block_header hdr; memcpy(hdr.sync, METACUBE2_SYNC, sizeof(hdr.sync)); hdr.size = htonl(stream->stream_header.size()); hdr.flags = htons(METACUBE_FLAGS_HEADER); hdr.csum = htons(metacube2_compute_crc(&hdr)); client->header_or_error.append( string(reinterpret_cast<char *>(&hdr), sizeof(hdr))); } client->header_or_error.append(stream->stream_header); } else { assert(false); } // Switch states. client->state = Client::SENDING_HEADER; epoll_event ev; ev.events = EPOLLOUT | EPOLLET | EPOLLRDHUP; ev.data.u64 = reinterpret_cast<uint64_t>(client); if (epoll_ctl(epoll_fd, EPOLL_CTL_MOD, client->sock, &ev) == -1) { log_perror("epoll_ctl(EPOLL_CTL_MOD)"); exit(1); } } void Server::construct_error(Client *client, int error_code) { char error[256]; snprintf(error, 256, "HTTP/1.0 %d Error\r\nContent-type: text/plain\r\n\r\nSomething went wrong. Sorry.\r\n", error_code); client->header_or_error = error; // Switch states.
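construct_header() above wraps the stream header in a Metacube2 block: sync marker, payload size and flags in network byte order, then a CRC16 over size and flags. A self-contained sketch of that framing, with the CRC routine ported from metacube2.cpp (frame_block and compute_crc are illustrative names):

```cpp
#include <arpa/inet.h>
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <string>

#define METACUBE2_SYNC "cube!map"
#define METACUBE_FLAGS_HEADER 0x1
#define METACUBE_FLAGS_NOT_SUITABLE_FOR_STREAM_START 0x2
#define METACUBE2_CRC_POLYNOMIAL 0x8FDB
#define METACUBE2_CRC_START 0x1234

struct metacube2_block_header {
	char sync[8];    /* METACUBE2_SYNC */
	uint32_t size;   /* Network byte order; payload size, excluding header. */
	uint16_t flags;  /* Network byte order. */
	uint16_t csum;   /* CRC16 of size and flags. */
};

// CRC over the size and flags fields, as in metacube2_compute_crc().
uint16_t compute_crc(const metacube2_block_header *hdr)
{
	static const int data_len = sizeof(hdr->size) + sizeof(hdr->flags);
	const uint8_t *data = (const uint8_t *)&hdr->size;
	uint16_t crc = METACUBE2_CRC_START;
	for (int i = 0; i < data_len; ++i) {
		for (int j = 0; j < 8; j++) {
			int bit = crc & 0x8000;
			crc = (crc << 1) | ((data[i] >> (7 - j)) & 0x01);
			if (bit) crc ^= METACUBE2_CRC_POLYNOMIAL;
		}
	}
	for (int i = 0; i < 16; i++) {  /* Finalize. */
		int bit = crc & 0x8000;
		crc = crc << 1;
		if (bit) crc ^= METACUBE2_CRC_POLYNOMIAL;
	}
	return crc;
}

// Frame a payload the way construct_header() frames the stream header.
std::string frame_block(const std::string &payload, uint16_t flags)
{
	metacube2_block_header hdr;
	memcpy(hdr.sync, METACUBE2_SYNC, sizeof(hdr.sync));
	hdr.size = htonl(payload.size());
	hdr.flags = htons(flags);
	hdr.csum = htons(compute_crc(&hdr));
	return std::string(reinterpret_cast<const char *>(&hdr), sizeof(hdr)) + payload;
}
```

Note that the checksum covers only size and flags, not the payload; its job is to let a receiver reject a corrupted or misaligned header cheaply, and any single-bit difference in the covered fields changes the CRC.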
client->state = Client::SENDING_ERROR; epoll_event ev; ev.events = EPOLLOUT | EPOLLET | EPOLLRDHUP; ev.data.u64 = reinterpret_cast<uint64_t>(client); if (epoll_ctl(epoll_fd, EPOLL_CTL_MOD, client->sock, &ev) == -1) { log_perror("epoll_ctl(EPOLL_CTL_MOD)"); exit(1); } } template<class T> void delete_from(vector<T> *v, T elem) { typename vector<T>::iterator new_end = remove(v->begin(), v->end(), elem); v->erase(new_end, v->end()); } void Server::close_client(Client *client) { if (epoll_ctl(epoll_fd, EPOLL_CTL_DEL, client->sock, NULL) == -1) { log_perror("epoll_ctl(EPOLL_CTL_DEL)"); exit(1); } // This client could be sleeping, so we'll need to fix that. (Argh, O(n).) if (client->stream != NULL) { delete_from(&client->stream->sleeping_clients, client); delete_from(&client->stream->to_process, client); if (client->stream->mark_pool != NULL) { int fwmark = client->fwmark; client->stream->mark_pool->release_mark(fwmark); } } // Log to access_log. access_log->write(client->get_stats()); // Bye-bye! safe_close(client->sock); clients.erase(client->sock); } void Server::process_queued_data() { { MutexLock lock(&queued_clients_mutex); for (size_t i = 0; i < queued_add_clients.size(); ++i) { add_client(queued_add_clients[i]); } queued_add_clients.clear(); } for (size_t i = 0; i < streams.size(); ++i) { streams[i]->process_queued_data(); } } cubemap-1.0.4/server.h000066400000000000000000000111411231360650400146070ustar00rootroot00000000000000#ifndef _SERVER_H #define _SERVER_H 1 #include <pthread.h> #include <stddef.h> #include <stdint.h> #include <sys/epoll.h> #include <sys/types.h> #include <unistd.h> #include <map> #include <string> #include <vector> #include "client.h" #include "stream.h" #include "thread.h" class ClientProto; struct Stream; #define EPOLL_MAX_EVENTS 8192 #define EPOLL_TIMEOUT_MS 20 #define MAX_CLIENT_REQUEST 16384 class CubemapStateProto; class MarkPool; class StreamProto; class Server : public Thread { public: Server(); ~Server(); // Get the list of all currently connected clients.
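close_client() above removes the dying client from the stream's sleeping_clients and to_process vectors with delete_from(), the classic erase-remove idiom: remove() compacts the surviving elements to the front, erase() chops off the leftover tail. The same idiom standalone (erase_all is an illustrative name):

```cpp
#include <assert.h>
#include <algorithm>
#include <vector>

// Remove all occurrences of elem, then erase the tail that remove() left behind.
// remove() alone only reorders; without the erase(), the vector keeps its size.
template<class T>
void erase_all(std::vector<T> *v, T elem)
{
	typename std::vector<T>::iterator new_end = std::remove(v->begin(), v->end(), elem);
	v->erase(new_end, v->end());
}
```

This is linear in the vector length, which is what the "(Argh, O(n).)" comment in close_client() is lamenting: every disconnect scans the sleeping list.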
std::vector<ClientStats> get_client_stats() const; // Set header (both HTTP header and any stream headers) for the given stream. void set_header(int stream_index, const std::string &http_header, const std::string &stream_header); // Set that the given stream should use the given mark pool from now on. // NOTE: This should be set before any clients are connected! void set_mark_pool(int stream_index, MarkPool *mark_pool); // Set that the given stream should use the given max pacing rate from now on. // NOTE: This should be set before any clients are connected! void set_pacing_rate(int stream_index, uint32_t pacing_rate); // These will be deferred until the next time an iteration in do_work() happens, // and the order between them is undefined. // XXX: header should ideally be ordered with respect to data. void add_client_deferred(int sock); void add_data_deferred(int stream_index, const char *data, size_t bytes, StreamStartSuitability suitable_for_stream_start); // These should not be called while running, since that would violate // threading assumptions (ie., that epoll is only called from one thread // at the same time). CubemapStateProto serialize(); void add_client_from_serialized(const ClientProto &client); int add_stream(const std::string &url, size_t bytes_received, Stream::Encoding encoding); int add_stream_from_serialized(const StreamProto &stream, int data_fd); int lookup_stream_by_url(const std::string &url) const; void set_backlog_size(int stream_index, size_t new_size); void set_encoding(int stream_index, Stream::Encoding encoding); private: // Mutex protecting queued_add_clients. // Note that if you want to hold both this and <mutex> below, // you will need to take <mutex> before this one. mutable pthread_mutex_t queued_clients_mutex; // Deferred commands that should be run from the do_work() thread as soon as possible.
// We defer these for two reasons: // // - We only want to fiddle with epoll from one thread at any given time, // and doing add_client() from the acceptor thread would violate that. // - We don't want the input thread(s) hanging on <mutex> when doing // add_data(), since they want to do add_data() rather often, and // <mutex> can be taken a lot of the time. // // Protected by <queued_clients_mutex>. std::vector<int> queued_add_clients; // All variables below this line are protected by the mutex. mutable pthread_mutex_t mutex; // All streams. std::vector<Stream *> streams; // Map from URL to index into <streams>. std::map<std::string, int> url_map; // Map from file descriptor to client. std::map<int, Client> clients; // Used for epoll implementation (obviously). int epoll_fd; epoll_event events[EPOLL_MAX_EVENTS]; // The actual worker thread. virtual void do_work(); // Process a client; read and write data as far as we can. // After this call, one of these four is true: // // 1. The socket is closed, and the client deleted. // 2. We are still waiting for more data from the client. // 3. We've sent all the data we have to the client, // and put it in <sleeping_clients>. // 4. The socket buffer is full (which means we still have // data outstanding). // // For #2, we listen for EPOLLIN events. For #3 and #4, we listen // for EPOLLOUT in edge-triggered mode; it will never fire for #3, // but it's cheaper than taking it in and out all the time. void process_client(Client *client); // Close a given client socket, and clean up after it. void close_client(Client *client); // Parse the HTTP request. Returns a HTTP status code (200/400/404). int parse_request(Client *client); // Construct the HTTP header, and set the client into // the SENDING_HEADER state. void construct_header(Client *client); // Construct a generic error with the given line, and set the client into // the SENDING_ERROR state.
void construct_error(Client *client, int error_code); void process_queued_data(); void skip_lost_data(Client *client); void add_client(int sock); }; #endif // !defined(_SERVER_H) cubemap-1.0.4/serverpool.cpp000066400000000000000000000131571231360650400160430ustar00rootroot00000000000000#include #include #include #include "client.h" #include "log.h" #include "server.h" #include "serverpool.h" #include "state.pb.h" #include "udpstream.h" #include "util.h" struct sockaddr_in6; using namespace std; ServerPool::ServerPool(int size) : servers(new Server[size]), num_servers(size), clients_added(0), num_http_streams(0) { } ServerPool::~ServerPool() { delete[] servers; for (size_t i = 0; i < udp_streams.size(); ++i) { delete udp_streams[i]; } } CubemapStateProto ServerPool::serialize() { CubemapStateProto state; for (int i = 0; i < num_servers; ++i) { CubemapStateProto local_state = servers[i].serialize(); // The stream state should be identical between the servers, so we only store it once, // save for the fds, which we keep around to distribute to the servers after re-exec. 
if (i == 0) { state.mutable_streams()->MergeFrom(local_state.streams()); } else { assert(state.streams_size() == local_state.streams_size()); for (int j = 0; j < local_state.streams_size(); ++j) { assert(local_state.streams(j).data_fds_size() == 1); state.mutable_streams(j)->add_data_fds(local_state.streams(j).data_fds(0)); } } for (int j = 0; j < local_state.clients_size(); ++j) { state.add_clients()->MergeFrom(local_state.clients(j)); } } return state; } void ServerPool::add_client(int sock) { servers[clients_added++ % num_servers].add_client_deferred(sock); } void ServerPool::add_client_from_serialized(const ClientProto &client) { servers[clients_added++ % num_servers].add_client_from_serialized(client); } int ServerPool::lookup_stream_by_url(const std::string &url) const { assert(servers != NULL); return servers[0].lookup_stream_by_url(url); } int ServerPool::add_stream(const string &url, size_t backlog_size, Stream::Encoding encoding) { // Adding more HTTP streams after UDP streams would cause the UDP stream // indices to move around, which is obviously not good. assert(udp_streams.empty()); for (int i = 0; i < num_servers; ++i) { int stream_index = servers[i].add_stream(url, backlog_size, encoding); assert(stream_index == num_http_streams); } return num_http_streams++; } int ServerPool::add_stream_from_serialized(const StreamProto &stream, const vector<int> &data_fds) { // Adding more HTTP streams after UDP streams would cause the UDP stream // indices to move around, which is obviously not good. assert(udp_streams.empty()); assert(!data_fds.empty()); string contents; for (int i = 0; i < num_servers; ++i) { int data_fd; if (i < int(data_fds.size())) { // Reuse one of the existing file descriptors. data_fd = data_fds[i]; } else { // Clone the first one.
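ServerPool::add_client() above spreads incoming sockets over the worker Servers with a plain counter modulo num_servers; each new connection lands on the next server in the cycle. Sketched standalone (RoundRobin and pick are illustrative names):

```cpp
#include <assert.h>

// Round-robin picker, in the spirit of ServerPool::add_client(): each call
// returns the index of the next server, cycling through 0..num_servers-1.
class RoundRobin {
public:
	explicit RoundRobin(int num_servers) : num_servers(num_servers), clients_added(0) {}
	int pick() { return clients_added++ % num_servers; }
private:
	int num_servers;
	int clients_added;
};
```

This keeps the per-Server client counts within one of each other without any shared state beyond the counter, which is why the pool can stay oblivious to what each worker thread is doing.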
			if (contents.empty()) {
				if (!read_tempfile(data_fds[0], &contents)) {
					exit(1);
				}
			}
			data_fd = make_tempfile(contents);
		}

		int stream_index = servers[i].add_stream_from_serialized(stream, data_fd);
		assert(stream_index == num_http_streams);
	}

	// Close and delete any leftovers, if the number of servers was reduced.
	for (size_t i = num_servers; i < data_fds.size(); ++i) {
		safe_close(data_fds[i]);  // Implicitly deletes the file.
	}

	return num_http_streams++;
}

int ServerPool::add_udpstream(const sockaddr_in6 &dst, MarkPool *mark_pool, int pacing_rate)
{
	udp_streams.push_back(new UDPStream(dst, mark_pool, pacing_rate));
	return num_http_streams + udp_streams.size() - 1;
}

void ServerPool::set_header(int stream_index, const string &http_header, const string &stream_header)
{
	assert(stream_index >= 0 && stream_index < ssize_t(num_http_streams + udp_streams.size()));

	if (stream_index >= num_http_streams) {
		// UDP stream. TODO: Log which stream this is.
		if (!stream_header.empty()) {
			log(WARNING, "Trying to send stream format with headers to a UDP destination. This is unlikely to work well.");
		}

		// Ignore the HTTP header.
		return;
	}

	// HTTP stream.
	for (int i = 0; i < num_servers; ++i) {
		servers[i].set_header(stream_index, http_header, stream_header);
	}
}

void ServerPool::add_data(int stream_index, const char *data, size_t bytes, StreamStartSuitability suitable_for_stream_start)
{
	assert(stream_index >= 0 && stream_index < ssize_t(num_http_streams + udp_streams.size()));

	if (stream_index >= num_http_streams) {
		// UDP stream.
		udp_streams[stream_index - num_http_streams]->send(data, bytes);
		return;
	}

	// HTTP stream.
	for (int i = 0; i < num_servers; ++i) {
		servers[i].add_data_deferred(stream_index, data, bytes, suitable_for_stream_start);
	}
}

void ServerPool::run()
{
	for (int i = 0; i < num_servers; ++i) {
		servers[i].run();
	}
}

void ServerPool::stop()
{
	for (int i = 0; i < num_servers; ++i) {
		servers[i].stop();
	}
}

vector<ClientStats> ServerPool::get_client_stats() const
{
	vector<ClientStats> ret;
	for (int i = 0; i < num_servers; ++i) {
		vector<ClientStats> stats = servers[i].get_client_stats();
		ret.insert(ret.end(), stats.begin(), stats.end());
	}
	return ret;
}

void ServerPool::set_mark_pool(int stream_index, MarkPool *mark_pool)
{
	for (int i = 0; i < num_servers; ++i) {
		servers[i].set_mark_pool(stream_index, mark_pool);
	}
}

void ServerPool::set_pacing_rate(int stream_index, uint32_t pacing_rate)
{
	for (int i = 0; i < num_servers; ++i) {
		servers[i].set_pacing_rate(stream_index, pacing_rate);
	}
}

void ServerPool::set_backlog_size(int stream_index, size_t new_size)
{
	for (int i = 0; i < num_servers; ++i) {
		servers[i].set_backlog_size(stream_index, new_size);
	}
}

void ServerPool::set_encoding(int stream_index, Stream::Encoding encoding)
{
	for (int i = 0; i < num_servers; ++i) {
		servers[i].set_encoding(stream_index, encoding);
	}
}

--- cubemap-1.0.4/serverpool.h ---

#ifndef _SERVERPOOL_H
#define _SERVERPOOL_H 1

#include <stddef.h>
#include <stdint.h>
#include <string>
#include <vector>

#include "server.h"
#include "state.pb.h"
#include "stream.h"
#include "udpstream.h"

class MarkPool;
class Server;
class UDPStream;
struct ClientStats;
struct sockaddr_in6;

// Provides services such as load-balancing between a number of Server instances.
class ServerPool {
public:
	ServerPool(int num_servers);
	~ServerPool();

	// Fills streams() and clients().
	CubemapStateProto serialize();

	// Picks a server (round-robin) and allocates the given client to it.
	void add_client(int sock);
	void add_client_from_serialized(const ClientProto &client);

	// Adds the given stream to all the servers. Returns the stream index.
	int add_stream(const std::string &url, size_t backlog_size, Stream::Encoding encoding);
	int add_stream_from_serialized(const StreamProto &stream, const std::vector<int> &data_fds);
	void delete_stream(const std::string &url);
	int add_udpstream(const sockaddr_in6 &dst, MarkPool *mark_pool, int pacing_rate);

	// Returns the stream index for the given URL (e.g. /foo.ts). Returns -1 on failure.
	int lookup_stream_by_url(const std::string &url) const;

	// Adds the given data to all the servers.
	void set_header(int stream_index, const std::string &http_header, const std::string &stream_header);
	void add_data(int stream_index, const char *data, size_t bytes, StreamStartSuitability suitable_for_stream_start);

	// Connects the given stream to the given mark pool for all the servers.
	void set_mark_pool(int stream_index, MarkPool *mark_pool);

	// Sets the max pacing rate for all the servers.
	void set_pacing_rate(int stream_index, uint32_t pacing_rate);

	// Changes the given stream's backlog size on all the servers.
	void set_backlog_size(int stream_index, size_t new_size);

	// Changes the given stream's encoding type on all the servers.
	void set_encoding(int stream_index, Stream::Encoding encoding);

	// Starts all the servers.
	void run();

	// Stops all the servers.
	void stop();

	std::vector<ClientStats> get_client_stats() const;

private:
	Server *servers;
	int num_servers, clients_added;

	// Our indexing is currently rather primitive; every stream_index in
	// [0, num_http_streams) maps to a HTTP stream (of which every Server
	// has exactly one copy), and after that, it's mapping directly into
	// <udp_streams>.
	int num_http_streams;
	std::vector<UDPStream *> udp_streams;

	ServerPool(const ServerPool &);
};

#endif  // !defined(_SERVERPOOL_H)

--- cubemap-1.0.4/state.proto ---

// Corresponds to struct Client.
message ClientProto {
	optional int32 sock = 1;
	optional string remote_addr = 8;
	optional int64 connect_time = 9;
	optional int32 state = 2;
	optional bytes request = 3;
	optional string url = 4;
	optional bytes header_or_error = 5;
	optional int64 header_or_error_bytes_sent = 6;
	optional int64 stream_pos = 7;
	optional int64 bytes_sent = 10;
	optional int64 bytes_lost = 11;
	optional int64 num_loss_events = 12;
};

// Corresponds to struct Stream.
message StreamProto {
	optional bytes http_header = 6;
	optional bytes stream_header = 7;
	repeated int32 data_fds = 8;
	optional int64 backlog_size = 5 [default=1048576];
	optional int64 bytes_received = 3;
	optional int64 last_suitable_starting_point = 9;
	optional string url = 4;
};

// Corresponds to class Input.
message InputProto {
	optional int32 state = 1;
	optional string url = 3;
	optional bytes request = 4;
	optional int32 request_bytes_sent = 5;
	optional bytes response = 6;
	optional bytes http_header = 10;
	optional bytes stream_header = 14;
	optional bytes pending_data = 7;
	optional bool has_metacube_header = 8;
	optional int32 sock = 9;
	optional int64 bytes_received = 11;
	optional int64 data_bytes_received = 12;
	optional int64 connect_time = 13;
};

// Corresponds to class Acceptor.
message AcceptorProto {
	optional int32 server_sock = 1;
	optional int32 port = 2;
	optional string addr = 3;  // As a string. Empty is equivalent to "::".
};

message CubemapStateProto {
	optional int64 serialize_start_sec = 6;
	optional int64 serialize_start_usec = 7;
	repeated ClientProto clients = 1;
	repeated StreamProto streams = 2;
	repeated InputProto inputs = 5;
	repeated AcceptorProto acceptors = 8;
};

--- cubemap-1.0.4/stats.cpp ---

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <string>
#include <vector>

#include "client.h"
#include "log.h"
#include "serverpool.h"
#include "stats.h"
#include "util.h"

using namespace std;

extern ServerPool *servers;

StatsThread::StatsThread(const std::string &stats_file, int stats_interval)
	: stats_file(stats_file),
	  stats_interval(stats_interval)
{
}

void StatsThread::do_work()
{
	while (!should_stop()) {
		int fd;
		FILE *fp;
		time_t now;
		vector<ClientStats> client_stats;

		// Open a new, temporary file.
		char *filename = strdup((stats_file + ".new.XXXXXX").c_str());
		fd = mkostemp(filename, O_WRONLY);
		if (fd == -1) {
			log_perror(filename);
			free(filename);
			goto sleep;
		}

		fp = fdopen(fd, "w");
		if (fp == NULL) {
			log_perror("fdopen");
			safe_close(fd);
			if (unlink(filename) == -1) {
				log_perror(filename);
			}
			free(filename);
			goto sleep;
		}

		now = time(NULL);
		client_stats = servers->get_client_stats();
		for (size_t i = 0; i < client_stats.size(); ++i) {
			fprintf(fp, "%s %d %d %s %d %llu %llu %llu\n",
				client_stats[i].remote_addr.c_str(),
				client_stats[i].sock,
				client_stats[i].fwmark,
				client_stats[i].url.c_str(),
				int(now - client_stats[i].connect_time),
				(long long unsigned)(client_stats[i].bytes_sent),
				(long long unsigned)(client_stats[i].bytes_lost),
				(long long unsigned)(client_stats[i].num_loss_events));
		}
		if (fclose(fp) == EOF) {
			log_perror("fclose");
			if (unlink(filename) == -1) {
				log_perror(filename);
			}
			free(filename);
			goto sleep;
		}

		if (rename(filename, stats_file.c_str()) == -1) {
			log_perror("rename");
			if (unlink(filename) == -1) {
				log_perror(filename);
			}
		}
		free(filename);

sleep:
		// Wait until we are asked to quit, stats_interval timeout, or a
		// spurious signal. (The latter will cause us to write stats
		// too often, but that's okay.)
		timespec timeout_ts;
		timeout_ts.tv_sec = stats_interval;
		timeout_ts.tv_nsec = 0;
		wait_for_wakeup(&timeout_ts);
	}
}

--- cubemap-1.0.4/stats.h ---

#ifndef _STATS_H
#define _STATS_H 1

#include "thread.h"

#include <string>

// A thread that regularly writes out statistics, ie. a list of all connected clients
// with some information about each.
class StatsThread : public Thread {
public:
	StatsThread(const std::string &stats_file, int stats_interval);

private:
	virtual void do_work();

	std::string stats_file;
	int stats_interval;
};

#endif  // !defined(_STATS_H)

--- cubemap-1.0.4/stream.cpp ---

#include <assert.h>
#include <errno.h>
#include <limits.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>
#include <algorithm>
#include <string>
#include <vector>

#include "log.h"
#include "metacube2.h"
#include "mutexlock.h"
#include "state.pb.h"
#include "stream.h"
#include "util.h"

using namespace std;

Stream::Stream(const string &url, size_t backlog_size, Encoding encoding)
	: url(url),
	  encoding(encoding),
	  data_fd(make_tempfile("")),
	  backlog_size(backlog_size),
	  bytes_received(0),
	  last_suitable_starting_point(-1),
	  mark_pool(NULL),
	  pacing_rate(~0U),
	  queued_data_last_starting_point(-1)
{
	if (data_fd == -1) {
		exit(1);
	}

	pthread_mutex_init(&queued_data_mutex, NULL);
}

Stream::~Stream()
{
	if (data_fd != -1) {
		safe_close(data_fd);
	}
}

Stream::Stream(const StreamProto &serialized, int data_fd)
	: url(serialized.url()),
	  http_header(serialized.http_header()),
	  stream_header(serialized.stream_header()),
	  encoding(Stream::STREAM_ENCODING_RAW),  // Will be changed later.
	  data_fd(data_fd),
	  backlog_size(serialized.backlog_size()),
	  bytes_received(serialized.bytes_received()),
	  mark_pool(NULL),
	  pacing_rate(~0U),
	  queued_data_last_starting_point(-1)
{
	if (data_fd == -1) {
		exit(1);
	}

	assert(serialized.has_last_suitable_starting_point());
	last_suitable_starting_point = serialized.last_suitable_starting_point();

	pthread_mutex_init(&queued_data_mutex, NULL);
}

StreamProto Stream::serialize()
{
	StreamProto serialized;
	serialized.set_http_header(http_header);
	serialized.set_stream_header(stream_header);
	serialized.add_data_fds(data_fd);
	serialized.set_backlog_size(backlog_size);
	serialized.set_bytes_received(bytes_received);
	serialized.set_last_suitable_starting_point(last_suitable_starting_point);
	serialized.set_url(url);
	data_fd = -1;
	return serialized;
}

void Stream::set_backlog_size(size_t new_size)
{
	if (backlog_size == new_size) {
		return;
	}

	string existing_data;
	if (!read_tempfile_and_close(data_fd, &existing_data)) {
		exit(1);
	}

	// Unwrap the data so it's no longer circular.
	if (bytes_received <= backlog_size) {
		existing_data.resize(bytes_received);
	} else {
		size_t pos = bytes_received % backlog_size;
		existing_data = existing_data.substr(pos, string::npos) +
			existing_data.substr(0, pos);
	}

	// See if we need to discard data.
	if (new_size < existing_data.size()) {
		size_t to_discard = existing_data.size() - new_size;
		existing_data = existing_data.substr(to_discard, string::npos);
	}

	// Create a new, empty data file.
	data_fd = make_tempfile("");
	if (data_fd == -1) {
		exit(1);
	}
	backlog_size = new_size;

	// Now cheat a bit by rewinding, and adding all the old data back.
	bytes_received -= existing_data.size();

	iovec iov;
	iov.iov_base = const_cast<char *>(existing_data.data());
	iov.iov_len = existing_data.size();

	vector<iovec> iovs;
	iovs.push_back(iov);
	add_data_raw(iovs);
}

void Stream::put_client_to_sleep(Client *client)
{
	sleeping_clients.push_back(client);
}

// Return a new set of iovecs that contains only the first <bytes_wanted> bytes of <data>.
vector<iovec> collect_iovecs(const vector<iovec> &data, size_t bytes_wanted)
{
	vector<iovec> ret;
	size_t max_iovecs = std::min<size_t>(data.size(), IOV_MAX);
	for (size_t i = 0; i < max_iovecs && bytes_wanted > 0; ++i) {
		if (data[i].iov_len <= bytes_wanted) {
			// Consume the entire iovec.
			ret.push_back(data[i]);
			bytes_wanted -= data[i].iov_len;
		} else {
			// Take only parts of this iovec.
			iovec iov;
			iov.iov_base = data[i].iov_base;
			iov.iov_len = bytes_wanted;
			ret.push_back(iov);
			bytes_wanted = 0;
		}
	}
	return ret;
}

// Return a new set of iovecs that contains all of <data> except the first <bytes_wanted> bytes.
vector<iovec> remove_iovecs(const vector<iovec> &data, size_t bytes_wanted)
{
	vector<iovec> ret;
	size_t i;
	for (i = 0; i < data.size() && bytes_wanted > 0; ++i) {
		if (data[i].iov_len <= bytes_wanted) {
			// Consume the entire iovec.
			bytes_wanted -= data[i].iov_len;
		} else {
			// Take only parts of this iovec.
			iovec iov;
			iov.iov_base = reinterpret_cast<char *>(data[i].iov_base) + bytes_wanted;
			iov.iov_len = data[i].iov_len - bytes_wanted;
			ret.push_back(iov);
			bytes_wanted = 0;
		}
	}

	// Add the rest of the iovecs unchanged.
	ret.insert(ret.end(), data.begin() + i, data.end());
	return ret;
}

void Stream::add_data_raw(const vector<iovec> &orig_data)
{
	vector<iovec> data = orig_data;
	while (!data.empty()) {
		size_t pos = bytes_received % backlog_size;

		// Collect as many iovecs as we can before we hit the point
		// where the circular buffer wraps around.
		vector<iovec> to_write = collect_iovecs(data, backlog_size - pos);
		ssize_t ret;
		do {
			ret = pwritev(data_fd, to_write.data(), to_write.size(), pos);
		} while (ret == -1 && errno == EINTR);

		if (ret == -1) {
			log_perror("pwritev");
			// Dazed and confused, but trying to continue...
			return;
		}
		bytes_received += ret;

		// Remove the data that was actually written from the set of iovecs.
		data = remove_iovecs(data, ret);
	}
}

void Stream::add_data_deferred(const char *data, size_t bytes, StreamStartSuitability suitable_for_stream_start)
{
	MutexLock lock(&queued_data_mutex);
	assert(suitable_for_stream_start == SUITABLE_FOR_STREAM_START ||
	       suitable_for_stream_start == NOT_SUITABLE_FOR_STREAM_START);
	if (suitable_for_stream_start == SUITABLE_FOR_STREAM_START) {
		queued_data_last_starting_point = queued_data.size();
	}

	if (encoding == Stream::STREAM_ENCODING_METACUBE) {
		// Add a Metacube block header before the data.
		metacube2_block_header hdr;
		memcpy(hdr.sync, METACUBE2_SYNC, sizeof(hdr.sync));
		hdr.size = htonl(bytes);
		hdr.flags = htons(0);
		if (suitable_for_stream_start == NOT_SUITABLE_FOR_STREAM_START) {
			hdr.flags |= htons(METACUBE_FLAGS_NOT_SUITABLE_FOR_STREAM_START);
		}
		hdr.csum = htons(metacube2_compute_crc(&hdr));

		iovec iov;
		iov.iov_base = new char[bytes + sizeof(hdr)];
		iov.iov_len = bytes + sizeof(hdr);

		memcpy(iov.iov_base, &hdr, sizeof(hdr));
		memcpy(reinterpret_cast<char *>(iov.iov_base) + sizeof(hdr), data, bytes);
		queued_data.push_back(iov);
	} else if (encoding == Stream::STREAM_ENCODING_RAW) {
		// Just add the data itself.
		iovec iov;
		iov.iov_base = new char[bytes];
		memcpy(iov.iov_base, data, bytes);
		iov.iov_len = bytes;
		queued_data.push_back(iov);
	} else {
		assert(false);
	}
}

void Stream::process_queued_data()
{
	std::vector<iovec> queued_data_copy;
	int queued_data_last_starting_point_copy = -1;

	// Hold the lock for as short as possible, since add_data_raw() can possibly
	// write to disk, which might disturb the input thread.
	{
		MutexLock lock(&queued_data_mutex);
		if (queued_data.empty()) {
			return;
		}

		swap(queued_data, queued_data_copy);
		swap(queued_data_last_starting_point, queued_data_last_starting_point_copy);
	}

	// Update the last suitable starting point for the stream,
	// if the queued data contains such a starting point.
	assert(queued_data_last_starting_point_copy < ssize_t(queued_data_copy.size()));
	if (queued_data_last_starting_point_copy >= 0) {
		last_suitable_starting_point = bytes_received;
		for (int i = 0; i < queued_data_last_starting_point_copy; ++i) {
			last_suitable_starting_point += queued_data_copy[i].iov_len;
		}
	}

	add_data_raw(queued_data_copy);
	for (size_t i = 0; i < queued_data_copy.size(); ++i) {
		char *data = reinterpret_cast<char *>(queued_data_copy[i].iov_base);
		delete[] data;
	}

	// We have more data, so wake up all clients.
	if (to_process.empty()) {
		swap(sleeping_clients, to_process);
	} else {
		to_process.insert(to_process.end(), sleeping_clients.begin(), sleeping_clients.end());
		sleeping_clients.clear();
	}
}

--- cubemap-1.0.4/stream.h ---

#ifndef _STREAM_H
#define _STREAM_H 1

// Representation of a single, muxed (we only really care about bytes/blocks) stream.
// Fed by Input, sent out by Server (to Client).

#include <pthread.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <string>
#include <vector>

class MarkPool;
class StreamProto;
struct Client;

enum StreamStartSuitability {
	NOT_SUITABLE_FOR_STREAM_START,
	SUITABLE_FOR_STREAM_START,
};

struct Stream {
	// Must be in sync with StreamConfig::Encoding.
	enum Encoding { STREAM_ENCODING_RAW = 0, STREAM_ENCODING_METACUBE };

	Stream(const std::string &stream_id, size_t backlog_size, Encoding encoding);
	~Stream();

	// Serialization/deserialization.
	Stream(const StreamProto &serialized, int data_fd);
	StreamProto serialize();

	// Changes the backlog size, restructuring the data as needed.
	void set_backlog_size(size_t new_size);

	// Mutex protecting <queued_data> and <queued_data_last_starting_point>.
	// Note that if you want to hold both this and the owning server's
	// <mutex> you will need to take <mutex> before this one.
	mutable pthread_mutex_t queued_data_mutex;

	std::string url;

	// The HTTP response header, without the trailing double newline.
	std::string http_header;

	// The video stream header (if any).
	std::string stream_header;

	// What encoding we apply to the outgoing data (usually raw, but can also
	// be Metacube, for reflecting to another Cubemap instance).
	Encoding encoding;

	// The stream data itself, stored in a circular buffer.
	//
	// We store our data in a file, so that we can send the data to the
	// kernel only once (with write()). We then use sendfile() for each
	// client, which effectively zero-copies it out of the kernel's buffer
	// cache. This is significantly more efficient than doing write() from
	// a userspace memory buffer, since the latter makes the kernel copy
	// the same data from userspace many times.
	int data_fd;

	// How many bytes <data_fd> can hold (the buffer size).
	size_t backlog_size;

	// How many bytes this stream has received. Can very well be larger
	// than <backlog_size>, since the buffer wraps.
	size_t bytes_received;

	// The last point in the stream that is suitable to start new clients at
	// (after having sent the header). -1 if no such point exists yet.
	ssize_t last_suitable_starting_point;

	// Clients that are in SENDING_DATA, but that we don't listen on,
	// because we currently don't have any data for them.
	// See put_client_to_sleep() and wake_up_all_clients().
	std::vector<Client *> sleeping_clients;

	// Clients that we recently got data for (when they were in
	// <sleeping_clients>).
	std::vector<Client *> to_process;

	// What pool to fetch marks from, or NULL.
	MarkPool *mark_pool;

	// Maximum pacing rate for the stream.
	uint32_t pacing_rate;

	// Queued data, if any. Protected by <queued_data_mutex>.
	// The data pointers in the iovec are owned by us.
	std::vector<iovec> queued_data;

	// Index of the last element in queued_data that is suitable to start streaming at.
	// -1 if none. Protected by <queued_data_mutex>.
	int queued_data_last_starting_point;

	// Put client to sleep, since there is no more data for it; we will no
	// longer listen on POLLOUT until we get more data. Also, it will be put
	// in the list of clients to wake up when we do.
	void put_client_to_sleep(Client *client);

	// Add more data to <queued_data>, adding Metacube headers if needed.
	// Does not take ownership of <data>.
	void add_data_deferred(const char *data, size_t bytes, StreamStartSuitability suitable_for_stream_start);

	// Add queued data to the stream, if any.
	// You should hold the owning Server's <mutex>.
	void process_queued_data();

private:
	Stream(const Stream& other);

	// Adds data directly to the stream file descriptor, without adding headers or
	// going through <queued_data>.
	// You should hold the owning Server's <mutex>.
	void add_data_raw(const std::vector<iovec> &data);
};

#endif  // !defined(_STREAM_H)

--- cubemap-1.0.4/thread.cpp ---

#include <assert.h>
#include <errno.h>
#include <poll.h>
#include <pthread.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

#include "log.h"
#include "mutexlock.h"
#include "thread.h"

Thread::~Thread() {}

void Thread::run()
{
	pthread_mutex_init(&should_stop_mutex, NULL);
	should_stop_status = false;
	pthread_create(&worker_thread, NULL, &Thread::do_work_thunk, this);
}

void Thread::stop()
{
	{
		MutexLock lock(&should_stop_mutex);
		should_stop_status = true;
	}
	wakeup();
	if (pthread_join(worker_thread, NULL) == -1) {
		log_perror("pthread_join");
		exit(1);
	}
}

void *Thread::do_work_thunk(void *arg)
{
	Thread *thread = reinterpret_cast<Thread *>(arg);

	// Block SIGHUP; only the main thread should see that.
	// (This isn't strictly required, but it makes it easier to debug that indeed
	// SIGUSR1 was what woke us up.)
	sigset_t set;
	sigemptyset(&set);
	sigaddset(&set, SIGHUP);
	int err = pthread_sigmask(SIG_BLOCK, &set, NULL);
	if (err != 0) {
		errno = err;
		log_perror("pthread_sigmask");
		exit(1);
	}

	// Block SIGUSR1, and store the old signal mask.
	sigemptyset(&set);
	sigaddset(&set, SIGUSR1);
	err = pthread_sigmask(SIG_BLOCK, &set, &thread->sigset_without_usr1_block);
	if (err != 0) {
		errno = err;
		log_perror("pthread_sigmask");
		exit(1);
	}

	// Call the right thunk.
	thread->do_work();
	return NULL;
}

bool Thread::wait_for_activity(int fd, short events, const struct timespec *timeout_ts)
{
	pollfd pfd;
	pfd.fd = fd;
	pfd.events = events;

	for ( ;; ) {
		int nfds = ppoll(&pfd, (fd == -1) ? 0 : 1, timeout_ts, &sigset_without_usr1_block);
		if (nfds == -1 && errno == EINTR) {
			return false;
		}
		if (nfds == -1) {
			log_perror("poll");
			usleep(100000);
			continue;
		}
		assert(nfds <= 1);
		return (nfds == 1);
	}
}

void Thread::wakeup()
{
	pthread_kill(worker_thread, SIGUSR1);
}

bool Thread::should_stop()
{
	MutexLock lock(&should_stop_mutex);
	return should_stop_status;
}

--- cubemap-1.0.4/thread.h ---

#ifndef _THREAD_H
#define _THREAD_H

#include <pthread.h>
#include <signal.h>

struct timespec;

// A thread class with start/stop and signal functionality.
//
// SIGUSR1 is blocked during execution of do_work(), so that you are guaranteed
// to receive it when doing wait_for_activity(), and never elsewhere. This means
// that you can test whatever status flags you'd want before calling
// wait_for_activity(), and then be sure that it actually returns immediately
// if a SIGUSR1 (ie., wakeup()) happened, even if it were sent between your test
// and the wait_for_activity() call.
class Thread {
public:
	virtual ~Thread();
	void run();
	void stop();

protected:
	// Recovers the this pointer, blocks SIGUSR1, and calls do_work().
	static void *do_work_thunk(void *arg);

	virtual void do_work() = 0;

	// Waits until there is activity of the given type on <fd> (or an error),
	// or until a wakeup. Returns true if there was actually activity on
	// the file descriptor.
	//
	// If fd is -1, wait until a wakeup or timeout.
	// If timeout_ts is NULL, there is no timeout.
	bool wait_for_activity(int fd, short events, const timespec *timeout_ts);

	// Wait until a wakeup.
	void wait_for_wakeup(const timespec *timeout_ts) { wait_for_activity(-1, 0, timeout_ts); }

	// Make wait_for_activity() return. Note that this is a relatively expensive
	// operation.
	void wakeup();

	bool should_stop();

	// The signal set as it were before we blocked SIGUSR1.
	sigset_t sigset_without_usr1_block;

private:
	pthread_t worker_thread;

	// Protects should_stop_status.
	pthread_mutex_t should_stop_mutex;

	// If this is set, the thread should return as soon as possible from do_work().
	bool should_stop_status;
};

#endif  // !defined(_THREAD_H)

--- cubemap-1.0.4/udpinput.cpp ---

#include <assert.h>
#include <errno.h>
#include <poll.h>
#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>
#include <string>
#include <vector>

#include "acceptor.h"
#include "log.h"
#include "mutexlock.h"
#include "serverpool.h"
#include "state.pb.h"
#include "stream.h"
#include "udpinput.h"
#include "util.h"
#include "version.h"

using namespace std;

extern ServerPool *servers;

UDPInput::UDPInput(const string &url)
	: url(url),
	  sock(-1)
{
	// Should be verified by the caller.
	string protocol;
	bool ok = parse_url(url, &protocol, &host, &port, &path);
	assert(ok);

	construct_header();

	pthread_mutex_init(&stats_mutex, NULL);
	stats.url = url;
	stats.bytes_received = 0;
	stats.data_bytes_received = 0;
	stats.connect_time = time(NULL);
}

UDPInput::UDPInput(const InputProto &serialized)
	: url(serialized.url()),
	  sock(serialized.sock())
{
	// Should be verified by the caller.
	string protocol;
	bool ok = parse_url(url, &protocol, &host, &port, &path);
	assert(ok);

	construct_header();

	pthread_mutex_init(&stats_mutex, NULL);
	stats.url = url;
	stats.bytes_received = serialized.bytes_received();
	stats.data_bytes_received = serialized.data_bytes_received();
	if (serialized.has_connect_time()) {
		stats.connect_time = serialized.connect_time();
	} else {
		stats.connect_time = time(NULL);
	}
}

InputProto UDPInput::serialize() const
{
	InputProto serialized;
	serialized.set_url(url);
	serialized.set_sock(sock);
	serialized.set_bytes_received(stats.bytes_received);
	serialized.set_data_bytes_received(stats.data_bytes_received);
	serialized.set_connect_time(stats.connect_time);
	return serialized;
}

void UDPInput::close_socket()
{
	safe_close(sock);
	sock = -1;
}

void UDPInput::construct_header()
{
	http_header =
		"HTTP/1.0 200 OK\r\n"
		"Content-type: application/octet-stream\r\n"
		"Cache-control: no-cache\r\n"
		"Server: " SERVER_IDENTIFICATION "\r\n"
		"Connection: close\r\n";
}

void UDPInput::add_destination(int stream_index)
{
	stream_indices.push_back(stream_index);
	servers->set_header(stream_index, http_header, "");
}

void UDPInput::do_work()
{
	while (!should_stop()) {
		if (sock == -1) {
			int port_num = atoi(port.c_str());
			sockaddr_in6 addr = CreateAnyAddress(port_num);
			sock = create_server_socket(addr, UDP_SOCKET);
			if (sock == -1) {
				log(WARNING, "[%s] UDP socket creation failed. Waiting 0.2 seconds and trying again...",
				    url.c_str());
				usleep(200000);
				continue;
			}
		}

		// Wait for a packet, or a wakeup.
		bool activity = wait_for_activity(sock, POLLIN, NULL);
		if (!activity) {
			// Most likely, should_stop was set.
			continue;
		}

		int ret;
		do {
			ret = recv(sock, packet_buf, sizeof(packet_buf), 0);
		} while (ret == -1 && errno == EINTR);

		if (ret <= 0) {
			log_perror("recv");
			close_socket();
			continue;
		}

		{
			MutexLock lock(&stats_mutex);
			stats.bytes_received += ret;
			stats.data_bytes_received += ret;
		}

		for (size_t i = 0; i < stream_indices.size(); ++i) {
			servers->add_data(stream_indices[i], packet_buf, ret, SUITABLE_FOR_STREAM_START);
		}
	}
}

InputStats UDPInput::get_stats() const
{
	MutexLock lock(&stats_mutex);
	return stats;
}

--- cubemap-1.0.4/udpinput.h ---

#ifndef _UDPINPUT_H
#define _UDPINPUT_H 1

#include <pthread.h>
#include <string>
#include <vector>

#include "input.h"

class InputProto;

class UDPInput : public Input {
public:
	UDPInput(const std::string &url);

	// Serialization/deserialization.
	UDPInput(const InputProto &serialized);
	virtual InputProto serialize() const;

	virtual std::string get_url() const { return url; }

	virtual void close_socket();

	virtual void add_destination(int stream_index);

	virtual InputStats get_stats() const;

private:
	// Actually gets the packets.
	virtual void do_work();

	// Create the HTTP header.
	void construct_header();

	std::vector<int> stream_indices;

	// The URL and its parsed components.
	std::string url;
	std::string host, port, path;

	// The HTTP header we're sending to clients.
	std::string http_header;

	// The socket we are receiving on (or -1).
	int sock;

	// Temporary buffer, sized for the maximum size of a UDP packet.
	char packet_buf[65536];

	// Mutex protecting <stats>.
	mutable pthread_mutex_t stats_mutex;

	// The current statistics for this connection. Protected by <stats_mutex>.
	InputStats stats;
};

#endif  // !defined(_UDPINPUT_H)

--- cubemap-1.0.4/udpstream.cpp ---

#include <netinet/in.h>
#include <sys/socket.h>

#include "log.h"
#include "markpool.h"
#include "udpstream.h"
#include "util.h"

#ifndef SO_MAX_PACING_RATE
#define SO_MAX_PACING_RATE 47
#endif

UDPStream::UDPStream(const sockaddr_in6 &dst, MarkPool *mark_pool, uint32_t pacing_rate)
	: dst(dst),
	  mark_pool(mark_pool),
	  fwmark(0),
	  pacing_rate(pacing_rate)
{
	sock = socket(AF_INET6, SOCK_DGRAM, IPPROTO_UDP);
	if (sock == -1) {
		// Oops. Ignore this output, then.
		log_perror("socket");
		return;
	}

	if (mark_pool != NULL) {
		fwmark = mark_pool->get_mark();
		if (setsockopt(sock, SOL_SOCKET, SO_MARK, &fwmark, sizeof(fwmark)) == -1) {
			if (fwmark != 0) {
				log_perror("setsockopt(SO_MARK)");
			}
		}
	}

	if (setsockopt(sock, SOL_SOCKET, SO_MAX_PACING_RATE, &pacing_rate, sizeof(pacing_rate)) == -1) {
		if (pacing_rate != ~0U) {
			log_perror("setsockopt(SO_MAX_PACING_RATE)");
		}
	}
}

UDPStream::~UDPStream()
{
	if (sock != -1) {
		safe_close(sock);
	}
	if (mark_pool != NULL) {
		mark_pool->release_mark(fwmark);
	}
}

void UDPStream::send(const char *data, size_t bytes)
{
	if (sock == -1) {
		return;
	}
	ssize_t err = sendto(sock, data, bytes, 0, reinterpret_cast<sockaddr *>(&dst), sizeof(dst));
	if (err == -1) {
		log_perror("sendto");
	}
}

--- cubemap-1.0.4/udpstream.h ---

#ifndef _UDPSTREAM_H
#define _UDPSTREAM_H 1

// A single UDP destination. This is done a lot less efficiently than HTTP streaming,
// since we expect to have so few of them, which also means things are a lot simpler.
// In particular, we run in the input's thread, so there is no backlog, which means
// that there is no state (UDP itself is, of course, stateless).

#include <netinet/in.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

class MarkPool;

class UDPStream {
public:
	// <mark_pool> can be NULL. Does not take ownership of the mark pool.
	UDPStream(const sockaddr_in6 &dst, MarkPool *mark_pool, uint32_t pacing_rate);
	~UDPStream();

	void send(const char *data, size_t bytes);

private:
	UDPStream(const UDPStream& other);

	sockaddr_in6 dst;
	int sock;
	MarkPool *mark_pool;
	int fwmark;
	uint32_t pacing_rate;
};

#endif  // !defined(_UDPSTREAM_H)

--- cubemap-1.0.4/util.cpp ---

#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>
#include <string>

#include "log.h"
#include "util.h"

#ifndef O_TMPFILE
#define __O_TMPFILE 020000000
#define O_TMPFILE (__O_TMPFILE | O_DIRECTORY)
#endif

using namespace std;

int make_tempfile(const std::string &contents)
{
	char filename[] = "/tmp/cubemap.XXXXXX";

	int fd = open(filename, O_RDWR | O_TMPFILE, 0600);
	if (fd == -1) {
		fd = mkstemp(filename);
		if (fd == -1) {
			log_perror("mkstemp");
			return -1;
		}
		if (unlink(filename) == -1) {
			log_perror("unlink");
			// Can still continue.
		}
	}

	const char *ptr = contents.data();
	size_t to_write = contents.size();
	while (to_write > 0) {
		ssize_t ret = write(fd, ptr, to_write);
		if (ret == -1) {
			log_perror("write");
			safe_close(fd);
			return -1;
		}

		ptr += ret;
		to_write -= ret;
	}

	return fd;
}

bool read_tempfile_and_close(int fd, std::string *contents)
{
	bool ok = read_tempfile(fd, contents);
	safe_close(fd);  // Implicitly deletes the file.
	return ok;
}

bool read_tempfile(int fd, std::string *contents)
{
	ssize_t ret, has_read;

	off_t len = lseek(fd, 0, SEEK_END);
	if (len == -1) {
		log_perror("lseek");
		return false;
	}

	contents->resize(len);

	if (lseek(fd, 0, SEEK_SET) == -1) {
		log_perror("lseek");
		return false;
	}

	has_read = 0;
	while (has_read < len) {
		ret = read(fd, &((*contents)[has_read]), len - has_read);
		if (ret == -1) {
			log_perror("read");
			return false;
		}
		if (ret == 0) {
			log(ERROR, "Unexpected EOF!");
			return false;
		}
		has_read += ret;
	}

	return true;
}

int safe_close(int fd)
{
	int ret;
	do {
		ret = close(fd);
	} while (ret == -1 && errno == EINTR);

	if (ret == -1) {
		log_perror("close()");
	}
	return ret;
}

--- cubemap-1.0.4/util.h ---

#ifndef _UTIL_H
#define _UTIL_H

// Some utilities for reading and writing to temporary files.

#include <string>

// Make a file in /tmp, unlink it, and write <contents> to it.
// Returns the opened file descriptor, or -1 on failure.
int make_tempfile(const std::string &contents);

// Opposite of make_tempfile(). Returns false on failure.
bool read_tempfile_and_close(int fd, std::string *contents);

// Same as read_tempfile_and_close(), without the close.
bool read_tempfile(int fd, std::string *contents);

// Close a file descriptor, taking care of EINTR on the way.
// log_perror() if it fails; apart from that, behaves as close().
int safe_close(int fd);

#endif  // !defined(_UTIL_H)

--- cubemap-1.0.4/version.h ---

#ifndef _VERSION_H
#define _VERSION_H

// Version number. Don't expect this to change all that often.

#define SERVER_VERSION "1.0.4"
#define SERVER_IDENTIFICATION "Cubemap/" SERVER_VERSION

#endif  // !defined(_VERSION_H)