#endif
#define MOD_KAFKA_VERSION "mod_kafka/0.1"
/* Make sure the version of ProFTPD is recent enough. */
#if PROFTPD_VERSION_NUMBER < 0x0001030701
# error "ProFTPD 1.3.7rc1 or later required"
#endif
/* Define if you have the rd_kafka_conf_get function. */
#undef HAVE_RD_KAFKA_CONF_GET
/* Define if you have the rd_kafka_conf_set_dr_cb function. */
#undef HAVE_RD_KAFKA_CONF_SET_DR_CB
/* Define if you have the rd_kafka_conf_set_dr_msg_cb function. */
#undef HAVE_RD_KAFKA_CONF_SET_DR_MSG_CB
/* Define if you have the rd_kafka_conf_set_log_cb function. */
#undef HAVE_RD_KAFKA_CONF_SET_LOG_CB
/* Define if you have the rd_kafka_flush function. */
#undef HAVE_RD_KAFKA_FLUSH
/* Define if you have the rd_kafka_last_error function. */
#undef HAVE_RD_KAFKA_LAST_ERROR
/* Miscellaneous */
extern int kafka_logfd;
extern module kafka_module;
extern pool *kafka_pool;
#endif /* MOD_KAFKA_H */
proftpd-mod_kafka-0.1/mod_kafka.html

ProFTPD module mod_kafka
The mod_kafka
module enables ProFTPD support for sending log
messages, as JSON, to Kafka brokers using the
librdkafka client library.
This module is contained in the mod_kafka
files for
ProFTPD 1.3.x, and is not compiled by default. Installation
instructions are discussed here. More examples
of mod_kafka
usage can be found here.
The most current version of mod_kafka
can be found at:
https://github.com/Castaglia/proftpd-mod_kafka
Author
Please contact TJ Saunders <tj at castaglia.org> with any
questions, concerns, or suggestions regarding this module.
Directives
Syntax: KafkaBroker host[:port] ...
Default: None
Context: server config, <VirtualHost>
, <Global>
Module: mod_kafka
Compatibility: 1.3.7rc1 and later
The KafkaBroker
directive is used to configure the addresses/ports
of the initial Kafka brokers contacted by mod_kafka
. For example:
KafkaBroker 1.2.3.4 5.6.7.8:19092
or, for an IPv6 address, make sure the IPv6 address is enclosed in square
brackets:
KafkaBroker [::ffff:1.2.3.4]:9092
The address family used by librdkafka when resolving broker names can be
controlled via the broker.address.family property:
KafkaProperty broker.address.family any
Syntax: KafkaEngine on|off
Default: KafkaEngine off
Context: server config, <VirtualHost>
, <Global>
Module: mod_kafka
Compatibility: 1.3.7rc1 and later
The KafkaEngine
directive enables or disables the
mod_kafka
module, and thus the configuration of Kafka support for
the proftpd
daemon.
Syntax: KafkaLog path|"none"
Default: None
Context: server config, <VirtualHost>
, <Global>
Module: mod_kafka
Compatibility: 1.3.7rc1 and later
The KafkaLog
directive is used to specify a log file for
mod_kafka
's reporting on a per-server basis. The
file parameter given must be the full path to the file to use for
logging.
Note that this path must not be to a world-writable directory and,
unless AllowLogSymlinks
is explicitly set to on
(generally a bad idea), the path must not be a symbolic link.
Syntax: KafkaLogOnEvent "none"|events format-name [topic ...]
Default: None
Context: server config, <VirtualHost>
, <Global>
, <Anonymous>
, <Directory>
Module: mod_kafka
Compatibility: 1.3.7rc1 and later
The KafkaLogOnEvent
directive configures the use of Kafka for
logging. Whenever one of the comma-separated list of events
occurs, mod_kafka
will compose a JSON object, using the
LogFormat
named by
format-name as a template for the fields to include in the
JSON object. The JSON object of that event will then be published to a
Kafka topic. Multiple KafkaLogOnEvent
directives can be
used, for different log formats for different events and different topics.
The optional topic parameter, if present, specifies the value to use
as the topic name. If the topic name is not provided explicitly, the configured
format-name is used as the topic name.
More on the use of Kafka logging, including a table showing how
LogFormat
variables are mapped to JSON object keys, can be found
here.
Example:
LogFormat sessions "%{iso8601} %a"
KafkaLogOnEvent CONNECT,DISCONNECT sessions
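To publish these records to an explicitly named topic rather than to the
default topic (here the format name "sessions"), append the optional
topic parameter; the topic name "ftp.sessions" below is purely
illustrative:

```
LogFormat sessions "%{iso8601} %a"
KafkaLogOnEvent CONNECT,DISCONNECT sessions ftp.sessions
```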
In addition to specific FTP commands, the events list can specify
"ALL", for logging on all commands. Or it can include the
"CONNECT" and "DISCONNECT" events, which can be useful for logging the
start and end times of a session. Note that
KafkaLogOnEvent
does support the logging classes
that the ExtendedLog
directive supports.
Syntax: KafkaProperty name value
Default: None
Context: server config, <VirtualHost>
, <Global>
Module: mod_kafka
Compatibility: 1.3.7rc1 and later
The KafkaProperty
directive is used to set the name and value of common Kafka
(librdkafka) configuration properties; see
here.
Example:
KafkaProperty socket.timeout.ms 30000
KafkaProperty socket.keepalive.enable true
KafkaProperty socket.nagle.disable true
To install mod_kafka
, copy the mod_kafka
files into:
proftpd-dir/contrib/
after unpacking the latest proftpd-1.3.x source code. For including
mod_kafka
as a statically linked module:
$ ./configure --with-modules=mod_kafka
To build mod_kafka
as a DSO module:
$ ./configure --enable-dso --with-shared=mod_kafka
Then follow the usual steps:
$ make
$ make install
You may also need to tell configure
how to find the
librdkafka
header and library files:
$ ./configure --with-modules=mod_kafka \
--with-includes=/path/to/librdkafka/include \
--with-libraries=/path/to/librdkafka/lib
This example shows the use of Kafka logging for all commands:
<IfModule mod_kafka.c>
KafkaEngine on
KafkaLog /var/log/ftpd/kafka.log
KafkaBroker kafka:9092
LogFormat kafka "%h %l %u %t \"%r\" %s %b"
KafkaLogOnEvent ALL kafka
</IfModule>
For cases where you need to use TLS when talking to your Kafka brokers, you
configure the necessary TLS files via the KafkaProperty
directive:
<IfModule mod_kafka.c>
KafkaEngine on
KafkaLog /var/log/ftpd/kafka.log
KafkaProperty ssl.ca.location /usr/local/etc/kafka/ca.pem
KafkaProperty ssl.certificate.location /usr/local/etc/kafka/client.pem
KafkaProperty ssl.key.location /usr/local/etc/kafka/client.pem
# Set this to false if necessary
KafkaProperty enable.ssl.certificate.verification true
# Necessary for telling librdkafka to use TLS for the broker
KafkaProperty security.protocol ssl
# Kafka uses TLS on port 9093
KafkaBroker ssl://kafka:9093
LogFormat kafka "%h %l %u %t \"%r\" %s %b"
KafkaLogOnEvent ALL kafka
</IfModule>
Kafka Logging
When using Kafka logging, the following table shows how mod_kafka
converts a LogFormat
variable into the key names in the JSON
logging objects:
LogFormat Variable    | Key
----------------------|------------------------------------------------
%A                    | anon_password
%a                    | remote_ip
%b                    | bytes_sent
%c                    | connection_class
%D                    | dir_path
%d                    | dir_name
%E                    | session_end_reason
%{epoch}              | Unix timestamp, in seconds since Jan 1 1970
%{name}e              | ENV:name
%F                    | transfer_path
%f                    | file
%{file-modified}      | file_modified
%g                    | group
%{gid}                | gid
%H                    | server_ip
%h                    | remote_dns
%I                    | session_bytes_rcvd
%{iso8601}            | timestamp
%J                    | command_params
%L                    | local_ip
%l                    | identd_user
%m                    | command
%{microsecs}          | microsecs
%{millisecs}          | millisecs
%{note:name}          | NOTE:name
%O                    | session_bytes_sent
%P                    | pid
%p                    | local_port
%{protocol}           | protocol
%r                    | raw_command
%S                    | response_msg
%s                    | response_code
%T                    | transfer_secs
%t                    | local_time
%{transfer-failure}   | transfer_failure
%{transfer-status}    | transfer_status
%U                    | original_user
%u                    | user
%{uid}                | uid
%V                    | server_dns
%v                    | server_name
%{version}            | server_version
%w                    | rename_from
In addition to the standard LogFormat
variables, the
mod_kafka
module also adds a "connecting" key for events
generated when a client first connects, and a "disconnecting" key for events
generated when a client disconnects. These keys can be used for determining
the start/finish events for a given session.
Here is an example of the JSON-formatted records generated, using the above
example configuration:
{"connecting":true,"timestamp":"2013-08-21 23:08:22,171"}
{"command":"USER","timestamp":"2013-08-21 23:08:22,278"}
{"user":"proftpd","command":"PASS","timestamp":"2013-08-21 23:08:22,305"}
{"user":"proftpd","command":"PASV","timestamp":"2013-08-21 23:08:22,317"}
{"user":"proftpd","command":"LIST","bytes_sent":432,"transfer_secs":4.211,"timestamp":"2013-08-21 23:08:22,329"}
{"user":"proftpd","command":"QUIT","timestamp":"2013-08-21 23:08:22,336"}
{"disconnecting":true,"user":"proftpd","timestamp":"2013-08-21 23:08:22,348"}
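Since the "connecting" and "disconnecting" keys bracket each session, a
consumer can group records by session without any broker-side state. A
minimal sketch in Python (standard library only; it operates on the example
records above rather than on a live Kafka topic):

```python
import json

# The example records shown above, one JSON object per event.
records = [
    '{"connecting":true,"timestamp":"2013-08-21 23:08:22,171"}',
    '{"command":"USER","timestamp":"2013-08-21 23:08:22,278"}',
    '{"user":"proftpd","command":"PASS","timestamp":"2013-08-21 23:08:22,305"}',
    '{"user":"proftpd","command":"PASV","timestamp":"2013-08-21 23:08:22,317"}',
    '{"user":"proftpd","command":"LIST","bytes_sent":432,"transfer_secs":4.211,"timestamp":"2013-08-21 23:08:22,329"}',
    '{"user":"proftpd","command":"QUIT","timestamp":"2013-08-21 23:08:22,336"}',
    '{"disconnecting":true,"user":"proftpd","timestamp":"2013-08-21 23:08:22,348"}',
]

sessions = []
current = None
for line in records:
    event = json.loads(line)
    if event.get('connecting'):
        # "connecting" marks the start of a session.
        current = {'started': event['timestamp'], 'commands': []}
    elif event.get('disconnecting'):
        # "disconnecting" marks the end; close out the session.
        current['ended'] = event['timestamp']
        current['user'] = event.get('user')
        sessions.append(current)
        current = None
    elif current is not None:
        current['commands'].append(event['command'])

print(sessions)
```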
Notice that for a given event, not all of the LogFormat
variables are filled in. If mod_kafka
determines that a given
LogFormat
variable has no value for the logged event, it will
simply omit that variable from the JSON object.
Another thing to notice is that the generated JSON object ignores the textual
delimiters configured by the LogFormat
directive; all that
matters are the LogFormat
variables which appear in the directive.
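For testing consumers, this behavior can be approximated by scanning the
LogFormat string for %-variables and looking up their JSON key names from
the table above, ignoring all surrounding delimiter text. This sketch
hard-codes only the handful of variables used in the example configuration:

```python
import re

# Subset of the LogFormat-variable-to-JSON-key table above.
KEYS = {
    '%h': 'remote_dns', '%l': 'identd_user', '%u': 'user',
    '%t': 'local_time', '%r': 'raw_command', '%s': 'response_code',
    '%b': 'bytes_sent',
}

def json_keys(logformat):
    # Delimiters (spaces, quotes, etc.) are ignored; only the
    # %-variables matter for the resulting JSON object keys.
    variables = re.findall(r'%(?:\{[^}]+\}|[A-Za-z])', logformat)
    return [KEYS[v] for v in variables if v in KEYS]

print(json_keys('%h %l %u %t "%r" %s %b'))
```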
© Copyright 2017-2022 TJ Saunders
All Rights Reserved
proftpd-mod_kafka-0.1/t/Makefile.in

CC=@CC@
@SET_MAKE@
top_builddir=../../..
top_srcdir=../../..
module_srcdir=..
srcdir=@srcdir@
VPATH=@srcdir@
include $(top_srcdir)/Make.rules
# Necessary redefinitions
INCLUDES=-I. -I.. -I$(module_srcdir)/include -I../../.. -I../../../include @INCLUDES@
TEST_CPPFLAGS=$(ADDL_CPPFLAGS) -DHAVE_CONFIG_H $(DEFAULT_PATHS) $(PLATFORM) $(INCLUDES)
TEST_LDFLAGS=-L$(top_srcdir)/lib @LIBDIRS@
EXEEXT=@EXEEXT@
TEST_API_DEPS=\
$(top_srcdir)/lib/prbase.a \
$(top_srcdir)/src/pool.o \
$(top_srcdir)/src/privs.o \
$(top_srcdir)/src/str.o \
$(top_srcdir)/src/sets.o \
$(top_srcdir)/src/table.o \
$(top_srcdir)/src/netacl.o \
$(top_srcdir)/src/class.o \
$(top_srcdir)/src/event.o \
$(top_srcdir)/src/timers.o \
$(top_srcdir)/src/stash.o \
$(top_srcdir)/src/modules.o \
$(top_srcdir)/src/cmd.o \
$(top_srcdir)/src/configdb.o \
$(top_srcdir)/src/parser.o \
$(top_srcdir)/src/regexp.o \
$(top_srcdir)/src/fsio.o \
$(top_srcdir)/src/netio.o \
$(top_srcdir)/src/inet.o \
$(top_srcdir)/src/netaddr.o \
$(top_srcdir)/src/response.o \
$(top_srcdir)/src/auth.o \
$(top_srcdir)/src/env.o \
$(top_srcdir)/src/trace.o \
$(top_srcdir)/src/support.o \
$(top_srcdir)/src/json.o \
$(top_srcdir)/src/error.o
TEST_API_LIBS=-lcheck -lm
TEST_API_OBJS=\
api/kafka.o \
api/stubs.o \
api/tests.o
dummy:
api/.c.o:
$(CC) $(CPPFLAGS) $(TEST_CPPFLAGS) $(CFLAGS) -c $<
api-tests$(EXEEXT): $(TEST_API_OBJS) $(TEST_API_DEPS)
$(LIBTOOL) --mode=link --tag=CC $(CC) $(LDFLAGS) $(TEST_LDFLAGS) -o $@ $(TEST_API_DEPS) $(TEST_API_OBJS) $(TEST_API_LIBS) $(LIBS)
./$@
clean:
$(LIBTOOL) --mode=clean $(RM) *.o api/*.o api-tests$(EXEEXT) api-tests.log
proftpd-mod_kafka-0.1/t/api/kafka.c

/*
* ProFTPD - mod_kafka testsuite
* Copyright (c) 2017 TJ Saunders
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
*
* As a special exemption, TJ Saunders and other respective copyright holders
* give permission to link this program with OpenSSL, and distribute the
* resulting executable, without including the source code for OpenSSL in the
* source distribution.
*/
/* Kafka API tests */
#include "tests.h"
static pool *p = NULL;
static void set_up(void) {
if (p == NULL) {
p = permanent_pool = make_sub_pool(NULL);
session.c = NULL;
session.notes = NULL;
}
if (getenv("TEST_VERBOSE") != NULL) {
pr_trace_set_levels("kafka", 1, 20);
}
}
static void tear_down(void) {
if (getenv("TEST_VERBOSE") != NULL) {
pr_trace_set_levels("kafka", 0, 0);
}
if (p) {
destroy_pool(p);
p = permanent_pool = NULL;
session.c = NULL;
session.notes = NULL;
}
}
Suite *tests_get_kafka_suite(void) {
Suite *suite;
TCase *testcase;
suite = suite_create("kafka");
testcase = tcase_create("base");
tcase_add_checked_fixture(testcase, set_up, tear_down);
suite_add_tcase(suite, testcase);
return suite;
}
proftpd-mod_kafka-0.1/t/api/stubs.c

/*
* ProFTPD - mod_kafka API testsuite
* Copyright (c) 2017-2021 TJ Saunders
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
*
* As a special exemption, TJ Saunders and other respective copyright holders
* give permission to link this program with OpenSSL, and distribute the
* resulting executable, without including the source code for OpenSSL in the
* source distribution.
*/
#include "tests.h"
/* Stubs */
session_t session;
int ServerUseReverseDNS = FALSE;
server_rec *main_server = NULL;
pid_t mpid = 1;
unsigned char is_master = TRUE;
volatile unsigned int recvd_signal_flags = 0;
module *static_modules[] = { NULL };
module *loaded_modules = NULL;
xaset_t *server_list = NULL;
int kafka_logfd = -1;
module kafka_module;
pool *kafka_pool = NULL;
static cmd_rec *next_cmd = NULL;
int tests_rmpath(pool *p, const char *path) {
DIR *dirh;
struct dirent *dent;
int res, xerrno = 0;
if (path == NULL) {
errno = EINVAL;
return -1;
}
dirh = opendir(path);
if (dirh == NULL) {
xerrno = errno;
/* Change the permissions in the directory, and try again. */
if (chmod(path, (mode_t) 0755) == 0) {
dirh = opendir(path);
}
if (dirh == NULL) {
pr_trace_msg("testsuite", 9,
"error opening '%s': %s", path, strerror(xerrno));
errno = xerrno;
return -1;
}
}
while ((dent = readdir(dirh)) != NULL) {
struct stat st;
char *file;
pr_signals_handle();
if (strncmp(dent->d_name, ".", 2) == 0 ||
strncmp(dent->d_name, "..", 3) == 0) {
continue;
}
file = pdircat(p, path, dent->d_name, NULL);
if (stat(file, &st) < 0) {
pr_trace_msg("testsuite", 9,
"unable to stat '%s': %s", file, strerror(errno));
continue;
}
if (S_ISDIR(st.st_mode)) {
res = tests_rmpath(p, file);
if (res < 0) {
pr_trace_msg("testsuite", 9,
"error removing directory '%s': %s", file, strerror(errno));
}
} else {
res = unlink(file);
if (res < 0) {
pr_trace_msg("testsuite", 9,
"error removing file '%s': %s", file, strerror(errno));
}
}
}
closedir(dirh);
res = rmdir(path);
if (res < 0) {
xerrno = errno;
pr_trace_msg("testsuite", 9,
"error removing directory '%s': %s", path, strerror(xerrno));
errno = xerrno;
}
return res;
}
int tests_stubs_set_next_cmd(cmd_rec *cmd) {
next_cmd = cmd;
return 0;
}
int login_check_limits(xaset_t *set, int recurse, int and, int *found) {
return TRUE;
}
int xferlog_open(const char *path) {
return 0;
}
int pr_cmd_read(cmd_rec **cmd) {
if (next_cmd != NULL) {
*cmd = next_cmd;
next_cmd = NULL;
} else {
errno = ENOENT;
*cmd = NULL;
}
return 0;
}
int pr_config_get_server_xfer_bufsz(int direction) {
int bufsz = -1;
switch (direction) {
case PR_NETIO_IO_RD:
bufsz = PR_TUNABLE_DEFAULT_RCVBUFSZ;
break;
case PR_NETIO_IO_WR:
bufsz = PR_TUNABLE_DEFAULT_SNDBUFSZ;
break;
default:
errno = EINVAL;
return -1;
}
return bufsz;
}
void pr_log_auth(int priority, const char *fmt, ...) {
if (getenv("TEST_VERBOSE") != NULL) {
va_list msg;
fprintf(stderr, "AUTH: ");
va_start(msg, fmt);
vfprintf(stderr, fmt, msg);
va_end(msg);
fprintf(stderr, "\n");
}
}
void pr_log_debug(int level, const char *fmt, ...) {
if (getenv("TEST_VERBOSE") != NULL) {
va_list msg;
fprintf(stderr, "DEBUG%d: ", level);
va_start(msg, fmt);
vfprintf(stderr, fmt, msg);
va_end(msg);
fprintf(stderr, "\n");
}
}
int pr_log_event_generate(unsigned int log_type, int log_fd, int log_level,
const char *log_msg, size_t log_msglen) {
errno = ENOSYS;
return -1;
}
int pr_log_event_listening(unsigned int log_type) {
return FALSE;
}
int pr_log_openfile(const char *log_file, int *log_fd, mode_t log_mode) {
int res;
struct stat st;
if (log_file == NULL ||
log_fd == NULL) {
errno = EINVAL;
return -1;
}
res = stat(log_file, &st);
if (res < 0) {
if (errno != ENOENT) {
return -1;
}
} else {
if (S_ISDIR(st.st_mode)) {
errno = EISDIR;
return -1;
}
}
*log_fd = STDERR_FILENO;
return 0;
}
void pr_log_pri(int prio, const char *fmt, ...) {
if (getenv("TEST_VERBOSE") != NULL) {
va_list msg;
fprintf(stderr, "PRI%d: ", prio);
va_start(msg, fmt);
vfprintf(stderr, fmt, msg);
va_end(msg);
fprintf(stderr, "\n");
}
}
void pr_log_stacktrace(int fd, const char *name) {
}
int pr_log_writefile(int fd, const char *name, const char *fmt, ...) {
if (getenv("TEST_VERBOSE") != NULL) {
va_list msg;
fprintf(stderr, "%s: ", name);
va_start(msg, fmt);
vfprintf(stderr, fmt, msg);
va_end(msg);
fprintf(stderr, "\n");
}
return 0;
}
int pr_scoreboard_entry_update(pid_t pid, ...) {
return 0;
}
void pr_session_disconnect(module *m, int reason_code, const char *details) {
}
const char *pr_session_get_protocol(int flags) {
return "ftp";
}
void pr_signals_handle(void) {
}
/* Module-specific stubs */
module kafka_module = {
/* Always NULL */
NULL, NULL,
/* Module API version */
0x20,
/* Module name */
"kafka",
/* Module configuration handler table */
NULL,
/* Module command handler table */
NULL,
/* Module authentication handler table */
NULL,
/* Module initialization */
NULL,
/* Session initialization */
NULL,
/* Module version */
MOD_KAFKA_VERSION
};
proftpd-mod_kafka-0.1/t/api/tests.c

/*
* ProFTPD - mod_kafka API testsuite
* Copyright (c) 2017 TJ Saunders
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
*
* As a special exemption, TJ Saunders and other respective copyright holders
* give permission to link this program with OpenSSL, and distribute the
* resulting executable, without including the source code for OpenSSL in the
* source distribution.
*/
#include "tests.h"
struct testsuite_info {
const char *name;
Suite *(*get_suite)(void);
};
static struct testsuite_info suites[] = {
{ "kafka", tests_get_kafka_suite },
{ NULL, NULL }
};
static Suite *tests_get_suite(const char *suite) {
register unsigned int i;
for (i = 0; suites[i].name != NULL; i++) {
if (strcmp(suite, suites[i].name) == 0) {
return (*suites[i].get_suite)();
}
}
errno = ENOENT;
return NULL;
}
int main(int argc, char *argv[]) {
const char *log_file = "api-tests.log";
int nfailed = 0;
SRunner *runner = NULL;
char *requested = NULL;
runner = srunner_create(NULL);
/* XXX This log name should be set outside this code, e.g. via environment
* variable or command-line option.
*/
srunner_set_log(runner, log_file);
requested = getenv("PROXY_TEST_SUITE");
if (requested) {
Suite *suite;
suite = tests_get_suite(requested);
if (suite) {
srunner_add_suite(runner, suite);
} else {
fprintf(stderr,
"No such test suite ('%s') requested via PROXY_TEST_SUITE\n",
requested);
return EXIT_FAILURE;
}
} else {
register unsigned int i;
for (i = 0; suites[i].name; i++) {
Suite *suite;
suite = (suites[i].get_suite)();
if (suite) {
srunner_add_suite(runner, suite);
}
}
}
/* Configure the Trace API to write to stderr. */
pr_trace_use_stderr(TRUE);
requested = getenv("PROXY_TEST_NOFORK");
if (requested) {
srunner_set_fork_status(runner, CK_NOFORK);
} else {
requested = getenv("CK_DEFAULT_TIMEOUT");
if (requested == NULL) {
setenv("CK_DEFAULT_TIMEOUT", "60", 1);
}
}
srunner_run_all(runner, CK_NORMAL);
nfailed = srunner_ntests_failed(runner);
if (runner)
srunner_free(runner);
if (nfailed != 0) {
fprintf(stderr, "-------------------------------------------------\n");
fprintf(stderr, " FAILED %d %s\n\n", nfailed,
nfailed != 1 ? "tests" : "test");
fprintf(stderr, " Please send email to:\n\n");
fprintf(stderr, " tj@castaglia.org\n\n");
fprintf(stderr, " containing the `%s' file (in the t/ directory)\n", log_file);
fprintf(stderr, " and the output from running `proftpd -V'\n");
fprintf(stderr, "-------------------------------------------------\n");
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
proftpd-mod_kafka-0.1/t/api/tests.h

/*
* ProFTPD - mod_kafka API testsuite
* Copyright (c) 2017-2020 TJ Saunders
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
*
* As a special exemption, TJ Saunders and other respective copyright holders
* give permission to link this program with OpenSSL, and distribute the
* resulting executable, without including the source code for OpenSSL in the
* source distribution.
*/
/* Testsuite management */
#ifndef MOD_KAFKA_TESTS_H
#define MOD_KAFKA_TESTS_H
#include "mod_kafka.h"
#ifdef HAVE_CHECK_H
# include <check.h>
#else
# error "Missing Check installation; necessary for ProFTPD testsuite"
#endif
int tests_rmpath(pool *p, const char *path);
int tests_stubs_set_next_cmd(cmd_rec *cmd);
Suite *tests_get_kafka_suite(void);
extern volatile unsigned int recvd_signal_flags;
extern pid_t mpid;
extern server_rec *main_server;
#endif /* MOD_KAFKA_TESTS_H */
proftpd-mod_kafka-0.1/t/etc/modules/mod_kafka/NOTES
See:
https://stackoverflow.com/questions/65870378/kafka-wont-start-with-pem-certificate
openssl pkcs8 -in server-key.pem -topk8 -v1 PBE-SHA1-3DES -out server-key.pem.new
proftpd-mod_kafka-0.1/t/etc/modules/mod_kafka/server-cert.pem

-----BEGIN CERTIFICATE-----
MIIFgDCCBGigAwIBAgIBCzANBgkqhkiG9w0BAQsFADCBnjEQMA4GA1UEAxMHY2Et
Y2VydDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcT
B1NlYXR0bGUxEjAQBgNVBAoTCUNhc3RhZ2xpYTEhMB8GA1UECxMYUmVzZWFyY2gg
YW5kIERldmVsb3BtZW50MR8wHQYJKoZIhvcNAQkBFhB0akBjYXN0YWdsaWEub3Jn
MB4XDTIxMDYyNzIwMzQzOFoXDTMxMDYyNTIwMzQzOFowgZAxFDASBgNVBAMTC3Nl
cnZlci1jZXJ0MQswCQYDVQQGEwJVUzEfMB0GCSqGSIb3DQEJARYQdGpAY2FzdGFn
bGlhLm9yZzESMBAGA1UEChMJQ2FzdGFnbGlhMSEwHwYDVQQLExhSZXNlYXJjaCBh
bmQgRGV2ZWxvcG1lbnQxEzARBgNVBAgTCldhc2hpbmd0b24wggEiMA0GCSqGSIb3
DQEBAQUAA4IBDwAwggEKAoIBAQC0DPELyW8cokB8PAwOtMIrdCOm4wd4cvYiFMnr
VmUc7x0eEO5Y4NoRz21UGRNNcUjaNtYJy2LW/TMqnJPaayD9j2J7c97lcsH7aohc
x7ex7MajF8FHlhDes1v+RTumuR63JhCy7cUtMKSy/VH8/69Gm7yNi8YQZIkamT+H
ej4ZDOathVfodkkK/ghxH023sbvqLXuaUfCprFhF677IgcEJlbrsgdJeAS14tNTF
3a5JDdYNoI3P6d3fRgswz61nuvGOo2SfltosHTkvKYBBWE5S0QHHDR0dgtUAsGqz
OBh1dT4qeCa40sthbZ1SaxYLiSbXMkPLNvdYP0cjhtVy5uTNAgMBAAGjggHTMIIB
zzAJBgNVHRMEAjAAMCMGCWCGSAGG+EIBDQQWFhRDZXJ0VG9vbCBDZXJ0aWZpY2F0
ZTBTBgNVHREETDBKghZmYW1pbGlhci5jYXN0YWdsaWEub3JngRB0akBjYXN0YWds
aWEub3JnhwR/AAABhhhodHRwOi8vd3d3LmNhc3RhZ2xpYS5vcmcwHQYDVR0OBBYE
FHNu8eJTWfVZ5LsLGyUvIr+2MQjSMIHLBgNVHSMEgcMwgcCAFDAjqdUIPrVWe6Nb
5qwSdDowNnoFoYGkpIGhMIGeMRAwDgYDVQQDEwdjYS1jZXJ0MQswCQYDVQQGEwJV
UzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHU2VhdHRsZTESMBAGA1UE
ChMJQ2FzdGFnbGlhMSEwHwYDVQQLExhSZXNlYXJjaCBhbmQgRGV2ZWxvcG1lbnQx
HzAdBgkqhkiG9w0BCQEWEHRqQGNhc3RhZ2xpYS5vcmeCAQAwNgYIKwYBBQUHAQEE
KjAoMCYGCCsGAQUFBzABhhpodHRwOi8vb2NzcC5jYXN0YWdsaWEub3JnLzAOBgNV
HQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDQYJKoZIhvcNAQELBQAD
ggEBABOtZpWZQX+d+crz+/rvfBk0kodKlDGA2J+ohYETmTTCZvV/Vp7RCrEnxXPZ
/cL/3Lb3VTs/wSNjAe7KjvKGNj7t4rwbwtH1xgTYRVX7SJWSfRyxtjJn51ieNnZo
LUEOHNsTVplmfRtyULgyuJ0iJ7Vy08hxSIra/nX0XuiU4JR/0PBcGIeKddO/JZjj
QLBuQGebtUVGEflsmUMf769JoKiZ+Hm9t8w0naTv6v1DlLEJ5XxJna/E4mGRg/kN
DySpm/l9u5sVamkik+dnDTTAuEfBItosofUFgpWKOnuuE43H3eu509llivoHTvZo
Gd1+JhbpFmDAPvvMoPuQWSf4I+s=
-----END CERTIFICATE-----
-----BEGIN ENCRYPTED PRIVATE KEY-----
MIIE6jAcBgoqhkiG9w0BDAEDMA4ECHeEd3NeS3tCAgIIAASCBMh/OV98Asi9s5mL
VYu0pAeBehwoBpdVBHSW2dRJjPac+++bWmCnkh0CEoqW3J/81tiYYVKIUblc8IwM
9sKs57uf8ozq2nSiVxLOPrQtQQi+/XHQ5o9Gbah1szykCah3qELDTQH0iTeRFYTn
l5Blj077VXsyAA7Hp+7D5GmV7VQocEEM/9wyNM1UwcekxT4vWvlibr3jdHWYigA3
YKtcAjpyKcpfd69JmS8/Rw+TM/OB966P3FRV05DsKkC/MzubKoa9A5ugnjyQAOI3
g3AIcqB8ClCNyPgSYbnKVBcwmkEcBbiB5Jb9W4Nukt2p1KN9zPaBhLx40SOOk/zf
IXkgDDjqfo4woNeryGcnhhMKNV1NePIfOgJS+yw+x4LRnBRPcR1KGnkaeqeKUXcc
nMzhyaKvSmbXQfm5qUZVLlG4gHiqeUR4M5rsIhdUhEsN6wiR0sBj9Evg/tBXyqO6
mKN7cAnf3Ox5zjPK4SiglDGEiMmbuC1W8Etl/X33IOdUMdMIfkx7jHuAW5ymRHWF
jnxFiviKh8StJL3/0gqX3rFYlPgT1st0jJUJcwYS7qnZkCSHWFYc/lHm//ZE7GQC
awq5KRaCBvESRXOPAVXDjwi70KYnzzyidE1u5/eo79VrlBUFWw3JBYIZVSynURdi
Pb062SR9vvIKHlhavprSVS+Pu1ZdTQxkgPjC8rdZ4u45BKv4VAlWRXvkJYdMOfTM
2nHkXqN9MJ4W9kFKFoL46app7PrJ/AtUPR8jwN8YXWGI9eOmLZjIbUe7TXe2B2vk
6RN/tB/NbvB+BRUQ2NVRbKNb7mlZ169hAt93aHRTING6jcbm0T4SjW4fYJBrTDdO
pL8juN1c372+v7Zk8V5rwlX+IDM/3fXM1VzWuU1Z6qgUYeydgR8wfyZE8oz1GLjN
ojlN5ZXnB5tarjL0jZSl9CsIPHL+Pa6W8cw135i3purrY8kgYTp4PM+VTjQteE/W
pUNWo2Ty6wvJW/IFF7tPj6Xwh1eTxRVk5XZnjYGHSiZKKy0+TTPun7mi+qh+apJh
/f/cLFJwDm45kp817J4BOjym0Z4nYwgYYTxMoHfrYwj2zkeYDlkQBH8//6jX43vM
clFXlcAu6UHPTIdZdhIjPyG2RXNr9BCNQ7zoZdSiyY+tqTCqGuVJEuhKSW96I3Pa
sBQA3tXP/domIa0bM50LtiBO/W0vHsdaetrtYCJO/73IqF6t0RsjbUCMpOOdDlpS
lbyqT3XKYqzwo++jbK5j8pz3VnrKISSmuQdNVShE+j5dpHa2tCNFeorx+lD203ds
4RfFIU3vWzFBZ0ODGlfQq6Q4VNQlnNSZ1D7g/dbtZAVveYopi9/fqcl7ot0QU3yN
J5ev2A74vVBkLXCRNYAawkB9BlnJr1Qs2r8VbIpuTGaidHW5+K7ZwV+2AO04jiss
MNQrsqSrZhMv74OHrUrT3mcCaA+cHS7yp+TH5TQ2arrfLEJJowr9k2W0ZcVUqZWe
iheDtDUyXN1WGAOAbMTFKN+Pb/evmcldV4TX8f0MlxCb0dy12NJPmyRCByQSVKZD
zq8m04zRkuCpYSVe4Hrwmi19XQ/buqCyT9EweLfsRq8jCjhRkRXyBrNItPqOw4zA
Bl8P0IWas1xYFqxigNU=
-----END ENCRYPTED PRIVATE KEY-----
proftpd-mod_kafka-0.1/t/etc/modules/mod_kafka/server-key.pem

-----BEGIN ENCRYPTED PRIVATE KEY-----
MIIE6jAcBgoqhkiG9w0BDAEDMA4ECHeEd3NeS3tCAgIIAASCBMh/OV98Asi9s5mL
VYu0pAeBehwoBpdVBHSW2dRJjPac+++bWmCnkh0CEoqW3J/81tiYYVKIUblc8IwM
9sKs57uf8ozq2nSiVxLOPrQtQQi+/XHQ5o9Gbah1szykCah3qELDTQH0iTeRFYTn
l5Blj077VXsyAA7Hp+7D5GmV7VQocEEM/9wyNM1UwcekxT4vWvlibr3jdHWYigA3
YKtcAjpyKcpfd69JmS8/Rw+TM/OB966P3FRV05DsKkC/MzubKoa9A5ugnjyQAOI3
g3AIcqB8ClCNyPgSYbnKVBcwmkEcBbiB5Jb9W4Nukt2p1KN9zPaBhLx40SOOk/zf
IXkgDDjqfo4woNeryGcnhhMKNV1NePIfOgJS+yw+x4LRnBRPcR1KGnkaeqeKUXcc
nMzhyaKvSmbXQfm5qUZVLlG4gHiqeUR4M5rsIhdUhEsN6wiR0sBj9Evg/tBXyqO6
mKN7cAnf3Ox5zjPK4SiglDGEiMmbuC1W8Etl/X33IOdUMdMIfkx7jHuAW5ymRHWF
jnxFiviKh8StJL3/0gqX3rFYlPgT1st0jJUJcwYS7qnZkCSHWFYc/lHm//ZE7GQC
awq5KRaCBvESRXOPAVXDjwi70KYnzzyidE1u5/eo79VrlBUFWw3JBYIZVSynURdi
Pb062SR9vvIKHlhavprSVS+Pu1ZdTQxkgPjC8rdZ4u45BKv4VAlWRXvkJYdMOfTM
2nHkXqN9MJ4W9kFKFoL46app7PrJ/AtUPR8jwN8YXWGI9eOmLZjIbUe7TXe2B2vk
6RN/tB/NbvB+BRUQ2NVRbKNb7mlZ169hAt93aHRTING6jcbm0T4SjW4fYJBrTDdO
pL8juN1c372+v7Zk8V5rwlX+IDM/3fXM1VzWuU1Z6qgUYeydgR8wfyZE8oz1GLjN
ojlN5ZXnB5tarjL0jZSl9CsIPHL+Pa6W8cw135i3purrY8kgYTp4PM+VTjQteE/W
pUNWo2Ty6wvJW/IFF7tPj6Xwh1eTxRVk5XZnjYGHSiZKKy0+TTPun7mi+qh+apJh
/f/cLFJwDm45kp817J4BOjym0Z4nYwgYYTxMoHfrYwj2zkeYDlkQBH8//6jX43vM
clFXlcAu6UHPTIdZdhIjPyG2RXNr9BCNQ7zoZdSiyY+tqTCqGuVJEuhKSW96I3Pa
sBQA3tXP/domIa0bM50LtiBO/W0vHsdaetrtYCJO/73IqF6t0RsjbUCMpOOdDlpS
lbyqT3XKYqzwo++jbK5j8pz3VnrKISSmuQdNVShE+j5dpHa2tCNFeorx+lD203ds
4RfFIU3vWzFBZ0ODGlfQq6Q4VNQlnNSZ1D7g/dbtZAVveYopi9/fqcl7ot0QU3yN
J5ev2A74vVBkLXCRNYAawkB9BlnJr1Qs2r8VbIpuTGaidHW5+K7ZwV+2AO04jiss
MNQrsqSrZhMv74OHrUrT3mcCaA+cHS7yp+TH5TQ2arrfLEJJowr9k2W0ZcVUqZWe
iheDtDUyXN1WGAOAbMTFKN+Pb/evmcldV4TX8f0MlxCb0dy12NJPmyRCByQSVKZD
zq8m04zRkuCpYSVe4Hrwmi19XQ/buqCyT9EweLfsRq8jCjhRkRXyBrNItPqOw4zA
Bl8P0IWas1xYFqxigNU=
-----END ENCRYPTED PRIVATE KEY-----
proftpd-mod_kafka-0.1/t/lib/ProFTPD/Tests/Modules/mod_kafka.pm

package ProFTPD::Tests::Modules::mod_kafka;
use lib qw(t/lib);
use base qw(ProFTPD::TestSuite::Child);
use strict;
use File::Path qw(mkpath);
use File::Spec;
use IO::Handle;
use ProFTPD::TestSuite::FTP;
use ProFTPD::TestSuite::Utils qw(:auth :config :features :running :test :testsuite);
$| = 1;
my $order = 0;
my $TESTS = {
kafka_log_on_event => {
order => ++$order,
test_class => [qw(forking)],
},
kafka_log_on_event_custom_topic => {
order => ++$order,
test_class => [qw(forking)],
},
kafka_log_on_event_per_dir => {
order => ++$order,
test_class => [qw(forking)],
},
kafka_log_on_event_per_dir_none => {
order => ++$order,
test_class => [qw(forking)],
},
};
sub new {
return shift()->SUPER::new(@_);
}
sub list_tests {
# Check for the required Perl modules:
#
# Kafka
my $required = [qw(
JSON
Kafka
)];
foreach my $req (@$required) {
eval "use $req";
if ($@) {
print STDERR "\nWARNING:\n + Module '$req' not found, skipping all tests\n";
if ($ENV{TEST_VERBOSE}) {
print STDERR "Unable to load $req: $@\n";
}
return qw(testsuite_empty_test);
}
}
return testsuite_get_runnable_tests($TESTS);
}
sub get_kafka_host {
my $kafka_host = 'localhost';
if (defined($ENV{KAFKA_HOST})) {
$kafka_host = $ENV{KAFKA_HOST};
}
return $kafka_host;
}
sub kafka_topic_getall {
my $name = shift;
require Kafka;
require Kafka::Connection;
require Kafka::Consumer;
my $kafka_host = get_kafka_host();
my $kafka = Kafka::Connection->new(host => $kafka_host);
my $consumer = Kafka::Consumer->new(Connection => $kafka);
my $msgs = $consumer->fetch($name, 0, 0, $Kafka::DEFAULT_MAX_BYTES);
$consumer = undef;
$kafka->close;
$kafka = undef;
return $msgs;
}
# There is no easy way to purge a topic in Kafka; we thus need to generate
# unique topic names for each test.
sub get_topic_name {
my $name = '';
for (1..16) {
# Add 97 to map into the lowercase ASCII letters ('a'..'z'), past punctuation
$name .= chr(int(rand(26) + 97));
}
return $name;
}
# Tests
sub kafka_log_on_event {
my $self = shift;
my $tmpdir = $self->{tmpdir};
my $setup = test_setup($tmpdir, 'kafka');
my $fmt_name = 'mod_kafka';
my $topic = $fmt_name;
kafka_topic_getall($topic);
my $kafka_host = get_kafka_host();
my $config = {
PidFile => $setup->{pid_file},
ScoreboardFile => $setup->{scoreboard_file},
SystemLog => $setup->{log_file},
TraceLog => $setup->{log_file},
Trace => 'jot:20 kafka:20',
AuthUserFile => $setup->{auth_user_file},
AuthGroupFile => $setup->{auth_group_file},
AuthOrder => 'mod_auth_file.c',
IfModules => {
'mod_delay.c' => {
DelayEngine => 'off',
},
# Note: we need to use arrays here, since order of directives matters.
'mod_kafka.c' => [
'KafkaEngine on',
"KafkaBroker $kafka_host",
"KafkaLog $setup->{log_file}",
"LogFormat $fmt_name \"%A %a %b %c %D %d %E %{epoch} %F %f %{gid} %g %H %h %I %{iso8601} %J %L %l %m %O %P %p %{protocol} %R %r %{remote-port} %S %s %T %t %U %u %{uid} %V %v %{version}\"",
"KafkaLogOnEvent ALL $fmt_name",
],
},
};
my ($port, $config_user, $config_group) = config_write($setup->{config_file},
$config);
# Open pipes, for use between the parent and child processes. Specifically,
# the child will indicate when it's done with its test by writing a message
# to the parent.
my ($rfh, $wfh);
unless (pipe($rfh, $wfh)) {
die("Can't open pipe: $!");
}
my $ex;
# Fork child
$self->handle_sigchld();
defined(my $pid = fork()) or die("Can't fork: $!");
if ($pid) {
eval {
# Allow for server startup
sleep(1);
my $client = ProFTPD::TestSuite::FTP->new('127.0.0.1', $port);
$client->login($setup->{user}, $setup->{passwd});
my $resp_code = $client->response_code();
my $resp_msg = $client->response_msg(0);
my $expected = 230;
$self->assert($expected == $resp_code,
"Expected response code $expected, got $resp_code");
$expected = "User $setup->{user} logged in";
$self->assert($expected eq $resp_msg,
"Expected response message '$expected', got '$resp_msg'");
$client->quit();
};
if ($@) {
$ex = $@;
}
$wfh->print("done\n");
$wfh->flush();
} else {
eval { server_wait($setup->{config_file}, $rfh) };
if ($@) {
warn($@);
exit 1;
}
exit 0;
}
# Stop server
server_stop($setup->{pid_file});
$self->assert_child_ok($pid);
eval {
# Allow for propagation time
sleep(2);
my $data = kafka_topic_getall($topic);
my $nrecords = scalar(@$data);
$self->assert($nrecords >= 4,
"Expected at least 4 records, got $nrecords");
require JSON;
my $json = $data->[3]->{payload};
my $record = decode_json($json);
my $expected = $setup->{user};
$self->assert($record->{user} eq $expected,
"Expected user '$expected', got '$record->{user}'");
$expected = '127.0.0.1';
$self->assert($record->{remote_ip} eq $expected,
"Expected remote IP '$expected', got '$record->{remote_ip}'");
};
if ($@) {
$ex = $@;
}
test_cleanup($setup->{log_file}, $ex);
}
sub kafka_log_on_event_custom_topic {
my $self = shift;
my $tmpdir = $self->{tmpdir};
my $setup = test_setup($tmpdir, 'kafka');
my $fmt_name = 'mod_kafka';
my $topic = get_topic_name();
if ($ENV{TEST_VERBOSE}) {
print STDERR "# Using generated topic name: $topic\n";
}
kafka_topic_getall($topic);
my $kafka_host = get_kafka_host();
my $config = {
PidFile => $setup->{pid_file},
ScoreboardFile => $setup->{scoreboard_file},
SystemLog => $setup->{log_file},
TraceLog => $setup->{log_file},
Trace => 'jot:20 kafka:20',
AuthUserFile => $setup->{auth_user_file},
AuthGroupFile => $setup->{auth_group_file},
AuthOrder => 'mod_auth_file.c',
IfModules => {
'mod_delay.c' => {
DelayEngine => 'off',
},
# Note: we need to use arrays here, since order of directives matters.
'mod_kafka.c' => [
'KafkaEngine on',
"KafkaBroker $kafka_host",
"KafkaLog $setup->{log_file}",
"LogFormat $fmt_name \"%A %a %b %c %D %d %E %{epoch} %F %f %{gid} %g %H %h %I %{iso8601} %J %L %l %m %O %P %p %{protocol} %R %r %{remote-port} %S %s %T %t %U %u %{uid} %V %v %{version}\"",
"KafkaLogOnEvent ALL $fmt_name topic $topic",
],
},
};
my ($port, $config_user, $config_group) = config_write($setup->{config_file},
$config);
# Open pipes, for use between the parent and child processes. Specifically,
# the child will indicate when it's done with its test by writing a message
# to the parent.
my ($rfh, $wfh);
unless (pipe($rfh, $wfh)) {
die("Can't open pipe: $!");
}
my $ex;
# Fork child
$self->handle_sigchld();
defined(my $pid = fork()) or die("Can't fork: $!");
if ($pid) {
eval {
my $client = ProFTPD::TestSuite::FTP->new('127.0.0.1', $port);
$client->login($setup->{user}, $setup->{passwd});
my $resp_code = $client->response_code();
my $resp_msg = $client->response_msg(0);
my $expected = 230;
$self->assert($expected == $resp_code,
"Expected response code $expected, got $resp_code");
$expected = "User $setup->{user} logged in";
$self->assert($expected eq $resp_msg,
"Expected response message '$expected', got '$resp_msg'");
$client->quit();
};
if ($@) {
$ex = $@;
}
$wfh->print("done\n");
$wfh->flush();
} else {
eval { server_wait($setup->{config_file}, $rfh) };
if ($@) {
warn($@);
exit 1;
}
exit 0;
}
# Stop server
server_stop($setup->{pid_file});
$self->assert_child_ok($pid);
eval {
# Allow for propagation time
sleep(2);
my $data = kafka_topic_getall($topic);
my $nrecords = scalar(@$data);
$self->assert($nrecords >= 4,
"Expected at least 4 records, got $nrecords");
require JSON;
my $json = $data->[3]->{payload};
my $record = decode_json($json);
my $expected = $setup->{user};
$self->assert($record->{user} eq $expected,
"Expected user '$expected', got '$record->{user}'");
$expected = '127.0.0.1';
$self->assert($record->{remote_ip} eq $expected,
"Expected remote IP '$expected', got '$record->{remote_ip}'");
};
if ($@) {
$ex = $@;
}
test_cleanup($setup->{log_file}, $ex);
}
sub kafka_log_on_event_per_dir {
my $self = shift;
my $tmpdir = $self->{tmpdir};
my $setup = test_setup($tmpdir, 'kafka');
my $sub_dir = File::Spec->rel2abs("$tmpdir/test.d");
mkpath($sub_dir);
my $fmt_name = 'mod_kafka';
my $topic = get_topic_name();
if ($ENV{TEST_VERBOSE}) {
print STDERR "# Using generated topic name: $topic\n";
}
kafka_topic_getall($topic);
my $kafka_host = get_kafka_host();
my $config = {
PidFile => $setup->{pid_file},
ScoreboardFile => $setup->{scoreboard_file},
SystemLog => $setup->{log_file},
TraceLog => $setup->{log_file},
Trace => 'jot:20 kafka:20',
AuthUserFile => $setup->{auth_user_file},
AuthGroupFile => $setup->{auth_group_file},
AuthOrder => 'mod_auth_file.c',
IfModules => {
'mod_delay.c' => {
DelayEngine => 'off',
},
},
};
my ($port, $config_user, $config_group) = config_write($setup->{config_file},
$config);
if (open(my $fh, ">> $setup->{config_file}")) {
if ($^O eq 'darwin') {
# Mac OSX hack
$sub_dir = '/private' . $sub_dir;
}
print $fh <<EOC;
<IfModule mod_kafka.c>
  KafkaEngine on
  KafkaBroker $kafka_host
  KafkaLog $setup->{log_file}
  LogFormat $fmt_name "%a %u"

  <Directory $sub_dir>
    KafkaLogOnEvent PWD $fmt_name topic $topic
  </Directory>
</IfModule>
EOC
unless (close($fh)) {
die("Can't write $setup->{config_file}: $!");
}
} else {
die("Can't open $setup->{config_file}: $!");
}
# Open pipes, for use between the parent and child processes. Specifically,
# the child will indicate when it's done with its test by writing a message
# to the parent.
my ($rfh, $wfh);
unless (pipe($rfh, $wfh)) {
die("Can't open pipe: $!");
}
my $ex;
# Fork child
$self->handle_sigchld();
defined(my $pid = fork()) or die("Can't fork: $!");
if ($pid) {
eval {
# Allow for server startup
sleep(1);
my $client = ProFTPD::TestSuite::FTP->new('127.0.0.1', $port);
$client->login($setup->{user}, $setup->{passwd});
$client->pwd();
$client->cwd('test.d');
$client->pwd();
$client->quit();
};
if ($@) {
$ex = $@;
}
$wfh->print("done\n");
$wfh->flush();
} else {
eval { server_wait($setup->{config_file}, $rfh) };
if ($@) {
warn($@);
exit 1;
}
exit 0;
}
# Stop server
server_stop($setup->{pid_file});
$self->assert_child_ok($pid);
eval {
# Allow for propagation time
sleep(2);
my $data = kafka_topic_getall($topic);
my $nrecords = scalar(@$data);
$self->assert($nrecords == 1, "Expected 1 record, got $nrecords");
require JSON;
my $json = $data->[0]->{payload};
my $record = decode_json($json);
my $expected = $setup->{user};
$self->assert($record->{user} eq $expected,
"Expected user '$expected', got '$record->{user}'");
$expected = '127.0.0.1';
$self->assert($record->{remote_ip} eq $expected,
"Expected remote IP '$expected', got '$record->{remote_ip}'");
};
if ($@) {
$ex = $@;
}
test_cleanup($setup->{log_file}, $ex);
}
sub kafka_log_on_event_per_dir_none {
my $self = shift;
my $tmpdir = $self->{tmpdir};
my $setup = test_setup($tmpdir, 'kafka');
my $sub_dir = File::Spec->rel2abs("$tmpdir/test.d");
mkpath($sub_dir);
my $fmt_name = 'mod_kafka';
my $topic = get_topic_name();
if ($ENV{TEST_VERBOSE}) {
print STDERR "# Using generated topic name: $topic\n";
}
kafka_topic_getall($topic);
my $kafka_host = get_kafka_host();
my $config = {
PidFile => $setup->{pid_file},
ScoreboardFile => $setup->{scoreboard_file},
SystemLog => $setup->{log_file},
TraceLog => $setup->{log_file},
Trace => 'jot:20 kafka:20',
AuthUserFile => $setup->{auth_user_file},
AuthGroupFile => $setup->{auth_group_file},
AuthOrder => 'mod_auth_file.c',
IfModules => {
'mod_delay.c' => {
DelayEngine => 'off',
},
},
};
my ($port, $config_user, $config_group) = config_write($setup->{config_file},
$config);
if (open(my $fh, ">> $setup->{config_file}")) {
if ($^O eq 'darwin') {
# Mac OSX hack
$sub_dir = '/private' . $sub_dir;
}
print $fh <<EOC;
<IfModule mod_kafka.c>
  KafkaEngine on
  KafkaBroker $kafka_host
  KafkaLog $setup->{log_file}
  LogFormat $fmt_name "%a %u"

  <Directory $setup->{home_dir}>
    KafkaLogOnEvent PWD $fmt_name topic $topic
  </Directory>

  <Directory $sub_dir>
    KafkaLogOnEvent none
  </Directory>
</IfModule>
EOC
unless (close($fh)) {
die("Can't write $setup->{config_file}: $!");
}
} else {
die("Can't open $setup->{config_file}: $!");
}
# Open pipes, for use between the parent and child processes. Specifically,
# the child will indicate when it's done with its test by writing a message
# to the parent.
my ($rfh, $wfh);
unless (pipe($rfh, $wfh)) {
die("Can't open pipe: $!");
}
my $ex;
# Fork child
$self->handle_sigchld();
defined(my $pid = fork()) or die("Can't fork: $!");
if ($pid) {
eval {
# Allow for server startup
sleep(1);
my $client = ProFTPD::TestSuite::FTP->new('127.0.0.1', $port);
$client->login($setup->{user}, $setup->{passwd});
$client->pwd();
$client->cwd('test.d');
$client->pwd();
$client->quit();
};
if ($@) {
$ex = $@;
}
$wfh->print("done\n");
$wfh->flush();
} else {
eval { server_wait($setup->{config_file}, $rfh) };
if ($@) {
warn($@);
exit 1;
}
exit 0;
}
# Stop server
server_stop($setup->{pid_file});
$self->assert_child_ok($pid);
eval {
# Allow for propagation time
sleep(2);
my $data = kafka_topic_getall($topic);
if ($ENV{TEST_VERBOSE}) {
use Data::Dumper;
print STDERR "# ", Dumper($data), "\n";
}
my $nrecords = scalar(@$data);
$self->assert($nrecords == 1, "Expected 1 record, got $nrecords");
};
if ($@) {
$ex = $@;
}
test_cleanup($setup->{log_file}, $ex);
}
1;
# proftpd-mod_kafka-0.1/t/lib/ProFTPD/Tests/Modules/mod_kafka/tls.pm
package ProFTPD::Tests::Modules::mod_kafka::tls;
use lib qw(t/lib);
use base qw(ProFTPD::TestSuite::Child);
use strict;
use File::Path qw(mkpath);
use File::Spec;
use IO::Handle;
use ProFTPD::TestSuite::FTP;
use ProFTPD::TestSuite::Utils qw(:auth :config :features :running :test :testsuite);
$| = 1;
my $order = 0;
my $TESTS = {
kafka_tls_log_on_event => {
order => ++$order,
test_class => [qw(forking)],
},
};
sub new {
return shift()->SUPER::new(@_);
}
sub list_tests {
# Check for the required Perl modules:
#
# Kafka
my $required = [qw(
JSON
Kafka
)];
foreach my $req (@$required) {
eval "use $req";
if ($@) {
print STDERR "\nWARNING:\n + Module '$req' not found, skipping all tests\n";
if ($ENV{TEST_VERBOSE}) {
print STDERR "Unable to load $req: $@\n";
}
return qw(testsuite_empty_test);
}
}
return testsuite_get_runnable_tests($TESTS);
}
sub get_kafka_host {
my $kafka_host = 'localhost';
if (defined($ENV{KAFKA_HOST})) {
$kafka_host = $ENV{KAFKA_HOST};
}
return $kafka_host;
}
sub kafka_topic_getall {
my $name = shift;
require Kafka;
require Kafka::Connection;
require Kafka::Consumer;
my $kafka_host = get_kafka_host();
my $kafka = Kafka::Connection->new(host => $kafka_host);
my $consumer = Kafka::Consumer->new(Connection => $kafka);
my $msgs = $consumer->fetch($name, 0, 0, $Kafka::DEFAULT_MAX_BYTES);
$consumer = undef;
$kafka->close;
$kafka = undef;
return $msgs;
}
# There is no easy way to purge a topic in Kafka; we thus need to generate
# unique topic names for each test.
sub get_topic_name {
my $name = '';
for (1..16) {
# Add 97 to map into the lowercase ASCII letters ('a'..'z'), past punctuation
$name .= chr(int(rand(26) + 97));
}
return $name;
}
# Tests
sub kafka_tls_log_on_event {
my $self = shift;
my $tmpdir = $self->{tmpdir};
my $setup = test_setup($tmpdir, 'kafka');
my $fmt_name = 'mod_kafka';
my $topic = $fmt_name;
kafka_topic_getall($topic);
my $kafka_host = get_kafka_host();
my $client_cert = File::Spec->rel2abs("$ENV{PROFTPD_TEST_DIR}/tests/t/etc/modules/mod_tls/client-cert.pem");
my $ca_cert = File::Spec->rel2abs("$ENV{PROFTPD_TEST_DIR}/tests/t/etc/modules/mod_tls/ca-cert.pem");
my $config = {
PidFile => $setup->{pid_file},
ScoreboardFile => $setup->{scoreboard_file},
SystemLog => $setup->{log_file},
TraceLog => $setup->{log_file},
Trace => 'jot:20 kafka:20',
AuthUserFile => $setup->{auth_user_file},
AuthGroupFile => $setup->{auth_group_file},
AuthOrder => 'mod_auth_file.c',
IfModules => {
'mod_delay.c' => {
DelayEngine => 'off',
},
# Note: we need to use arrays here, since order of directives matters.
'mod_kafka.c' => [
'KafkaEngine on',
"KafkaBroker $kafka_host:9093",
"KafkaProperty ssl.ca.location $ca_cert",
"KafkaProperty ssl.certificate.location $client_cert",
"KafkaProperty ssl.key.location $client_cert",
"KafkaLog $setup->{log_file}",
"LogFormat $fmt_name \"%A %a %b %c %D %d %E %{epoch} %F %f %{gid} %g %H %h %I %{iso8601} %J %L %l %m %O %P %p %{protocol} %R %r %{remote-port} %S %s %T %t %U %u %{uid} %V %v %{version}\"",
"KafkaLogOnEvent ALL $fmt_name",
],
},
};
my ($port, $config_user, $config_group) = config_write($setup->{config_file},
$config);
# Open pipes, for use between the parent and child processes. Specifically,
# the child will indicate when it's done with its test by writing a message
# to the parent.
my ($rfh, $wfh);
unless (pipe($rfh, $wfh)) {
die("Can't open pipe: $!");
}
my $ex;
# Fork child
$self->handle_sigchld();
defined(my $pid = fork()) or die("Can't fork: $!");
if ($pid) {
eval {
# Allow for server startup
sleep(1);
my $client = ProFTPD::TestSuite::FTP->new('127.0.0.1', $port);
$client->login($setup->{user}, $setup->{passwd});
my $resp_code = $client->response_code();
my $resp_msg = $client->response_msg(0);
my $expected = 230;
$self->assert($expected == $resp_code,
"Expected response code $expected, got $resp_code");
$expected = "User $setup->{user} logged in";
$self->assert($expected eq $resp_msg,
"Expected response message '$expected', got '$resp_msg'");
$client->quit();
};
if ($@) {
$ex = $@;
}
$wfh->print("done\n");
$wfh->flush();
} else {
eval { server_wait($setup->{config_file}, $rfh) };
if ($@) {
warn($@);
exit 1;
}
exit 0;
}
# Stop server
server_stop($setup->{pid_file});
$self->assert_child_ok($pid);
eval {
# Allow for propagation time
sleep(2);
my $data = kafka_topic_getall($topic);
my $nrecords = scalar(@$data);
$self->assert($nrecords >= 4,
"Expected at least 4 records, got $nrecords");
require JSON;
my $json = $data->[3]->{payload};
my $record = decode_json($json);
my $expected = $setup->{user};
$self->assert($record->{user} eq $expected,
"Expected user '$expected', got '$record->{user}'");
$expected = '127.0.0.1';
$self->assert($record->{remote_ip} eq $expected,
"Expected remote IP '$expected', got '$record->{remote_ip}'");
};
if ($@) {
$ex = $@;
}
test_cleanup($setup->{log_file}, $ex);
}
1;
# proftpd-mod_kafka-0.1/t/modules/mod_kafka.t
#!/usr/bin/env perl
use lib qw(t/lib);
use strict;
use Test::Unit::HarnessUnit;
$| = 1;
my $r = Test::Unit::HarnessUnit->new();
$r->start("ProFTPD::Tests::Modules::mod_kafka");
# proftpd-mod_kafka-0.1/t/modules/mod_kafka/tls.t
#!/usr/bin/env perl
use lib qw(t/lib);
use strict;
use Test::Unit::HarnessUnit;
$| = 1;
my $r = Test::Unit::HarnessUnit->new();
$r->start("ProFTPD::Tests::Modules::mod_kafka::tls");
# proftpd-mod_kafka-0.1/tests.pl
#!/usr/bin/env perl
use strict;
use Cwd qw(abs_path);
use File::Spec;
use Getopt::Long;
use Test::Harness qw(&runtests $verbose);
my $opts = {};
GetOptions($opts, 'h|help', 'C|class=s@', 'K|keep-tmpfiles', 'F|file-pattern=s',
'V|verbose');
if ($opts->{h}) {
usage();
}
if ($opts->{K}) {
$ENV{KEEP_TMPFILES} = 1;
}
$verbose = 1;
if ($opts->{V}) {
$ENV{TEST_VERBOSE} = 1;
}
# We use this, rather than use(), since use() is equivalent to a BEGIN
# block, and we want the module to be loaded at run-time.
if ($ENV{PROFTPD_TEST_DIR}) {
push(@INC, "$ENV{PROFTPD_TEST_DIR}/tests/t/lib");
}
my $test_dir = (File::Spec->splitpath(abs_path(__FILE__)))[1];
push(@INC, "$test_dir/t/lib");
require ProFTPD::TestSuite::Utils;
import ProFTPD::TestSuite::Utils qw(:testsuite);
# This is to handle the case where this tests.pl script might be
# being used to run test files other than those that ship with proftpd,
# e.g. to run the tests that come with third-party modules.
unless (defined($ENV{PROFTPD_TEST_BIN})) {
$ENV{PROFTPD_TEST_BIN} = File::Spec->catfile($test_dir, '..', 'proftpd');
}
$| = 1;
my $test_files;
if (scalar(@ARGV) > 0) {
$test_files = [@ARGV];
} else {
$test_files = [qw(
t/modules/mod_kafka.t
)];
# Now interrogate the build to see which module/feature-specific test files
# should be added to the list.
my $order = 0;
my $FEATURE_TESTS = {
't/modules/mod_kafka/tls.t' => {
order => ++$order,
test_class => [qw(feat_openssl)],
}
};
my @feature_tests = testsuite_get_runnable_tests($FEATURE_TESTS);
my $feature_ntests = scalar(@feature_tests);
if ($feature_ntests > 1 ||
($feature_ntests == 1 && $feature_tests[0] ne 'testsuite_empty_test')) {
push(@$test_files, @feature_tests);
}
}
$ENV{PROFTPD_TEST} = 1;
if (defined($opts->{C})) {
$ENV{PROFTPD_TEST_ENABLE_CLASS} = join(':', @{ $opts->{C} });
} else {
# Disable all 'inprogress' and 'slow' tests by default
$ENV{PROFTPD_TEST_DISABLE_CLASS} = 'inprogress:slow';
}
if (defined($opts->{F})) {
# Using the provided string as a regex, and run only the tests whose
# files match the pattern
my $file_pattern = $opts->{F};
my $filtered_files = [];
foreach my $test_file (@$test_files) {
if ($test_file =~ /$file_pattern/) {
push(@$filtered_files, $test_file);
}
}
$test_files = $filtered_files;
}
runtests(@$test_files) if scalar(@$test_files) > 0;
exit 0;
sub usage {
print STDOUT <<EOH;